Dataset schema:

id           stringlengths (11 to 11)
channel      stringclasses (2 values)
channel_id   stringclasses (2 values)
title        stringlengths (12 to 100)
categories   sequence
tags         sequence
description  stringlengths (66 to 5k)
text         stringlengths (577 to 90.4k)
segments     list
Example record:

id: v2GRWzIhaqQ
channel: Yannic Kilcher
channel_id: UCZHmQk67mSJgfCCTn7xBfew
title: Meta-Learning through Hebbian Plasticity in Random Networks (Paper Explained)
categories: [ "Science & Technology" ]
tags: [ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "hebbian", "vision", "car", "ant", "quadruped", "neuroplasticity", "fire together wire together", "reinforcement learning", "deep rl", "deep reinforcement learning", "policy network", "policy gradient", "evolutionary methods", "evolution step", "population", "correlation", "gradient", "episode", "random", "adaptive", "reconfigure", "damage", "injury", "agent" ]
description:

#ai #neuroscience #rl

Reinforcement Learning is a powerful tool, but it lacks biological plausibility because it learns a fixed policy network. Animals use neuroplasticity to reconfigure their policies on the fly and quickly adapt to new situations. This paper uses Hebbian Learning, a biologically inspired technique, to have agents adapt random networks to high-performing solutions as an episode is progressing, leading to agents that can reconfigure themselves in response to new observations.

OUTLINE:
0:00 - Intro & Overview
2:30 - Reinforcement Learning vs Hebbian Plasticity
9:00 - Episodes in Hebbian Learning
10:00 - Hebbian Plasticity Rules
18:10 - Quadruped Experiment Results
21:20 - Evolutionary Learning of Hebbian Plasticity
29:10 - More Experimental Results
34:50 - Conclusions
35:30 - Broader Impact Statement

Videos: https://twitter.com/risi1979/status/1280544779630186499
Paper: https://arxiv.org/abs/2007.02686

Abstract: Lifelong learning and adaptability are two defining aspects of biological agents. Modern reinforcement learning (RL) approaches have shown significant progress in solving complex tasks, however once training is concluded, the found solutions are typically static and incapable of adapting to new information or perturbations. While it is still not completely understood how biological brains learn and adapt so efficiently from experience, it is believed that synaptic plasticity plays a prominent role in this process. Inspired by this biological mechanism, we propose a search method that, instead of optimizing the weight parameters of neural networks directly, only searches for synapse-specific Hebbian learning rules that allow the network to continuously self-organize its weights during the lifetime of the agent. We demonstrate our approach on several reinforcement learning tasks with different sensory modalities and more than 450K trainable plasticity parameters. We find that starting from completely random weights, the discovered Hebbian rules enable an agent to navigate a dynamical 2D-pixel environment; likewise they allow a simulated 3D quadrupedal robot to learn how to walk while adapting to different morphological damage in the absence of any explicit reward or error signal.

Authors: Elias Najarro, Sebastian Risi

Links:
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
Discord: https://discord.gg/4H8xxDF
BitChute: https://www.bitchute.com/channel/yannic-kilcher
Minds: https://www.minds.com/ykilcher
Parler: https://parler.com/profile/YannicKilcher
LinkedIn: https://www.linkedin.com/in/yannic-kilcher-488534136/

If you want to support me, the best thing to do is to share out the content :)

If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this):
SubscribeStar: https://www.subscribestar.com/yannickilcher
Patreon: https://www.patreon.com/yannickilcher
Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq
Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2
Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m
Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n
text:

Hi there, take a look at the following problem on the left right here. So you have this quadruped, and the goal is to have it walk forward, or in any direction, as far as possible. Now, usually this is the domain of sort of reinforcement learning. So you have inputs, which are the sensors of the joints of the quadruped, and you have outputs, which are how much force you want to put on each of the legs, and you have to somehow learn a policy to make it walk forward. Reinforcement learning does that by sort of trial and error, using an environment to learn the policy directly. However, this paper does something different. What it does is it learns a policy that is adaptive during training, which basically means that at the beginning of each episode, the policy is initialized randomly, and by policy here we mean a policy network, a policy neural network, which you can see at the bottom. So that's initialized randomly, and then during the episode, depending on the input, this network is changed and adapted in order to achieve high performance. So even at test time, the network is started randomly and then adapted during the episode. So this paper deals with this problem and tries to implement this sort of more biologically plausible way of learning a policy, adapting to the environment, and ultimately achieving good performance in this task. And it has some nice properties, namely that it can deal with these things, as you can see here: front right leg damage, front left leg damage. But we'll get to that later; just so you know what's coming. So the paper is called Meta-Learning through Hebbian Plasticity in Random Networks, by Elias Najarro and Sebastian Risi. So we'll go through the paper: what it does, what evolutionary methods are (really briefly), which they use, what Hebbian plasticity is, the difference to classic reinforcement learning, and then we'll look at the experiments, and that's going to be it. If you like content like this, as always, don't hesitate to subscribe and share it out. And tell me what you think in the comments. I still read all the comments, so I am very interested in what you think about works like this and about the video itself. Okay, so they say lifelong learning and adaptability are two defining aspects of biological agents. Modern reinforcement learning approaches have shown significant progress in solving complex tasks. However, once training is concluded, the found solutions are typically static and incapable of adapting to new information or perturbations. So they contrast the two things here. Reinforcement learning, as you know, is very powerful in these domains, but its goal is to learn a policy, and then that policy is fixed and specific to that particular problem. However, biological agents, you know, humans, animals and so on, are able to adapt usually very, very quickly. They give some sort of examples right here: if an animal is born, it almost immediately knows how to walk. So even if it has some sort of injury, even if it has some sort of disability, usually the animal can walk pretty much instantly. And that means it sort of adapts to the body that it is in, sort of reconfigures itself on the fly. And that's what we're going to explore here. So this isn't going to outcompete RL anytime soon. It's just a different way, and a biologically more plausible way, of doing that. So again, they say, we still don't know completely how biological brains learn and adapt so efficiently from experience.
It is believed that synaptic plasticity plays a prominent role in this process. And that's why they are using these Hebbian learning rules in order to configure the network. So let's contrast the two things for a second. In reinforcement learning, what you have is a policy network. Now the policy network is a neural network that maps sensory inputs to actions. Okay, so the observation goes in, and out comes an action. This is your policy network. Now, during training in reinforcement learning, what you do is you have some sort of environment, okay, this is the environment, and you play this back-and-forth game with the environment. And you try to improve this policy network right here as best as you can in order to achieve a high reward. Then during testing, so this is train, then during testing, you freeze this network right here. So you freeze the network, and then you simply play that game and you see how well it does. Okay, so this gives you some sort of reward, and that's going to be your testing reward. And you know, that can be generalization, it can be to different environments, and so on. But the crucial part is that in train you learn, and then you freeze during test. In this particular paper right here, they do something different. So let's call that the Hebbian plasticity world. In the Hebbian plasticity world, again, you have your environment, and you play this game. But you play the game in episodes. And at the beginning of each episode, you initialize the network using some sort of distribution, here a normal distribution, and then you learn, you adapt: during the episode, you adapt the network to have good performance. Okay, so this thing right here, these are the Hebbian rules. So you update the network during the episode. And then at the end of the episode, you go back, you initialize the network again, you start a new episode, and you again adapt that randomly initialized network. So what's actually learned here aren't the weights of the network. What's learned during training are these rules that transform any randomly initialized network into a high-performing network. Now, of course, you might just object and say, hey, wait a minute, I can just basically hard-code the, you know, the optimal weights into these Hebbian rules. Like, my rules can simply not care about the input and simply output whatever good weights there are. And ultimately, that would lead back to RL. But as you will be able to see in the experiments, they also have some videos provided that I invite you to watch, you can really see that the network reconfigures itself. First of all, at the beginning, it reconfigures itself to a good state. But then also, as the episode is progressing, it continuously reconfigures itself, depending on the input. So this is the real power of these Hebbian rules: during the episode, the network can continuously reconfigure itself in order to achieve higher reward. So it's not just that I can go from the random initialization to a good-performing policy; I can adapt that policy depending on what the input is. So at test time in this Hebbian world, what we're going to do is, again, we are going to freeze the learning rules. So you have to kind of rethink: we're going to freeze the Hebbian rules, but still, we're going to randomly initialize our policy in each episode. And then we're going to change that during the episode, okay, and then that's ultimately going to give us our reward.
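Just to make this train/test protocol concrete, here is a minimal Python sketch of one episode in the Hebbian world. Everything in it is an illustrative assumption on my part (the layer sizes, the tanh activations, the gym-style env interface, and the hebbian_update function, which is sketched further below), not the authors' actual code; the point is only that the weights are re-randomized at the start of every episode and changed at every step, while the reward is used solely by the outer search loop.

```python
import numpy as np

def run_episode(env, hebbian_params, n_in=28, n_hidden=128, n_out=8):
    """One episode: weights start random, the (frozen) Hebbian rules adapt them."""
    # Fresh random weights at the start of EVERY episode, train and test alike.
    W1 = np.random.normal(0.0, 0.1, (n_in, n_hidden))
    W2 = np.random.normal(0.0, 0.1, (n_hidden, n_out))

    obs, total_reward, done = env.reset(), 0.0, False
    while not done:
        x = obs
        h = np.tanh(x @ W1)       # pre/post activations: the only signals
        action = np.tanh(h @ W2)  # the Hebbian rules ever get to see
        obs, reward, done, _ = env.step(action)
        total_reward += reward
        # Weights change at every time step, without ever seeing the reward.
        W1 = hebbian_update(W1, pre=x, post=h, params=hebbian_params[0])
        W2 = hebbian_update(W2, pre=h, post=action, params=hebbian_params[1])
    return total_reward  # consumed only by the outer evolutionary search
```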
So the thing that's learned is just something different. Here, you learn the weights directly in the RL setting, and in the Hebbian plasticity setting, you learn the rules to update the weights dynamically depending on the input. This is a form of meta-learning, right? Not exactly, but it is a form of meta-learning. So let's see what those Hebbian rules are. And again, you can see this right here during training. So this is one episode, and it always starts with these random networks at the beginning. And then you can see, as you progress, there is structure emerging. And again, I linked to the videos, and you can see that during the episode, even this is changing. And this is especially visible in their other example that they have here, this car example. So in this car example, during the video, you'll see that there's a curve like this. Imagine you're a driver: there is kind of a left curve coming, and you adjust your mental state, let's say, to say, okay, I don't know what's around the curve, I need to be ready to brake, and so on. And then there is a straight piece coming, and you'll be like, well, I see everything, you know, I can focus on different things. So you can reconfigure your state in order to adapt to the observation. And that's exactly what you'll see in that video: the weights are continuously updating. Not so much in these quadrupeds, to which we'll get later. So these Hebbian rules, what do they look like? These are biologically inspired rules, and they say the following. So this here is the delta w_ij. And our perspective on policy networks is going to be that this is a neural network, as we said, and we'll just pick out one layer right here. And there are going to be weights right here, you know, weights from all to all, these are going to be fully connected networks, and there's going to be neuron i somewhere here and neuron j somewhere here. Okay, so neuron i and neuron j are going to have a connection together, this thing right here. And the question is going to be: how do we update that weight from one time step to the next? Remember, the weights here are changed in each time step; each time step during the episode, we update the weights. So how are they going to be updated? Let's contrast this first to classic reinforcement learning. So in classic reinforcement learning, we would keep these weights the same during the entire episode. And at the end of the episode, we'll get a reward, and then we'll go back, we'll look back and say, how do we need to change the weights such that in the next episode, the reward will be higher? And again, in classic reinforcement learning, for example in policy gradient methods, you will actually calculate a gradient with respect to these weights right here. Actually, let's go into that later, when we contrast evolutionary methods. So the important part right here is that we change the weights in each time step. So how do we change the weights? Of course, we don't have access to the reward in order to change the weights; the reward is going to come into play when we change the rules that change the weights. But during the episode, we don't have the reward, at least we assume we only get the reward at the end. So we need a different method, and the method is going to be the following right here.
The important things in this formula are going to be... so, how we change the weights is dependent on two quantities that appear during each time step, o_i and o_j, and these are going to be the outputs of neuron i and neuron j. So how we change the connection is going to be dependent on the output of neuron i, which is here called the presynaptic output, and the output of neuron j, which is going to be the postsynaptic output. The kind of mantra here is "fire together, wire together", meaning that if two neurons are regularly active at the same time, then they probably should be connected together, because they already correlate. And you can see right here that there is a term in this formula that is o_i times o_j. So this here is the correlation between, or the covariance, or just the product, if we're exact, of these two neurons. And if they are both active regularly, then this quantity is going to be high. And if they're both not active regularly, or if one is active and the other one isn't, that quantity is going to be low. And the A parameter here specifies how the weights are updated in response to this. So the A, B, C, D, and eta parameters right here are the learned parameters; these are going to be your learned rules to update the weights. So these change once per learning step; after the episode is done, you're going to change these capital constants right here, including the eta, which is the learning rate. These things right here, these are per step: each step gives you a different o_i and o_j, and then you'll adjust the weight based on that. You will see that these constants here, they are per weight. So for each weight in this neural network, we learn a separate rule of how to update that particular weight. So the algorithm can basically decide, for a particular weight: well, if these two things fire together often, I want to update my weight very heavily in response to that. Okay, so if the A is very high, that means the connection responds very thoroughly when the two neurons fire together. That is not the same as saying the connection should always be very strong; it's dependent on the input. So only when this quantity is high should the weight be updated, and the A parameter modulates how strongly it's updated. It can also be negative, it can be zero, basically meaning that, you know, it doesn't matter if they fire together, I don't want to update this particular weight in response to that. So you can see that you can learn these rules that can adapt to different inputs, because all of the change, the delta here, is dependent on the inputs: on the correlation, but also on the individual inputs themselves. And then there is also a constant right here. Okay, as you can see, it's a linear function of o_i and o_j and their product. So I hope this is clear: with these Hebbian rules, you learn A, B, C, D, and eta, and that gives rise to an adaptive network that can change and reconfigure itself over the course of an episode, depending on the inputs.
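For reference, the rule being talked through here is, in the paper's notation, delta w_ij = eta_w * (A_w * o_i * o_j + B_w * o_i + C_w * o_j + D_w), with a separate (A, B, C, D, eta) tuple evolved for every individual weight. Here's a small NumPy sketch of that per-step update; the dict layout and the weight clipping are assumptions of mine for the sketch, not taken from the paper.

```python
import numpy as np

def hebbian_update(W, pre, post, params):
    """Apply delta_w_ij = eta * (A*o_i*o_j + B*o_i + C*o_j + D) per weight.

    W:      weight matrix of one layer, shape (n_pre, n_post)
    pre:    presynaptic outputs o_i, shape (n_pre,)
    post:   postsynaptic outputs o_j, shape (n_post,)
    params: dict of arrays "A", "B", "C", "D", "eta", each shaped like W,
            holding the evolved per-weight coefficients
    """
    o_i = pre[:, None]    # broadcast o_i down the rows
    o_j = post[None, :]   # broadcast o_j across the columns
    dW = params["eta"] * (params["A"] * o_i * o_j
                          + params["B"] * o_i
                          + params["C"] * o_j
                          + params["D"])
    return np.clip(W + dW, -5.0, 5.0)  # assumed: keep the weights bounded
```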
And we'll get to how you actually learn the rules themselves in a second. But one of the things right here is very visible, as I said, in this first experiment, where it reconfigures itself continuously, but also in this experiment with this quadruped right here. So with this quadruped, usually, you know, you simply walk in a direction, that's your reward, and RL is perfectly fine at this as well. However, this has a bit of a trick to it. Namely, you are always in one of three situations: either you have an undamaged quadruped, or its front left leg is damaged, or its front right leg is damaged. Okay, and you simply sample these situations uniformly, and you don't tell the algorithm which situation it is in. Now, compare two methods, one where you directly learn the weights, where you learn a fixed policy. You know, this is one task, right, this is one task, and all of these three situations appear with equal probability, so you have to learn one policy to make all of this work. If you learn the weights directly... like, there's no doubt that a powerful RL approach could deal with this task. But in this case, if you just put a standard weight learner with the same size of policy network as the Hebbian one they compare to, it will not be able to solve this task satisfactorily. What it will do is it will say, well, I need one set of weights that makes me walk as far as possible as often as possible. So if you look at the table, I'm already showing you the results right here: if you have these static weights, you can see that it's performing pretty well in two out of three situations, right. So what it basically does, it says, okay, I'm going to learn to walk using my front left leg. That means when I have no damage, or damage to the front right leg, I'm just fine. And I'm just going to take the hit, basically, where I have damage to the front left leg, because then it's just going to suck. So it solves this, like, walks more than 100 steps. Since it can only learn a fixed policy, it basically discards the case where there's damage to the front left leg; it takes that hit in order to be better in the other two situations, and you can see it's outperforming the Hebbian rule in those other two situations. But this shows you kind of the difference and the power that these Hebbian rules, or this neuroplasticity generally, might have, because the Hebbian one is perfectly capable of at least in part adapting to the different situations. Now you can see it is not symmetric. Also, the Hebbian rules, you know, there's 860 and there's 440 for a thing that should actually be symmetric; we do expect a drop when there's damage, but it's not symmetric, which means that the Hebbian rules also kind of randomly focus on one over the other, but at least they're able to some degree to adapt to both. And that's because, depending on the input, you know, it has a rule in there that basically says: well, if the back left leg and the front right leg fire together, if the sensors that show me that they're moving fire together, I'm going to wire them together, because that's how I walk, you know, front right, back left, and then the other way around. And if that's not the case, I'm not going to wire them together. So that would be the situation where we have damage.
Instead, if they are not wired together, I'm going to, and you can do this in the next layer of the neural network, wire these other two things together: if the first thing is not the case, I'm going to wire these other two things together to make up for that loss. And there you can see there is kind of this logic built into the network. Now, again, I know you can do this with learning a fixed policy, you can achieve the same effects. The point here is just to show that, given same-size networks and so on, there might be a qualitative difference in certain situations. Again, by no means is this meant to outcompete RL or anything like this. Okay, so, we went there. Now, how are these rules actually learned? And there we have to again make a distinction that is completely separate from the Hebbian/non-Hebbian distinction. Okay, so the Hebbian/non-Hebbian distinction was: do we learn the weights of the policy network directly, or do we learn the rules to update the weights? Now the question is: whatever we learn, how do we learn it? And again, we have to draw a distinction, this time between, I'm going to say, classic RL, even though the terminology is not really correct, and evolutionary methods. Okay, so in classic RL, what I would do is I would use my weights in order to obtain a reward, and then I would update my weights. So my delta W would be proportional to the gradient of the reward with respect to W. Okay, so in classic RL, and specifically this is a policy gradient method right now, I use my policy, my weights, to get the reward, and then I would calculate a gradient. And you know, usually the reward isn't differentiable, so you have this REINFORCE trick in order to pull the reward out; you can read all of this up if you look at the basic policy gradient methods. But this here tells me I need a gradient; usually this is going to be the reward times the gradient of my f_W of my input. So what this means is that if my reward is high, then I just want to know: what do I need to do to make more of what I just did? And the gradient ensures that for every single weight in your neural network, you know what to do. So the gradient means that I have an exact handle on how I need to change this weight, how I need to change that weight, how I need to change this weight. If the reward is high, because of this multiplication here, I want to make more of what I just did, and the gradient tells me how. If the reward is low, on the other hand, I want to make less of what I just did, but the gradient also tells me how that can be achieved: I simply go in the other direction than I would if the reward were high. In evolutionary methods, we don't do this gradient calculation. Now, there can be advantages to not doing the gradient calculation. Sometimes backpropagation simply isn't possible. And even if it is possible, and this is maybe the case we're in now, what we need to learn in our case are these rules to update the weights. And imagine you have an episode: you have step, step, step, step, and in each step, these rules are applied, right? In each of these steps, the rules are applied, and at the end, you get a reward. So what you would need to do is backpropagate that reward through all the steps and then through all the rules. Okay.
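Before we get to the evolutionary alternative, here's the policy-gradient update that's being gestured at, in code form. It's the REINFORCE estimator, roughly delta W proportional to R * grad_W log pi_W(a|s): the gradient says how to make the taken action more likely, and the reward scales (and signs) that change. This is a minimal sketch for a linear softmax policy with a single end-of-episode reward; the policy form and the learning rate are my own simplifying assumptions, not anything from the paper.

```python
import numpy as np

def reinforce_step(W, obs, action, reward, lr=0.01):
    """One REINFORCE update: W += lr * reward * grad_W log pi(action | obs)."""
    logits = obs @ W                      # linear policy, W: (n_obs, n_actions)
    probs = np.exp(logits - logits.max()) # numerically stable softmax
    probs /= probs.sum()
    one_hot = np.eye(len(probs))[action]
    # Exact gradient of log softmax for this linear policy class.
    grad_log_pi = np.outer(obs, one_hot - probs)
    # High reward: make more of what we just did. Low/negative: make less.
    return W + lr * reward * grad_log_pi
```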
And backpropagating through all those steps and rules might just be computationally not feasible. Or, the rules right here are pretty easy, but the rules might not be differentiable. You actually have the same problem in general in classic RL as well, but you know, you can cut off time steps and so on; there are various hacks. In any case, there can be advantages to not having that gradient, and evolutionary methods are a way to do that. In evolutionary methods, usually you don't train one agent, you train a population of agents. So you have a bunch of these neural network agents in here. And the way you update the neural network agents is you simply let them run, you know, you let them run the episode. So this is your W, one of them; you let it run the episode, and it gets a reward. And then you can do multiple things, depending on the evolutionary method: you can either pick out the best-performing agent, or you can update each agent according to some rule. The goal here is basically: you always want to take your weights, add some noise to them, and see, does it get better or worse? If it gets better, good. If it gets worse, not good. Okay, the difference is that without the gradient, you don't have a handle on how you need to change each individual weight. All you can do is basically a random walk and observe what happens. And if the random walk, you know, turns out to be good, you go more into the direction of that random walk. So it's sort of a poor man's gradient method, these evolutionary methods. Again, this is completely independent of what we learn: you can use an evolutionary method to learn the fixed weights, and that's actually what happens in the table I've shown you below, or you can use an evolutionary method to learn the Hebbian update rules. Equally, you could use RL to learn the fixed weights or the update rules. In this paper, they use evolutionary methods to learn the Hebbian update rules, and they compare mostly against using evolutionary methods to learn the fixed weights. Okay, the exact evolutionary step they use right here is the following. So H_t here is going to be the thing that you learn; as compared to W being the network weights, H is going to be the Hebbian coefficients, since we learn the Hebbian coefficients. So how they'll update each agent is: they'll take the Hebbian coefficients, and this here is how you update, right, this is your delta H. How do you update the Hebbian coefficients? Well, what you do is you perform random perturbations. So I take my weights and I add noise, I just add noise. Okay, so I'm here, and I just make a bunch of versions of it, and then I observe how well each of these versions is doing. So how well are my random perturbations doing? That's the fitness; F_i right here is going to be the fitness. And then I'm just going to perform a weighted average. So this is my weighted average of these new solutions. Okay, so if this solution here did pretty well, and this solution did pretty poorly, I want to walk, you know, in this direction. And then again, I do the same thing: from here, I do a bunch of perturbations, and maybe this one did pretty well and this one did pretty poorly, so I want to walk in this direction, and so on. Okay, so that's how you'll change the weights, or the rules, or whatever you want, in an evolutionary method. And you know, it's pretty easy. It's easier than reinforcement learning: no backprop, no nothing, basically a black-box optimizer.
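The weighted-average step being described is the standard evolution-strategies update, roughly H <- H + alpha/(n*sigma) * sum_i F_i * eps_i: sample noise around the current solution, measure each sample's fitness, and move in the fitness-weighted average direction of the noise. Here's a minimal sketch; the population size, sigma, alpha, the fitness normalization, and the evaluate callback are placeholders I'm assuming rather than the paper's exact settings.

```python
import numpy as np

def es_step(h, evaluate, pop_size=200, sigma=0.1, alpha=0.02):
    """One evolution-strategies update of the (flattened) Hebbian coefficients h.

    evaluate(h) should run a full episode with those coefficients (random
    initial weights, Hebbian updates at every step) and return the reward.
    """
    eps = np.random.randn(pop_size, h.size)             # random perturbations
    fitness = np.array([evaluate(h + sigma * e) for e in eps])
    # Normalizing fitness is a common stabilizing trick (my assumption here).
    fitness = (fitness - fitness.mean()) / (fitness.std() + 1e-8)
    # Fitness-weighted average of the noise: a gradient-free ascent direction.
    return h + alpha / (pop_size * sigma) * eps.T @ fitness
```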
There are more complicated evolutionary methods, but we won't go into those right now. Okay, so again, I've already shown you these results. Now, as I said, these static weights are also learned with an evolutionary method. They also report what you would get with an RL approach like PPO; you would get kind of the same thing as they get here. So... oh, sorry, this is not the same as the table, I was confused for a second. This here is for the car environment, okay, this vision-based environment. So with their method, they get a reward of about 870 with the Hebbian-based approach. With static weights, but still an evolutionary method, they get a much lower reward. In fact, the Hebbian-based approach is about the same as what you get here with an RL algorithm. And as we said, the RL algorithm is more complicated. And if you use a state-of-the-art RL algorithm, not just PPO, you get a bit better performance, but not that much, if you look at the actual numbers. So, you know, pretty cool to see that. Again, this is not outperforming anything; this is simply showing that you can do it. They do a number of experiments where they go into the episode and kind of change stuff in the episode. And one cool thing here is... you know, this is an episode, so at the start of the episode, you begin with a random network each time in this Hebbian setting, and then pretty quickly, the rules adapt it to a high-performing network. So it starts to walk: it reconfigures itself and starts to walk. The reward here, again, the agent doesn't have access to that, but we can measure it, of course. And then at this step A right here, they simply go to the weights and zero them out. So they just delete these weights right here. And only 10 time steps later, it has reconfigured itself, as you can see right here, in order to walk again. So 10 time steps later, it reconfigures itself, and after a short while right here, it's back to kind of its original performance, as you can see. So I'd say that's fairly impressive, that in this very short amount of time it's able to recover from such an intervention. Of course, if you do this to a policy network that's statically learned, it's going to be garbage. But I guess the fair comparison would be to delete the Hebbian rules themselves. And you know, it's not like this can adapt to arbitrary new situations or something like that; this is still learned for particular environments, right. But the point here is that you learn the rules, and this is kind of a study on neuroplasticity. Now, my question actually would be why this diagonal pattern appears, and I have not seen a clear explanation. Especially this anti-diagonal pattern. It's not so much here in the output layer, right, this is the output layer, there are 21 actions or so along this dimension, so not that much there. But there seems to be this pattern. And this is not the case at the beginning, right; at the beginning, it was a pretty random matrix. So why? Yeah, here, pretty random, and then there's this diagonal pattern. I don't know why. If you know, let me know. I mean, it's anti-diagonal; maybe it is actually diagonal, and the forward pass of the fully connected layer is just defined as something like W transpose times x. But maybe this also depends on the random initialization.
But there is no inherent reason why a particular neuron would, you know, care about sending information to, like, the neuron at the same height on the other side. Or is there? I don't know. So is this a property of the evolutionary method or of the learning rules? It seems not, because the learning rules don't depend on the position. I'm genuinely confused about this. And maybe they've written it somewhere and I've just overlooked it. They do reference it; they say, oh, there's this diagonal pattern appearing, but I don't think they ever say why it is diagonal. Okay, I might just be real dumb. Yeah. So they also do some more experiments. They show, for example, that if you just have random Hebbian coefficients, then your algorithm just jumps around in weight space, kind of around the zero point. However, if you actually learn these Hebbian coefficients, as they do, you have this clear attractor here, and you have these kinds of oscillating curves when you do that. And you can see here the different situations where things are damaged, and so on. So all in all, I think it's a pretty interesting study. And I think this neuroplasticity is a different way of doing things, you know; it's unclear whether it will ever deliver the performance that RL delivers, but certainly there are situations where such plasticity is desired. And if we can also combine this with greater generalization performance, then, you know, we have agents that can quickly kind of reconfigure. And a lot of work by this kind of open-ended learning community also plays into this. All in all, a pretty cool, non-standard way of doing things. Last thing: the broader impact statement. Every now and then we'll look at a broader impact statement, since these are new, just to get kind of an overview of what they look like. So they say: the ethical and future societal consequences of this work are hard to predict, but likely similar to other work dealing with more adaptive agents and robots. In particular, giving robots the ability to still function when injured could make it easier for them to be deployed in areas that have both a positive and a negative impact on society. Okay, well, again, it's not really giving robots the ability to still function when they're injured. First I thought, okay, they train it when it's fully functioning, but then they damage it during test time. But as I understand the paper, they already train it with the damaged versions; they just don't tell the algorithm which version it is in right now. So it's not the same as being able to work when injured, unless you've specifically trained for it, as in this case. Again, I could be wrong about this. Yeah. In the very long term, robots that can adapt could help in industrial automation or help to care for the elderly. On the other hand, more adaptive robots could also be more easily used for military applications. The approach presented in this paper is far from being deployed in these areas, but it's important to discuss its potential long-term consequences early on. Now, okay, so let's evaluate the broader impact statement. Well, the first check to do is always to simply replace whatever their method is with the word technology. Okay, so let's do that. In the very long term, technology could help in industrial automation or help to care for the elderly. Check.
On the other hand, technology could also be more easily used for military applications. Check. Technology is far from being deployed in these areas. Okay, I guess some technology isn't, but advanced technology, yeah. So again, the rule for broader impact statements seems to be: you take whatever your method is and you go up the layers until you're basically at technology or something equivalent, because I've never actually seen a broader impact statement that writes about the actual thing in the paper; they always go up like one layer or two, and then it basically regresses to technology, even though very few papers would actually be able to discuss their particular thing. And then, in terms of the guidelines on broader impact statements, this one is missing something: there's always this holy trifecta. So the holy trifecta is, you go, like you're a Catholic, with your finger to your head, chest, left, and right, and you say: technology good, technology bad, technology biased. Okay, so you want to write a broader impact statement: go up the layers, technology, good, bad, biased. And we're missing the bias here. So that's, you know... I'm just following what these guidelines for broader impact statements are. I don't make the rules. I'm sorry, the Hebbians make the rules, apparently; I'm not Hebbian. Okay, I hope you've enjoyed this paper and this video. Let me know what you think. Check out the videos that they have, I'll link them. And with that, I wish you a pleasant day. Bye bye.
[ { "end": 6.6000000000000005, "start": 0, "text": " Hi there, take a look at the following problem on the left right here. So you have this quadruped" }, { "end": 13.64, "start": 6.6000000000000005, "text": " and the goal is to have it walk forward or in any direction as far as possible. Now," }, { "end": 18.68, "start": 13.64, "text": " usually this is the domain of sort of reinforcement learning. So you have inputs, which is the" }, { "end": 24.7, "start": 18.68, "text": " sensors of the joints of the quadruped and you have outputs, which is how much force" }, { "end": 30.4, "start": 24.7, "text": " you want to put on each of the legs and you have to somehow learn a policy to make it" }, { "end": 36.76, "start": 30.4, "text": " walk forward. Reinforcement learning does that by sort of trial and error using an environment" }, { "end": 44.2, "start": 36.76, "text": " to learn the policy directly. However, this paper does something different. What it does" }, { "end": 50.8, "start": 44.2, "text": " is it learns a policy that is adaptive hearing training, which basically means that at the" }, { "end": 58.92, "start": 50.8, "text": " beginning of each episode, the policy is initialized randomly and by policy here, we mean a policy" }, { "end": 64.56, "start": 58.92, "text": " network, policy neural network, which you can see at the bottom. So that's initialized" }, { "end": 72.62, "start": 64.56, "text": " randomly and then during the episode, depending on the input, this network is changed and" }, { "end": 80.72, "start": 72.62, "text": " adapted in order to achieve high performance. So even at test time, the network is started" }, { "end": 87.92, "start": 80.72, "text": " randomly and then adapted during the episode. So this paper deals with this problem and" }, { "end": 95.96, "start": 87.92, "text": " tries to implement this sort of more biologically plausible way of learning a policy, adapting" }, { "end": 101.52, "start": 95.96, "text": " to the environment and achieve ultimately good performance in this task. And it has" }, { "end": 107.32, "start": 101.52, "text": " some nice property, namely that it can deal with these things, as you can see here, front" }, { "end": 112.75999999999999, "start": 107.32, "text": " right leg damage, front left leg damage, but we'll get to that later. But just so you know" }, { "end": 119, "start": 112.75999999999999, "text": " what's coming. So the paper is called Meta learning through Hebbian plasticity in random" }, { "end": 126.08, "start": 119, "text": " networks by Elias Naharo and Sebastian Rizzi. So we'll go through the paper, what it does," }, { "end": 131.84, "start": 126.08, "text": " what evolutionary methods are really briefly, which they use, what Hebbian plasticity is" }, { "end": 137.8, "start": 131.84, "text": " the difference to classic reinforcement learning. And then we'll look at the experiments and" }, { "end": 143.72, "start": 137.8, "text": " that's going to be it. If you like content like this, as always, don't hesitate to subscribe" }, { "end": 149.72, "start": 143.72, "text": " and share it out. And tell me what you think in the comments. I still read all the comments." }, { "end": 154.24, "start": 149.72, "text": " So I am very interested in what you think about works like this and about the video" }, { "end": 160.74, "start": 154.24, "text": " itself. Okay, so they say lifelong learning and adaptability are two defining aspects" }, { "end": 166.32000000000002, "start": 160.74, "text": " of biological agents. 
Modern reinforcement learning approaches have shown significant" }, { "end": 172.34, "start": 166.32000000000002, "text": " progress in solving complex tasks. However, once training is concluded, the found solutions" }, { "end": 179.60000000000002, "start": 172.34, "text": " are typically static and incapable of adapting to new information or perturbations. So they" }, { "end": 185.32000000000002, "start": 179.60000000000002, "text": " contrast the two things here. Reinforcement learning, as you know, is very powerful in" }, { "end": 191.76, "start": 185.32, "text": " these domains. But its goal is to learn a policy and then that policy is fixed and it's" }, { "end": 200.06, "start": 191.76, "text": " specific to that particular problem. However, biological agents, you know, humans, animals" }, { "end": 205.12, "start": 200.06, "text": " and so on, they are able to adapt usually very, very quickly. They give some sort of" }, { "end": 211.88, "start": 205.12, "text": " examples right here, like if a if an animal is born, it almost immediately knows how to" }, { "end": 219.04, "start": 211.88, "text": " walk. So even if it has some sort of injury, even if it has some sort of disability, usually" }, { "end": 226.62, "start": 219.04, "text": " the animal can walk pretty much instantly. And that means it sort of adapts to the body" }, { "end": 231.72, "start": 226.62, "text": " that it is in sort of reconfigures itself on the fly. And that's what we're going to" }, { "end": 238.24, "start": 231.72, "text": " explore here. So this isn't going to outcompete RL anytime soon. It's just a different way" }, { "end": 244.96, "start": 238.24, "text": " and a biologically more plausible way in order to do that. So again, they say, we still" }, { "end": 250.84, "start": 244.96, "text": " don't know completely how biological brains learn and adapt so efficiently from experience." }, { "end": 256.36, "start": 250.84, "text": " It is believed that synaptic plasticity plays a prominent role in this process. And that's" }, { "end": 263.72, "start": 256.36, "text": " why they are using these Hebbian learning rules in order to configure the network. So" }, { "end": 269.56, "start": 263.72, "text": " let's contrast the two things for a second. In reinforcement learning, what you have is" }, { "end": 275.52000000000004, "start": 269.56, "text": " a policy network. Now the policy network is a neural network that maps sensory inputs" }, { "end": 281.56, "start": 275.52000000000004, "text": " to actions. Okay, so you have the observation goes in, and outcomes in action. This is your" }, { "end": 287.44000000000005, "start": 281.56, "text": " policy network. Now, during training in reinforcement learning, what you do is you have some sort" }, { "end": 293.28000000000003, "start": 287.44000000000005, "text": " of environment, okay, this is the environment. And you play this back and forth game with" }, { "end": 301.44, "start": 293.28, "text": " the environment. And you try to improve this policy network right here as best as you can" }, { "end": 310.4, "start": 301.44, "text": " in order to achieve a high reward. Then during testing, so this is train, then during testing," }, { "end": 318.55999999999995, "start": 310.4, "text": " you freeze, you freeze this network right here. So you freeze the network. And then" }, { "end": 323.4, "start": 318.56, "text": " you simply play that game and you see how well it does. Okay, so this gives you some" }, { "end": 327.24, "start": 323.4, "text": " sort of reward. 
And that's going to be your testing reward. And you know, that can be" }, { "end": 333.28000000000003, "start": 327.24, "text": " generalization, it can be two different environments, and so on. But the crucial part is that you" }, { "end": 342.16, "start": 333.28000000000003, "text": " in train, you learn, and then you freeze during test. In this, in this particular paper right" }, { "end": 349.96000000000004, "start": 342.16, "text": " here, they do something different. So let's call that the Hebbian plasticity world. In" }, { "end": 357.08000000000004, "start": 349.96000000000004, "text": " the Hebbian plasticity world, again, you have your environment, and you play this game." }, { "end": 364.94000000000005, "start": 357.08000000000004, "text": " But you play the game in episodes. And at the beginning of each episode, you initialize" }, { "end": 369.72, "start": 364.94000000000005, "text": " this using some sort of distribution here, a normal distribution, you initialize the" }, { "end": 378.68, "start": 369.72, "text": " network, and then you learn, you adapt. During the episode, you adapt the network to have" }, { "end": 388.90000000000003, "start": 378.68, "text": " good performance. Okay, so this thing right here, these are the Hebbian rules. So you" }, { "end": 394.44000000000005, "start": 388.90000000000003, "text": " update the network during the episode. And then at the end of the episode, you go back," }, { "end": 400.76, "start": 394.44, "text": " you initialize the network, again, you start a new episode, and you again adapt that randomly" }, { "end": 405.96, "start": 400.76, "text": " initialized network. So what's actually learned here isn't the weight of the network. What's" }, { "end": 412.36, "start": 405.96, "text": " learned during training is these rules that transform any randomly initialized network" }, { "end": 419.04, "start": 412.36, "text": " into a high performing network. Now, of course, you might just object and say, Hey, wait a" }, { "end": 426.34000000000003, "start": 419.04, "text": " minute, I can just basically hard code the, you know, the optimal weights here into these" }, { "end": 432.8, "start": 426.34000000000003, "text": " Hebbian rules. Like my rules can simply, you know, not care about the input and simply" }, { "end": 437.76, "start": 432.8, "text": " output whatever good weights there are. And ultimately, that would lead back to RL. But" }, { "end": 443, "start": 437.76, "text": " as you will be able to see in the experiments, they also have some videos provided that I" }, { "end": 450, "start": 443, "text": " invite you to watch, you can really see that the network reconfigures itself. First of" }, { "end": 455.16, "start": 450, "text": " all, at the beginning, it reconfigures itself to a good state. But then also, as the episode" }, { "end": 461.08, "start": 455.16, "text": " is progressing, it continuously reconfigures itself, depending on the input. So this is" }, { "end": 466.08, "start": 461.08, "text": " the real power of these Hebbian rules in that during the episode, the network can continuously" }, { "end": 471.84, "start": 466.08, "text": " reconfigure itself in order to achieve higher reward. So it's not just that I can go from" }, { "end": 477, "start": 471.84, "text": " the random initialization to a good performing policy, I can adapt that policy depending" }, { "end": 484.38, "start": 477, "text": " on what the input is. 
So at test time in this Hebbian world, what we're going to do is again," }, { "end": 489.73999999999995, "start": 484.38, "text": " we are going to freeze the learning rules. So you have to kind of rethink, we're going" }, { "end": 498.9, "start": 489.73999999999995, "text": " to freeze the Hebbian rules, but still, we're going to randomly initialize our policy in" }, { "end": 506.28, "start": 498.9, "text": " each episode. And then we're going to change that during the episode, okay, and then that's" }, { "end": 513.06, "start": 506.28, "text": " ultimately going to give us our reward. So the thing that's learned is just something" }, { "end": 520.02, "start": 513.06, "text": " different. Here, you learn the weights directly in the RL setting. And then the Hebbian plasticity" }, { "end": 525.76, "start": 520.02, "text": " setting, you learn the rules to update the weights dynamically depending on the input." }, { "end": 532.84, "start": 525.76, "text": " This is a form of meta learning, right? It's not exactly but it is a form of meta learning." }, { "end": 538.5, "start": 532.84, "text": " So let's see what those Hebbian rules are. And you can as again, you can see this right" }, { "end": 545.78, "start": 538.5, "text": " here during training. So this is one episode. And it always starts with these random networks" }, { "end": 551.28, "start": 545.78, "text": " at the beginning. And then you can see as you progress, there is structure emerging." }, { "end": 556.68, "start": 551.28, "text": " And again, I linked to the videos. And you can see that during the episode, even this" }, { "end": 562.28, "start": 556.68, "text": " is changing, and this is especially visible on their other example that they have here," }, { "end": 567.68, "start": 562.28, "text": " like this, this car example. So in this car example, during the video, you'll see that" }, { "end": 572.92, "start": 567.68, "text": " there's a curve like this. And then as imagine you're a driver, like there is a kind of a" }, { "end": 579.4, "start": 572.92, "text": " left curve coming and you adjust your mental state, let's say, to say, okay, I don't know" }, { "end": 584.0799999999999, "start": 579.4, "text": " what's around the curve, I need to be ready to break and so on. And then there is a straight" }, { "end": 588.6, "start": 584.0799999999999, "text": " piece coming and you'll be like, well, I see everything, you know, I can focus on different" }, { "end": 594.84, "start": 588.6, "text": " things that you can reconfigure your state in order to adapt to the the observation." }, { "end": 599.28, "start": 594.84, "text": " And that's exactly what you'll see in that video is that the weights are continuously" }, { "end": 604.72, "start": 599.28, "text": " updating, not so much in these quarter pads to which we'll get later. So these Hebbian" }, { "end": 612.28, "start": 604.72, "text": " rules, what do they look like? These are biologically inspired rules. And they say the following." }, { "end": 621.72, "start": 612.28, "text": " So this here is the delta W I J. And our perspective of policy networks is going to be that this" }, { "end": 627.84, "start": 621.72, "text": " is a neural network, as we said, and we'll just pick up one layer right here. 
And there" }, { "end": 631.84, "start": 627.84, "text": " is going to be weights right here, you know, weights from all to all these are going to" }, { "end": 638.72, "start": 631.84, "text": " be fully connected networks, and like this, and there's going to be neuron I somewhere" }, { "end": 645.44, "start": 638.72, "text": " here and neuron J somewhere here. Okay, so neuron I and neuron J are going to have a" }, { "end": 651.48, "start": 645.44, "text": " connection together, this thing right here. And there's going this, the question is going" }, { "end": 657.84, "start": 651.48, "text": " to be how do we update that weight from one time step to the next? Remembering the weights" }, { "end": 664.22, "start": 657.84, "text": " here are changed in each time step, each time step during the episode, we update the weights." }, { "end": 670.6800000000001, "start": 664.22, "text": " So how are they going to be updated? Let's contrast this first to classic reinforcement" }, { "end": 675.5600000000001, "start": 670.6800000000001, "text": " learning. So in classic reinforcement learning, we would keep these weights the same during" }, { "end": 680.52, "start": 675.5600000000001, "text": " the entire episode. And then at the end of the episode, right, we keep those the same." }, { "end": 683.96, "start": 680.52, "text": " And at the end of the episode, we'll get a reward. And then we'll go back, we'll look" }, { "end": 688.12, "start": 683.96, "text": " back and say, how do we need to change the weights such that in the next episode, the" }, { "end": 694.24, "start": 688.12, "text": " reward will be higher. And in again, in classic reinforcement learning, for example, in policy" }, { "end": 700.9000000000001, "start": 694.24, "text": " gradient methods, you will actually calculate a gradient with respect to these weights right" }, { "end": 707.24, "start": 700.9000000000001, "text": " here. Actually, let's let's go into that later when we contrast evolutionary methods. So" }, { "end": 711.24, "start": 707.24, "text": " the important part right here is that we change the weights in each time step. So how do we" }, { "end": 716.4, "start": 711.24, "text": " change the weights? Of course, we don't have access to the reward, right, in order to change" }, { "end": 721.1, "start": 716.4, "text": " the weights, the reward is going to come into play when we change the rules to change the" }, { "end": 726.36, "start": 721.1, "text": " weights. But during the episode, we don't have the reward. At least we assume we only" }, { "end": 733.1, "start": 726.36, "text": " get kind of the reward at the end. So we need a different method. And the method is going" }, { "end": 739, "start": 733.1, "text": " to be the following right here. The important things in this formula are going to be so" }, { "end": 745.14, "start": 739, "text": " how do we change the weights that's dependent on two quantities that appear during each" }, { "end": 752.32, "start": 745.14, "text": " time step, oh, I and oh, j. And these are going to be the outputs of neuron i and neuron" }, { "end": 759.32, "start": 752.32, "text": " j. So how do we change the connection that's going to be dependent on the output of neuron" }, { "end": 764.58, "start": 759.32, "text": " i, which is here called the pre synaptic output, and the output of neuron j, which is going" }, { "end": 773.12, "start": 764.58, "text": " to be the post synaptic output. 
The rule, the kind of mantra here is the fire together" }, { "end": 779.5, "start": 773.12, "text": " wire together means that if two neurons are active at the same time regularly, then they" }, { "end": 786.34, "start": 779.5, "text": " probably should be connected together because they already correlate. And you can see right" }, { "end": 793.44, "start": 786.34, "text": " here that there is a term in this formula that is oh, i times oh, j. So this here is" }, { "end": 801.84, "start": 793.44, "text": " the correlation between or the covariance, or just the product, if we're exact between" }, { "end": 807.5200000000001, "start": 801.84, "text": " these two neurons. And if they are both active regularly, then this quantity is going to" }, { "end": 812.7, "start": 807.5200000000001, "text": " be high. And if they're both not active regularly that or if one is active and the other one" }, { "end": 819.22, "start": 812.7, "text": " isn't that quantity is going to be low. And the a parameter here specifies how the weights" }, { "end": 827.84, "start": 819.22, "text": " are updated in response to this. So the a, b, c, d, and eta parameters right here are" }, { "end": 833.44, "start": 827.84, "text": " these are the learned parameters, these are going to be your learned rules to update the" }, { "end": 840.08, "start": 833.44, "text": " weights. So these change once after once per learning step was a once per. So after the" }, { "end": 844.1600000000001, "start": 840.08, "text": " episode is done, you're going to change these capital constants right here, including the" }, { "end": 852.02, "start": 844.16, "text": " eta, which is the learning rate. These things right here, these are per step. So this is" }, { "end": 856.8, "start": 852.02, "text": " each step gives you a different oh, i and oh, j. And then you'll adjust the weight based" }, { "end": 862.88, "start": 856.8, "text": " on that, you will see that these constants here, they are per weight. So for each weight" }, { "end": 869.76, "start": 862.88, "text": " in this neural network, we learn a separate rule of how to update that particular weight." }, { "end": 876, "start": 869.76, "text": " So the algorithm can, it can basically decide for a particular way to can decide, well," }, { "end": 882.42, "start": 876, "text": " if these two things fire together often, I want to update my weight very heavily in response" }, { "end": 891.76, "start": 882.42, "text": " to that. Okay, so if the a is very high, that means the connection responds very thoroughly" }, { "end": 897.64, "start": 891.76, "text": " to when the two neurons fire together. That is not the same as to say that connection" }, { "end": 903.4, "start": 897.64, "text": " should always be very strong, it's dependent on the input. So only when this quantity is" }, { "end": 910.1999999999999, "start": 903.4, "text": " high, should the network or should the weight be updated, and the a parameter modulates" }, { "end": 917.48, "start": 910.1999999999999, "text": " how well it's updated or how how strongly it's up, it can also be negative, it can be" }, { "end": 922.96, "start": 917.48, "text": " zero, basically meaning that, you know, it doesn't matter if they fire together, I don't" }, { "end": 927.6, "start": 922.96, "text": " want to update the weight, this particular weight in response to that. 
So you can see" }, { "end": 934.12, "start": 927.6, "text": " that you can learn these rules that can adapt to different inputs, because all of the change," }, { "end": 942.88, "start": 934.12, "text": " the delta here, is dependent on the inputs: on the correlation, but also on the different" }, { "end": 950.32, "start": 942.88, "text": " inputs themselves. And then there is also a constant right here. Okay, this, as you" }, { "end": 959.9200000000001, "start": 950.32, "text": " can see, is a linear function of the inputs, of o_i and o_j and their product. So I hope" }, { "end": 967.88, "start": 959.9200000000001, "text": " this is clear: with these Hebbian rules, you learn A, B, C, D, and eta, and that gives rise to" }, { "end": 974.4000000000001, "start": 967.88, "text": " an adaptive network that can change and reconfigure itself over the course of an episode, depending" }, { "end": 981.88, "start": 974.4, "text": " on the inputs. And one of the things right here, and we'll get to how you actually learn" }, { "end": 986.64, "start": 981.88, "text": " the rules themselves in a second, but one of the things right here is very visible, as I said," }, { "end": 992.88, "start": 986.64, "text": " in this first experiment, where it reconfigures itself continuously, but also in this experiment" }, { "end": 998.88, "start": 992.88, "text": " with this quadruped right here. So with this quadruped, usually you simply walk in" }, { "end": 1004.88, "start": 998.88, "text": " one direction, that's your reward, and RL is perfectly fine at this as well. However, this" }, { "end": 1010.88, "start": 1004.88, "text": " one has a bit of a trick to it. Namely, you are always in one of three situations:" }, { "end": 1019.4399999999999, "start": 1010.88, "text": " either you have an undamaged quadruped, or its front left leg is damaged," }, { "end": 1026.64, "start": 1019.4399999999999, "text": " or its front right leg is damaged. Okay, you simply sample these" }, { "end": 1033.76, "start": 1026.64, "text": " situations uniformly, and you don't tell the algorithm which situation it is in. Now," }, { "end": 1040.0400000000002, "start": 1033.76, "text": " if you compare two methods, one where you directly learn the weights, you" }, { "end": 1046.88, "start": 1040.0400000000002, "text": " learn a fixed policy to solve this. This is one task, and" }, { "end": 1052.8400000000001, "start": 1046.88, "text": " all three of these situations appear with equal probability. So you have to learn one policy" }, { "end": 1059.36, "start": 1052.84, "text": " to make all of this work. If you learn the weights directly," }, { "end": 1063.76, "start": 1059.36, "text": " there's no doubt that a powerful RL approach could deal with this task. But" }, { "end": 1070.36, "start": 1063.76, "text": " in this case, if you just put a standard weight learner with the" }, { "end": 1077.04, "start": 1070.36, "text": " same size of policy as the Hebbian one they compare to, it" }, { "end": 1082.8, "start": 1077.04, "text": " will not be able to solve this task satisfactorily. What it will do is it will say, well, I need" }, { "end": 1089.28, "start": 1082.8, "text": " one set of weights that makes me walk as far as possible as often as possible. 
So if you" }, { "end": 1095.48, "start": 1089.28, "text": " look at the table, I'm already showing you the results right here. The table right" }, { "end": 1101.84, "start": 1095.48, "text": " here: if you have these static weights, you can see that it's performing pretty well in" }, { "end": 1110.04, "start": 1101.84, "text": " two out of three situations, right. So what it basically does is it says, okay, here" }, { "end": 1116.3999999999999, "start": 1110.04, "text": " is where there's damage; what it does is it says, I'm going to learn to walk" }, { "end": 1122.72, "start": 1116.3999999999999, "text": " using my left front leg. That means when I have no damage, or damage to the" }, { "end": 1128.36, "start": 1122.72, "text": " right front leg, I'm just fine. And I'm just going to take the hit basically where I have" }, { "end": 1132.76, "start": 1128.36, "text": " damage to the left front leg, because there it's just going to suck. So they" }, { "end": 1138.28, "start": 1132.76, "text": " solve this, like, walk more than 100 steps. Since it can only learn" }, { "end": 1146.52, "start": 1138.28, "text": " a fixed policy, it basically discards the case where there's damage to the left front" }, { "end": 1152.02, "start": 1146.52, "text": " leg; it takes that hit in order to be better in the other two situations, and you can see it's" }, { "end": 1157.96, "start": 1152.02, "text": " outperforming the Hebbian rule in those two situations. But this shows you kind of the" }, { "end": 1164.22, "start": 1157.96, "text": " difference and the power that these Hebbian rules, or this neuroplasticity generally, might" }, { "end": 1172.56, "start": 1164.22, "text": " have, because the Hebbian one is perfectly capable of at least in part adapting to the" }, { "end": 1178.32, "start": 1172.56, "text": " different situations. Now you can see that it's not symmetric. The Hebbian rules they" }, { "end": 1185.28, "start": 1178.32, "text": " learn, you know, there's an 860 and there's a 440 for a thing that should actually be symmetric;" }, { "end": 1191.74, "start": 1185.28, "text": " we do expect a drop when there's damage, but it's not symmetric, which means that" }, { "end": 1198.44, "start": 1191.74, "text": " the Hebbian rules kind of randomly focus on one over the other, but at least they're" }, { "end": 1206.2, "start": 1198.44, "text": " able in some degree to adapt to both. And that's because, depending on the input," }, { "end": 1211.1200000000001, "start": 1206.2, "text": " you know, it has a rule in there that basically says, well, if the back left leg and" }, { "end": 1217.72, "start": 1211.1200000000001, "text": " the front right leg fire together, if" }, { "end": 1222.44, "start": 1217.72, "text": " the sensors that show me that they're moving fire together," }, { "end": 1227.4, "start": 1222.44, "text": " I'm going to wire them together, because that's how I walk, you know, front right, back" }, { "end": 1233.24, "start": 1227.4, "text": " left, and then the other way around. And if that's not the case, I'm not going to wire" }, { "end": 1237.44, "start": 1233.24, "text": " them together. So that would be the situation where we have damage. 
Instead, if they are" }, { "end": 1242.28, "start": 1237.44, "text": " not wired together, I'm going to (and you can do this in the next layer of the neural network)" }, { "end": 1247.72, "start": 1242.28, "text": " wire these other two things together. You know, if the first thing is not the case," }, { "end": 1253.76, "start": 1247.72, "text": " I'm going to wire these other two things together to make up for that loss. And there you can" }, { "end": 1259.48, "start": 1253.76, "text": " see there is kind of this logic built into the network. Now, again, I know you can do" }, { "end": 1264.92, "start": 1259.48, "text": " this with learning a fixed policy, you can achieve the same effects. The point here is" }, { "end": 1272.76, "start": 1264.92, "text": " just to show that, given kind of same-size networks and so on, there might" }, { "end": 1279.0800000000002, "start": 1272.76, "text": " be like a qualitative difference in certain situations. Again, by no means" }, { "end": 1288.4, "start": 1279.0800000000002, "text": " is this meant to outcompete RL or anything like this. Okay, so there we went. Now," }, { "end": 1293.64, "start": 1288.4, "text": " how are these rules actually learned? And there we have to again make a distinction" }, { "end": 1300.68, "start": 1293.64, "text": " that is completely separate from the Hebbian versus non-Hebbian question. Okay, so the Hebbian, non-Hebbian" }, { "end": 1306.0400000000002, "start": 1300.68, "text": " distinction was: do we learn the weights of the policy network directly? Or do we learn" }, { "end": 1312.96, "start": 1306.0400000000002, "text": " the rules to update the weights? Now the question is, whatever we learn, how do we learn it?" }, { "end": 1318.1200000000001, "start": 1312.96, "text": " And again, we have to draw a distinction, this time between, I'm going to say, classic" }, { "end": 1325.4799999999998, "start": 1318.12, "text": " RL, even though the terminology is not really correct, and evolutionary methods." }, { "end": 1332.6799999999998, "start": 1325.4799999999998, "text": " Okay, so in classic RL, what I would do is I would use my weights in order to obtain" }, { "end": 1342.04, "start": 1332.6799999999998, "text": " a reward. And then I would update my weights. So my delta W would be proportional to the" }, { "end": 1350.72, "start": 1342.04, "text": " gradient with respect to W of the reward. Okay, so in classic RL, specifically this is a policy" }, { "end": 1355.8, "start": 1350.72, "text": " gradient method right now, I use my policy, my weights, to get the reward, and then I would" }, { "end": 1361.68, "start": 1355.8, "text": " calculate a gradient. And you know, usually the reward isn't differentiable, so you have" }, { "end": 1368.72, "start": 1361.68, "text": " this REINFORCE trick in order to pull the reward out. And you can read all of this up" }, { "end": 1375.96, "start": 1368.72, "text": " if you look at policy gradients, the basic policy gradient methods. But this here tells" }, { "end": 1383.92, "start": 1375.96, "text": " me I need a gradient; usually this is going to be the reward times the gradient of my" }, { "end": 1394.6000000000001, "start": 1383.92, "text": " f_W of my input. What this means is that if my reward is high, then" }, { "end": 1403.24, "start": 1394.6, "text": " I just want to know, what do I need to do to make more of what I just did? 
Okay, and" }, { "end": 1410.36, "start": 1403.24, "text": " the gradient ensures that for every single weight in your neural network, you know what" }, { "end": 1416.9599999999998, "start": 1410.36, "text": " to do. So the gradient means that I have an exact handle on: how do I need to change this" }, { "end": 1422.52, "start": 1416.9599999999998, "text": " weight? How do I need to change that weight? How do I need to change this weight?" }, { "end": 1427.6399999999999, "start": 1422.52, "text": " If the reward is high, then because of this multiplication here, I want to make more of" }, { "end": 1432.6, "start": 1427.6399999999999, "text": " what I just did. And the gradient tells me how. If the reward is low, on the other hand," }, { "end": 1438.44, "start": 1432.6, "text": " I want to make less of what I just did. But also, the gradient tells me how that can be" }, { "end": 1445.08, "start": 1438.44, "text": " achieved: I simply go into the other direction than I would if the reward is high. In evolutionary" }, { "end": 1451.6399999999999, "start": 1445.08, "text": " methods, we don't do this gradient calculation. Now there can be advantages to" }, { "end": 1456.72, "start": 1451.64, "text": " not doing gradient calculation. Sometimes backpropagation simply isn't possible. Even" }, { "end": 1463.64, "start": 1456.72, "text": " if it is possible, and this is maybe the case where we are now, what we need to learn in" }, { "end": 1468.88, "start": 1463.64, "text": " our case is these rules to update the weights. And imagine you have an episode:" }, { "end": 1475.68, "start": 1468.88, "text": " you have step, step, step, step, and in each step, these rules are applied," }, { "end": 1480.6200000000001, "start": 1475.68, "text": " right? In each of these steps, the rules are applied. And at the end, you get a reward." }, { "end": 1486.6799999999998, "start": 1480.62, "text": " So what you would need to do is to backpropagate that reward through all the steps and then" }, { "end": 1491.9599999999998, "start": 1486.6799999999998, "text": " through all the rules. Okay. And that might just be computationally not feasible, or the" }, { "end": 1499.4799999999998, "start": 1491.9599999999998, "text": " rules (the rules right here are pretty easy, but they) might not be differentiable." }, { "end": 1505.8799999999999, "start": 1499.4799999999998, "text": " You actually have the same problem in general in classic RL as well. But you know, you can" }, { "end": 1510.36, "start": 1505.8799999999999, "text": " cut off time steps and so on. There are various hacks. In any case, there can be advantages" }, { "end": 1516.4399999999998, "start": 1510.36, "text": " to not having that gradient, and evolutionary methods are a way to do that. In evolutionary" }, { "end": 1523.24, "start": 1516.4399999999998, "text": " methods, usually you don't train one agent, you train a population of agents. So you have" }, { "end": 1531.32, "start": 1523.24, "text": " a bunch of these neural network agents in here. And the way you update the neural network" }, { "end": 1535.9199999999998, "start": 1531.32, "text": " agent is you simply let them run the episode. So" }, { "end": 1544.92, "start": 1535.92, "text": " this is your W, one of them, you let them run the episode, they get a reward. And then" }, { "end": 1548.88, "start": 1544.92, "text": " you can do multiple things. So this depends on the evolutionary method. 
So you can either" }, { "end": 1557.14, "start": 1548.88, "text": " pick out the best performing agent, or you can update each agent according to some rule." }, { "end": 1562.8000000000002, "start": 1557.14, "text": " The goal here is basically: you always want to take your weights, you want" }, { "end": 1568.84, "start": 1562.8, "text": " to add some noise to them, and you want to see, does it get better or worse? If it gets" }, { "end": 1574.28, "start": 1568.84, "text": " better, good. If it gets worse, not good. Okay, the difference is, without the gradient," }, { "end": 1578.36, "start": 1574.28, "text": " you don't have a handle on how you need to change each individual weight. All you" }, { "end": 1583.04, "start": 1578.36, "text": " can do is basically random walk and observe what happens. And if the random walk, you" }, { "end": 1588.68, "start": 1583.04, "text": " know, turns out to be good, you go more into the direction of that random walk. So it's" }, { "end": 1596.42, "start": 1588.68, "text": " sort of a poor man's gradient method, these evolutionary methods. Again, completely" }, { "end": 1601.68, "start": 1596.42, "text": " independent of what we learn, you can use the evolutionary method to learn the fixed" }, { "end": 1608, "start": 1601.68, "text": " weights. And that's actually what happens in the table I've shown you below. Or you" }, { "end": 1612.44, "start": 1608, "text": " can use the evolutionary method to learn the Hebbian update rules. As well, you can use" }, { "end": 1617.44, "start": 1612.44, "text": " RL to learn the fixed weights or the update rules. In this paper, they use evolutionary" }, { "end": 1624.3200000000002, "start": 1617.44, "text": " methods to learn the Hebbian update rules, and they compare mostly with using evolutionary" }, { "end": 1632.8400000000001, "start": 1624.3200000000002, "text": " methods to learn the fixed weights. Okay, the exact evolutionary step they use right" }, { "end": 1638.88, "start": 1632.8400000000001, "text": " here is the following. So H_t here is going to be the thing that you learn. Now, as compared" }, { "end": 1644.64, "start": 1638.88, "text": " to W being the network weights, H is going to be the Hebbian weights, since we learn" }, { "end": 1652.0400000000002, "start": 1644.64, "text": " the Hebbian weights. So to update each agent, they'll take the" }, { "end": 1658.2, "start": 1652.0400000000002, "text": " Hebbian weights, and this here is how you update, right? This is your delta H. How" }, { "end": 1666.72, "start": 1658.2, "text": " do you update the Hebbian weights? Well, what you do is you perform random perturbations." }, { "end": 1673.2, "start": 1666.72, "text": " So I take my weights and I add noise. I just add noise. Okay, so I'm here, and I just" }, { "end": 1680.64, "start": 1673.2, "text": " make a bunch of versions of it. And then I observe: how well are these versions doing?" }, { "end": 1685.8, "start": 1680.64, "text": " So how well are my random perturbations doing? This is going to be the fitness; F_i right here" }, { "end": 1691.44, "start": 1685.8, "text": " is the fitness. And then I'm just going to perform a weighted average. So this" }, { "end": 1700.3, "start": 1691.44, "text": " is my weighted average of these new solutions. Okay, so if this solution here did pretty" }, { "end": 1707.32, "start": 1700.3, "text": " well, and this solution did pretty poorly, I want to walk, you know, in this direction." }, { "end": 1714.6399999999999, "start": 1707.32, "text": "
}, { "end": 1714.6399999999999, "start": 1707.32, "text": " And then again, I do the same thing here from here, I do a bunch of perturbations. And maybe" }, { "end": 1719.24, "start": 1714.6399999999999, "text": " this one did pretty well. And this one did pretty poorly, I want to walk in this direction," }, { "end": 1727.7, "start": 1719.24, "text": " and so on. Okay, so that's how you you'll change the you'll change weights or rules" }, { "end": 1734.44, "start": 1727.7, "text": " or whatever you want in an evolutionary method. As you know, it's pretty easy. It's easier" }, { "end": 1741.6200000000001, "start": 1734.44, "text": " than reinforcement learning, no back prop, no nothing. Basically a black box optimizer." }, { "end": 1747.48, "start": 1741.6200000000001, "text": " There are more complicated evolutionary methods, but no, we don't go into those here right" }, { "end": 1756.3600000000001, "start": 1747.48, "text": " now. Okay, so again, I've already shown you these results. Now I said these static weights" }, { "end": 1763.28, "start": 1756.36, "text": " are also with evolutionary method, they also report what you would get with like a RL approach," }, { "end": 1774.32, "start": 1763.28, "text": " like PPO, you would get kind of the same thing as they get as they get here. So, oh, sorry," }, { "end": 1779.4399999999998, "start": 1774.32, "text": " this is not the same as the table. Yeah, I was confused for a second. This here is for" }, { "end": 1786.64, "start": 1779.44, "text": " the car environment. Okay, this is this vision based environment. So with their method, they" }, { "end": 1794.26, "start": 1786.64, "text": " get like an 870 rewards with the heavy and based approach. With the static weight, but" }, { "end": 1799.8, "start": 1794.26, "text": " still evolutionary method, they get a much lower reward. In fact, the heavy and based" }, { "end": 1806.56, "start": 1799.8, "text": " approach is about the same as you get here with an RL algorithm. And as we said, the" }, { "end": 1815.2, "start": 1806.56, "text": " RL algorithm more complicated. And if you use like a state of the art RL algorithm," }, { "end": 1822.12, "start": 1815.2, "text": " not just PPO, you get a bit of a better performance, but not that much if you look at the actual" }, { "end": 1829.24, "start": 1822.12, "text": " numbers. So, you know, pretty cool to see that, again, this is not outperforming anything." }, { "end": 1837.32, "start": 1829.24, "text": " This is simply showing that you can do that. They do a number of experiments where they" }, { "end": 1843.32, "start": 1837.32, "text": " go in the episode and they kind of change stuff in the episode. And one cool thing here" }, { "end": 1850, "start": 1843.32, "text": " is that they go and you know, this is an episode. So at the episode, you start with a random" }, { "end": 1856.44, "start": 1850, "text": " network each time in this heavy and setting. And then pretty quickly, the rules adapt for" }, { "end": 1863.24, "start": 1856.44, "text": " a high performing right. So it starts to walk, it reconfigures itself and starts to walk." }, { "end": 1867.74, "start": 1863.24, "text": " The reward here again, it doesn't have access to that, but we can measure it, of course." }, { "end": 1874.72, "start": 1867.74, "text": " And then at this step A right here, they simply go to the weights and zero them out. So they" }, { "end": 1881.98, "start": 1874.72, "text": " just delete these weights right here. 
And only 10 time steps later, it has reconfigured" }, { "end": 1888.8, "start": 1881.98, "text": " itself, as you can see right here, in order to walk again. So 10 time steps later, it reconfigures" }, { "end": 1894.92, "start": 1888.8, "text": " itself. And after a short while right here, it's back to its kind of" }, { "end": 1904.56, "start": 1894.92, "text": " original performance, as you can see. So I'd say that's fairly impressive, in this very" }, { "end": 1911.4, "start": 1904.56, "text": " short amount of time being able to recover from such an intervention. Of course," }, { "end": 1916.3400000000001, "start": 1911.4, "text": " if you do this to your policy network that's statically learned, it's going to be garbage." }, { "end": 1921.44, "start": 1916.3400000000001, "text": " But I guess the fair comparison would be to delete the Hebbian rules themselves. And" }, { "end": 1929.5600000000002, "start": 1921.44, "text": " you know, so it's not like this can adapt to new situations or something" }, { "end": 1934.3600000000001, "start": 1929.5600000000002, "text": " like this; this is still learned for particular environments, right. But the point here is" }, { "end": 1941.1000000000001, "start": 1934.3600000000001, "text": " that you learn the rules, and this is kind of a study on neuroplasticity. Now, my question" }, { "end": 1948.6399999999999, "start": 1941.1, "text": " actually would be why this diagonal pattern appears. And I have not seen like a clear" }, { "end": 1955.6999999999998, "start": 1948.6399999999999, "text": " explanation. Especially this anti-diagonal pattern; it's not so much here in the output" }, { "end": 1961.9599999999998, "start": 1955.6999999999998, "text": " layer, right, this is the output layer, there are 21 actions or so, and this one is this" }, { "end": 1968.1, "start": 1961.9599999999998, "text": " dimension. So not that much there. But there seems to be this rule. And this is not" }, { "end": 1973.48, "start": 1968.1, "text": " the case at the beginning, right, you saw at the beginning it" }, { "end": 1982.84, "start": 1973.48, "text": " was a pretty random matrix. So why? Why? Yeah, here, pretty random. And then there's this" }, { "end": 1989.76, "start": 1982.84, "text": " diagonal pattern; I don't know why. If you know, let me know. I mean, it's anti-diagonal;" }, { "end": 1994.48, "start": 1989.76, "text": " maybe it is actually diagonal, and the forward, the fully connected layer, is just defined" }, { "end": 2005.56, "start": 1994.48, "text": " as something like W transpose times x. But maybe this also depends on the random initialization." }, { "end": 2012.1200000000001, "start": 2005.56, "text": " But there is no inherent reason why a particular neuron would, you know, care about sending" }, { "end": 2022.6, "start": 2012.1200000000001, "text": " information to like the same height of neuron on the other side. Or is there? I don't know." }, { "end": 2029.8799999999999, "start": 2022.6, "text": " So is this a property of the evolutionary method or of the learning rules? It seems not, because" }, { "end": 2038.56, "start": 2029.8799999999999, "text": " the learning rules don't depend on the position. I'm genuinely confused about this. And maybe," }, { "end": 2042.9199999999998, "start": 2038.56, "text": " you know, maybe they've written it somewhere and I've just overlooked it, though. 
They" }, { "end": 2047.12, "start": 2042.9199999999998, "text": " do reference it, they say, oh, there's this diagonal pattern appearing, but I don't think" }, { "end": 2057.2799999999997, "start": 2047.12, "text": " they ever say why it is diagonal. Okay, I might just be real dumb. Yeah." }, { "end": 2061.44, "start": 2057.2799999999997, "text": " So they also, you know, they do some more experiments. They show, for example, that" }, { "end": 2067.18, "start": 2061.44, "text": " if you just have random Hebbian coefficients, then your algorithm just kind of jumps around" }, { "end": 2073.12, "start": 2067.18, "text": " in weight space around the zero point. However, if you actually learn these Hebbian" }, { "end": 2078.2799999999997, "start": 2073.12, "text": " coefficients, as they do, you have like this clear attractor here. And you have these kind" }, { "end": 2085.72, "start": 2078.2799999999997, "text": " of oscillating curves when you do that. And you can see here in" }, { "end": 2091.24, "start": 2085.72, "text": " the different situations where things are damaged, and so on. So all in all, I think" }, { "end": 2098.12, "start": 2091.24, "text": " it's a pretty interesting study. And I think this neuroplasticity is a different" }, { "end": 2103.6, "start": 2098.12, "text": " way; you know, it's unclear if it will ever deliver the performance that RL delivers," }, { "end": 2109.8199999999997, "start": 2103.6, "text": " but certainly there are situations where such plasticity is desired. And if we can also" }, { "end": 2116.2599999999998, "start": 2109.8199999999997, "text": " combine this with greater generalization performance, then, you know, we have agents that can quickly" }, { "end": 2123.4, "start": 2116.2599999999998, "text": " kind of reconfigure. And a lot of work by this kind of open-ended learning community" }, { "end": 2130.36, "start": 2123.4, "text": " also plays into this role. All in all, a pretty cool, non-standard way of doing things." }, { "end": 2134.96, "start": 2130.36, "text": " Last thing, the broader impact statement. Every now and then we'll look at a broader" }, { "end": 2139.2000000000003, "start": 2134.96, "text": " impact statement, since these are new, just to get kind of an overview of what they look" }, { "end": 2143.92, "start": 2139.2000000000003, "text": " like. So they say the ethical and future societal consequences of this work are hard to predict," }, { "end": 2150.42, "start": 2143.92, "text": " but likely similar to other work dealing with more adaptive agents and robots. In particular," }, { "end": 2154.04, "start": 2150.42, "text": " giving the robots the ability to still function when injured could make it easier" }, { "end": 2161.16, "start": 2154.04, "text": " for them to be deployed in areas that have both a positive and negative impact on society." }, { "end": 2167.52, "start": 2161.16, "text": " Okay, well, again, it's not really giving robots the ability to still function when" }, { "end": 2174.36, "start": 2167.52, "text": " they're injured. First I thought, okay, they train it when it's fully" }, { "end": 2181.48, "start": 2174.36, "text": " functioning, but then they damage it during test time. 
But as I understand" }, { "end": 2186.76, "start": 2181.48, "text": " the paper, they already train it with the damaged versions, they just don't tell the" }, { "end": 2195.6800000000003, "start": 2186.76, "text": " algorithm which version it is in right now. So it's not the same as being able to work" }, { "end": 2201, "start": 2195.6800000000003, "text": " when injured, unless you've specifically trained for it, as in this case. Again, I could be wrong" }, { "end": 2207.34, "start": 2201, "text": " about this. Yeah. In the very long term, robots that can adapt could help in industrial automation" }, { "end": 2213.6, "start": 2207.34, "text": " or help to care for the elderly. On the other hand, more adaptive robots could also be more" }, { "end": 2218.28, "start": 2213.6, "text": " easily used for military applications. The approach presented in this paper is far from" }, { "end": 2222.64, "start": 2218.28, "text": " being deployed in these areas, but it's important to discuss its potential long term consequences" }, { "end": 2229.92, "start": 2222.64, "text": " early on. Now, okay, so let's evaluate the broader impact statement. Well, the" }, { "end": 2238.4, "start": 2229.92, "text": " first check to do is always to simply replace whatever their method is with the word technology." }, { "end": 2248.76, "start": 2238.4, "text": " Okay, so let's do that. In the very long term, technology could help in industrial automation" }, { "end": 2254.4, "start": 2248.76, "text": " or help to care for the elderly. Check. On the other hand, technology could also be more" }, { "end": 2261.08, "start": 2254.4, "text": " easily used for military applications. Check. Technology is far from being deployed in these" }, { "end": 2269.4, "start": 2261.08, "text": " areas. Okay, I guess some technology isn't, but advanced technology. Yeah. So again, the" }, { "end": 2273.88, "start": 2269.4, "text": " rule for broader impact statements seems to be: you take whatever your method is and you" }, { "end": 2282.78, "start": 2273.88, "text": " go up until you find, you know, you're basically at technology or something equivalent, because" }, { "end": 2288.5600000000004, "start": 2282.78, "text": " actually, I've never seen a broader impact statement that writes about the actual" }, { "end": 2294.36, "start": 2288.5600000000004, "text": " thing in the paper; they always go up like one layer or two, and then it basically regresses" }, { "end": 2302.0400000000004, "start": 2294.36, "text": " to technology, even though very few papers actually would be able to discuss their particular" }, { "end": 2309.2000000000003, "start": 2302.0400000000004, "text": " thing. And then, in terms of guidelines on broader impact statements," }, { "end": 2313.96, "start": 2309.2, "text": " this one is missing something; there's always this holy trifecta. So the holy trifecta" }, { "end": 2318.6, "start": 2313.96, "text": " is, you go like, you know, like you're a Catholic, you go with your finger" }, { "end": 2325.3599999999997, "start": 2318.6, "text": " to your head, chest, left and right, and you say: technology good, technology bad, technology" }, { "end": 2331.7999999999997, "start": 2325.3599999999997, "text": " biased. Okay, so you want to write a broader impact statement, go up the layers: technology," }, { "end": 2339.52, "start": 2331.8, "text": " good, bad, bias. And we're missing the bias here. 
So, you know, I'm just following" }, { "end": 2343.76, "start": 2339.52, "text": " what these guidelines for broader impact statements are. I don't make the rules. I'm sorry that" }, { "end": 2350.96, "start": 2343.76, "text": " the Hebbians make the rules apparently. I'm not Hebbian. Okay, I hope you've enjoyed" }, { "end": 2356.1200000000003, "start": 2350.96, "text": " this paper and this video. Let me know what you think. Check out the videos that they" }, { "end": 2361.56, "start": 2356.12, "text": " have. I'll link them. And with that, I wish you a pleasant day. Bye bye." } ]
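The per-weight Hebbian rule discussed in the transcript segments above is a linear function of the pre- and postsynaptic outputs, their product, and a constant. A minimal NumPy sketch of one within-episode step follows; the layer sizes and the dummy episode loop are illustrative assumptions, and only the form of the rule, delta w_ij = eta_ij * (A_ij * o_i * o_j + B_ij * o_i + C_ij * o_j + D_ij), is taken from the description above.

```python
import numpy as np

def hebbian_step(W, o_pre, o_post, A, B, C, D, eta):
    """One within-episode update of a fully connected layer's weights.

    W, A, B, C, D, eta: (n_pre, n_post) arrays, i.e. one learned rule per weight.
    o_pre: (n_pre,) presynaptic outputs; o_post: (n_post,) postsynaptic outputs.
    """
    corr = np.outer(o_pre, o_post)  # the o_i * o_j "fire together" term
    dW = eta * (A * corr + B * o_pre[:, None] + C * o_post[None, :] + D)
    return W + dW

# Weights start random and change at every time step of the episode, while
# A, B, C, D, eta stay fixed; those are only changed by the outer
# evolutionary loop after the episode ends.
rng = np.random.default_rng(0)
n_in, n_out = 4, 3
W = rng.normal(size=(n_in, n_out))
A, B, C, D, eta = (rng.normal(size=(n_in, n_out)) for _ in range(5))
for t in range(10):
    o_pre = rng.normal(size=n_in)  # stand-in for real observations/activations
    o_post = np.tanh(o_pre @ W)    # layer output
    W = hebbian_step(W, o_pre, o_post, A, B, C, D, eta)
```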
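For contrast, the "reward times gradient" policy-gradient update mentioned in the segments above is the standard REINFORCE (score-function) estimator; this is textbook material rather than anything specific to this paper:

```latex
\Delta W \;\propto\; \nabla_W\, \mathbb{E}_{\tau \sim \pi_W}\!\left[ R(\tau) \right]
\;=\; \mathbb{E}_{\tau \sim \pi_W}\!\left[ R(\tau) \sum_t \nabla_W \log \pi_W(a_t \mid s_t) \right]
```

If the reward is high, the weights move to make the taken actions more likely; if it is low (or negative after a baseline), they move the other way.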
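And the evolutionary step described above (perturb the Hebbian coefficients, evaluate fitness, move toward the fitness-weighted average of the perturbations) can be sketched in the generic OpenAI-ES style; the population size, sigma, and alpha below are made-up values, and fitness_fn stands in for running one full episode with the given coefficients:

```python
import numpy as np

def es_step(h, fitness_fn, pop_size=64, sigma=0.1, alpha=0.01, rng=None):
    """One evolution-strategy update of the flattened Hebbian coefficients h.

    No gradient flows through the episode; fitness_fn is a black box that
    returns the episodic reward for a given coefficient vector.
    """
    rng = rng or np.random.default_rng()
    eps = rng.standard_normal((pop_size, h.size))  # random perturbations
    F = np.array([fitness_fn(h + sigma * e) for e in eps])
    F = (F - F.mean()) / (F.std() + 1e-8)          # normalize fitness scores
    # step toward the fitness-weighted average of the perturbations
    return h + alpha / (pop_size * sigma) * eps.T @ F
```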
nv6oFDp6rNQ
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Hopfield Networks is All You Need (Paper Explained)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "schmidhuber", "hochreiter", "lstm", "gru", "rnn", "hopfield", "attention", "attention is all you need", "transformer", "bert", "query", "key", "value", "routing", "pattern", "retrieval", "store", "error", "exponental", "binary", "continuous", "hopfield network", "lse", "energy function", "update rule", "metastable", "separation" ]
#ai #transformer #attention Hopfield Networks are one of the classic models of biological memory networks. This paper generalizes modern Hopfield Networks to continuous states and shows that the corresponding update rule is equal to the attention mechanism used in modern Transformers. It further analyzes a pre-trained BERT model through the lens of Hopfield Networks and uses a Hopfield Attention Layer to perform Immune Repertoire Classification. OUTLINE: 0:00 - Intro & Overview 1:35 - Binary Hopfield Networks 5:55 - Continuous Hopfield Networks 8:15 - Update Rules & Energy Functions 13:30 - Connection to Transformers 14:35 - Hopfield Attention Layers 26:45 - Theoretical Analysis 48:10 - Investigating BERT 1:02:30 - Immune Repertoire Classification Paper: https://arxiv.org/abs/2008.02217 Code: https://github.com/ml-jku/hopfield-layers Immune Repertoire Classification Paper: https://arxiv.org/abs/2007.13505 My Video on Attention: https://youtu.be/iDulhoQ2pro My Video on BERT: https://youtu.be/-9evrZnBorM Abstract: We show that the transformer attention mechanism is the update rule of a modern Hopfield network with continuous states. This new Hopfield network can store exponentially (with the dimension) many patterns, converges with one update, and has exponentially small retrieval errors. The number of stored patterns is traded off against convergence speed and retrieval error. The new Hopfield network has three types of energy minima (fixed points of the update): (1) global fixed point averaging over all patterns, (2) metastable states averaging over a subset of patterns, and (3) fixed points which store a single pattern. Transformer and BERT models operate in their first layers preferably in the global averaging regime, while they operate in higher layers in metastable states. The gradient in transformers is maximal for metastable states, is uniformly distributed for global averaging, and vanishes for a fixed point near a stored pattern. Using the Hopfield network interpretation, we analyzed learning of transformer and BERT models. Learning starts with attention heads that average and then most of them switch to metastable states. However, the majority of heads in the first layers still averages and can be replaced by averaging, e.g. our proposed Gaussian weighting. In contrast, heads in the last layers steadily learn and seem to use metastable states to collect information created in lower layers. These heads seem to be a promising target for improving transformers. Neural networks with Hopfield networks outperform other methods on immune repertoire classification, where the Hopfield net stores several hundreds of thousands of patterns. We provide a new PyTorch layer called "Hopfield", which allows to equip deep learning architectures with modern Hopfield networks as a new powerful concept comprising pooling, memory, and attention. 
GitHub: https://github.com/ml-jku/hopfield-layers Authors: Hubert Ramsauer, Bernhard Schäfl, Johannes Lehner, Philipp Seidl, Michael Widrich, Lukas Gruber, Markus Holzleitner, Milena Pavlović, Geir Kjetil Sandve, Victor Greiff, David Kreil, Michael Kopp, Günter Klambauer, Johannes Brandstetter, Sepp Hochreiter Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://discord.gg/4H8xxDF BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher Parler: https://parler.com/profile/YannicKilcher LinkedIn: https://www.linkedin.com/in/yannic-kilcher-488534136/ If you want to support me, the best thing to do is to share out the content :) If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this): SubscribeStar: https://www.subscribestar.com/yannickilcher Patreon: https://www.patreon.com/yannickilcher Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2 Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n
Hi there. Today we'll look at Hopfield Networks is All You Need by researchers from the Johannes Kepler University in Linz and the University of Oslo. So on a high level, this paper proposes a new type of Hopfield network that generalizes modern Hopfield networks from binary patterns to continuous patterns, and then shows that the retrieval update rule of these new Hopfield networks is equivalent to the attention mechanism that's used in modern transformers. It's actually a more general formulation of the attention mechanism, and therefore it can be used to do a variety of things to improve modern deep learning. It also has a companion paper where it applies this to some kind of immunology research and achieves state of the art in a task that is specifically suited to this type of attention. Alright, let's dive in together, we'll go over what this paper does and what it proposes and so on. If you like videos like this, consider subscribing, you know, sharing it out, and I hope you're enjoying this. Alright, also thanks to my Discord community for, you know, very helpfully bringing me up to speed on this paper. Super interesting discussions there. If you're not on our Discord yet, I invite you to join, it's fun. Okay, so what is a Hopfield network? A Hopfield network is a pretty old conceptualization of a neural network. You can conceptualize it as a bit of a neural network; so let's say we have five neurons or something like this. What your goal would be is to have a neural network where you can store so-called patterns, and a pattern in this case would be a binary string of size 5, so for example 10100 or 11010, and you'd have a list of these patterns. Your goal would be to store these patterns in the neural network, and here, you know, we'll just consider everything to be sort of connected to everything else, and you adjust the weights somehow. So this is, as I said, kind of an old model: you adapt the weights such that you store these patterns. And what does it mean for a pattern to be stored? If you have stored a pattern, you will then be able to retrieve it, and you retrieve a pattern in these kind of old-style Hopfield networks by providing a partial pattern. So what you'll say is, for example, I want a pattern that starts with 110, and you give that to the network, and there would be a so-called update rule, and the update rule is kind of an internal rule. So let's just go through this. So here, this 110, maybe this is 110, and then they would kind of send messages around, so this update rule would somehow adjust the value of this and this neuron here to what's most compatible with the network weights, and if the network weights have been adjusted correctly, it will turn out at the end of applying this update rule that this is a 1 and this is a 0, and therefore this pattern here is retrieved. Now had I input 101 at the beginning, then the outcome would be different; hopefully this pattern here would have been retrieved. So you can see the applications of this: you can have the first three digits as sort of a database key and then the last ones as sort of the value that you store along with it, and then you can simply provide the first few. You don't always have to provide three, so this all depends.
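To make the store-and-complete mechanics concrete, here is a toy version of this classic binary setup with the textbook Hebbian outer-product storage and a sign update; this is a generic illustration, not the paper's construction, and breaking sign ties toward +1 is an arbitrary implementation choice:

```python
import numpy as np

# the two example patterns as +/-1 vectors (1 -> +1, 0 -> -1)
patterns = np.array([[1, -1, 1, -1, -1],   # 10100
                     [1, 1, -1, 1, -1]])   # 11010

# Hebbian storage: sum of outer products, no self-connections
W = patterns.T @ patterns
np.fill_diagonal(W, 0)

# query: first three bits 110 known, last two unknown (set to 0)
state = np.array([1, 1, -1, 0, 0], dtype=float)
for _ in range(5):                      # iterate the update rule to a fixed point
    h = W @ state
    state = np.where(h >= 0, 1.0, -1.0)
print(state)                            # [ 1.  1. -1.  1. -1.], i.e. 11010
```

Starting from 101 instead (state = [1, -1, 1, 0, 0]) retrieves the other stored pattern, 10100.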
This is sort of, as I said, an old conceptualization of neural networks; people were imagining that this is kind of how the brain works, you know, fire together, wire together. And with research into this, it turns out: you might think, you know, there are five neurons, so maybe I can store five different patterns accurately, because if I store too many patterns, right, if I have many, many patterns, then I can't expect to be able to retrieve all the patterns again, because some of them will just be so similar that, you know, many will start maybe with this, and I won't have a chance to retrieve the one I want, or the update rule will make a mistake. So you might think this might be like five, because I have five neurons, or maybe ten, because I have ten connections. But it turns out that in modern Hopfield networks, with the appropriate update rule, you can store exponentially many patterns in these networks, exponentially many in the dimension of the patterns, and here I guess that would be the length of the pattern. So the kind of storage capacity of these networks is a little bit surprising, and this paper here generalizes that to continuous states. So what do we mean with continuous states? I guess I mean continuous patterns: no longer is a pattern a binary string, but a pattern now is a sequence of floating point numbers, like 0.5, 1.3, and so on, and you know, a sequence of floating point numbers is naturally depicted as a vector. Okay, so our patterns are going to be different vectors that we store, and you know, in high dimensions the vectors will be kind of well separated from each other, as long as we don't have too many. But this paper shows that all these properties for the modern Hopfield networks that hold for binary strings still hold if you go to these vector patterns. That means you can store exponentially many patterns in the dimension of the vector, which is pretty surprising, right, because you'd think that after you have one vector per dimension it might get a bit shaky, but no, you can actually store exponentially many. That's pretty surprising, and this paper is a lot about how to do that, and the fact that that happens, and so on. So, we've talked about update rules for these kinds of Hopfield networks, and I haven't really specified what that is; I've just said that, you know, I enter a pattern, and then the network does something, and out comes whatever pattern matches my query. So this here is called a query; this is on purpose, like the kind of overlap between the attention mechanism lingo and the Hopfield network lingo, we're going to conflate the two to kind of make clear where the two overlap. If you don't know what an attention mechanism is or aren't familiar with it, watch my video on Attention is All You Need; once you watch that, this video will make a lot more sense. Alright, so what does the update rule do specifically? There isn't only one, right; there are many different proposals of Hopfield networks, and they all lead to different properties. But what an update rule does ultimately is it minimizes what's called an energy. So every type of Hopfield network is associated with an energy function, and the energy function of the modern Hopfield network for binary strings is this energy function right here.
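The formula referred to here is on screen rather than in the transcript; for stored binary patterns x_i and state xi, it is presumably the exponential energy of the modern Hopfield network (Demircigil et al.), of which the paper's continuous energy is a generalization:

```latex
E(\xi) \;=\; -\sum_{i=1}^{N} \exp\!\left( x_i^{\top} \xi \right)
```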
So the x's here are the patterns, these are whatever is stored in the network; this is the kind of state of the Hopfield network, and the xi here is the query that you enter into the network. And then the energy here tells you: you have to minimize this quantity in order to retrieve the pattern that you want. Okay, now, we are never directly working with the energy as such. What you could do is, for example, use backprop or something, use gradient descent, to decrease the energy, but usually along with an energy function comes an update function, and the update function is what I've talked about here: like, you do something, and then the network does something, and then you get the pattern out. What the network does is it minimizes its energy function, and the update rule is made such that the corresponding energy function is minimized. So the energy function is more like a theoretical consideration: you say, okay, here is my energy function of my Hopfield network, and there will be a corresponding update rule that minimizes that energy function, and if you use that update rule, maybe multiple times, then the energy function will be minimized and you will have retrieved your pattern. Or not: if you have too many patterns stored, it might also fail, right. So they say what the update rules are in the text here for the old Hopfield networks, but we're not really interested in the old ones; we're interested in the ones that this paper cares about, namely where the patterns that you store in the Hopfield network are these vectors, our vector patterns, and the query is also a vector pattern. So you want to store all of these patterns into the Hopfield network, I'm gonna draw it like this here, I'm gonna store them into the Hopfield network, and then after that you want to come up with a query, and the query is like this. In the case of the binary strings we had something like: well, I sort of know half of my binary string. Now, in the vector Hopfield network it's more like: well, I sort of kind of know the direction that my vector should point in, and what you want to retrieve is the vector that has kind of a large inner product with your query. Okay, so if I enter this query into my Hopfield network, what I hope is that this vector here is retrieved. Now you see it's not exactly the same vector; like, if I translate that over here, it's maybe something like this, so they are different. But you want to say: well, I kind of know what I want, I kind of want something like this, and then the Hopfield network would answer with: oh, I have something like this, it's this right here. Okay, so the connection to the attention mechanism should become pretty obvious right now, but, you know, to actually establish this formally is kind of the point of this paper, and, you know, it's pretty cool to see. So they formulate this new energy right here; this is the energy of this new continuous Hopfield network. Specifically, they have to have this term right here, because they now have continuous states and continuous queries. If you minimize the energy, it basically means that your query can never, you know, go to infinity, because you have the query right here in the energy function. The update rule is this right here, and we'll look at that in a moment, but remember, the update rule is what you actually implement in code. So if I have a query right here, I plug it in here, this is the state of my Hopfield network, and I apply this rule, maybe multiple times, and out comes the kind of answer of the Hopfield network to my question: I input this, and out comes this after I apply the update rule, maybe multiple times, right. And interestingly, you can already see that, if you rewrite a bunch of these quantities, so if you rewrite the beta here, which is the softmax temperature, in a way to be 1 over square root of d, and if you take the xi here to be the query matrix, and if you take the x here to be the key matrix, then this is equivalent to the update, or sorry, the attention mechanism of a modern transformer.
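Written out (reconstructed from the paper, up to additive constants), the new energy and its update rule look as follows; the quadratic term is the one that keeps the query xi finite:

```latex
E(\xi) \;=\; -\beta^{-1} \log \sum_{i=1}^{N} \exp\!\left(\beta\, x_i^{\top} \xi\right)
\;+\; \tfrac{1}{2}\,\xi^{\top}\xi \;+\; \text{const},
\qquad
\xi^{\text{new}} \;=\; X\, \operatorname{softmax}\!\left(\beta\, X^{\top} \xi\right)
```

where X = (x_1, ..., x_N) stacks the stored patterns. With beta = 1/sqrt(d_k), queries stacked as rows of Q, and the patterns acting as keys K (and, after projection, values V), the update becomes the familiar softmax(Q K^T / sqrt(d_k)) V.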
So that's the point of the paper: we can look at the transformer attention mechanism as a Hopfield network. And they have this interesting diagram at the end right here, in the appendix. You know, this is typical, I guess, Sepp Hochreiter; I remember the SELU paper had like 60 pages of proofs in the appendix. This also, this is like a 70-page appendix, crazy. But at the end of the appendix you'll find this diagram right here. Now, usually in an attention mechanism you have whatever the input is, so you have an input right here; attention mechanisms, or at least transformers, work on sequences or sets of objects, and from these you'll generate three things: you'll generate the queries, the keys, and the values. Now you can either generate the queries from the same objects, which would be self-attention, or you can generate the queries from like a different object over here; it doesn't matter too much for our discussion, but either you, you know, have a reference input, or you have this kind of same input all the way, and then what you do is use three different heads, or three different matrices, to transform that input into queries, keys, and values. So I often conceptualize this as: you have kind of your input set, and each of the inputs outputs a key, which would be a vector, and also each one outputs a query, so I often draw this here, the same sequence, and each one outputs a query. The query is sort of a request for information, and the key exposes something about the input here. So this could be a sentence down here, this could be "my cat is very pretty", and the key vector right here could encode something like "this is a noun" or "this is an animal" or anything like this, right. And the query here, it could ask for other things. So for example, since this is "cat", this vector right here, the query vector, is generated from that token "cat". Now it could recognize that "cat" is a noun, and it could ask the other nodes, basically say: are there any adjectives around here? Because, you know, it itself is a noun, it's the object of the sentence, right; it could ask, are there any kind of adjectives that describe the object? Because that would naturally be a thing to ask: if you were the noun, you would want to know, are there any kind of modifiers for me? So it could output the query, and the query here, you know, this direction could mean "adjectives", and you see here, the word "pretty" is an adjective, so it itself would output a key that says, by the way, I'm an adjective, right. So if this node asks for an adjective, and this one outputs the adjective vector, then because the inner product between the two things is high, this will be routed here. So an attention mechanism is basically information routing; that's how I always describe it.
But in this paper, we look at it more like: these here are the patterns that are stored in a Hopfield network, and by inputting a query, with the dot product being the update rule of the Hopfield network, I retrieve from the Hopfield network the appropriate pattern that I ask for. Okay. And then, you know, the values are simply a modification of the keys in this form, but a lot of people also take keys and values to be the same thing. But this routing of information happens here, where you multiply the queries and the keys, and then you put a softmax over them. Okay, so if you just look from the perspective of a single node, like this node here, this "cat" node: what it would do is it would inner product its own query vector with all of the key vectors, right, so it would build an inner product with all of these, and then it would normalize, it would put it through a softmax, which will kind of give it a distribution, right. So here, "my" also matches; well, "my" is also very important for "cat"; this is just an accident, I did not plan this. Many things match, but in our example we would just say that this last one, it's not only higher, it's also wider, it matches very well, right. And so the information routing would route mostly information from this "pretty" token to the "cat" token, which makes sense in our case, right. This is the attention mechanism. Now, since we are interpreting this as a Hopfield network, and the update rule here is the dot product, you can actually think of applying this rule multiple times. So what happens now, and this is where this update rule comes in: what happens if we take this distribution and we don't aggregate the values? Like, usually we would aggregate the values by this distribution; what if we aggregate the keys by this distribution? Okay, what comes out? Well, if we look at this, and, you know, let's just assume that this key right here matches really well, but the others also match a little bit: what would come out would be a weighted average where a lot of weight is put on this particular key. So what turns out would be something that's very close to that key; you can see, I'm going to draw the old key here in green, and I'm going to draw the old query in blue. So you see that whatever comes out is not the query, but it's also not only that key that matches, right; it's kind of a weighted average, but with that key dominating. Okay, now, in a Hopfield network, what we would do is we would go again: we would put this new thing, the red thing, in place of the query vector. Okay, so we would use these aggregated keys, this weighted average, as a new query vector for that node right here. So we duplicate that node over here, use that query vector again, and do the same thing again: okay, inner product with all of the key vectors. And now, since this is already an aggregate of the key vectors, what's going to happen? Of course, the distribution that's going to come out is going to be weighted even more heavily, so let's make it even wider, into the direction of that key that matches. Okay, and you can pretty clearly see, if I do that iteratively, then that will lead to a situation where everything is very low, except that one key will sort of dominate the distribution, and be ultra high and ultra wide. Okay, and that's exactly how a Hopfield network works, right: I would input the query, which would be sort of what I want, I kind of know what I want, okay, and then I apply this rule multiple times, and with each time I refine, refine, refine, until I decide on a pattern.
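This sharpening is easy to check numerically; here is a small sketch applying the update xi <- X softmax(beta X^T xi) a few times, where the dimension, number of patterns, beta, and noise level are all arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)
d, N, beta = 16, 8, 8.0
X = rng.normal(size=(d, N))
X /= np.linalg.norm(X, axis=0)            # N stored patterns ("keys") as unit columns
xi = X[:, 3] + 0.3 * rng.normal(size=d)   # noisy query near pattern 3

for step in range(4):
    s = beta * X.T @ xi                   # similarities to all stored patterns
    p = np.exp(s - s.max())
    p /= p.sum()                          # softmax distribution over patterns
    xi = X @ p                            # new query: weighted average of patterns
    print(step, np.round(p, 3))           # mass should pile up on index 3
```

If several stored patterns were close to the query instead, the iteration would settle on their average, which is the metastable case discussed next.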
The Hopfield network is made for pattern retrieval, and these here are the patterns that I want to retrieve. So here, the patterns aren't kind of stored in the network beforehand, but the patterns are also generated like in an attention layer; so the keys are generated by the previous layer, or by these matrices, but that doesn't matter for the Hopfield network update rule. So you see here that the attention mechanism can be interpreted as simply making one step of this update rule, but you can think of actually making multiple steps and retrieving the particular key, so, you know, deciding on a sort of hard routing of particular information. Now, that only works if there are no other vectors that are close to that particular key, right. So if the query is this, and, you know, the way I drew it here, you can see that there are many: there is this one and this one and this one that matches. So technically, the way I drew it, what would happen most likely is, no matter how many times you apply your update rule, it would sort of result in kind of the average of the three keys, right, because they're all matching, and they would all contribute to that weighted average of the query in the next step, and then that means basically the convergence would be to something in the middle. And that's going to be a central point of this paper: in which situation we are. So they call the first part retrieving a single pattern, and they call the second situation, where you have multiple patterns that all match, that are not well separated from each other, they call this a metastable state. And it's going to be pretty interesting to look at transformers, like BERT language models, and look at where they actually are: are they actually operating in this single-pattern retrieval mode, or are they operating in the metastable state mode? Alright, so here you can see it in the diagram. The only thing different from an attention mechanism is this branch right here. So here you ask: do you want to do multiple updates? After you've multiplied the queries and the keys, do you want to do multiple updates? If yes, so if you're in this Hopfield network situation where you want to do multiple updates, then you go back, as you can see, and you use the keys together with the output of the softmax to generate a new query. So this query q here is now generated from the output here and the keys; so the keys are the same, this is the same thing, it's just put here twice. Okay, this is exactly what we discussed. Okay, I hope it's somehow clear that the attention mechanism is simply a one-step Hopfield network pattern retrieval algorithm, with a particular update rule that matches this energy function that they propose right here. Of course, they do this, you know, particularly because the update rule that turns out is the transformer update rule, but I actually don't know if they backwards-engineered the energy function to match the transformer, or if they first came up with the continuous Hopfield networks and then just kind of discovered that it's like the transformer; we'll maybe never find out. Okay, so let's go on. There are a couple of theorems, I believe there are four or five theorems right here, that kind of make some points about this stuff, and we'll go through them; we won't go through the proofs or anything super in depth, but it's pretty cool to go through them, and they are proved very rigorously; as I said, there's a 70-page appendix, so
So have a look at that if you're up for it. Okay, so they say: here we have an update rule, this is the update rule for our new Hopfield networks. The first theorem they state is that the update rule they propose converges globally: if we apply the update rule repeatedly, the energy converges, and it converges to the energy of a fixed point for t going to infinity. I don't want to say anything mistaken here or claim too much, but this basically connects the update rule to the energy; it's showing that this really is the update rule for that particular energy function. Now by itself that's not super-duper interesting yet, but then we get to Theorem 2. Theorem 2 says: for the iteration (that's the update rule we just looked at) this convergence holds as t goes to infinity for some stationary point, and furthermore the quantity between the update at t+1 and the update at t goes to zero. So not only does the energy converge, the iterates themselves converge: the algorithm actually converges, the individual updates of the algorithm. So this new \xi at some point will no longer change, because the norm between it and the previous iterate goes to zero. You can see that either the sequence converges, or, in the other case, the set of limit points, yada yada, is a connected subset. This is a bit over the top: they say it can either converge to a point or converge to a connected subset, but then any sequence generated by the iteration in equation 3 converges to some fixed point. So basically (oh, and this symbol is not the loss, sorry, it's the domain, never mind) this is saying that the algorithm will converge. Okay. And then they define what it means for a pattern to be stored and retrieved, and that's for establishing what the storage capacity of a Hopfield network is. So we've established that the update rule minimizes the appropriate energy, and that the update rule converges at some point, which means that if it converges we can retrieve the pattern that it converges to. So now we define how many patterns we can actually store, and for that we need to know what it means for a pattern to be stored. We assume that we have patterns, and these patterns are called x, so x_i: we have N different patterns, each one called x with a subscript. We assume that around every pattern a sphere is given. How do we imagine this? We have these patterns in a space (they actually consider patterns on a sphere, but we'll just conceptualize it as: there is a space, there are patterns we want to store), and we'll say around every pattern there is a sphere, like this. And naturally there is going to be a notion of well-separated patterns, which you can imagine a little bit as these spheres not touching each other: if the spheres aren't touching each other, the patterns are kind of well separated. Remember also that the query is a vector that kind of sort of looks like a pattern.
That means the query is kind of close to the pattern in some notion of distance. So if we initialize the query somewhere in that sphere, and it converges to that pattern, then we retrieve the pattern. Now it gets a bit more complicated than this, but not much more. We say a pattern is stored if there is a single fixed point inside the sphere to which all points that start inside the sphere converge, and none of the spheres intersect, so the sphere of pattern i doesn't intersect with the sphere of pattern j; that's what we mean when we say all these spheres are non-intersecting. We say x is retrieved if the iteration, equation 3, converges to that single fixed point in the sphere. The retrieval error is the distance, and you'll notice you have two things: x_i, the actual pattern, and x_i*, the retrieved pattern. These Hopfield networks don't always have to give you back exactly the thing that you stored; that's part of the nature of continuous networks. So for every pattern there is a sphere, and the pattern is stored if I can start wherever I want in this sphere and I will always converge to one point that's inside the sphere. Maybe that point isn't the pattern that I stored but actually this point right here, but wherever I start, I will always converge to that particular point, and if that's the case then I have stored this particular pattern. Now I don't retrieve the stored pattern exactly, I retrieve the blue thing, but I can then define the error of retrieval, which is simply the distance between the two. Ideally this distance is very small, but, you know, we can't guarantee it; there are going to be theorems that deal exactly with this retrieval error. But first, you can see that if these spheres become larger, you can't accurately store a pattern anymore. So that is the ideal situation, but there are also situations where, if I have these patterns right here, the spheres, kind of the attraction basins of the patterns, are so large that if I start, let's say, here, then I don't converge to either of these two patterns; I converge to something in the middle, maybe to this point right here. And that's going to be one of these metastable states. We're going to encounter situations like the top one, but we're also going to encounter situations like the bottom one, and the bottom one isn't necessarily bad; that's what you have to keep in mind. As I said, we'll get to it, but just keep this sphere image in mind. Okay, so first we'll deal with the top situation, where we store patterns and then retrieve them. We assume a failure probability p, and p is going to be pretty low in their example: they have p equals 0.001, you know, like a 0.1 percent probability of failing to retrieve your pattern, things like this, and randomly chosen patterns on the sphere with radius M. We define some constants, yada yada yada; then, with probability 1 - p, the number of random patterns that can be stored (stored in the sense of having these spheres around them so that you can retrieve them accurately, or at least retrieve something that's close to them) is lower-bounded by this quantity right here. There's the square root of p, there is this constant c, but then you see that d is in the exponent, so that means it's exponential in the number of dimensions.
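These definitions, together with Theorems 1 and 2, are easy to poke at numerically. Here is a hedged toy sketch (my own constants throughout; the energy line follows my transcription of the formula above, so it is only as right as that is):

    import numpy as np

    def softmax(z):
        e = np.exp(z - z.max())
        return e / e.sum()

    def lse(beta, z):
        # log-sum-exp with inverse temperature beta, computed stably
        m = z.max()
        return m + np.log(np.exp(beta * (z - m)).sum()) / beta

    def energy(X, xi, beta):
        # rows of X are the stored patterns; constants as in the formula above
        N = X.shape[0]
        M = np.linalg.norm(X, axis=1).max()
        return (-lse(beta, X @ xi) + 0.5 * xi @ xi
                + np.log(N) / beta + 0.5 * M ** 2)

    rng = np.random.default_rng(0)
    beta = 1.0
    X = rng.normal(size=(10, 64))           # 10 stored patterns, d = 64
    xi = X[0] + 0.3 * rng.normal(size=64)   # query: a noisy version of pattern 0
    for t in range(8):
        print(t, round(float(energy(X, xi, beta)), 4))   # should be non-increasing
        xi = X.T @ softmax(beta * X @ xi)
    print("retrieval error:", float(np.linalg.norm(xi - X[0])))

The printed energies should go down monotonically (Theorem 1), the iterates should stop moving after a few steps (Theorem 2), and the last line is exactly the retrieval error from the definition: the distance between the fixed point we converge to and the pattern we meant to retrieve.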
So that's pretty cool: if you add a dimension, you exponentially increase the number of patterns you can store. And, I mean, this has been known for modern Hopfield networks with binary strings, so it's not uber-surprising, but it's still not what you would naively imagine. They give a few examples; you have to accept that the constants are set in a particular fashion such that this holds, and so on, but they say, for example, that c is something like 3 and d is 20. So if you were to add a 21st dimension, then your storage capacity would increase by a factor of 3, which is pretty cool. All right, so we can store exponentially many patterns in these networks. Now the next theorem states that the update rule typically converges after one update if the patterns are well separated. So if we're in a situation where the patterns are well separated, which is kind of like the picture above, but you can also imagine it in terms of dot products, because we operate in the space of dot products: if the patterns are well separated, that sort of means they all point away from each other. This notion of separation is captured by the quantity right here, the separation of pattern i, which is just its inner product with itself minus the maximum inner product with any other pattern; this quantity is large when no other pattern is close to it. So when the separation is large, then the update rule, the retrieval rule of "I have a query, calculate the inner product with all of the patterns, reweigh all of the patterns by the softmax of those inner products, use that new thing as a query again, and so on", as we discussed, will converge to the closest pattern. And this theorem says it actually converges pretty fast. Here I have my problems with saying that it "typically converges after one update", because that genuinely depends on a lot of constants, as we'll see; but it does converge exponentially fast in this separation constant. Theorem 4 says: with query \xi, after one update the distance of the new point to the fixed point is exponentially small in the separation Delta_i; the precise bounds, using the Jacobian and its value in the mean value theorem, are the following. So here you can see that the distance between the updated \xi after one step and the fixed point (this is what it converges to) is the distance as it was before, times this factor right here. It's a multiplicative update, and this Jacobian, which is expanded down here, is bounded by something containing the exponential function of negative this separation. So the higher the separation, the faster this algorithm converges. To say that it converges after one step is, you know, maybe a bit of bragging; I don't know if it's a common thing, when you have exponential convergence, that you're allowed to say it converges after one step. Especially, what I'm not sure about is that you have N in there as a linear constant in that factor.
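That separation quantity from Theorem 4, by the way, is a one-liner to compute. Again, toy values of mine, just to pin down the formula:

    import numpy as np

    def separation(X, i):
        # Delta_i = <x_i, x_i> - max_{j != i} <x_i, x_j>
        dots = X @ X[i]
        return dots[i] - np.delete(dots, i).max()

    rng = np.random.default_rng(1)
    X = rng.normal(size=(10, 64))
    print([round(float(separation(X, i)), 1) for i in range(10)])
    # the larger Delta_i is, the faster the retrieval of x_i converges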
And that becomes relevant in their code. If you look at their code (the code is available, which is pretty cool), it's implemented in PyTorch as a general module that you can just drop in. So this is not only for transformers: you can replace LSTMs, you can replace pooling mechanisms, you can do a whole bunch of stuff. In the companion paper they do this multiple instance learning with giant sets using these Hopfield layers, so pretty cool; this code is definitely worth checking out, and maybe you want to replace some stuff of your own with it. But the question is: how many of these update steps should you do? Because if we look at the diagram, at least in the attention mechanism, it seems like you have attention layers: you have a transformer, and the transformer consists of this input right here going through layer, layer, layer, layer, and each layer contains one of these attention mechanisms; this entire thing is in one layer. Now if you interpret this as a Hopfield network and you want to do multiple steps, that means you take this branch right here, so in each layer, potentially, you do multiple steps of these things. So for whatever computational constraints transformers had already, this will certainly make them worse. But you also need to decide how many steps you want to do. You can hard-code that, of course, but they say you should do these steps until this norm here, the norm between the old and the new iterate, is small enough. Because you can't measure how close you are to the convergence point (you don't know it in practice), but you can measure how far two successive iterates are apart; that's this quantity right here, which you can actually measure. So what you'll simply do is measure that, and if it's small enough, you stop. But that, I guess, is very related: since we've already proven that it converges to this x*, I guess we can approximate the distance to x* with the quantity above, and that tells you how many updates you need to do. And that quantity is linear, no, not only linear, actually quadratic in n. I don't care that it's exponential in the separation; it's quadratic in n, and if I've learned anything from my courses on writing fast code, it's that constants actually matter when you're not dealing with an infinite number of steps. So the number of steps you need to do will, I guess, depend on the sequence length in a quadratic fashion, and therefore I'm not sure you can always claim this is convergence in one step. Now I might be super mistaken here, and maybe none of this actually makes a difference in light of the exponential decay, but I'm just a bit worried about saying "this usually converges in one step". It's clear, I guess, why they do it: the attention mechanism in transformers is a one-step application of this rule, and this is kind of a theoretical justification for interpreting it precisely as a Hopfield network. Because you'd say, well, in a Hopfield network you would do multiple steps; but wait, we can actually prove that even if you interpret it as a Hopfield network, it usually converges after one step. So what you're actually doing in a transformer is applying a Hopfield network update rule to convergence. Yeah, I might be bickering on a high level here; luxury problems.
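Their actual PyTorch package is the thing to look at for real use; since I don't want to guess its exact API here, this is my own rough torch stand-in for attention with optional extra Hopfield steps and exactly this stopping criterion (all names mine):

    import torch

    def hopfield_attention(Q, K, V, beta, extra_updates=0, tol=1e-4):
        # with extra_updates=0 this is plain one-step transformer attention;
        # otherwise we keep refining the query and stop early once the
        # iterates barely move, since the true fixed point is unknown
        xi = Q
        for _ in range(extra_updates):
            new = torch.softmax(beta * xi @ K.T, dim=-1) @ K
            if torch.norm(new - xi) < tol:
                xi = new
                break
            xi = new
        return torch.softmax(beta * xi @ K.T, dim=-1) @ V

    Q, K, V = torch.randn(4, 16), torch.randn(32, 16), torch.randn(32, 16)
    print(hopfield_attention(Q, K, V, beta=16 ** -0.5, extra_updates=3).shape)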
Theorem 5, then. Theorem 4 was how fast this converges; Theorem 5, the last theorem right here, says that the retrieval error of a pattern, so the distance between what you converge to and what you've stored, is bounded by, again, something that's exponential in the separation, as you can see. Okay, so those were the theorems. If we go quickly through them again: Theorems 1 and 2 deal with the convergence of this algorithm and the fact that it actually minimizes the proposed energy; Theorem 3 says you can store exponentially many patterns in terms of the dimension of your space; and Theorems 4 and 5 say that this update rule will converge exponentially fast (after one step, if you believe that), and that the retrieval error will also go down exponentially fast with the number of update steps that you do. That sounds pretty good, but we've heard it's very dependent on how well separated these patterns are. And it turns out that, at least in transformers, they aren't always well separated, and that might be on purpose. Remember, the patterns here aren't pre-stored like in a classic Hopfield network; if you interpret an attention mechanism this way, the patterns are also generated by the network itself. So the pattern matrix that you retrieve from, and the query, are generated by the attention mechanism in this case; as I said, this is applicable to many, many more domains than just this one. There's one other slight modification that you have to make for this to be actually equivalent to an attention mechanism, and that is that you have to recast the values. Usually you have some sort of input and you make queries, keys, and values from it using different heads; the only thing needed to make it formally equivalent is that the values have to be generated from the keys. So the keys give rise to the values, as you can see right here: you first multiply with the key matrix and then with the value matrix. I doubt that this changes anything in practice; the only way it could really change something is if this matrix here were super low-rank, like collapsing the space into very few dimensions, which the value matrix wouldn't do. But just letting you know that the technical equality requires this slight modification.
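In code that modification is tiny; a sketch with made-up names:

    import torch

    torch.manual_seed(0)
    R = torch.randn(8, 16)          # one head's input tokens
    W_K = torch.randn(16, 16)
    W_V = torch.randn(16, 16)

    K = R @ W_K
    V_usual = R @ W_V               # standard transformer values
    V_from_keys = K @ W_V           # the modification: values generated from the keys
    # V_from_keys is R @ (W_K @ W_V), so this is just a reparametrization,
    # unless W_K were so low-rank that it collapses the space
    print(V_usual.shape, V_from_keys.shape)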
Okay, now we said it might not always be the case that the patterns are super well separated and you retrieve a single pattern, and that's what they research here in a pre-trained BERT model. They take a pre-trained BERT model, I guess from Hugging Face, and just run a dataset through it. And for each attention head (because you have multiple of these attention heads in each layer), they look at how these softmax distributions look over the course of the whole dataset. When you believe that this is a Hopfield network, and you believe that it converges in one step, then if the patterns are well separated, what we would expect is a distribution as we said: there would be one dominant pattern that you retrieve; that's what you want to retrieve, that's what comes out, and bang, you retrieve that accurate pattern. Anything else would mean that the Hopfield network sort of failed; it wouldn't give you back one particular pattern. So, and I think this is a pretty smart experiment, they look at how many of these bars in the softmax distribution we need to add up to reach 90% of the mass. It depends a bit on the temperature of the softmax, which is hard-coded in the attention mechanism: beta is 1 over the square root of d. So they ask: how many do we need to add to get to 0.9, to 90% of the mass of this distribution? If this is the Hopfield network where you retrieve one pattern, then one will be enough, right? One of these bars will probably hold, I don't know, like 99%. But there are other cases. Imagine the case where the patterns, or rather the spheres that the patterns give rise to, are all overlapping. Then the update won't converge to any particular pattern. You can imagine that if you have two spheres that are apart from each other, the update rule converges to one of them: if the query is closer to this one, it converges here, and if it's closer to that one, it converges there. But if they are overlapping like this, the energy landscape will actually make it such that it converges to neither; if it starts somewhere, it will converge to somewhere in the middle, into the mean of the stored patterns. And if we take that to the extreme, the softmax distribution could look completely uniform, which would basically mean: I don't care where my information comes from, just average. And this has its applications. For example, if you want to make a sentiment classifier, a very cheap way to do that is to simply take pre-trained word embeddings like GloVe or word2vec, assign each word its embedding, and then just average the word embeddings. You count on the fact that if there are a lot of negative words in there, like bad, sad, angry, the average word embedding will point more into the bad direction, and if there are a lot of happy words, the average will point into the happy direction. So there are applications of averaging information without caring particularly where it comes from. In that case, what we'd expect is that this number, and we'll call it k, is not 1 as in the single-pattern case but equals, I guess, N, the number of inputs; well, maybe not exactly N, but approximately, since we need almost all of the bars to reach the 90%. And there is an in-between, and these are called metastable states. The in-between is something like: you'd have a couple of patterns here, a couple here, and a couple maybe here; it's almost like a clustering. These overlap, and these overlap, and these overlap, but not with each other, which means that if you start somewhere here, you would converge to the mean, not the mean of all the patterns, but just the mean of these patterns, and likewise here and here. So this is like a clustering in latent space, right? You can interpret these Hopfield update rules as going not to one particular pattern but to sort of a cluster. It's like if you ask something like, hey, is there any adjective around? All of these patterns kind of overlap in that query space of "adjective", and therefore the update rule would converge to sort of their mean, which would basically say, yes, there is an adjective here, and the information would not be routed from any single token. So the distribution, if we start here, would look something like: small, small, small, and then a couple of large ones, maybe two or three or four large ones, and these would exactly correspond to the patterns in that cluster.
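Computing that number k is a few lines; this is my guess at the measurement, not their exact script:

    import numpy as np

    def k_to_mass(p, mass=0.9):
        # smallest number of (sorted) softmax entries whose sum reaches `mass`
        csum = np.cumsum(np.sort(p)[::-1])
        return int(np.searchsorted(csum, mass) + 1)

    print(k_to_mass(np.array([0.97, 0.01, 0.01, 0.01])))  # 1: a single-pattern head
    print(k_to_mass(np.full(512, 1 / 512)))               # 461: an averaging head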
So the information will be routed from all of the nodes in that cluster to this particular node that asks the query. These are what's called metastable states, and what they do is calculate this number k over the entire dataset, and here they show you the distribution. So in these plots, k goes in that direction; let's go to this one here, this seems pretty easy. k is along this axis, and the plot simply shows how k is distributed: you let a data point run through the model and measure k for that particular head. You see, this is layer 1, head 4, so one attention head in layer 1, and you can see how the number k is distributed for it. Contrast this with this head right here, where a lot of the weight is on the number 1, or on very few numbers. So these blue ones are your typical retrieve-one-particular-pattern heads: for this attention head we can sort of conclude that it is very specific; it looks at its input, it looks at its token, it decides what information it wants, and it retrieves one particular thing from the other nodes. Whereas here it's more like an averaging; it's more like, I want this kind of information, and on average (I don't even know what the sequence length is here, I guess maybe 512), of the 512, the median head (this number is always the median) collects information from 231 of them. So these green and orange ones correspond to the metastable states, where there's kind of an implicit clustering done in this attention space; the blue ones correspond to attention heads that ask for particular information, retrieve one or maybe a few patterns, and are happy with that; and the red ones, you can see, often just average. k is so high that you need almost all of the bars to get to the 90%, which basically means it's a uniform distribution: I don't care where the information comes from, just give me the average in some particular space, and as we said, that also has its uses. It's interesting how this translates through the model. At the bottom of the BERT model, in layer 1, there are a lot of these averaging operations going on; a lot of the heads are simply averaging. As you go up the layers, the heads get more and more specific in the types of information they seek, but then, in the last layers, interestingly, you get into a lot of these metastable states again. I'll leave the interpretation up to you, but it sort of says: at the bottom you want kind of general patterns, and the middle layers are the logical workhorses, where you look for very specific things in the input. So the bottom is sort of pre-processing, the middle is maybe where the thinking happens (I'm just making stuff up here, by the way, this must in no way be true), and the last layers might already be aggregating types of information for the output again, because after that you have language modeling or classification. That's how I sort of interpret it.
Okay, so these experiments are pretty interesting, and now come the last experiments of this paper. They do an interesting experiment where they actually replace the attention heads by simply an averaging mechanism (and later they actually replace them by Gaussians, but in this case they simply average), and they show that, look, if I replace layer 1 with this averaging, the perplexity doesn't rise that much, so it's pretty good; whereas if I replace this other layer here with averaging, perplexity goes up more. And you can see the correspondence: if you remember the previous plot, the correspondence is pretty one-to-one with how many blue and green heads there are, as contrasted with how many red and orange ones. So here you have lots of blue ones, and you can see that the error goes up; and interestingly, here at the end you have more metastable states, but still the perplexity goes up more. So I guess you can only really replace the red, averaging heads with averaging. This is always averaging in one particular layer, by the way. And they go into more detail here: this is layer 6 and this is layer 12, one particular attention head from each, and the training updates go in this direction (don't be confused, I was confused at first). You can see that this number k is at first kind of spread out, but then it pretty quickly concentrates onto a very small number. And there is this kind of point right here; I don't know if the learning rate decreases there, I don't think so, I think that's just a phase transition. This is the blue line, by the way, the blue training line: a phase transition where all of a sudden these attention heads decide, okay, this is the thing I want to specialize in, this is, like, the linguistic sub-task I want to specialize in, and then they concentrate on one particular pattern per input; they really specialize. Whereas in the last layer you see that even during training they keep learning, sort of continuously: first they also do this averaging, then they go into this metastable region, where k isn't 1 but k also isn't a very high number, and they continuously learn. It's even indicative that this training might not be done here, first of all. And second of all, it would be really interesting to see how this plays out with the sizes of transformers, especially these huge transformers. Just the fact that they can keep learning the more we train them might be interpreted in the light of what kind of states their attention heads converge to. I don't know how this goes on: do they stay in the metastable states, because it makes sense to have metastable states, as I said, it makes sense to kind of cluster things? Or is this simply an intermediate step, and if you trained really far they would also converge to the k-equals-1 regime where they really specialize? Or do we need more attention heads for this? I don't know. I think this is just the beginning of research in this direction. This number k, the way it's constructed, is pretty simple, and apparently it's pretty revealing, so that's pretty cool. So that was the paper and its experiments. It's a pretty sizable paper; as I said, even the paper itself is ten pages.
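Coming back to that averaging replacement for a second, mechanically it is just this (a sketch of the ablation idea, not their experimental setup):

    import torch

    def averaging_head(V):
        # drop queries and keys entirely: every position gets the uniform mean
        return V.mean(dim=0, keepdim=True).expand(V.shape[0], -1)

    V = torch.randn(512, 64)          # one head's values, sequence length 512
    print(averaging_head(V).shape)    # torch.Size([512, 64])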
And then there is this immune repertoire classification, which I will spend one minute looking at. You have these set classifications: for each human, you obtain a set of immune receptors, and you simply obtain one label, whether that human is immune to a particular disease or not, and your task is to predict that label. A different human has a different set, and you have no idea which one of these receptors is responsible for the human being immune or not. In fact, you can't even decide based on the whole receptors; you can only decide based on sub-sequences of them, and they might act in combination with each other, so there might not be a single one responsible but a combination, yet you don't have labels for the individual ones, you have different ones per human, and they are of different lengths. All of this makes it a giant, giant task, and you have many of them, tens of thousands per human. So they build a system where first they use 1D convolutions to process the individual sequences, and then they apply this Hopfield attention mechanism with learned queries over these things, and then they train on the output label. And surprisingly, that actually works, even with tens of thousands of instance sequences and only one label for all of them. They achieve, I guess, favorable results compared to other baselines on this task using these Hopfield networks, which is pretty interesting, but I'll let you look at that paper yourself. So I hope this somehow made it a bit clearer what happens here. It would actually be pretty interesting to see what happens if we just do maybe two rounds of these updates: is that even desirable? Is it desirable to run this to convergence? Is there something good about not running to convergence, or does it actually not matter because it converges in one step anyway? I don't know. But have a look at the code, it's pretty cool, and I hope you enjoyed this video. I'm sure you have many open questions, as do I; don't hesitate to ask me in the comments, or join our Discord. As I said, there are lots of helpful people on our Discord, and I'll see you next time. Bye bye.
[ { "end": 4.86, "start": 0, "text": " Hi there. Today we'll look at hopfield networks is all you need by researchers" }, { "end": 11.64, "start": 4.86, "text": " from the Johannes Kepler University in Linz and the University of Oslo. So on" }, { "end": 17.2, "start": 11.64, "text": " high level this paper proposes a new type of hopfield networks that generalizes" }, { "end": 22.740000000000002, "start": 17.2, "text": " modern hopfield networks from binary patterns to continuous patterns and then" }, { "end": 29.060000000000002, "start": 22.740000000000002, "text": " shows that the retrieval update rule of these new hopfield networks is equivalent" }, { "end": 35.36, "start": 29.06, "text": " to the attention mechanism that's used in modern transformers. And it's actually a" }, { "end": 39.8, "start": 35.36, "text": " more general formulation of the attention mechanism and therefore it can" }, { "end": 44.68, "start": 39.8, "text": " be used to do kind of a variety of things to improve modern deep learning." }, { "end": 50.2, "start": 44.68, "text": " And it also has a companion paper where it applies this to some kind of" }, { "end": 56.04, "start": 50.2, "text": " immunology research and achieves state-of-the-art in a task that is" }, { "end": 62, "start": 56.04, "text": " specifically suited to this type of attention. Alright let's dive in together" }, { "end": 69.12, "start": 62, "text": " we'll go over what this paper does what it proposes and so on. If you like pay if" }, { "end": 73.32, "start": 69.12, "text": " you like videos like this consider subscribing, you know sharing it out and" }, { "end": 82.12, "start": 73.32, "text": " I hope you're enjoying this. Alright also thanks to my discord community for you" }, { "end": 87.88000000000001, "start": 82.12, "text": " know very helpful bringing me up to speed on this paper. Super interesting" }, { "end": 92.92, "start": 87.88000000000001, "text": " discussions there. If you're not on our discord yet I invite you to join it's" }, { "end": 102.10000000000001, "start": 92.92, "text": " fun. Okay so what is a hopfield network? A hopfield network is a pretty kind of" }, { "end": 110.36000000000001, "start": 102.10000000000001, "text": " old style, an old conceptualization of a neural network. So in a hopfield network" }, { "end": 115.92, "start": 110.36, "text": " what your goal would be is you can conceptualize it as a bit of a neural" }, { "end": 122.64, "start": 115.92, "text": " network. So let's say we have five neurons or something like this. What your" }, { "end": 127.1, "start": 122.64, "text": " goal would be is to have a neural network where you can store so-called" }, { "end": 134.07999999999998, "start": 127.1, "text": " patterns and a pattern in this case would be a binary string of size 5. So" }, { "end": 143.84, "start": 134.08, "text": " for example 10100 or 11010 and you'd have a list of these patterns and what" }, { "end": 148.84, "start": 143.84, "text": " your goal would be is to store these patterns in the neural network such that" }, { "end": 152.76000000000002, "start": 148.84, "text": " and here you know we'll just consider everything to be sort of connected to" }, { "end": 160.48000000000002, "start": 152.76000000000002, "text": " everything else and what your goal would be in this is that you can kind of store" }, { "end": 167.16, "start": 160.48, "text": " patterns inside this neural network and you adjust the weights somehow. 
So this" }, { "end": 173.64, "start": 167.16, "text": " was as I said this was this was this is kind of an old model. You store you adapt" }, { "end": 177.72, "start": 173.64, "text": " the weights such that you store these patterns. And what does it mean for a" }, { "end": 182.48, "start": 177.72, "text": " pattern to be stored? If you have stored a pattern you will then be" }, { "end": 187.35999999999999, "start": 182.48, "text": " able to retrieve it and you retrieve a pattern in these kind of old style" }, { "end": 193.68, "start": 187.36, "text": " Hopfield networks by providing a partial pattern. So what you'll say is for" }, { "end": 199.44000000000003, "start": 193.68, "text": " example I want a pattern that starts with 110 and you give that to the" }, { "end": 204.48000000000002, "start": 199.44000000000003, "text": " network and there would be a so-called update rule and the update rule is kind" }, { "end": 211.36, "start": 204.48000000000002, "text": " of an internal rule. So let's just go through this. So here this 110 maybe" }, { "end": 217.64000000000001, "start": 211.36, "text": " this is 110 and then they would kind of send messages around so this update" }, { "end": 224.28, "start": 217.64000000000001, "text": " rule would somehow adjust the value of this and this neuron here to what's most" }, { "end": 229.24, "start": 224.28, "text": " compatible with the network weights and if the network weights have been" }, { "end": 234.20000000000002, "start": 229.24, "text": " adjusted correctly this will turn out then at the end of applying this update" }, { "end": 240.8, "start": 234.20000000000002, "text": " rule that this is a 1 and this is a 0 and therefore this pattern here is" }, { "end": 248.76000000000002, "start": 240.8, "text": " retrieved. Now had I input 101 at the beginning then the outcome would be" }, { "end": 254, "start": 248.76000000000002, "text": " different. Hopefully this pattern here would have been retrieved. So you" }, { "end": 259.40000000000003, "start": 254, "text": " can see the applications of this like you can have the first three digits as" }, { "end": 263.92, "start": 259.40000000000003, "text": " sort of a database key and then the last ones as sort of the value that you store" }, { "end": 267.84000000000003, "start": 263.92, "text": " along with it and then you can simply provide the first few. You can also" }, { "end": 273.96, "start": 267.84, "text": " provide you don't always have to provide three so this all depends. 
This is" }, { "end": 278.64, "start": 273.96, "text": " sort of an as I said an old conceptualization of neural networks so" }, { "end": 283.52, "start": 278.64, "text": " people were imagining that this is kind of how the brain works you know fire" }, { "end": 291.35999999999996, "start": 283.52, "text": " together wire together and also with research into this it turns out that you" }, { "end": 294.59999999999997, "start": 291.35999999999996, "text": " know you might think you know there's there's kind of five neurons so maybe I" }, { "end": 299.08000000000004, "start": 294.6, "text": " can store five different patterns you know accurately because if I store too" }, { "end": 305.56, "start": 299.08000000000004, "text": " many patterns right if I have many many many many patterns then I can't expect" }, { "end": 310.44, "start": 305.56, "text": " to be able to retrieve all the patterns again because some of them will just be" }, { "end": 317.08000000000004, "start": 310.44, "text": " so equal that you know many will start maybe with this and and I won't have a" }, { "end": 324.08000000000004, "start": 317.08000000000004, "text": " chance to to retrieve the one I want or the update rule will make a mistake so" }, { "end": 327.4, "start": 324.08, "text": " you might think this might be like five because I have five neurons or maybe ten" }, { "end": 332.91999999999996, "start": 327.4, "text": " because I have ten connections but it turns out that in modern hopfield" }, { "end": 338.91999999999996, "start": 332.91999999999996, "text": " networks with the appropriate update rule you can store exponentially many" }, { "end": 345.76, "start": 338.91999999999996, "text": " patterns in these networks exponentially many in the in the dimension of the in" }, { "end": 349.26, "start": 345.76, "text": " the dimension of the patterns and here I guess that would be the length of the" }, { "end": 354.65999999999997, "start": 349.26, "text": " pattern so this is a little bit surprising the kind of storage capacity" }, { "end": 363.36, "start": 354.65999999999997, "text": " of these networks and will this this paper here generalizes that to continuous" }, { "end": 368.84, "start": 363.36, "text": " to continuous states so what do we mean with continuous states I guess I mean" }, { "end": 374.59999999999997, "start": 368.84, "text": " continuous patterns so no longer is a pattern a binary string but a pattern" }, { "end": 382.84000000000003, "start": 374.6, "text": " now is a string of floating point numbers like 0.5 1.3 and so on and you" }, { "end": 387.24, "start": 382.84000000000003, "text": " know a string of floating or a sequence of floating point numbers is naturally" }, { "end": 392.44, "start": 387.24, "text": " depicted as a vector okay so our patterns are going to be different" }, { "end": 400.24, "start": 392.44, "text": " vectors that we store and you know in high dimensions that the the vectors" }, { "end": 405, "start": 400.24, "text": " will be kind of separated well from each other as long as we don't have too many" }, { "end": 411.64, "start": 405, "text": " but this paper shows that all these properties for the modern hopfield that" }, { "end": 417.40000000000003, "start": 411.64, "text": " works that hold for binary strings still hold if you go to these kind of" }, { "end": 423.68, "start": 417.40000000000003, "text": " continuous to these vector patterns that means you can store exponentially many" }, { "end": 428.52, "start": 423.68, "text": " patterns in the dimensions of the vector 
which is pretty surprising right" }, { "end": 434.2, "start": 428.52, "text": " because you'd think like you know after you have one vector per dimension that" }, { "end": 438.71999999999997, "start": 434.2, "text": " you know after that it might get a bit shaky but no you can actually store" }, { "end": 444.08, "start": 438.71999999999997, "text": " exponentially many that's pretty surprising and this paper is a lot about" }, { "end": 449.35999999999996, "start": 444.08, "text": " how to do that and the fact that that happens and so on so we've talked about" }, { "end": 455.84, "start": 449.35999999999996, "text": " update rules for these kind of hopfield networks and I haven't really specified" }, { "end": 459.79999999999995, "start": 455.84, "text": " what that is I've just said that you know I enter a pattern and then the" }, { "end": 465.47999999999996, "start": 459.79999999999995, "text": " network does something and outcomes outcomes the whatever the pattern that" }, { "end": 472.03999999999996, "start": 465.47999999999996, "text": " matches my query so this here is called a query you might already this is on" }, { "end": 478.59999999999997, "start": 472.03999999999996, "text": " purpose like the kind of overlap between the attention mechanism lingo and the" }, { "end": 482.96, "start": 478.59999999999997, "text": " hopfield network lingo we're going to conflate the two to kind of make clear" }, { "end": 488.12, "start": 482.96, "text": " where the two overlap if you don't know what an attention mechanism is or aren't" }, { "end": 492.59999999999997, "start": 488.12, "text": " familiar with it watch my video on attention is all you need once you watch" }, { "end": 499.88, "start": 492.59999999999997, "text": " that this video will make a lot more sense all right so in what the update" }, { "end": 504.44, "start": 499.88, "text": " rule does is specifically in the update rule that there isn't only one right" }, { "end": 508.52, "start": 504.44, "text": " there are many different proposals of hopfield networks and they all lead to" }, { "end": 513.88, "start": 508.52, "text": " different properties but what an update rule does ultimately is it minimizes" }, { "end": 520, "start": 513.88, "text": " what's called an energy so every type of hopfield network is associated with an" }, { "end": 526.1999999999999, "start": 520, "text": " energy function and this the energy function of the modern hopfield network" }, { "end": 533.96, "start": 526.1999999999999, "text": " for binary strings is this energy function right here so with X X is the" }, { "end": 540.72, "start": 533.96, "text": " pattern the pattern this is the kind of state of the hopfield network and these" }, { "end": 545.08, "start": 540.72, "text": " are the whatever is stored in the network and then the X here is the" }, { "end": 553.32, "start": 545.08, "text": " query that you enter into the network and then the energy here tells you this" }, { "end": 558.1800000000001, "start": 553.32, "text": " quantity you have to minimize this quantity in order to retrieve the" }, { "end": 564.16, "start": 558.18, "text": " pattern that you want okay now we are never directly working with the energy" }, { "end": 568.7199999999999, "start": 564.16, "text": " as such so what you could do is for example use backprop or something to" }, { "end": 576.2399999999999, "start": 568.7199999999999, "text": " use gradient descent to decrease the energy but usually along with an energy" }, { "end": 581.4799999999999, "start": 576.2399999999999, 
"text": " function comes an update function and the update function is what I've talked" }, { "end": 585.04, "start": 581.4799999999999, "text": " about here like you do something and then the network does something and then" }, { "end": 590.8399999999999, "start": 585.04, "text": " you get the pattern out what the network does is it minimizes its energy function" }, { "end": 595.8, "start": 590.8399999999999, "text": " and the update rule is made such that the corresponding energy function is" }, { "end": 600.4399999999999, "start": 595.8, "text": " minimized so the energy function is more like a theoretical consideration that" }, { "end": 605.52, "start": 600.4399999999999, "text": " you say okay here is my energy function of my hopfield network and the there" }, { "end": 610.52, "start": 605.52, "text": " will be a corresponding update rule that minimizes that energy function and if" }, { "end": 615.84, "start": 610.52, "text": " you use that update rule maybe multiple times then the energy function will be" }, { "end": 621.48, "start": 615.84, "text": " minimized and you will have retrieved your pattern or not if if you have too" }, { "end": 628.24, "start": 621.48, "text": " many patterns stored it might also fail right so they they say what the update" }, { "end": 633.76, "start": 628.24, "text": " rules are in the in the text here for the old hopfield networks but we're not" }, { "end": 637.76, "start": 633.76, "text": " really interested in the old ones we're interested in the ones that this paper" }, { "end": 641.72, "start": 637.76, "text": " cares about namely where the patterns that you store in the hopfield network" }, { "end": 648.12, "start": 641.72, "text": " are these vectors over our vector patterns and the query is also a vector" }, { "end": 652.96, "start": 648.12, "text": " pattern so you want to store all of these patterns into the hopfield network" }, { "end": 658.2, "start": 652.96, "text": " I'm gonna draw it like this here I'm gonna store it into the hopfield" }, { "end": 663.2, "start": 658.2, "text": " network and then after that you want to come up with a query and the query is" }, { "end": 670, "start": 663.2, "text": " like this and in the case of the binary strings we had something like well I" }, { "end": 677.1600000000001, "start": 670, "text": " sort of know half of my binary string now in the vector hopfield network it's" }, { "end": 683.36, "start": 677.1600000000001, "text": " more like well I sort of kind of know the direction that my vector should point" }, { "end": 690.6, "start": 683.36, "text": " in okay and you will read what you want to retrieve is the vector that has kind" }, { "end": 696, "start": 690.6, "text": " of a large inner product okay so if I enter this query into my hopfield" }, { "end": 700.48, "start": 696, "text": " network what I hope is that this vector here is retrieved now you see it's not" }, { "end": 705.12, "start": 700.48, "text": " exactly the same vector like they do point if I translate that here by I it's" }, { "end": 712.16, "start": 705.12, "text": " maybe something like this but so they are different but you want to say well I" }, { "end": 716.88, "start": 712.16, "text": " kind of know what I want I kind of want something like this and then the hopfield" }, { "end": 720.64, "start": 716.88, "text": " network would answer with oh I have something like this it's this right here" }, { "end": 727, "start": 720.64, "text": " okay so you that the connection to attention mechanism should become pretty" }, { "end": 733.64, 
"start": 727, "text": " pretty obvious right now but you know the to actually establish this formally" }, { "end": 740.04, "start": 733.64, "text": " is the kind of the point of this paper and you know it's pretty cool to see so" }, { "end": 745.2, "start": 740.04, "text": " they formulate this new energy right here this is the energy of this new" }, { "end": 752, "start": 745.2, "text": " continuous hopfield network specifically they have to have this term right here" }, { "end": 755.96, "start": 752, "text": " because they are now have continuous states and continuous queries this if" }, { "end": 760.2800000000001, "start": 755.96, "text": " you minimize the energy it basically means that your query can never you know" }, { "end": 766.5600000000001, "start": 760.2800000000001, "text": " go to infinity because you have the the query right here in the energy function" }, { "end": 772.9200000000001, "start": 766.5600000000001, "text": " the update rule is this right here and we'll look at that in a moment but" }, { "end": 780.56, "start": 772.92, "text": " remember the update rule is what you actually implement in code so if I if I" }, { "end": 786.3199999999999, "start": 780.56, "text": " have a query right here I plug it in here this is the state of my hopfield" }, { "end": 795.28, "start": 786.3199999999999, "text": " network and I apply this rule multiple times and out comes the kind of answer" }, { "end": 802.48, "start": 795.28, "text": " of the hopfield network to my question so the I input this and the out comes" }, { "end": 808.96, "start": 802.48, "text": " this after I update after I apply the update rule maybe multiple times right" }, { "end": 815.84, "start": 808.96, "text": " and interestingly you can already see that this here if you rewrite a bunch of" }, { "end": 819.48, "start": 815.84, "text": " these quantities so if you rewrite the beta here which is the softmax" }, { "end": 826.3000000000001, "start": 819.48, "text": " temperature in a way to be 1 over square root of D and if you take the query the" }, { "end": 832.16, "start": 826.3000000000001, "text": " psi here to be the query matrix and if you take the X here to be the key matrix" }, { "end": 838.64, "start": 832.16, "text": " then this is equivalent to the update or sorry the attention mechanism of a" }, { "end": 844.0799999999999, "start": 838.64, "text": " modern transformer so that's the point of the paper is that we can look at the" }, { "end": 851.56, "start": 844.0799999999999, "text": " transformer attention mechanism as a hopfield network and they have this" }, { "end": 861.36, "start": 851.56, "text": " interesting this interesting diagram at the end right here so the appendix you" }, { "end": 868.28, "start": 861.36, "text": " know this is typical I guess sep hoher I remember the cellul paper had like 60" }, { "end": 875.2, "start": 868.28, "text": " pages of machine proof appendix this also this is like 70 page appendix crazy" }, { "end": 880.6800000000001, "start": 875.2, "text": " but at the end of the appendix you'll find this diagram right here now usually" }, { "end": 887.36, "start": 880.6800000000001, "text": " in an attention mechanism you have whatever the the input is so you have an" }, { "end": 892.92, "start": 887.36, "text": " input right here so this is attention mechanisms or at least transformers they" }, { "end": 899.04, "start": 892.92, "text": " work on sequences or sets of objects and from these you'll generate three things" }, { "end": 906.48, "start": 899.04, "text": " you'll 
generate the you'll generate the queries the keys and the values now you" }, { "end": 910.64, "start": 906.48, "text": " can either generate the queries from the same objects which would be self" }, { "end": 914.5600000000001, "start": 910.64, "text": " attention or you can generate the queries from like a different object over" }, { "end": 921.68, "start": 914.56, "text": " here it doesn't it doesn't matter too much for our discussions but either you" }, { "end": 927.3199999999999, "start": 921.68, "text": " you know have a reference input or you have you know this kind of same input" }, { "end": 935.28, "start": 927.3199999999999, "text": " all the way and then what you do is use three different heads or three different" }, { "end": 942.52, "start": 935.28, "text": " matrices to transform that input into queries keys and values so I often" }, { "end": 947.84, "start": 942.52, "text": " conceptualize this as you have kind of your input set and each of the input" }, { "end": 956.24, "start": 947.84, "text": " sets outputs a key and also each one which would be a vector and also each" }, { "end": 964.6, "start": 956.24, "text": " one outputs a query so I often draw this here the same sequence and each one" }, { "end": 971.76, "start": 964.6, "text": " outputs a query and the query sort of the query is kind of a request for" }, { "end": 979.64, "start": 971.76, "text": " information so the key exposes sort of what exposes something about the input" }, { "end": 986.28, "start": 979.64, "text": " here so this could be a sentence down here this could be my cat is very pretty" }, { "end": 995.36, "start": 986.28, "text": " and the the the vector the key vector right here could encode something like" }, { "end": 1000.8, "start": 995.36, "text": " this is a noun or this is an animal or anything like this right and the query" }, { "end": 1009.52, "start": 1000.8, "text": " here it could ask for for other things so for example since this is cat this" }, { "end": 1015.4, "start": 1009.52, "text": " vector right here the query vector is generated from that you know token cat" }, { "end": 1024.04, "start": 1015.4, "text": " now it could recognize that cat is a noun and it could ask the other nodes to" }, { "end": 1030.96, "start": 1024.04, "text": " basically say are there any adjectives around here because you know adjectives" }, { "end": 1036.56, "start": 1030.96, "text": " because it itself is a noun it's the object of the sentence right it could" }, { "end": 1040.1599999999999, "start": 1036.56, "text": " ask are there any kind of adjectives that describe the object because that" }, { "end": 1045.1599999999999, "start": 1040.1599999999999, "text": " would be naturally a thing to ask if you were the noun you would want to know are" }, { "end": 1051.3999999999999, "start": 1045.1599999999999, "text": " there any kind of modifiers for for me so it could output the query and the" }, { "end": 1056.1200000000001, "start": 1051.4, "text": " query here could mean you know this direction could mean adjectives and you" }, { "end": 1064, "start": 1056.1200000000001, "text": " see here the word pretty is an adjective so it itself would output a key that" }, { "end": 1070.64, "start": 1064, "text": " says by the way I'm an adjective right so if the cat asks then if this node" }, { "end": 1077.72, "start": 1070.64, "text": " asks for an adjective and this outputs the adjective vector then because the" }, { "end": 1082.64, "start": 1077.72, "text": " inner product between the two things is high this will be 
routed here so" }, { "end": 1086.16, "start": 1082.64, "text": " attention mechanism is basically information routing that's how I always" }, { "end": 1093.04, "start": 1086.16, "text": " describe it but in this paper we look at it more like these here are the patterns" }, { "end": 1100.44, "start": 1093.04, "text": " that are stored in a hopfield network and I by inputting a query and the dot" }, { "end": 1105.32, "start": 1100.44, "text": " product being the update rule of the hopfield network I retrieve from the" }, { "end": 1111.6399999999999, "start": 1105.32, "text": " hopfield network I retrieve the appropriate pattern that I ask for okay" }, { "end": 1118.48, "start": 1111.6399999999999, "text": " and then you know the values the values are simply a modification of the keys in" }, { "end": 1123.6799999999998, "start": 1118.48, "text": " this form but a lot of people also do keys and values to be the same thing" }, { "end": 1129.2, "start": 1123.6799999999998, "text": " yeah but the this routing of information happens here where you multiply the" }, { "end": 1136.22, "start": 1129.2, "text": " queries and the keys and then you put a softmax over them okay so if you just" }, { "end": 1142, "start": 1136.22, "text": " look from the perspective of a single node like this node here this cat node" }, { "end": 1148.04, "start": 1142, "text": " what it would do is it would inner product its own query vector with all" }, { "end": 1152.96, "start": 1148.04, "text": " of the key vectors right so it would build an inner product with all of these" }, { "end": 1157.1200000000001, "start": 1152.96, "text": " and then it would normalize it would put it through a softmax which will kind of" }, { "end": 1163, "start": 1157.12, "text": " give it a distribution right so here would give it like so this is actually" }, { "end": 1168.12, "start": 1163, "text": " matches because my well my is also very important for cat this this this is just" }, { "end": 1174.3999999999999, "start": 1168.12, "text": " an accident I did not plan this this here this also well many things match but" }, { "end": 1180.3999999999999, "start": 1174.3999999999999, "text": " but in our example we would just say that this last one it's not only higher" }, { "end": 1188.64, "start": 1180.4, "text": " it's also wider it matches very well right and so the information routing" }, { "end": 1195.92, "start": 1188.64, "text": " would route mostly information from this pretty token to the cat token which" }, { "end": 1202.7800000000002, "start": 1195.92, "text": " makes sense in our case right this is the attention mechanism now since if we" }, { "end": 1209.6000000000001, "start": 1202.7800000000002, "text": " are interpreting this as a hopfield network and the update rule here is the" }, { "end": 1215.12, "start": 1209.6, "text": " dot product you can actually think of applying this rule multiple times so" }, { "end": 1222.9199999999998, "start": 1215.12, "text": " what happens now if we and this is where this update rule comes in what happens" }, { "end": 1230.3999999999999, "start": 1222.9199999999998, "text": " if we take this distribution and we don't aggregate the values like usually" }, { "end": 1235.08, "start": 1230.3999999999999, "text": " we would aggregate the values by this distribution what if we aggregate the" }, { "end": 1240.6, "start": 1235.08, "text": " keys by this distribution okay what comes out well if we look at this and" }, { "end": 1245, "start": 1240.6, "text": " you know let's just assume that this key right 
here matches really well but the" }, { "end": 1249.6799999999998, "start": 1245, "text": " others also match a little bit what would come out would be a weighted" }, { "end": 1255.1599999999999, "start": 1249.6799999999998, "text": " average where a lot of weight is put on this particular key so what will turn" }, { "end": 1259.52, "start": 1255.1599999999999, "text": " out would be something that's very close to that key you can" }, { "end": 1267.84, "start": 1259.52, "text": " see I'm going to draw the old key here in green and I'm going to draw the old" }, { "end": 1277.6399999999999, "start": 1267.84, "text": " query in blue so you see that whatever comes out is not the query but" }, { "end": 1282.48, "start": 1277.6399999999999, "text": " it's also not only that one key that matches right it's kind of a weighted average" }, { "end": 1288.6, "start": 1282.48, "text": " but with that key dominating okay now in a hopfield network" }, { "end": 1294.28, "start": 1288.6, "text": " what we would do is we would go again we would put this new thing the red thing" }, { "end": 1300.08, "start": 1294.28, "text": " in place of the query vector okay so we would use these aggregated keys this" }, { "end": 1306, "start": 1300.08, "text": " weighted average as a new query vector for that node right here so we duplicate" }, { "end": 1310.76, "start": 1306, "text": " that node over here I'll use that query vector again and do the same thing" }, { "end": 1315.6799999999998, "start": 1310.76, "text": " again okay inner product with all of the key vectors and now since this is" }, { "end": 1320.28, "start": 1315.68, "text": " already an aggregate of the key vectors what's going to happen of course" }, { "end": 1325.28, "start": 1320.28, "text": " the distribution that's going to come out is going to be weighted even more" }, { "end": 1333.44, "start": 1325.28, "text": " heavily let's make it even wider into the direction of that" }, { "end": 1339.24, "start": 1333.44, "text": " key that matches okay and you can pretty clearly see if I do that iteratively" }, { "end": 1346.96, "start": 1339.24, "text": " then that will lead to a situation where everything is very low except that" }, { "end": 1353.76, "start": 1346.96, "text": " one key which will sort of dominate the distribution ultra high and ultra" }, { "end": 1359, "start": 1353.76, "text": " wide okay and that's exactly how a hopfield network works right I" }, { "end": 1363.72, "start": 1359, "text": " would input the query which would be sort of what I want right I kind of know" }, { "end": 1370.08, "start": 1363.72, "text": " what I want okay and then I apply this rule multiple times and with each time I" }, { "end": 1375.88, "start": 1370.08, "text": " refine refine refine until I decide on a pattern the hopfield network is made" }, { "end": 1380.4, "start": 1375.88, "text": " for pattern retrieval and these here are the patterns that I want to retrieve so" }, { "end": 1385.4, "start": 1380.4, "text": " here the patterns aren't stored in the network beforehand but the" }, { "end": 1391.48, "start": 1385.4, "text": " patterns are also generated like in an attention layer so the keys are" }, { "end": 1397.1200000000001, "start": 1391.48, "text": " generated by the previous layer or by these matrices but that doesn't matter" }, { "end": 1402.28, "start": 1397.1200000000001, "text": " for the hopfield network update rule a sketch of this iterated update follows after this transcript so you see here that the attention mechanism" }, { 
"end": 1407.1200000000001, "start": 1402.28, "text": " can be interpreted as simply one step making one step of this update rule but" }, { "end": 1411.72, "start": 1407.1200000000001, "text": " you can think of making actually multiple steps and retrieving the" }, { "end": 1418.56, "start": 1411.72, "text": " particular key so you know deciding on a sort of a hard routing of particular" }, { "end": 1427.48, "start": 1418.56, "text": " information now that only works if if there are no other vectors that are" }, { "end": 1432.6399999999999, "start": 1427.48, "text": " close to that particular key right so if the query is this and you know the way I" }, { "end": 1437.44, "start": 1432.6399999999999, "text": " drew it here you can see that there are many there is this one and this one and" }, { "end": 1444.48, "start": 1437.44, "text": " this one that matches so technically the way I drew it what would happen most" }, { "end": 1449.52, "start": 1444.48, "text": " likely is no many no matter how many times you apply your update rule it" }, { "end": 1455.08, "start": 1449.52, "text": " would sort of result in kind of the average of the three keys right so" }, { "end": 1460.3600000000001, "start": 1455.08, "text": " because they're all matching and they would all contribute to that weighted" }, { "end": 1464.64, "start": 1460.3600000000001, "text": " average of the query in the next step and then that means basically the" }, { "end": 1468.48, "start": 1464.64, "text": " conversions would be to something in the middle and that's going to be a central" }, { "end": 1475.28, "start": 1468.48, "text": " point of this paper in which situation we are so they call the first part is" }, { "end": 1479.92, "start": 1475.28, "text": " retrieving a single pattern and they call the second situation where you have" }, { "end": 1484.44, "start": 1479.92, "text": " multiple patterns that all match that are not well separated from each other" }, { "end": 1488.16, "start": 1484.44, "text": " they call this a meta stable state and it's going to be pretty interesting to" }, { "end": 1495.4, "start": 1488.16, "text": " look at transform like BERT language models and look at where they actually" }, { "end": 1500.16, "start": 1495.4, "text": " are are they actually operating in this single pattern retrieval mode or are they" }, { "end": 1508.6000000000001, "start": 1500.16, "text": " operating in the meta stable state mode all right so here you can see it in the" }, { "end": 1513.92, "start": 1508.6000000000001, "text": " diagram the only thing different this from a hop field network sorry from an" }, { "end": 1519.64, "start": 1513.92, "text": " attention mechanism is this branch right here so here you ask do you want to do" }, { "end": 1525.4, "start": 1519.64, "text": " multiple updates after you've you've multiplied the queries and the keys do" }, { "end": 1530.3200000000002, "start": 1525.4, "text": " you want to do multiple updates if yes so if you're in a this hop field network" }, { "end": 1534.8400000000001, "start": 1530.3200000000002, "text": " situation you want to do multiple updates then you go back as you can see" }, { "end": 1542.5400000000002, "start": 1534.8400000000001, "text": " and you do you use the keys together with the output of the softmax to" }, { "end": 1549.0400000000002, "start": 1542.5400000000002, "text": " generate a new query so this query queue here is now generated from the output" }, { "end": 1553.32, "start": 1549.04, "text": " here and the key so the keys are the same these are 
this is the same thing" }, { "end": 1561.3999999999999, "start": 1553.32, "text": " it's just put here twice okay this is exactly what we discussed okay I hope" }, { "end": 1566.72, "start": 1561.3999999999999, "text": " it's somehow clear that the attention mechanism is simply a one" }, { "end": 1574.24, "start": 1566.72, "text": " step hopfield network pattern retrieval algorithm with a particular update rule" }, { "end": 1581.36, "start": 1574.24, "text": " that matches this energy function that they propose right here of" }, { "end": 1585.52, "start": 1581.36, "text": " course they do this particularly because the update rule that" }, { "end": 1590.8, "start": 1585.52, "text": " turns out is the transformer update rule but I actually don't know if they" }, { "end": 1594.74, "start": 1590.8, "text": " backwards engineered the energy function to match the transformer or if they" }, { "end": 1599.84, "start": 1594.74, "text": " first came up with continuous hopfield networks and then just kind of" }, { "end": 1604.8, "start": 1599.84, "text": " discovered that it's like the transformer we'll maybe never find out" }, { "end": 1613.08, "start": 1604.8, "text": " okay so let's go there are a couple of theorems I believe four or five" }, { "end": 1619.08, "start": 1613.08, "text": " theorems right here that make some points about" }, { "end": 1623.28, "start": 1619.08, "text": " this stuff and we'll go through them we won't go through the proofs or any" }, { "end": 1627.9599999999998, "start": 1623.28, "text": " super in-depth meaning but it's pretty cool to go through them and they" }, { "end": 1632.44, "start": 1627.96, "text": " are proved very rigorously as I said there's a 70 page appendix so have a" }, { "end": 1639.24, "start": 1632.44, "text": " look at that if you're up for it okay so they say here we have an update rule" }, { "end": 1644.68, "start": 1639.24, "text": " this is our update rule for our new hopfield networks so the first theorem they" }, { "end": 1651.08, "start": 1644.68, "text": " state is the update rule that we propose converges globally if we apply" }, { "end": 1659.08, "start": 1651.08, "text": " the update rule repeatedly then as t goes to infinity the" }, { "end": 1664.96, "start": 1659.08, "text": " energy will converge to the energy of a fixed point" }, { "end": 1671.4399999999998, "start": 1664.96, "text": " this being a fixed point for t going to infinity yeah if this is" }, { "end": 1676.1999999999998, "start": 1671.4399999999998, "text": " a fixed point basically saying that if I apply this update rule here over and" }, { "end": 1682.88, "start": 1676.2, "text": " over and over again it will make this energy function converge to a fixed" }, { "end": 1687.96, "start": 1682.88, "text": " point I don't want to say anything" }, { "end": 1694.52, "start": 1687.96, "text": " mistakenly here or claim too much but that basically connects the update rule" }, { "end": 1699.64, "start": 1694.52, "text": " to the energy okay so it's just showing that this really is the update rule for that" }, { "end": 1706.1200000000001, "start": 1699.64, "text": " particular energy function okay now by itself that's not super duper" }, { "end": 1714.24, "start": 1706.1200000000001, "text": " interesting yet but now we get to theorem 2 so theorem 2 for the iteration" }, { "end": 1722.2, "start": 1714.24, 
"text": " that's the update rule that we just looked at we have we have that this" }, { "end": 1728.44, "start": 1722.2, "text": " convergence holds as t goes to infinity for some stationary point furthermore" }, { "end": 1738.2, "start": 1728.44, "text": " this quantity here goes to zero so that means this is the the update at t plus" }, { "end": 1744.3600000000001, "start": 1738.2, "text": " one and this is the update at t and the difference between them goes to zero so" }, { "end": 1748.76, "start": 1744.3600000000001, "text": " that means not only does the energy converge but the iterates themselves" }, { "end": 1755.02, "start": 1748.76, "text": " converge so the algorithm actually converges the individual updates of the" }, { "end": 1760, "start": 1755.02, "text": " algorithm so this e new at some point that will no longer change because the" }, { "end": 1766.08, "start": 1760, "text": " the norm between it and the previous one will go to zero you can see that either" }, { "end": 1772.12, "start": 1766.08, "text": " the sequence here converges or in the other case the set of limit points yada" }, { "end": 1779.16, "start": 1772.12, "text": " yada is a connecting subset this is a bit over the top here they say okay it" }, { "end": 1784.52, "start": 1779.16, "text": " can either converge to a point or it can converge to a connected subset but if" }, { "end": 1790.8799999999999, "start": 1784.52, "text": " the loss is finite then any sequence generated by the iteration equation 3" }, { "end": 1797.76, "start": 1790.8799999999999, "text": " converges to some fixed point so you know basically saying that here we oh" }, { "end": 1806.36, "start": 1797.76, "text": " this is not the loss I'm sorry no this is the domain never mind I am an idiot" }, { "end": 1814.24, "start": 1806.36, "text": " this is basically saying that this algorithm will converge okay and they" }, { "end": 1820.48, "start": 1814.24, "text": " define here what it means for a pattern to be stored and retrieved and that's" }, { "end": 1825.04, "start": 1820.48, "text": " for establishing what the kind of storage capacity for Hopfield network" }, { "end": 1829.2, "start": 1825.04, "text": " is so we've established that the update rule minimizes the appropriate energy" }, { "end": 1835.16, "start": 1829.2, "text": " and the update rule will converge at some point which means that we can you" }, { "end": 1841.18, "start": 1835.16, "text": " know if it converges we can retrieve the pattern that it converges to so now we" }, { "end": 1846.04, "start": 1841.18, "text": " define how many patterns can we actually store for that we need to know what does" }, { "end": 1851.4, "start": 1846.04, "text": " it mean for a pattern to be stored so we assume that we have patterns and these" }, { "end": 1856.68, "start": 1851.4, "text": " patterns are called X okay X I we have n different patterns each one is called X" }, { "end": 1865.44, "start": 1856.68, "text": " with a subscript we assume that around every pattern a sphere is given so how" }, { "end": 1871.0800000000002, "start": 1865.44, "text": " do we imagine this we have these patterns and this is this is just a space" }, { "end": 1876.8200000000002, "start": 1871.0800000000002, "text": " that they consider patterns of the on a sphere but we'll just conceptualize it as" }, { "end": 1880.76, "start": 1876.8200000000002, "text": " this will have a space there are patterns we want to store okay and we'll" }, { "end": 1886.92, "start": 1880.76, "text": " say around every pattern there is 
a sphere okay sphere like this and" }, { "end": 1892.1200000000001, "start": 1886.92, "text": " naturally there's going to be a notion of well" }, { "end": 1897.8, "start": 1892.12, "text": " separated patterns and you can imagine this a little bit like these spheres" }, { "end": 1902.32, "start": 1897.8, "text": " won't be touching each other if these spheres aren't touching each other that" }, { "end": 1907.1999999999998, "start": 1902.32, "text": " means that the patterns are kind of well separated and that means that if we" }, { "end": 1911.52, "start": 1907.1999999999998, "text": " initialize the query remember the query here is a vector that kind of sort of" }, { "end": 1915.8, "start": 1911.52, "text": " looks like a pattern and that means the query is kind of close to the pattern in" }, { "end": 1921.08, "start": 1915.8, "text": " some notion of distance so if we initialize the query somewhere in that" }, { "end": 1931.76, "start": 1921.08, "text": " sphere then if it converges to that pattern we" }, { "end": 1937.4399999999998, "start": 1931.76, "text": " retrieve the pattern okay now it gets a bit more complicated than this but not" }, { "end": 1944.6799999999998, "start": 1937.4399999999998, "text": " much more we say a pattern is stored if there is a single fixed point inside the" }, { "end": 1951.16, "start": 1944.68, "text": " sphere to which all points that start inside the sphere converge and none of" }, { "end": 1955.88, "start": 1951.16, "text": " the spheres intersect so the sphere of point I doesn't intersect with the" }, { "end": 1961.24, "start": 1955.88, "text": " sphere of point J so that's where we say all these spheres are non intersecting" }, { "end": 1968.24, "start": 1961.24, "text": " we say X is retrieved if the iteration equation 3 converged to the single fixed" }, { "end": 1973.3600000000001, "start": 1968.24, "text": " point in that sphere the retrieval error is the distance so you'll notice you" }, { "end": 1977.8799999999999, "start": 1973.36, "text": " have two things you have X I this is the actual pattern and you have X I star" }, { "end": 1983, "start": 1977.8799999999999, "text": " this is the retrieved pattern so these hopfield networks don't always have to give" }, { "end": 1987.28, "start": 1983, "text": " you the same thing that you stored that's part of the nature of" }, { "end": 1993.6399999999999, "start": 1987.28, "text": " continuous neural networks and whatnot so for every" }, { "end": 2002.6799999999998, "start": 1993.6399999999999, "text": " pattern there is a sphere now we say a pattern is stored if I can start" }, { "end": 2007.8, "start": 2002.68, "text": " wherever I want in this sphere okay wherever I want it will always converge" }, { "end": 2013.0800000000002, "start": 2007.8, "text": " to a point that's inside the sphere and maybe that point isn't the pattern that" }, { "end": 2016.76, "start": 2013.0800000000002, "text": " I stored but actually this point right here but wherever I start I will always" }, { "end": 2022.1200000000001, "start": 2016.76, "text": " converge to that particular point if that's the case then I have stored this" }, { "end": 2027.04, "start": 2022.1200000000001, "text": " particular pattern now the fact is I don't retrieve this particular pattern I" }, { "end": 2031.8400000000001, "start": 2027.04, "text": " retrieve the blue thing but I can then define the error of retrieval the error" }, { 
"text": " of retrieval is simply the distance between the two things ideally this" }, { "end": 2041.9599999999998, "start": 2037, "text": " distance is very small right but you know we can't can't guarantee it now" }, { "end": 2046.84, "start": 2041.9599999999998, "text": " there are going to be theorems that deal exactly with this retrieval error but" }, { "end": 2058.24, "start": 2046.84, "text": " first you can see that here if if these spheres become larger you you can't" }, { "end": 2064.4799999999996, "start": 2058.24, "text": " accurately store a pattern anymore so this is the kind of ideal situation but" }, { "end": 2068.3199999999997, "start": 2064.4799999999996, "text": " there are also situations where these spheres you know if I have these" }, { "end": 2073.68, "start": 2068.3199999999997, "text": " patterns right here these spheres are so large kind of the the attractions of the" }, { "end": 2080.52, "start": 2073.68, "text": " patterns are so large that if I start let's say here then I don't converge to" }, { "end": 2084.24, "start": 2080.52, "text": " either of these two patterns I converge to like something in the middle I" }, { "end": 2089, "start": 2084.24, "text": " converge to maybe this point right here and that's going to be one of these" }, { "end": 2094.2, "start": 2089, "text": " meta stable states okay we're going to encounter situations like this but we're" }, { "end": 2097.52, "start": 2094.2, "text": " also going to encounter situations like this and the bottom thing isn't" }, { "end": 2105.52, "start": 2097.52, "text": " necessarily bad and that's what you have to keep in mind and yeah as I said we'll" }, { "end": 2113.7999999999997, "start": 2105.52, "text": " get to it but just keep this kind of sphere image in mind okay so first we'll" }, { "end": 2118, "start": 2113.8, "text": " just deal with the you know the up the top situation where we store patterns" }, { "end": 2124.48, "start": 2118, "text": " and then retrieve patterns so we'll assume a failure probability which is P" }, { "end": 2129.2400000000002, "start": 2124.48, "text": " and P is going to be no pretty pretty low for their example so they have P" }, { "end": 2136.28, "start": 2129.2400000000002, "text": " equals 0.001 you know like a 0.1 percent error probability of retrieving your" }, { "end": 2142.5600000000004, "start": 2136.28, "text": " pattern things like this and randomly chosen patterns on the sphere with" }, { "end": 2149.52, "start": 2142.56, "text": " radius M we define some constants yada yada yada then with probability 1 minus" }, { "end": 2155.16, "start": 2149.52, "text": " P the number of random patterns that can be stored and stored in the sense of" }, { "end": 2161.92, "start": 2155.16, "text": " having these spheres around them so that you can retrieve them accurately or at" }, { "end": 2168.36, "start": 2161.92, "text": " least you can retrieve something that's close to them is is bounded lower" }, { "end": 2172.48, "start": 2168.36, "text": " bounded by this quantity right here so there's the square root of P there is" }, { "end": 2178.4, "start": 2172.48, "text": " this constant C but then you see that D is in the exponent right here so that" }, { "end": 2183.68, "start": 2178.4, "text": " means it's exponential in the number of dimensions so that's that's pretty cool" }, { "end": 2190.44, "start": 2183.68, "text": " so if you add a dimension you exponentially increase the number of" }, { "end": 2197.16, "start": 2190.44, "text": " the number of patterns you can store and 
you know that is to be expected I" }, { "end": 2202.28, "start": 2197.16, "text": " mean it's been known for modern Hopfield networks with binary strings so" }, { "end": 2207.32, "start": 2202.28, "text": " it's not uber surprising but it's still not what you would naively imagine" }, { "end": 2213.52, "start": 2207.32, "text": " okay so they give a few examples of these you have to" }, { "end": 2217.32, "start": 2213.52, "text": " accept these constants in a particular fashion such that this is" }, { "end": 2223.6800000000003, "start": 2217.32, "text": " given and so on but they say examples here are where C is something like 3" }, { "end": 2235.3999999999996, "start": 2223.68, "text": " and D is 20 so if you were to add a 21st dimension then I guess your storage" }, { "end": 2244.7599999999998, "start": 2235.3999999999996, "text": " capacity would increase by a factor of 3 which is pretty cool alright so this shows" }, { "end": 2249.72, "start": 2244.7599999999998, "text": " that we can store exponentially many patterns in" }, { "end": 2260.16, "start": 2249.72, "text": " these networks now they say the next theorem states that the update" }, { "end": 2263.7599999999998, "start": 2260.16, "text": " rule typically converges after one update if the patterns are well" }, { "end": 2268.74, "start": 2263.7599999999998, "text": " separated okay so if we're in a situation where these patterns are well" }, { "end": 2272.2799999999997, "start": 2268.74, "text": " separated which is kind of like this but you can also imagine this in terms of" }, { "end": 2277.2, "start": 2272.2799999999997, "text": " dot products because we operate in the space of dot products so if the patterns" }, { "end": 2281.24, "start": 2277.2, "text": " are well separated that sort of means that they all point away" }, { "end": 2286.62, "start": 2281.24, "text": " from each other and this notion of separation is going to be captured by" }, { "end": 2292.52, "start": 2286.62, "text": " this quantity right here this is the separation of pattern I which" }, { "end": 2298.7999999999997, "start": 2292.52, "text": " is just the inner product with itself minus the maximum inner product with any" }, { "end": 2305.48, "start": 2298.7999999999997, "text": " other pattern a small sketch of this quantity follows after this transcript and this quantity is going to be large when no other pattern is" }, { "end": 2311.78, "start": 2305.48, "text": " close to it so when the separation is large then the retrieval" }, { "end": 2317.84, "start": 2311.78, "text": " rule of calculating you know I have a query I calculate the inner product with" }, { "end": 2324.12, "start": 2317.84, "text": " all of those then I reweigh all of the patterns by that inner product through the" }, { "end": 2329.76, "start": 2324.12, "text": " softmax then I use that new thing as a query again and so on as we discussed it" }, { "end": 2336.88, "start": 2329.76, "text": " will converge to the closest pattern but this theorem says it actually converges" }, { "end": 2342.8, "start": 2336.88, "text": " pretty fast and here I have my problems with saying that it converges after one" }, { "end": 2350.1200000000003, "start": 2342.8, "text": " step typically converges after one update because that genuinely" }, { "end": 2356.32, "start": 2350.1200000000003, "text": " depends on a lot of constants as we'll see but it does converge exponentially" }, { "end": 2362.8, "start": 2356.32, "text": " fast in this separation 
constant as theorem four says with query psi after" }, { "end": 2368.6400000000003, "start": 2362.8, "text": " one update the distance of the new point to the fixed point is exponentially" }, { "end": 2374.0800000000004, "start": 2368.6400000000003, "text": " small in the separation Delta I the precise bound using the Jacobian and its" }, { "end": 2379.84, "start": 2374.0800000000004, "text": " value in the mean value theorem are the following so here you can see this is" }, { "end": 2388.2000000000003, "start": 2379.84, "text": " the distance between the updated psi after one step and the fixed" }, { "end": 2393.88, "start": 2388.2000000000003, "text": " point right here which is what it converges to and it is going to be the" }, { "end": 2400.04, "start": 2393.88, "text": " distance as it was before times this thing right here so you can see" }, { "end": 2408.1000000000004, "start": 2400.04, "text": " this is a multiplicative update and in this Jacobian which is" }, { "end": 2418.96, "start": 2408.1, "text": " expanded down here you can see" }, { "end": 2425.44, "start": 2418.96, "text": " that this is bounded by" }, { "end": 2431.88, "start": 2425.44, "text": " the exponential function of negative this separation right here so the higher the" }, { "end": 2437.64, "start": 2431.88, "text": " separation the faster this algorithm converges okay to say that it converges" }, { "end": 2442.8799999999997, "start": 2437.64, "text": " after one step might be a bit of bragging I don't know if this is" }, { "end": 2446.96, "start": 2442.8799999999997, "text": " a common thing that if you have exponential convergence you are" }, { "end": 2452.48, "start": 2446.96, "text": " allowed to say it's after one step I'm not sure especially what I'm not sure" }, { "end": 2460.24, "start": 2452.48, "text": " about is that you have n here as linear constants in that factor okay" }, { "end": 2466, "start": 2460.24, "text": " and that's what they do in their code so if you look at their code" }, { "end": 2469.48, "start": 2466, "text": " the code is available which is pretty cool it's implemented in pytorch as a" }, { "end": 2473.08, "start": 2469.48, "text": " general module that you can just drop in so this is not only for" }, { "end": 2477.88, "start": 2473.08, "text": " transformers you can replace like LSTMs you can replace pooling" }, { "end": 2483.6, "start": 2477.88, "text": " mechanisms you can do a whole bunch of stuff in the" }, { "end": 2490.72, "start": 2483.6, "text": " companion paper they do this multi instance learning with giant sets" }, { "end": 2495.56, "start": 2490.72, "text": " using these hopfield layers so pretty cool this code is definitely" }, { "end": 2499.44, "start": 2495.56, "text": " worth checking out and maybe you want to replace some stuff with it but" }, { "end": 2505.44, "start": 2499.44, "text": " the question is how many of these update steps should you do right because we" }, { "end": 2509.92, "start": 2505.44, "text": " looked at the diagram at least in the attention mechanism it seems like you" }, { "end": 2514.36, "start": 2509.92, "text": " have attention layers right you have a transformer and the transformer consists" }, { "end": 2518.2799999999997, "start": 2514.36, "text": " of you know you have this input right here and you go through layer" }, { "end": 
2524.64, "start": 2518.2799999999997, "text": " layer layer layer and in each layer there's contained in it and one of these" }, { "end": 2531.44, "start": 2524.64, "text": " attention mechanism right this entire thing is in this layer okay and now if" }, { "end": 2536.4, "start": 2531.44, "text": " you interpret this as a hopfield network and you want to do multiple steps that" }, { "end": 2540.3599999999997, "start": 2536.4, "text": " means you go this branch right here so in each layer potentially you do" }, { "end": 2547.48, "start": 2540.3599999999997, "text": " multiple steps of these things so for whatever computational constraints" }, { "end": 2554.04, "start": 2547.48, "text": " transformers had already this will certainly make it worse but also you" }, { "end": 2557.6, "start": 2554.04, "text": " need to decide how many steps you want to do now you can hard code that of" }, { "end": 2564.7599999999998, "start": 2557.6, "text": " course but they say you should do these steps until this norm here until the" }, { "end": 2570.84, "start": 2564.7599999999998, "text": " norm between the old and the new is small enough so where is that so you" }, { "end": 2574.24, "start": 2570.84, "text": " can't measure how close you are to the convergence points right because you" }, { "end": 2579.48, "start": 2574.24, "text": " don't know in practice but you can measure how far you're away you can" }, { "end": 2583.6, "start": 2579.48, "text": " measure where did we have it you can measure this quantity right here that's" }, { "end": 2588.3199999999997, "start": 2583.6, "text": " something you can measure how far two iterates are apart so what you'll simply" }, { "end": 2594.44, "start": 2588.3199999999997, "text": " do is you'll measure that and if that is small enough then you'll you'll stop but" }, { "end": 2600.7599999999998, "start": 2594.44, "text": " that I guess is very related to this so how if you we've already proven it" }, { "end": 2608.2799999999997, "start": 2600.7599999999998, "text": " converges to this X star so I guess we can approximate this quantity right here" }, { "end": 2612.36, "start": 2608.2799999999997, "text": " with the quantity above and that tells you how many updates you need to do and" }, { "end": 2618.7200000000003, "start": 2612.36, "text": " that quantity is linear not only linear but actually here quadratic in n I don't" }, { "end": 2625.4, "start": 2618.7200000000003, "text": " care you know yes it's exponential in the separation but it's quadratic in n" }, { "end": 2633.6, "start": 2625.4, "text": " and if I've learned anything from kind of my fast code courses is that constants" }, { "end": 2637.56, "start": 2633.6, "text": " actually matter when you're not dealing with infinity with an infinite number of" }, { "end": 2645.24, "start": 2637.56, "text": " steps so the number of the number of steps you need to do I guess will" }, { "end": 2649.92, "start": 2645.24, "text": " depend on the sequence length in a quadratic fashion so I'm not sure you" }, { "end": 2655.08, "start": 2649.92, "text": " can always claim this is convergence in one step now I might be super mistaken" }, { "end": 2661.16, "start": 2655.08, "text": " here and none of this will can none of this actually makes a difference in the" }, { "end": 2666.08, "start": 2661.16, "text": " in the light of the exponential decay here but I would just I'm just a bit" }, { "end": 2670.2, "start": 2666.08, "text": " worried saying this usually converges in one step it's clear I guess why they do" }, { "end": 
2675.88, "start": 2670.2, "text": " it right because the attention mechanism in transformers is a one-step application" }, { "end": 2681.7999999999997, "start": 2675.88, "text": " of this rule and this here is kind of a theoretical justification for" }, { "end": 2685.92, "start": 2681.7999999999997, "text": " interpreting this precisely as a hopfield network because you'd say well" }, { "end": 2690.12, "start": 2685.92, "text": " in a hopfield network you would do multiple steps but wait wait we can" }, { "end": 2693.84, "start": 2690.12, "text": " actually prove that even if you interpret it as a hopfield network you" }, { "end": 2697.92, "start": 2693.84, "text": " it can it usually converges after one step so what you're actually doing in a" }, { "end": 2703.56, "start": 2697.92, "text": " transformer is applying a hopfield network update rule to convergence so" }, { "end": 2709.2000000000003, "start": 2703.56, "text": " yeah I'm not yeah I might be bickering on a high level here luxury problems" }, { "end": 2716.92, "start": 2709.2000000000003, "text": " theorem five then says so theorem four is how fast does this converge theorem" }, { "end": 2723.2400000000002, "start": 2716.92, "text": " five the last theorem right here says that the retrieval error of a pattern" }, { "end": 2727.2, "start": 2723.24, "text": " then so this is the this is what you converge to and this is what you've" }, { "end": 2735.3199999999997, "start": 2727.2, "text": " stored is bounded by again something that's exponential in the separation" }, { "end": 2742.8399999999997, "start": 2735.3199999999997, "text": " right here as you can see okay so that was the theorem so if we go quickly" }, { "end": 2747.9599999999996, "start": 2742.8399999999997, "text": " through them again theorems one and two deal with the convergence of this" }, { "end": 2752.4399999999996, "start": 2747.9599999999996, "text": " algorithm and the fact that it actually minimizes the proposed energy then" }, { "end": 2759, "start": 2752.44, "text": " theorem three says you can store exponentially many patterns in terms of" }, { "end": 2766.52, "start": 2759, "text": " the dimension of your space and theorems four and five say that this update rule" }, { "end": 2771.84, "start": 2766.52, "text": " will converge exponentially fast after after one step if you believe that and" }, { "end": 2776.96, "start": 2771.84, "text": " the retrieval error will also go down exponentially fast with the number of" }, { "end": 2783.08, "start": 2776.96, "text": " update steps that you do okay that sounds pretty pretty pretty good but" }, { "end": 2788.52, "start": 2783.08, "text": " we've heard it it's very dependent on how well separated these patterns are" }, { "end": 2794.16, "start": 2788.52, "text": " and it turns out that is you know at least in transformers they aren't" }, { "end": 2800.08, "start": 2794.16, "text": " always well separated and that might be on purpose remember the the states here" }, { "end": 2804.92, "start": 2800.08, "text": " the patterns aren't pre stored like in a classic hopfield network but the" }, { "end": 2810.08, "start": 2804.92, "text": " patterns if you interpret an attention mechanism as this are also generated by" }, { "end": 2815.28, "start": 2810.08, "text": " the network itself so the pattern matrix that you retrieve from and the query are" }, { "end": 2821.12, "start": 2815.28, "text": " generated by the attention mechanism in this case as I said this is applicable" }, { "end": 2829.08, "start": 2821.12, "text": " to 
many many more domains than just this but yeah so there's another slight" }, { "end": 2833.12, "start": 2829.08, "text": " modification that you have to do to make this actually equivalent to an attention" }, { "end": 2838.88, "start": 2833.12, "text": " mechanism and that is you'll have to recast the value because usually what" }, { "end": 2842.88, "start": 2838.88, "text": " you'll do is you have some sort of input and then you make queries keys and" }, { "end": 2847.8399999999997, "start": 2842.88, "text": " values from that using different heads the only thing to make it formally" }, { "end": 2852.88, "start": 2847.8399999999997, "text": " equivalent is you have to make the values generated from the keys so the" }, { "end": 2857.56, "start": 2852.88, "text": " keys give rise to the values as you can see right here you first multiply" }, { "end": 2861.56, "start": 2857.56, "text": " with the key matrix and then with the value matrix" }, { "end": 2870.52, "start": 2861.56, "text": " I doubt that this will change anything the only" }, { "end": 2874.2, "start": 2870.52, "text": " way that could really change anything is if this matrix here were super low" }, { "end": 2880.6, "start": 2874.2, "text": " rank like collapsing the space into very few dimensions which the value" }, { "end": 2885.44, "start": 2880.6, "text": " matrix wouldn't do so you know but just letting you know that the technical" }, { "end": 2894.7200000000003, "start": 2885.44, "text": " equality requires this slight modification okay now we said that it" }, { "end": 2899.28, "start": 2894.7200000000003, "text": " might not always be that this is super well separated and you" }, { "end": 2903.76, "start": 2899.28, "text": " retrieve a single pattern and that's what they research here in a pre trained" }, { "end": 2908.26, "start": 2903.76, "text": " BERT model so they take a pre trained BERT model from I guess hugging" }, { "end": 2915.76, "start": 2908.26, "text": " face and they just run a data set through it and what they do is" }, { "end": 2920.44, "start": 2915.76, "text": " for each attention head because you have" }, { "end": 2926.1600000000003, "start": 2920.44, "text": " multiple of these attention heads in each layer so in each layer you" }, { "end": 2931.1200000000003, "start": 2926.1600000000003, "text": " have multiple of these heads for each head they look at over the course of the" }, { "end": 2938.24, "start": 2931.12, "text": " whole data set what these softmax distributions look like so if you" }, { "end": 2943.2799999999997, "start": 2938.24, "text": " believe that this is a hopfield network and you believe that this converges in" }, { "end": 2948.64, "start": 2943.2799999999997, "text": " one step then if the patterns are well separated what we would expect is a" }, { "end": 2955.9, "start": 2948.64, "text": " distribution as we said like this okay there would be one dominant pattern" }, { "end": 2959.96, "start": 2955.9, "text": " that you retrieve you know that's what you want to retrieve that's what comes" }, { "end": 2965.96, "start": 2959.96, "text": " out and bang you retrieve that exact pattern anything else would" }, { "end": 2969.8, "start": 2965.96, "text": " mean that the hopfield network sort of failed right it wouldn't give you back" }, { "end": 2975.56, "start": 2969.8, "text": " one particular pattern so they set up I think
 a pretty" }, { "end": 2982.12, "start": 2975.56, "text": " smart experiment they look at how many of these bars" }, { "end": 2988.42, "start": 2982.12, "text": " in the softmax distribution we need to add up to reach 90% so it depends a bit" }, { "end": 2992.16, "start": 2988.42, "text": " on the temperature of the softmax which is hard coded in the attention mechanism" }, { "end": 3000.44, "start": 2992.16, "text": " beta is 1 over the square root of d so they ask how many do we need to add to get to 0.9" }, { "end": 3009.2000000000003, "start": 3000.44, "text": " to 90% of the mass of this distribution a small sketch of this k statistic follows after this transcript and if this is the hopfield network" }, { "end": 3014, "start": 3009.2000000000003, "text": " case where you retrieve one pattern then one will be enough right one of these bars" }, { "end": 3020.52, "start": 3014, "text": " will probably be I don't know like 99% okay but there are other cases imagine" }, { "end": 3026.8, "start": 3020.52, "text": " the case where the spheres that the patterns" }, { "end": 3033.8, "start": 3026.8, "text": " give rise to are all overlapping okay so what that will do is it won't" }, { "end": 3039.8, "start": 3033.8, "text": " converge to any particular pattern but rather to a point in between so you" }, { "end": 3044.84, "start": 3039.8, "text": " can imagine if you have two spheres that are apart from each other the update" }, { "end": 3049.1600000000003, "start": 3044.84, "text": " rule converges to either one so if it's closer to here it'll converge here if it's closer" }, { "end": 3056.6000000000004, "start": 3049.1600000000003, "text": " to here it'll converge here but if they are overlapping like this the energy" }, { "end": 3061.32, "start": 3056.6000000000004, "text": " landscape will actually make it such that if it starts" }, { "end": 3064.96, "start": 3061.32, "text": " somewhere it will neither converge to here nor to here it will actually" }, { "end": 3071.2, "start": 3064.96, "text": " converge to somewhere in the middle okay to the mean of the stored patterns and" }, { "end": 3077.88, "start": 3071.2, "text": " if we take that to the extreme it could be that the softmax" }, { "end": 3083.88, "start": 3077.88, "text": " distribution looks completely uniform okay which would basically mean" }, { "end": 3088.12, "start": 3083.88, "text": " I don't care where my information comes from just average and this has its" }, { "end": 3094.36, "start": 3088.12, "text": " applications so if you for example want to make a sentiment classifier a very cheap" }, { "end": 3097.88, "start": 3094.36, "text": " way to do that is to simply take pre-trained word embeddings like glove" }, { "end": 3103.2400000000002, "start": 3097.88, "text": " or word2vec assign each word its word embedding and then just average the" }, { "end": 3106.6400000000003, "start": 3103.2400000000002, "text": " word embeddings okay and you count on the fact that if there are a lot of" }, { "end": 3112.76, "start": 3106.6400000000003, "text": " negative words in there like bad sad angry the word embeddings will" }, { "end": 3116.5, "start": 3112.76, "text": " reflect that and the average word embedding will point more into the" }, { "end": 3121.28, "start": 3116.5, "text": " bad direction and if there's a lot of happy words the average will point into" }, { "end": 3126.8, "start": 3121.28, "text": " the happy direction okay so there are applications of averaging information" 
}, { "end": 3133.92, "start": 3126.8, "text": " not caring particularly where it comes from and in that case what we'd expect" }, { "end": 3139.44, "start": 3133.92, "text": " is that this number and we'll call that so we'll call that the number K in this" }, { "end": 3146.1200000000003, "start": 3139.44, "text": " case it equals one but in this case K equals I guess N the number of inputs" }, { "end": 3152.08, "start": 3146.12, "text": " okay because we need well not maybe N but you know approximately we need almost" }, { "end": 3160.96, "start": 3152.08, "text": " all of them to to reach the 90% okay and there is an in-between and these are" }, { "end": 3167.12, "start": 3160.96, "text": " called these meta stable states where and the in-between is something like you'd" }, { "end": 3173.64, "start": 3167.12, "text": " have a couple of patterns here a couple here and a couple maybe here it's almost" }, { "end": 3180.44, "start": 3173.64, "text": " like a clustering like and these overlap and these overlap and these overlap but" }, { "end": 3184.8799999999997, "start": 3180.44, "text": " they don't overlap with each other which means that if you start somewhere here" }, { "end": 3188.2799999999997, "start": 3184.8799999999997, "text": " you would converge to the mean but not to the mean of all the patterns but just" }, { "end": 3193.3199999999997, "start": 3188.2799999999997, "text": " to the mean of these patterns and here here and here here so this this is like" }, { "end": 3197.64, "start": 3193.3199999999997, "text": " a clustering in latent space right so you can interpret these Hopfield update" }, { "end": 3203.2, "start": 3197.64, "text": " rules as somehow you know getting not going to a particular pattern but going" }, { "end": 3208.16, "start": 3203.2, "text": " to sort of a cluster and this is if you ask something like hey is there any" }, { "end": 3212.56, "start": 3208.16, "text": " adjective around right and all of these patterns they kind of overlap in that" }, { "end": 3217.3199999999997, "start": 3212.56, "text": " space in that query space of adjective they overlap and therefore the update" }, { "end": 3221.7999999999997, "start": 3217.3199999999997, "text": " rule would converge to sort of the mean which would basically say yes there is" }, { "end": 3227.4399999999996, "start": 3221.7999999999997, "text": " an adjective here right and the information would not be routed so that" }, { "end": 3232.2799999999997, "start": 3227.4399999999996, "text": " the distribution if we start here writing we converge to this the" }, { "end": 3235.76, "start": 3232.28, "text": " distribution would look something like small small small and then you'd have a" }, { "end": 3242.0400000000004, "start": 3235.76, "text": " couple of large ones all right you'd have like maybe two or three or four of" }, { "end": 3247.0400000000004, "start": 3242.0400000000004, "text": " large ones and these would exactly correspond to the patterns here so the" }, { "end": 3253.6400000000003, "start": 3247.0400000000004, "text": " information will be routed from all of those in that cluster to this particular" }, { "end": 3258.98, "start": 3253.6400000000003, "text": " note that asks the query okay these are these are what's called these meta stable" }, { "end": 3263.12, "start": 3258.98, "text": " states and what they do is they calculate over the entire data set this" }, { "end": 3268.6, "start": 3263.12, "text": " number K and here they show you the distribution so in these plots what" }, { "end": 3274.76, 
"start": 3268.6, "text": " you'll see is over the entire data set K goes into that direction so I guess" }, { "end": 3282.36, "start": 3274.76, "text": " let's go to Tiz here this this seems pretty easy so K is in this direction" }, { "end": 3289.7200000000003, "start": 3282.36, "text": " and this is simply the amount of like how so in each you let a data point run" }, { "end": 3293.84, "start": 3289.7200000000003, "text": " through it you measure K for that particular layer one you see this is" }, { "end": 3300.96, "start": 3293.84, "text": " layer one head four okay this is one layer one attention head and then you" }, { "end": 3310.6400000000003, "start": 3300.96, "text": " can see that the number K is distributed like this okay so contrast this to this" }, { "end": 3316.2799999999997, "start": 3310.64, "text": " head right here where it's a lot of weight on the number one or like very" }, { "end": 3322, "start": 3316.2799999999997, "text": " few numbers okay so these blue ones would be these are your typical like" }, { "end": 3326.8399999999997, "start": 3322, "text": " when you retrieve one particular pattern so this attention head we can sort of" }, { "end": 3332.72, "start": 3326.8399999999997, "text": " conclude in this particular attention head this is very specific it looks at" }, { "end": 3339, "start": 3332.72, "text": " its input it looks at its token and it decides what information do I want and" }, { "end": 3346.16, "start": 3339, "text": " it retrieves one particular thing from the other nodes okay whereas here it's" }, { "end": 3351.44, "start": 3346.16, "text": " more like kind of an averaging it's more like I want this kind of information" }, { "end": 3356.04, "start": 3351.44, "text": " and on average I don't even know what the sequence length is here I guess it's" }, { "end": 3364.56, "start": 3356.04, "text": " maybe 512 so of the 512 the median this number is always the median and median" }, { "end": 3372, "start": 3364.56, "text": " it collects information from 231 of them okay so you can see that this" }, { "end": 3376.56, "start": 3372, "text": " corresponds these green and orange ones correspond to these meta stable states" }, { "end": 3383, "start": 3376.56, "text": " where there's kind of an implicit clustering done in the in this space of" }, { "end": 3387.2799999999997, "start": 3383, "text": " attention whereas the blue ones they correspond to attention heads that ask" }, { "end": 3393.96, "start": 3387.2799999999997, "text": " for particular information retrieve one particular maybe few patterns and happy" }, { "end": 3400.5, "start": 3393.96, "text": " with that and the red ones here you can see that they often just average they" }, { "end": 3406.56, "start": 3400.5, "text": " just you know because K is so high means that I need all of the I need all of" }, { "end": 3410.76, "start": 3406.56, "text": " these bars to get to the 90% or I need almost all of them which basically means" }, { "end": 3415.8, "start": 3410.76, "text": " it's a uniform distribution right so it's like I don't care where information" }, { "end": 3420.32, "start": 3415.8, "text": " comes from just average whatever average I just want the average in some" }, { "end": 3428.56, "start": 3420.32, "text": " particular space and as we said that also has its uses interesting how this" }, { "end": 3433.2400000000002, "start": 3428.56, "text": " translate through so this here is as we go down the BERT model on the bottom of" }, { "end": 3437.2400000000002, "start": 3433.2400000000002, "text": 
" layer one you see there are a lot of these averaging operations going on so a" }, { "end": 3441.88, "start": 3437.2400000000002, "text": " lot of the heads are simply doing averaging and as you go up the layers" }, { "end": 3448.6000000000004, "start": 3441.88, "text": " the heads get more and more specific in the types of information they seek but" }, { "end": 3452.68, "start": 3448.6, "text": " then again in the last layers interestingly you get into a lot of" }, { "end": 3459.64, "start": 3452.68, "text": " these meta stable states again which I guess I get interpret this as you as you" }, { "end": 3464.2, "start": 3459.64, "text": " want I'm gonna leave this up to you but it sort of says like here you want kind" }, { "end": 3468.2, "start": 3464.2, "text": " of general patterns at the bottom and then the middle layers are kind of the" }, { "end": 3473.38, "start": 3468.2, "text": " logical workhorses so you look for very specific things in the input this is" }, { "end": 3479.96, "start": 3473.38, "text": " this is where I guess this is where the thinking happens so this is sort of" }, { "end": 3486.48, "start": 3479.96, "text": " pre-processing I'm just making stuff up here by the way this is this must be a" }, { "end": 3494.92, "start": 3486.48, "text": " no way true this is maybe thinking and this this here this might already be" }, { "end": 3498.2200000000003, "start": 3494.92, "text": " output again because you know after that you have language modeling or" }, { "end": 3505, "start": 3498.22, "text": " classification so this might already be like aggregating types of information" }, { "end": 3512.52, "start": 3505, "text": " this is how I sort of interpreted okay yeah so so this these these experiments" }, { "end": 3519.7999999999997, "start": 3512.52, "text": " are pretty pretty pretty interesting and here they have they do these are the" }, { "end": 3524.04, "start": 3519.7999999999997, "text": " last experiments for this paper they do an interesting experiment where they" }, { "end": 3530.88, "start": 3524.04, "text": " actually replace the attention heads by simply an average mechanism and later" }, { "end": 3535.12, "start": 3530.88, "text": " they actually replace them by Gaussians but in this case they simply average and" }, { "end": 3540.9, "start": 3535.12, "text": " they show that look if I replace layer one with this averaging the perplexity" }, { "end": 3547.32, "start": 3540.9, "text": " doesn't rise that much so it's pretty good even if I replace an entire layer" }, { "end": 3553.02, "start": 3547.32, "text": " here with averaging it perplexity goes more up and you can see the" }, { "end": 3556.28, "start": 3553.02, "text": " correspondence if you remember the previous plot the correspondence is" }, { "end": 3562.62, "start": 3556.28, "text": " pretty one-to-one with how much blue and green heads there are as contrast to how" }, { "end": 3570.2599999999998, "start": 3562.62, "text": " much red and orange ones there are so here you have lots of blue ones and you" }, { "end": 3577.16, "start": 3570.2599999999998, "text": " can see that the error kind of goes up and interestingly here you have more" }, { "end": 3582.52, "start": 3577.16, "text": " meta stable states at the end but still the perplexity goes up more so I guess" }, { "end": 3588.2, "start": 3582.52, "text": " you can only really replace the red ones with the averaging so this is always" }, { "end": 3596.22, "start": 3588.2, "text": " averaging in one particular layer and they go into more detail here 
where they" }, { "end": 3601.12, "start": 3596.22, "text": " say look this is this is layer 6 and this is layer 12 so this is one" }, { "end": 3605.44, "start": 3601.12, "text": " particular attention head from layer 6 and layer 12 and the updates don't be" }, { "end": 3610.34, "start": 3605.44, "text": " confused it goes in this direction okay I was confused at first and you can see" }, { "end": 3615.56, "start": 3610.34, "text": " right here this number K at first you know it's kind of spread out but then" }, { "end": 3621.44, "start": 3615.56, "text": " it pretty quickly converges to a very small number and there is this kind of" }, { "end": 3624.52, "start": 3621.44, "text": " point right here I don't know if the learning rates decrease I don't think so" }, { "end": 3628.52, "start": 3624.52, "text": " I think that's just kind of a a phase transition right here this is the blue" }, { "end": 3633.84, "start": 3628.52, "text": " line by the way the blue training line a phase transition where all of a sudden" }, { "end": 3638.4, "start": 3633.84, "text": " these just these attention heads they somehow decide okay this is the thing I" }, { "end": 3643.2400000000002, "start": 3638.4, "text": " want to specialize in this is the type of task I want like a sub task of" }, { "end": 3647.58, "start": 3643.2400000000002, "text": " linguistic sub task I want to specialize in and then they concentrate on one" }, { "end": 3653.58, "start": 3647.58, "text": " particular pattern per input so they are really specializing whereas in the last" }, { "end": 3658.76, "start": 3653.58, "text": " layer you see here that even during training they are sort of continuously" }, { "end": 3663.78, "start": 3658.76, "text": " learning so first they also do this averaging then they go into this meta" }, { "end": 3669.5400000000004, "start": 3663.78, "text": " stable region right this is this meta stable region K isn't one but also K" }, { "end": 3676.44, "start": 3669.5400000000004, "text": " isn't a very high number so they continuously learn and it's even" }, { "end": 3681.96, "start": 3676.44, "text": " indicative of this training might not be done here first of all and second of all" }, { "end": 3686.44, "start": 3681.96, "text": " it would be really interesting to see how this works out with you know sizes of" }, { "end": 3690.7200000000003, "start": 3686.44, "text": " transformers and like especially these these huge transformers just the fact" }, { "end": 3696.9599999999996, "start": 3690.72, "text": " that they can keep learning the more we train them might be you know be" }, { "end": 3702.4399999999996, "start": 3696.9599999999996, "text": " interpreted in the light of what kind of states they converge to and the fact" }, { "end": 3707, "start": 3702.4399999999996, "text": " that their attention heads I don't know how does this go on do they stay in the" }, { "end": 3711.04, "start": 3707, "text": " meta stable states because it makes sense to have meta stable states as I" }, { "end": 3716.8599999999997, "start": 3711.04, "text": " said it makes sense to kind of cluster things or are they simply into is this" }, { "end": 3721.04, "start": 3716.86, "text": " simply an intermediate step and if you go really far down they would actually" }, { "end": 3727.1600000000003, "start": 3721.04, "text": " also converge to the K equals one where they really specialize or if you do we" }, { "end": 3731.6800000000003, "start": 3727.1600000000003, "text": " need more attention heads for this I don't know it's just I think 
this is" }, { "end": 3737.1200000000003, "start": 3731.6800000000003, "text": " just the the beginning of kind of research in this direction I think just" }, { "end": 3743.8, "start": 3737.1200000000003, "text": " this kind of number K how it's how it's made it's pretty simple and apparently" }, { "end": 3750.04, "start": 3743.8, "text": " it's pretty pretty revealing so you know that's pretty cool so that was the paper" }, { "end": 3755.44, "start": 3750.04, "text": " and its experiments it's it's a pretty sizable paper as I said even the paper" }, { "end": 3760.7200000000003, "start": 3755.44, "text": " itself is ten pages and then there is this immune repertoire classification" }, { "end": 3766.7200000000003, "start": 3760.7200000000003, "text": " which I will like spend one minute looking at it so you have you have these" }, { "end": 3771.6400000000003, "start": 3766.7200000000003, "text": " set classifications so for each human you obtain a set of immune receptors and" }, { "end": 3776.64, "start": 3771.64, "text": " you simply obtain one label whether that human is immune to a particular disease" }, { "end": 3781.52, "start": 3776.64, "text": " or not and your task is kind and then a different human has a different set you" }, { "end": 3786.96, "start": 3781.52, "text": " have no idea which one of these things is responsible for it being for the" }, { "end": 3794.52, "start": 3786.96, "text": " human being for the human being immune or not in fact there is a you can't even" }, { "end": 3799.92, "start": 3794.52, "text": " decide based on these you can only decide based on like sub sequences of" }, { "end": 3803.96, "start": 3799.92, "text": " these and they might be in combination with each other so there might not be a" }, { "end": 3807.84, "start": 3803.96, "text": " single one responsible but like a combination but you don't have labels for" }, { "end": 3811.32, "start": 3807.84, "text": " the individual ones and you have different ones per human and they are" }, { "end": 3817.76, "start": 3811.32, "text": " different length all of this is just a giant giant task and you have many of" }, { "end": 3823.44, "start": 3817.76, "text": " them you have tens of thousands per human right so they build a system here" }, { "end": 3827.76, "start": 3823.44, "text": " where first they do these 1d convolutions to process the inside" }, { "end": 3834.6400000000003, "start": 3827.76, "text": " sequences and then they do this hop field attention mechanism or with with" }, { "end": 3840.76, "start": 3834.6400000000003, "text": " learned queries over these things and then they train on the output label and" }, { "end": 3846.1600000000003, "start": 3840.76, "text": " surprisingly that actually works even with tens of thousands of inside" }, { "end": 3853.28, "start": 3846.1600000000003, "text": " sequences and only one label for all of them and so they they achieve I guess" }, { "end": 3859.0800000000004, "start": 3853.28, "text": " favorable results compared to other baselines on this task using these hop" }, { "end": 3863.2000000000003, "start": 3859.0800000000004, "text": " field network which is pretty interesting but I let you look at that" }, { "end": 3870.1600000000003, "start": 3863.2000000000003, "text": " paper yourself so I hope this somehow made it a bit clear what happens here" }, { "end": 3877, "start": 3870.1600000000003, "text": " and it would actually be pretty interesting if we you know to see what" }, { "end": 3883.2000000000003, "start": 3877, "text": " happens if we just do 
maybe two rounds of these updates is this even desirable" }, { "end": 3888.24, "start": 3883.2, "text": " right is it desirable to run this to convergence is there something good" }, { "end": 3891.8399999999997, "start": 3888.24, "text": " about not running into convergence or does it actually not matter because it" }, { "end": 3897.3599999999997, "start": 3891.8399999999997, "text": " actually does converge in one step I don't know but have a look at the code" }, { "end": 3903.7999999999997, "start": 3897.3599999999997, "text": " it's pretty cool and I hope you enjoyed this video I'm sure you have many open" }, { "end": 3909.56, "start": 3903.7999999999997, "text": " questions as do I don't hesitate to ask me in the comments or join our discord" }, { "end": 3913.96, "start": 3909.56, "text": " as I said there are lots of helpful people on our discord and I'll see you" }, { "end": 3940.84, "start": 3913.96, "text": " next time bye bye" } ]
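For the immune repertoire classification described above, here is a minimal PyTorch sketch of the idea: embed each receptor sequence with a 1D convolution, then pool the whole variable-size set with a Hopfield-style attention step using a learned query, and predict the single per-patient label. Layer sizes, the single attention head, and the beta value are illustrative assumptions, not the paper's actual architecture.

import torch
import torch.nn as nn
import torch.nn.functional as F

class RepertoireClassifier(nn.Module):
    def __init__(self, vocab_size=21, emb=32, feat=64, beta=1.0):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb)
        self.conv = nn.Conv1d(emb, feat, kernel_size=5, padding=2)
        self.query = nn.Parameter(torch.randn(feat))  # learned query pattern
        self.beta = beta                              # inverse temperature
        self.out = nn.Linear(feat, 1)

    def forward(self, seqs):
        # seqs: (set_size, seq_len) integer-encoded sequences of one patient
        x = self.embed(seqs).transpose(1, 2)            # (set, emb, len)
        feats = F.relu(self.conv(x)).max(dim=2).values  # one vector per sequence
        # Hopfield-style retrieval: softmax(beta * q K^T) V with K = V = feats
        attn = torch.softmax(self.beta * (feats @ self.query), dim=0)  # (set,)
        pooled = attn @ feats                           # weighted set summary
        return self.out(pooled)                         # logit for the patient label

model = RepertoireClassifier()
patient = torch.randint(0, 21, (1000, 20))  # e.g. 1000 receptors of length 20
logit = model(patient)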
udS2OPohs_s
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
I TRAINED AN AI TO SOLVE 2+2 (w/ Live Coding)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "2+2", "twitter", "woke", "math", "algebra", "james lindsay", "culture", "definition", "gan", "generative adversarial networks", "generator", "discriminator", "live coding", "deep learning tutorial" ]
#ai #tech #code A whole bunch of humans are arguing whether 2+2=4 or 2+2=5. Pointless! Let the machines handle this! Colab: https://colab.research.google.com/drive/1tDjFW7CFGQG8vHdUAVNpr2EG9z0JZGYC?usp=sharing Disclaimer: This is a joke. Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://discord.gg/4H8xxDF BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher Parler: https://parler.com/profile/YannicKilcher LinkedIn: https://www.linkedin.com/in/yannic-kilcher-488534136/ If you want to support me, the best thing to do is to share out the content :) If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this): SubscribeStar: https://www.subscribestar.com/yannickilcher Patreon: https://www.patreon.com/yannickilcher Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2 Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n
Hi there, you might have seen the recent debate about 2 plus 2, where everyone tries to weigh in, the big question being is 2 plus 2 equal to 4, or is 2 plus 2 equal to 5? And for some reason, the entirety of Western civilization hangs in the balance right here. But everyone's missing the point. Everyone's just kind of arguing about this. But I want to point something out right here. Let's have a look at the accounts arguing right here. You know, James, Eric, you know what all of these have in common? They're humans. Humans arguing about fundamental questions of the universe and culture. What could possibly go wrong? So today, we're going to replace fallible, weak-minded humans with AI. We're going to build an AI that's going to answer the question, what is 2 plus 2? Now the first thing we're going to do is to import PyTorch. If you're using TensorFlow, what's wrong with you? Come on, just checking whether CUDA is available. CUDA is basically shorthand in AI for magic. So don't worry about that part. Now we're going to borrow quite a bit of code from the PyTorch example, because they've already implemented sort of the same thing. So the model we're going to use right here is going to be a generative adversarial network. Now you might be wondering, hey, is it really smart to build AI on something that's called adversarial? Isn't that a little bit dangerous? To that I say, all right now, so we're going to grab the code from over here. First thing we need is the model itself. Now the model is composed of a generator and a discriminator. The generator is right here. Plink plonk. Let's plop that in here. That looks good. Look at that generator. Those convolutions, batch norms, ReLUs. This is going to be so artificial and so intelligent, you won't believe it. So the generator is responsible for basically outputting things. In our case, we're going to input a two and a plus and a two, and then the output should be, you know, whatever the result of that is. Now as a data set, we're going to use the famous MNIST data set. This data set is a very challenging data set. It's a very large data set. But I think in order to tackle an important question like this, we need to go for the crème de la crème of data sets. So MNIST is a data set that contains a lot of these handwritten digits. You might think these are just numbers, but these are more than numbers. These numbers have a meaning. So the computer just sees this in numbers. But as a human, you would see this right here. Behold the zero. This data set is filled with digits. Four. Wow. That's one of the things we need. Look at that. A nine. Beautiful. Beautiful. So our goal is going to be to try to make the network learn what two plus two is. Now if you know machine learning, you know that you need training data. So we need a labeled data set of two plus two equals and then whatever two plus two equals. So first, we're going to filter out all of the examples that show a two. So we need to train this network, right? So we need a number of training steps, you know, in AI, we like to train for a lot of steps. Let's just go for 9000. What we'll do is we'll train 9000 times 64 images and the AI is going to learn what two plus two is. Alright, so in each step, we need to create a batch of training samples. What we need is a two, a plus, and a two. So for the twos, we can just select two of the twos that we had before. Now the plus is a little bit more tricky. So in order to make a plus, there's none in the MNIST data set. 
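For reference, a minimal sketch of this data setup, assuming the standard torchvision MNIST loader; the notebook's actual code may differ.

import torch
from torchvision import datasets

mnist = datasets.MNIST("./data", train=True, download=True)
images = mnist.data.float() / 255.0   # (60000, 28, 28)
labels = mnist.targets                # (60000,)
twos = images[labels == 2]            # keep only the handwritten twos
print(twos.shape)                     # roughly 6000 twos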
You have to understand the MNIST data set is also quite old. I think it was invented before the plus sign was invented. So that's not in the data set. So we have to create a plus by ourselves. It's going to be hard, but we'll give it a try. Now I'm usually way too dumb to use mesh grid, but I'm just gonna try. I mean, you know, what can go wrong? Okay, so as you can see, we're absolutely on the wrong track right here. Ladies and gentlemen, the most beautiful plus in the history of AI. Alright, so we got a plus and we got all of our two's. So now let's put them together. Look at that. Two plus two, next sample, two plus two, next sample, two plus two. So our AI is going to be trained on data samples just like this. Now in order to make the generator accept samples like this, we sort of need to change a little bit because if we try to just put this into the generator, probably it won't work. You see, there's an error. The generator is not artificially intelligent enough yet. So we need to make it take samples. So our samples are of size 28 by 84. And what the generator right now expects is a sample of size 100 by 512 by four by four. So you may notice we have never made use of our batch size. So let's fix that right now. So now we're training in batches of images, but it's still not cool for the generator. So we need to change the generator right here. What's this good for? Nothing, nothing. All right, so it expects the input to be of a certain size. And we are going to change that right here. We also don't want any strides. Strides are for losers. And let's see where that gets us. Okay, so we made our generator accept images that we want and produce images of the size that we want. Now the entire question here is we need labels for our training data set because who's to say what two plus two is. And as I said, usually I would outsource this to grad students, but these are humans as well. So we're kind of in a pinch right here. So what we're going to do is employ a heuristic. We're going to ask our machine right here what two plus two for the training examples is. Okay. So in Python, you can do this by typing two plus two. And you know, in this case, that happens to be four, but who knows? So for each of these training examples, we're going to take the class label, which is provided in the data set, and we're going to take these class labels and add them together. And whatever comes out is going to be the label for this. In this case, it's four, you know, but it could be anything. And we're just going to use these as training data for our model. So for that, we're going to need the label of the first sample and the label of the second sample. And our final label is simply going to be label one plus the label two. As I said, this is a heuristic for training the AI. Now, usually in a generative adversarial network or a GAN for short, you'd have something that's called a generator, which we do. And you'd have something that's called a discriminator. Now, I have my problems with this discrimination. There is no space for discrimination in the AI field. So we're going to leave away the discriminator right here. I'm sorry. I'm sorry. We're going to directly go to the loss from the generator. In order to calculate the loss, we need a reference. And for that, we're simply going to go to our data set with our label and find any of the images that correspond to that label. 
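A hedged sketch of the steps just described: draw a plus sign with meshgrid, glue two twos around it into one 28 by 84 input, and use the machine-given label (label one plus label two) to fetch a reference image as the training target. The bar width of the plus is an assumption; twos, images and labels are from the sketch above.

import torch

yy, xx = torch.meshgrid(torch.arange(28), torch.arange(28), indexing="ij")
plus = (((yy > 10) & (yy < 17)) | ((xx > 10) & (xx < 17))).float()  # a crude plus

def make_batch(twos, images, labels, batch_size=64):
    i = torch.randint(len(twos), (batch_size,))
    j = torch.randint(len(twos), (batch_size,))
    # concatenate [two, plus, two] along the width -> (batch, 28, 84)
    x = torch.cat([twos[i], plus.expand(batch_size, 28, 28), twos[j]], dim=2)
    target_label = 2 + 2               # the heuristic label for a pair of twos
    fours = images[labels == target_label]
    y = fours[torch.randint(len(fours), (batch_size,))]  # any image of a four
    return x, y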
So if our heuristic, if our oracle says two plus two is equal to nine, we're just going to go to our data set, get a nine and put that as a training output. Okay. Okay. So if we look at one of the labels, it just happens to be a four in this case, but we're going to go through the entire number of 9000 steps. And in each step, we'll train 64 of these different combinations of two plus two. And we'll give one of the labels each time and we'll see what the AI comes up with. For that, we need a loss. Now the loss we're going to use here is going to be the L2 loss. Now there's some controversy, but you know, it is the most powerful loss, proven, and we have to employ the most powerful tools. So let's do that. So our loss here at the beginning is 509. Now that's a lot of loss. That's a big loss. We need to get that loss down. And to do that, we need one of these optimizers. Now optimizers are kind of the secret workhorses of AI and people don't talk about them enough. I wish there was like a field of research that deals with optimizers, like could be called optimization or something like this. I'm not sure. I just, I just think it would make a lot of sense. So my favorite learning rate is 3e-4, just because it contains all of the different things, like a letter and a dash. And that seems like a pretty good thing to do. So we're going to use Adam here as an optimizer. Adam, you know, I don't know Adam personally, but I know a couple of his friends and they tell me he's pretty good. So you know, it's going to go zero grad, and I'm dumb, so I need to look up how to use an optimizer, and boom. Okay. Okay. So it's again a four. I'm sorry about this. I think this is it. This is it. This is AI history right here, right now, for five steps, 10 steps. All right. I have waited and waited and waited and it's finally done. We have now trained the generator to calculate what two plus two equals from the training data set. So now we actually need to ask it what is two plus two. And of course we can't ask it a sample that it has already seen. We need to take a new sample from the test set, as is customary in machine learning. So let's get the MNIST test set. Now the test data set consists of images, as does the train data set, but the model has never seen the test data set before. This is a property we call generalization. So let's find two nice twos. All right. That's the first one. Okay. These are two nice twos. Let's put them together. Okay. So this is going to be our input to the generator. Okay. So I'm putting the test sample here into the generator that is trained, and I've labeled the output in all caps just to tell the model that this is a really important computation. I'm just going to run this cell a couple of times just to make sure that the generator is in fact very sure about how important that is. All right. I think that's enough. Let's have a look at that final output. I'm shaking. Are you ready for AI history?
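A minimal sketch of this training loop: a stand-in network maps the 28 by 84 question image to a 28 by 28 answer image, trained with the L2 loss against a randomly chosen image of the heuristic label, using Adam at 3e-4. The stand-in generator below is an assumption; the video modifies the DCGAN generator instead.

import torch
import torch.nn as nn
import torch.nn.functional as F

generator = nn.Sequential(               # stand-in for the modified generator
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1), nn.AdaptiveAvgPool2d((28, 28)),
)
opt = torch.optim.Adam(generator.parameters(), lr=3e-4)
for step in range(9000):
    x, y = make_batch(twos, images, labels, batch_size=64)  # from the sketch above
    pred = generator(x.unsqueeze(1)).squeeze(1)  # add/remove the channel dimension
    loss = F.mse_loss(pred, y)                   # the L2 loss
    opt.zero_grad()
    loss.backward()
    opt.step()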
[ { "end": 5.08, "start": 0, "text": " Hi there, you might have seen the recent debate about 2 plus 2, where everyone tries to weigh" }, { "end": 12.040000000000001, "start": 5.08, "text": " in, the big question being is 2 plus 2 equal to 4, or is 2 plus 2 equal to 5?" }, { "end": 19.44, "start": 12.040000000000001, "text": " And for some reason, the entirety of Western civilization hangs in the balance right here." }, { "end": 21.68, "start": 19.44, "text": " But everyone's missing the point." }, { "end": 24.04, "start": 21.68, "text": " Everyone's just kind of arguing about this." }, { "end": 26.46, "start": 24.04, "text": " But I want to point something out right here." }, { "end": 29.92, "start": 26.46, "text": " Let's have a look at the accounts arguing right here." }, { "end": 34.92, "start": 29.92, "text": " You know, James, Eric, you know what all of these have in common?" }, { "end": 36.92, "start": 34.92, "text": " They're humans." }, { "end": 41.8, "start": 36.92, "text": " Humans arguing about fundamental questions of the universe and culture." }, { "end": 43.040000000000006, "start": 41.8, "text": " What could possibly go wrong?" }, { "end": 49.28, "start": 43.040000000000006, "text": " So today, we're going to replace fallible, weak minded humans by AI." }, { "end": 56.08, "start": 49.28, "text": " We're going to build an AI that's going to answer the question, what is 2 plus 2?" }, { "end": 59.08, "start": 56.08, "text": " Now first thing we're going to do is to import PyTorch." }, { "end": 62.239999999999995, "start": 59.08, "text": " If you're using TensorFlow, what's wrong with you?" }, { "end": 65.84, "start": 62.239999999999995, "text": " Come on, just checking whether CUDA is available." }, { "end": 69.1, "start": 65.84, "text": " CUDA is basically shorthanded AI for magic." }, { "end": 71.08, "start": 69.1, "text": " So don't worry about that part." }, { "end": 76.02, "start": 71.08, "text": " Now we're going to borrow quite a bit of code from the PyTorch example, because they've" }, { "end": 78.68, "start": 76.02, "text": " already implemented sort of the same thing." }, { "end": 84.08, "start": 78.68, "text": " So the model we're going to use right here is going to be a generative adversarial network." }, { "end": 88.88, "start": 84.08, "text": " Now you might be wondering, hey, is it really smart to build AI on something that's called" }, { "end": 89.88, "start": 88.88, "text": " adversarial?" }, { "end": 92.28, "start": 89.88, "text": " Isn't that a little bit dangerous?" }, { "end": 97.34, "start": 92.28, "text": " To that I say, all right now, so we're going to grab the code from over here." }, { "end": 99.75999999999999, "start": 97.34, "text": " First thing we need is the model itself." }, { "end": 103.38, "start": 99.75999999999999, "text": " Now the model is composed of a generator and a discriminator." }, { "end": 105.44, "start": 103.38, "text": " The generator is right here." }, { "end": 106.44, "start": 105.44, "text": " Plink plonk." }, { "end": 108.48, "start": 106.44, "text": " Let's plop that in here." }, { "end": 109.88, "start": 108.48, "text": " That looks good." }, { "end": 112, "start": 109.88, "text": " Look at that generator." }, { "end": 115.8, "start": 112, "text": " Those convolutions, batch norms, relu's." }, { "end": 120.52, "start": 115.8, "text": " This is going to be so artificial and so intelligent, you won't believe it." }, { "end": 125.42, "start": 120.52, "text": " So the generator is responsible for basically outputting things." 
}, { "end": 131.34, "start": 125.42, "text": " In our case, what we're going to input a two and a plus and a two, and then the output" }, { "end": 135, "start": 131.34, "text": " should be, you know, whatever the result of that is." }, { "end": 139.18, "start": 135, "text": " Now as a data set, we're going to use the famous MNIST data set." }, { "end": 142.36, "start": 139.18, "text": " This data set is a very challenging data set." }, { "end": 143.70000000000002, "start": 142.36, "text": " It's very large data set." }, { "end": 148.56, "start": 143.70000000000002, "text": " But I think in order to tackle an important question like this, we need to go for the" }, { "end": 150.96, "start": 148.56, "text": " cram de la cram of data sets." }, { "end": 156.26000000000002, "start": 150.96, "text": " So MNIST is a data set that contains a lot of these handwritten digits." }, { "end": 159.64000000000001, "start": 156.26000000000002, "text": " You might think these are just numbers, but these are more than numbers." }, { "end": 161.28, "start": 159.64000000000001, "text": " These numbers have a meaning." }, { "end": 164.16, "start": 161.28, "text": " So the computer just sees this in numbers." }, { "end": 167.5, "start": 164.16, "text": " But as a human, you would see this right here." }, { "end": 169.72, "start": 167.5, "text": " Be the zero." }, { "end": 172.88, "start": 169.72, "text": " This data set is filled with digits." }, { "end": 173.88, "start": 172.88, "text": " Four." }, { "end": 174.88, "start": 173.88, "text": " Wow." }, { "end": 176.36, "start": 174.88, "text": " That's one of the things we need." }, { "end": 177.36, "start": 176.36, "text": " Look at that." }, { "end": 178.36, "start": 177.36, "text": " A nine." }, { "end": 179.36, "start": 178.36, "text": " Beautiful." }, { "end": 180.36, "start": 179.36, "text": " Beautiful." }, { "end": 185.6, "start": 180.36, "text": " So our goal is going to be to try to make the network learn what two plus two is." }, { "end": 189.02, "start": 185.6, "text": " Now if you know machine learning, you know that you need training data." }, { "end": 196.08, "start": 189.02, "text": " So we need a labeled data set of two plus two equals and then whatever two plus two" }, { "end": 197.08, "start": 196.08, "text": " equals." }, { "end": 201.72, "start": 197.08, "text": " So first, we're going to filter out all of the examples where that show a two." }, { "end": 204.36, "start": 201.72, "text": " So we need to train this network, right?" }, { "end": 209, "start": 204.36, "text": " So we need a number of training steps, you know, in AI, we like to train for a lot of" }, { "end": 210, "start": 209, "text": " steps." }, { "end": 213.20000000000002, "start": 210, "text": " Let's just go for 9000." }, { "end": 218.9, "start": 213.20000000000002, "text": " What we'll do is we'll train 9000 times 64 images and the AI is going to learn what two" }, { "end": 219.9, "start": 218.9, "text": " plus two is." }, { "end": 224.70000000000002, "start": 219.9, "text": " Alright, so in each step, we need to create a batch of training samples." }, { "end": 227.83999999999997, "start": 224.7, "text": " What we need is a two plus and two." }, { "end": 233.38, "start": 227.83999999999997, "text": " So for the two is we can just select two of the two's that we had before." }, { "end": 236.2, "start": 233.38, "text": " Now the plus is a little bit more tricky." 
}, { "end": 241.01999999999998, "start": 236.2, "text": " So in order to make a plus, there's none in the MNIST data set." }, { "end": 244, "start": 241.01999999999998, "text": " You have to understand the MNIST data set is also quite old." }, { "end": 248.12, "start": 244, "text": " I think it was invented before the plus sign was invented." }, { "end": 249.66, "start": 248.12, "text": " So that's not in the data set." }, { "end": 252.12, "start": 249.66, "text": " So we have to create a plus by ourselves." }, { "end": 255.16, "start": 252.12, "text": " It's going to be hard, but we'll give it a try." }, { "end": 259.08, "start": 255.16, "text": " Now I'm usually way too dumb to use mesh grid, but I'm just gonna try." }, { "end": 261.16, "start": 259.08, "text": " I mean, you know, what can go wrong?" }, { "end": 268.88, "start": 261.16, "text": " Okay, so as you can see, we're absolutely on the wrong track right here." }, { "end": 274.4, "start": 268.88, "text": " Ladies and gentlemen, the most beautiful plus in the history of AI." }, { "end": 280.4, "start": 274.4, "text": " Alright, so we got a plus and we got all of our two's." }, { "end": 281.98, "start": 280.4, "text": " So now let's put them together." }, { "end": 282.98, "start": 281.98, "text": " Look at that." }, { "end": 292.70000000000005, "start": 282.98, "text": " Two plus two, next sample, two plus two, next sample, two plus two." }, { "end": 297.02000000000004, "start": 292.70000000000005, "text": " So our AI is going to be trained on data samples just like this." }, { "end": 301.70000000000005, "start": 297.02000000000004, "text": " Now in order to make the generator accept samples like this, we sort of need to change" }, { "end": 307.26, "start": 301.70000000000005, "text": " a little bit because if we try to just put this into the generator, probably it won't" }, { "end": 308.26, "start": 307.26, "text": " work." }, { "end": 309.26, "start": 308.26, "text": " You see, there's an error." }, { "end": 312.5, "start": 309.26, "text": " The generator is not artificially intelligent enough yet." }, { "end": 315, "start": 312.5, "text": " So we need to make it take samples." }, { "end": 318.42, "start": 315, "text": " So our samples are of size 28 by 84." }, { "end": 326.42, "start": 318.42, "text": " And what the generator right now expects is a sample of size 100 by 512 by four by four." }, { "end": 329.7, "start": 326.42, "text": " So you may notice we have never made use of our batch size." }, { "end": 331.21999999999997, "start": 329.7, "text": " So let's fix that right now." }, { "end": 336.02, "start": 331.21999999999997, "text": " So now we're training in batches of images, but it's still not cool for the generator." }, { "end": 338.14, "start": 336.02, "text": " So we need to change the generator right here." }, { "end": 340.09999999999997, "start": 338.14, "text": " What's this good for?" }, { "end": 341.09999999999997, "start": 340.09999999999997, "text": " Nothing, nothing." }, { "end": 345.34, "start": 341.09999999999997, "text": " All right, so it expects the input to be of a certain size." }, { "end": 348.3, "start": 345.34, "text": " And we are going to change that right here." }, { "end": 351.9, "start": 348.3, "text": " We also don't want any strides." }, { "end": 353.97999999999996, "start": 351.9, "text": " Strides are for losers." }, { "end": 355.5, "start": 353.97999999999996, "text": " And let's see where that gets us." 
}, { "end": 362.06, "start": 355.5, "text": " Okay, so we made our generator accept images that we want and produce images of the size" }, { "end": 363.18, "start": 362.06, "text": " that we want." }, { "end": 368.5, "start": 363.18, "text": " Now the entire question here is we need labels for our training data set because who's to" }, { "end": 370.66, "start": 368.5, "text": " say what two plus two is." }, { "end": 376.34000000000003, "start": 370.66, "text": " And as I said, usually I would outsource this to grad students, but these are humans as" }, { "end": 377.34000000000003, "start": 376.34000000000003, "text": " well." }, { "end": 379.96000000000004, "start": 377.34000000000003, "text": " So we're kind of in a pinch right here." }, { "end": 382.38, "start": 379.96000000000004, "text": " So what we're going to do is employ a heuristic." }, { "end": 389.3, "start": 382.38, "text": " We're going to ask our machine right here what two plus two for the training examples" }, { "end": 390.3, "start": 389.3, "text": " is." }, { "end": 391.3, "start": 390.3, "text": " Okay." }, { "end": 396.74, "start": 391.3, "text": " So in Python, you can do this by typing two plus two." }, { "end": 401.5, "start": 396.74, "text": " And you know, in this case, that happens to be four, but who knows?" }, { "end": 407.82, "start": 401.5, "text": " So for each of these training examples, we're going to take the class label, which is provided" }, { "end": 412.14, "start": 407.82, "text": " in the data set, and we're going to take these class labels and add them together." }, { "end": 415.62, "start": 412.14, "text": " And whatever comes out is going to be the label for this." }, { "end": 419.26, "start": 415.62, "text": " In this case, it's four, you know, but it could be anything." }, { "end": 423.3, "start": 419.26, "text": " And we're just going to use these as training data for our model." }, { "end": 427.62, "start": 423.3, "text": " So for that, we're going to need the label of the first sample and the label of the second" }, { "end": 428.62, "start": 427.62, "text": " sample." }, { "end": 433.26, "start": 428.62, "text": " And our final label is simply going to be label one plus the label two." }, { "end": 436.65999999999997, "start": 433.26, "text": " As I said, this is a heuristic for training the AI." }, { "end": 442.86, "start": 436.65999999999997, "text": " Now, usually in a generative adversarial network or a GAN for short, you'd have something that's" }, { "end": 445.21999999999997, "start": 442.86, "text": " called a generator, which we do." }, { "end": 447.76, "start": 445.21999999999997, "text": " And you'd have something that's called a discriminator." }, { "end": 451.38, "start": 447.76, "text": " Now, I have my problems with this discrimination." }, { "end": 455.4, "start": 451.38, "text": " There is no space for discrimination in the AI field." }, { "end": 458.06, "start": 455.4, "text": " So we're going to leave away the discriminator right here." }, { "end": 459.06, "start": 458.06, "text": " I'm sorry." }, { "end": 460.06, "start": 459.06, "text": " I'm sorry." }, { "end": 463.34, "start": 460.06, "text": " We're going to directly go to the loss from the generator." }, { "end": 467.7, "start": 463.34, "text": " In order to calculate the loss, we need a reference." }, { "end": 473.38, "start": 467.7, "text": " And for that, we're simply going to go to our data set with our label and find any of" }, { "end": 475.94, "start": 473.38, "text": " the images that correspond to that label." 
}, { "end": 481.8, "start": 475.94, "text": " So if our heuristic, if our oracle says two plus two is equal to nine, we're just going" }, { "end": 486.66, "start": 481.8, "text": " to go to our data set, get a nine and put that as a training output." }, { "end": 487.66, "start": 486.66, "text": " Okay." }, { "end": 488.66, "start": 487.66, "text": " Okay." }, { "end": 494.14, "start": 488.66, "text": " So if we look at one of the labels that just happens to be a four in this case, but we're" }, { "end": 498.82, "start": 494.14, "text": " going to go through the entire number of 9000 steps." }, { "end": 504.24, "start": 498.82, "text": " And in each steps, we'll train 64 of these different combinations of two plus two." }, { "end": 509.44, "start": 504.24, "text": " And we'll give one of the labels each time and we'll see what the AI comes up with." }, { "end": 510.62, "start": 509.44, "text": " For that, we need a loss." }, { "end": 513.74, "start": 510.62, "text": " Now the loss we're going to use here is going to be the L2 loss." }, { "end": 520.98, "start": 513.74, "text": " Now there's some controversy, but you know, it is the most powerful loss proven and we" }, { "end": 523.6, "start": 520.98, "text": " have to employ the most powerful tools." }, { "end": 524.8, "start": 523.6, "text": " So let's do that." }, { "end": 527.7, "start": 524.8, "text": " So our loss here at the beginning is 509." }, { "end": 530.58, "start": 527.7, "text": " Now that's a lot of loss." }, { "end": 531.82, "start": 530.58, "text": " That's a big loss." }, { "end": 533.72, "start": 531.82, "text": " We need to get that loss down." }, { "end": 536.5400000000001, "start": 533.72, "text": " And to do that, we need one of these optimizers." }, { "end": 542.82, "start": 536.5400000000001, "text": " Now optimizers are kind of the secret workhorses of AI and people don't talk about them enough." }, { "end": 548.0600000000001, "start": 542.82, "text": " I wish there was like a field of research that deals with optimizers, like could be" }, { "end": 551.1, "start": 548.0600000000001, "text": " called optimization or something like this." }, { "end": 552.1, "start": 551.1, "text": " I'm not sure." }, { "end": 555.28, "start": 552.1, "text": " I just, I just think it would make a lot of sense." }, { "end": 561.58, "start": 555.28, "text": " So my favorite learning rate is three E minus four just because it contains all of the different" }, { "end": 565.5, "start": 561.58, "text": " things, like a letter and a dash." }, { "end": 568.7800000000001, "start": 565.5, "text": " And that seems like a pretty good thing to do." }, { "end": 571.1800000000001, "start": 568.7800000000001, "text": " So we're going to use Adam here as an optimizer." }, { "end": 577.5200000000001, "start": 571.1800000000001, "text": " Adam, I know, I don't know Adam personally, but I know a couple of his friends and they" }, { "end": 579.5, "start": 577.5200000000001, "text": " tell me he's pretty good." }, { "end": 583.94, "start": 579.5, "text": " So you know, it's going to go zero grad and I'm dumb." }, { "end": 587.58, "start": 583.94, "text": " So I need to look up how to use an optimizer and boom." }, { "end": 588.58, "start": 587.58, "text": " Okay." }, { "end": 589.58, "start": 588.58, "text": " Okay." }, { "end": 590.58, "start": 589.58, "text": " So it's again a four." }, { "end": 591.58, "start": 590.58, "text": " I'm sorry about this." }, { "end": 592.58, "start": 591.58, "text": " I think this is it." 
}, { "end": 593.58, "start": 592.58, "text": " This is it." }, { "end": 601.14, "start": 593.58, "text": " This is AI history right here, right now for five steps, 10 steps." }, { "end": 602.14, "start": 601.14, "text": " All right." }, { "end": 606.9000000000001, "start": 602.14, "text": " I have waited and waited and waited and it's finally done." }, { "end": 613.26, "start": 606.9000000000001, "text": " We have now trained the generator to calculate what two plus two equals from the training" }, { "end": 614.26, "start": 613.26, "text": " data set." }, { "end": 617.1400000000001, "start": 614.26, "text": " So now we actually need to ask it what is two plus two." }, { "end": 619.7, "start": 617.1400000000001, "text": " And of course we can't ask it a sample that it has already seen." }, { "end": 626.44, "start": 619.7, "text": " We need to take a new sample from the test set as is customary in machine learning." }, { "end": 628.26, "start": 626.44, "text": " So let's get the MNIST test set." }, { "end": 634.46, "start": 628.26, "text": " Now the test data set consists of images as does the train data set, but the model has" }, { "end": 637.38, "start": 634.46, "text": " never seen the test data set before." }, { "end": 639.86, "start": 637.38, "text": " This is a property we call generalization." }, { "end": 642.38, "start": 639.86, "text": " So let's find two nice twos." }, { "end": 643.58, "start": 642.38, "text": " All right." }, { "end": 644.58, "start": 643.58, "text": " That's the first one." }, { "end": 645.58, "start": 644.58, "text": " Okay." }, { "end": 647.0200000000001, "start": 645.58, "text": " These are two nice twos." }, { "end": 648.0200000000001, "start": 647.0200000000001, "text": " Let's put them together." }, { "end": 649.0200000000001, "start": 648.0200000000001, "text": " Okay." }, { "end": 651.62, "start": 649.02, "text": " So this is going to be our input to the generator." }, { "end": 652.62, "start": 651.62, "text": " Okay." }, { "end": 659.14, "start": 652.62, "text": " So I'm putting the test sample here into the generator that is trained and I've labeled" }, { "end": 664.54, "start": 659.14, "text": " the output in all caps just to tell the model that this is really important computation." }, { "end": 671.02, "start": 664.54, "text": " I'm just going to run this cell for a couple of times just to make sure that generator" }, { "end": 675.86, "start": 671.02, "text": " is in fact very sure about how important that is." }, { "end": 676.86, "start": 675.86, "text": " All right." }, { "end": 677.86, "start": 676.86, "text": " I think that's enough." }, { "end": 684.58, "start": 677.86, "text": " Let's have a look at that final output." }, { "end": 685.58, "start": 684.58, "text": " I'm shaking." }, { "end": 708.82, "start": 685.58, "text": " Are you ready for AI history?" } ]
ml3Y1ljVSQ8
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
PCGRL: Procedural Content Generation via Reinforcement Learning (Paper Explained)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "rl", "reinforcement learning", "level design", "game design", "video game", "sobokan", "sokoban", "zelda", "maze", "agent", "turtle", "observation", "reward", "action", "space", "deep rl", "deep reinforcement learning", "content", "minecraft" ]
#ai #research #gaming Deep RL is usually used to solve games, but this paper turns the process on its head and applies RL to game level creation. Compared to traditional approaches, it frames level design as a sequential decision making progress and ends up with a fast and diverse level generator. OUTLINE: 0:00 - Intro & Overview 1:30 - Level Design via Reinforcement Learning 3:00 - Reinforcement Learning 4:45 - Observation Space 5:40 - Action Space 15:40 - Change Percentage Limit 20:50 - Quantitative Results 22:10 - Conclusion & Outlook Paper: https://arxiv.org/abs/2001.09212 Code: https://github.com/amidos2006/gym-pcgrl Abstract: We investigate how reinforcement learning can be used to train level-designing agents. This represents a new approach to procedural content generation in games, where level design is framed as a game, and the content generator itself is learned. By seeing the design problem as a sequential task, we can use reinforcement learning to learn how to take the next action so that the expected final level quality is maximized. This approach can be used when few or no examples exist to train from, and the trained generator is very fast. We investigate three different ways of transforming two-dimensional level design problems into Markov decision processes and apply these to three game environments. Authors: Ahmed Khalifa, Philip Bontrager, Sam Earle, Julian Togelius ERRATA: - The reward is given after each step. Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://discord.gg/4H8xxDF BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher Parler: https://parler.com/profile/YannicKilcher LinkedIn: https://www.linkedin.com/in/yannic-kilcher-488534136/ If you want to support me, the best thing to do is to share out the content :) If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this): SubscribeStar: https://www.subscribestar.com/yannickilcher Patreon: https://www.patreon.com/yannickilcher Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2 Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n
Hi there! Have you ever wondered how video game levels are made? Yeah, me neither, but this paper has. And in this paper you can see a reinforcement learning agent that has learned how to make video game levels in various ways. So this is implemented for this game here where the goal is simply to make the longest maze. This game here is an adaptation of The Legend of Zelda where you have to get a key to the door. And this game here is called Sokoban where you have to put all of the crates onto the green squares in order to solve it. So it's a puzzle game. Alright, and this is done via reinforcement learning. So the paper we're going to look at is called PCGRL: Procedural Content Generation via Reinforcement Learning by Ahmed Khalifa, Philip Bontrager, Sam Earle and Julian Togelius. Now this paper is basically just a fun paper I feel, and it shows how to frame a problem in terms of reinforcement learning and then how to solve it. It's pretty straightforward, it's fairly short and the code is available and all so you can go check it out yourself. They say we investigate how reinforcement learning can be used to train level designing agents. So usually we do reinforcement learning for playing games themselves and now we use reinforcement learning to train an agent that can design a level. So we don't design the level itself directly, we design the agent that designs the level. And what's the advantage here? The advantage is of course the agent could then potentially generate multiple different levels once we have trained it. They say this represents a new approach to procedural content generation in games where level design is framed as a game itself. So the design of the level is now the game. And the content generator itself is learned. By seeing the design problem as a sequential task we can use reinforcement learning to learn how to take the next action so that the expected final level quality is maximized. This approach can be used when few or no examples exist to train from and the trained generator is very very fast. So this is the outset of the problem formulation. Now we're going to go through the steps you have to do in order to make this work. There are a few things that I think this paper does quite well. The first thing is you actually have to frame the problem in terms of reinforcement learning. So what is reinforcement learning? It's pretty simple. In reinforcement learning you have this agent-environment split. So at each step the environment is going to send the agent an observation. So the environment is going to send an observation to the agent and the agent needs to take an action in response to that. Now something happens in here. We don't worry about that. The environment is going to send the next observation that is a result from taking this action and it is also going to send the reward for this action. So at each step the agent gets an observation and a reward for the last action it took and it has to output the next action. Now the environment of course has to somehow decide how do I represent the observation. This is the representation. How do I transform one observation to the next observation given an action? The action comes in and transforms the last state to the next state and then how do I give the reward? How do I calculate the reward? So these things are the things you have to decide on. The observation space, how the reward is calculated, the action space and how an action transforms one representation into the next representation. 
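As a rough illustration of these design decisions, here is what the agent-environment split could look like as a Gym-style Python class. The method bodies are placeholders, since each game and each representation fills them in differently.

import numpy as np

class LevelDesignEnv:
    def __init__(self, height=7, width=11, num_tiles=4):
        self.shape = (height, width)
        self.num_tiles = num_tiles

    def reset(self):
        # observation space: a 2D matrix of tile-type integers
        self.level = np.random.randint(0, self.num_tiles, size=self.shape)
        return self.level.copy()

    def step(self, action):
        y, x, tile = action        # a wide-style action: pick a position and a tile
        self.level[y, x] = tile    # transition: how an action changes the level
        reward = self._reward()    # reward: how good the current level is
        done = False               # e.g. episode ends when a change budget is spent
        return self.level.copy(), reward, done, {}

    def _reward(self):
        return 0.0                 # placeholder for the game-specific quality score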
So this is what we're going to look at, the different variants. We're not going to look at specifically how reinforcement learning is done because once you have an environment like this you can just plug it into a standard reinforcement learning algorithm and it will solve it for you. So that's the power of basically having standardized representations. So the observation space of this problem is going to be pretty simple. All the games we're dealing with here are in this... Oh, I already did some drawings... are in this framework of this grid world game. So you have this grid, this level is subdivided into this grid and that naturally corresponds of course to a 2D matrix. Now each point in this matrix has a number and the number describes what type of tile this tile is. So as you can see right here, the 1 is going to be a wall while the 0 is going to be empty space. 3 here is one of these crates and 2 is the player. So you get the point, right? Each number corresponds to a type of tile. So far so good. That's the observation space. Now what is the action space? What can the agent do? At each step they say the agent can change one of these tiles. So you can change one of these tiles, let's say this one right here. You can change it to a different one or you can just leave it. So this is a wall right now and it makes the problem fairly interesting to have a wall right here in the middle. So since we're looking at this tile, we might just wanna leave this right there. We could also change it. We could actually change it to a 2, such that there is another player right here. Can I even draw this? This yellow isn't. Such that there are now two player tiles in the game. This would of course be an invalid level and the reinforcement learning agent ultimately should learn to produce valid and good levels. So at each step you can change one tile and of course the goal is to make a better and better and better level over time. Now how do you choose which tile to change? That's a thing you have to define and they define three different ways in which the agent can choose which tile to change. In the narrow formulation the environment itself chooses the tile to change. So the environment will say now you can change this tile if you want. And in the next step it will say now you can change this one if you want. Now you can change this one if you want. Now how the environment chooses is completely random. Actually it doesn't have to be, but the environment chooses and that is problematic for the agent because the agent cannot kind of predict which tile it can change next and therefore it cannot really plan ahead how it wants to change the level. It can only make very, very local, very greedy choices. It can be like oh I'm right here, I might actually build a wall right here. Yes, that seems good. An example is maybe you want to make the level more interesting. Maybe you think that the crate up here is a bit close to this field here. You have to push it onto this field and that's fairly easy, right? So you just push it like up and then to the left. Actually it's not that easy because there's the wall right here and you have to go around. Actually you probably have to push this down. But let's say the level is too easy and you want to move the crate a bit, like let's say here. In this framework, where the environment tells you which tiles to change, once you come across this tile you can delete it. 
But then you have to wait and wait and wait and wait until at random this tile where you want to put it is selected. This might actually never happen because the episode might be over, and if it never happens you are in an invalid level. So the agent here is basically forced to greedily make the level valid before it can make it interesting, and then it can only make it interesting in sort of local ways. So the second formulation here is the turtle formulation. Now you might know this from turtle graphics, where basically you have this little turtle thing and you can always move it either, you know, down, up, left or right, and then you can always put a dot or not put a dot, and thereby you can trace out things. This is like intro to programming. Same here. So now the agent is given a starting square and it can choose to change it or not, but it can also choose how to move to the next square. So to the right, up, left or down. So it can choose. So you can go along and say okay, now I'm here, I want to change it to a 2, now I'm here, I want to change it to a 2, now I'm here, and so on. So it can basically do things like build long walls and things like this, so it can plan ahead more considerably. But still, if you regard the problem from before, if it wants to place the crate at a different location it can, like if maybe the agent is here, then it can say okay, I wanna not change but move, not change and move, not change and move, and then it can delete, and then it has to move over here step by step until it can place it again. So it can plan ahead considerably longer. Actually it can just move straight over, because the agent itself is not constrained by walls. So it can move ahead quite a bit, but it's still kind of localized changes, because it can move one tile at a time, right, and if in between the episode ends, it's again an invalid level. So the third formulation is the most powerful formulation. It's called the wide formulation, and this is where the agent at each time step can not only choose how to change the tile but can freely choose the next tile to change. So in one step it could say I want to delete this tile, and then in the next step it could say I want to place it right here. So it can plan ahead considerably. So how you design the action space is very important for what your agent can possibly learn and how easy it is for the agent to learn, because it's gonna be pretty easy for this agent to learn to move crates like this, where even though the other agent that moves one tile at a time can also do it, it has to plan ahead for longer, so it has to sort of invest more of the reinforcement learning power into doing these sorts of things. But of course being more constrained also means you have fewer actions at your disposal. Like this last agent, it has a lot of actions it can do. It can choose any tile at each step, right, so that can also introduce a considerable exploration dilemma, and you have to trade these things off when you design things like that. Alright, so this is the action space. Now how the observation evolves into the next observation should be fairly clear. I mean, that's already given by the action space. If you ask yourself, if you're in this situation right here and the agent deletes the crate, then the crate is no longer there. So if it changes this to a zero, then it's just empty space now. So that's fairly obvious here. Now the last thing we need to do is the reward calculation. 
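Before turning to the reward, here is a rough sketch of how these three formulations could each apply one edit to the level matrix. This is an illustration only; the action encodings and movement details in the released gym-pcgrl code may differ.

import numpy as np

def narrow_step(level, rng, new_tile_or_none):
    # Narrow: the environment picks the position at random; the agent only
    # decides whether (and to what) the tile at that position changes.
    y, x = rng.integers(level.shape[0]), rng.integers(level.shape[1])
    if new_tile_or_none is not None:
        level[y, x] = new_tile_or_none
    return level

def turtle_step(level, pos, move, new_tile_or_none):
    # Turtle: the agent moves one step (up/down/left/right) and may edit
    # the tile it lands on, like turtle graphics.
    dy, dx = {"up": (-1, 0), "down": (1, 0), "left": (0, -1), "right": (0, 1)}[move]
    y = int(np.clip(pos[0] + dy, 0, level.shape[0] - 1))
    x = int(np.clip(pos[1] + dx, 0, level.shape[1] - 1))
    if new_tile_or_none is not None:
        level[y, x] = new_tile_or_none
    return level, (y, x)

def wide_step(level, y, x, new_tile):
    # Wide: the agent freely picks both the position and the new tile.
    level[y, x] = new_tile
    return level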
What reward do you give the agent? Here you can give the agent the reward, let's say, at the very end. You can give it no reward for the entire episode and only give it a reward at the very end. Reinforcement learning algorithms are able to deal with this to a certain degree. You can also decide to give it at each step. Now the way they do it here, I believe, is they give it at the end, and they have multiple components to the reward. So the reward in this case is how well the level fulfills certain goals that the programmer sets. So the goals in Sokoban are basically the rules of the game, and that means there is only one player. If there are two or none, then the reward is less. There is at least one crate, and there are as many crates as green fields. So here you can see there are only two crates but three green fields, so the agent will get a penalty for producing a level like this. And then the last thing is the level has to be solvable. And for checking solvability the authors of this paper simply employ a Sokoban solver. They have a Sokoban solver that is like a tree search algorithm that tries to solve the level. If it can't solve the level, then the level is invalid and the agent gets a worse reward than whenever the level is solvable. So how you design the reward is also very important. If you only give a reward of one when all the goals are fulfilled and a reward of zero as soon as one of the goals is not fulfilled, a reinforcement learning agent is going to have a very, very difficult time learning that. So you have to kind of design the reward so you help the agent realize what's important. So maybe if there's only one crate missing, but in fact the level is solvable except for that, maybe one green field is going to be empty, then you could still give a fairly high reward, but you could just give a higher reward when the level is actually solvable, or all the rules are fulfilled and there is a crate here. The other thing to notice here is that in this case you actually do need a solver for the level, since it's a puzzle game. That means your agent is only going to produce levels that are at most as difficult as your solver can solve. So that's going to be a considerable limitation here. But all of their rewards are hard-coded, so to say. So the reward is given by the environment. So now that we have observations, which are these matrices right here, we have actions, and we actually have three different ways of formulating actions, and we have reward, we can simply plug this into a standard reinforcement learning algorithm. Now there's one last thing they have, which is this change percentage parameter. So what they say is they give the agent an initial state and then the agent is allowed to change it around, like here. So on the left you have this initial state. This is sort of a random initial state and you allow the agent now to change it in this stepwise fashion, and you always update the agent. By the way, the agent, as you might imagine, takes this matrix right here and shoves it through a few convolutional layers and then decides on an action. I'm almost forgetting that this is so obvious by now: the agent is a standard deep learning model taking in a 2D matrix, doing some convolutions and then having a policy output. So you shove this into a proximal policy optimization algorithm, which is a standard reinforcement learning algorithm, and you allow it to change these things. 
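A hedged sketch of the Sokoban reward components just described: count-based rule checks plus a call to an external solver. Here solve(level) stands in for the tree-search Sokoban solver, and the penalty weights are illustrative, not the paper's actual values.

import numpy as np

EMPTY, WALL, PLAYER, CRATE, GOAL = 0, 1, 2, 3, 4  # assumed tile encoding

def level_reward(level, solve):
    players = int(np.sum(level == PLAYER))
    crates = int(np.sum(level == CRATE))
    goals = int(np.sum(level == GOAL))
    r = 0.0
    r -= abs(players - 1)              # rule: exactly one player
    r -= 1.0 if crates == 0 else 0.0   # rule: at least one crate
    r -= abs(crates - goals)           # rule: as many crates as green fields
    if players == 1 and crates > 0 and crates == goals:
        r += 5.0 if solve(level) else 0.0  # solvable levels score much higher
    return r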
Now what they do is they only allow the agent to change the levels by so much, because, they say, if we start out from these different states, you can decide on two things. Either you can train the agent to find you the best possible level ever, right? But then it would sort of ignore the starting state. It would just learn which level gives me the highest reward, and it would just change all the tiles always to that. It would just try to always reach that best possible state and forget the start state. So they say, okay, the last constraint is the agent can only change like 20 percent of the tiles at most, and after that we end the episode, or we just don't allow the agent to change anything anymore. So if it changed this here to empty space and wants to change something else, it first needs to change this back, and then it can change something else. So you can do that. So this constrains the agent and kind of teaches it that in order to get a higher reward it must sort of adjust the starting state to something that gets a higher reward. And that's one way of making the levels that you generate more diverse. It's sort of a problem unique to this particular kind of reinforcement learning setting, because most of the time you just want to find the highest reward, whatever. But here you also want to maximize diversity of the levels you generate, and therefore you could say that's a pretty good constraint to put in. So that's a thing I like about this paper: this change percentage constraint. Now at inference time you can change that. So at training time you only change, whatever, 20 percent. But at inference time you can technically let the agent run for longer. As you can see, I think here they just let it run until it, you know, finds something good, like this one right here. Fairly good from the starting state. And you can see it sort of still adjusts to the starting state right here. So you can see that it connects the two dots on the top. So the goal is to make the longest possible maze, or a long maze. So it connects these two. You can see here also this one connects them. And then it goes out here and connects to this one. So it's fairly good at relying on this starting state. You can see that these turtle and wide representations, which can actually choose where to go and where to change something, are considerably more powerful than this narrow thing. Especially if you look at this level right here. Which again shows the importance of designing the action space well; it is going to directly affect the outcome that you're going to have. Alright, and you see the same thing here for this Zelda game. Now here you can see the starting state often involves, let's say, here you have two players and you have three keys, and that's an invalid starting state. And sometimes the door cannot be reached. Sometimes the door is actually not even there, like here. And you can see that the agents, all of the agents, sort of learn to make at least valid levels where you have the player and the door and the key right here, being able to reach everything. So that's, you know, fairly cool, because counting is one of these things that neural networks aren't necessarily super good at. So it's nice to see that, you know, they can... Here they have two players and they're deleting one of them. 
Here they have three crates and they actually make it such that the number of crates and the number of green tiles agree. So, you know, that's fairly cool that this comes out. And here you can see the different power of the algorithms. So this is the binary problem, this is the Zelda problem, and this is the Sokoban problem. You can see that as you allow the agent at inference time to change more and more of the level, the percentage of levels where the agent gets a good level, like succeeds in building a valid level, goes up and up. And now, as I already said, this narrow representation here appears to be a bit less powerful than the others. Interestingly, in Sokoban, the best one is this turtle representation, where you can only change one tile at a time, and not the more powerful wide representation. I'm going to guess that's because either the reinforcement learning algorithm isn't, you know, powerful enough, or their representation, like the CNN, is maybe mis-architectured a bit. You know, technically this representation should be able to achieve higher scores, but not as easily because, as I said, the action space is so much larger. So it's more difficult to learn, but ultimately, it should learn it better. Alright, so this was this paper. I think it's fairly cool and fairly fun to view it from this particular perspective. And they discuss that the future could be that humans and agents solve this together, because usually when you have assisted level design, you would have some sort of optimizer running to optimize the level you're working on directly. Like you'd say, okay, make something here, and it would sort of run for a while, and that takes, you know, takes time. Now this agent at inference time is very, very fast. So it can, you know, work together with humans. So the human would say, for example, oh here, please make a wall right here, because that's gonna make the level more interesting, but make it such that the level is still, you know, interesting and solvable. And then the agent can, you know, go across, do some things, and that's gonna be super fast. And agents and humans could work together at this. Now one drawback, of course, is that in a puzzle game like Sokoban, you know, you have to make sure the level is solvable. And here, luckily, you can employ a solver, but as the puzzles get more difficult, that's not going to be the case so much anymore. And also they remark that most of the levels generated are fairly easy, because their reward only depends on whether or not the level is solvable by an easy solver, right? So you could give some reward for how difficult the level is, but then again, that depends on your solver. So an interesting next step would be to evolve these, or to train these as you train reinforcement learning agents to solve these kinds of games. So kind of do curriculum learning, sort of a GAN setting between the level generator and a reinforcement learning game player, to sort of evolve levels and agents at the same time. I think it's sort of like these POET approaches, except you would directly learn. I think that would be a nice direction for this work. In any case, the code is available. You can even plug in your own games and make your own levels, so check this out. And with that, I'll see you next time. Bye bye.
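One detail worth sketching is the change-percentage budget discussed above: track how many tiles the agent has actually modified relative to the level size, and end the episode once the budget (e.g. 20 percent at training time, more at inference time) is used up. The exact bookkeeping in the released code may differ.

def within_budget(changes, level_shape, max_change=0.2):
    # True while the agent has edited fewer than max_change of all tiles.
    return changes / (level_shape[0] * level_shape[1]) < max_change

# Inside the environment's step(), roughly:
#   if new_tile != level[y, x]:
#       level[y, x] = new_tile
#       changes += 1
#   done = not within_budget(changes, level.shape)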
WVPE62Gk3EM
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Big Bird: Transformers for Longer Sequences (Paper Explained)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "google", "google research", "bigbird", "big bird", "bert", "attention", "attention is all you need", "longformer", "random attention", "quadratic attention", "attention mechanism", "qa", "natural questions", "hotpot qa", "genomics", "nlp", "natural language processing", "transformer", "transformers", "fully connected", "sparse attention", "graph", "star graph", "turing complete", "universal approximation", "window attention", "convolution" ]
#ai #nlp #attention The quadratic resource requirements of the attention mechanism are the main roadblock in scaling up transformers to long sequences. This paper replaces the full quadratic attention mechanism by a combination of random attention, window attention, and global attention. Not only does this allow the processing of longer sequences, translating to state-of-the-art experimental results, but also the paper shows that BigBird comes with theoretical guarantees of universal approximation and Turing completeness. OUTLINE: 0:00 - Intro & Overview 1:50 - Quadratic Memory in Full Attention 4:55 - Architecture Overview 6:35 - Random Attention 10:10 - Window Attention 13:45 - Global Attention 15:40 - Architecture Summary 17:10 - Theoretical Result 22:00 - Experimental Parameters 25:35 - Structured Block Computations 29:30 - Recap 31:50 - Experimental Results 34:05 - Conclusion Paper: https://arxiv.org/abs/2007.14062 My Video on Attention: https://youtu.be/iDulhoQ2pro My Video on BERT: https://youtu.be/-9evrZnBorM My Video on Longformer: https://youtu.be/_8KNb5iqblE ... and its memory requirements: https://youtu.be/gJR28onlqzs Abstract: Transformers-based models, such as BERT, have been one of the most successful deep learning models for NLP. Unfortunately, one of their core limitations is the quadratic dependency (mainly in terms of memory) on the sequence length due to their full attention mechanism. To remedy this, we propose, BigBird, a sparse attention mechanism that reduces this quadratic dependency to linear. We show that BigBird is a universal approximator of sequence functions and is Turing complete, thereby preserving these properties of the quadratic, full attention model. Along the way, our theoretical analysis reveals some of the benefits of having O(1) global tokens (such as CLS), that attend to the entire sequence as part of the sparse attention mechanism. The proposed sparse attention can handle sequences of length up to 8x of what was previously possible using similar hardware. As a consequence of the capability to handle longer context, BigBird drastically improves performance on various NLP tasks such as question answering and summarization. We also propose novel applications to genomics data. Authors: Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://discord.gg/4H8xxDF BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher Parler: https://parler.com/profile/YannicKilcher LinkedIn: https://www.linkedin.com/in/yannic-kilcher-488534136/ If you want to support me, the best thing to do is to share out the content :) If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this): SubscribeStar: https://www.subscribestar.com/yannickilcher Patreon: https://www.patreon.com/yannickilcher Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2 Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n
Hi there, today we'll look at Big Bird: Transformers for Longer Sequences by Manzil Zaheer and Guru Guruganesh et al. of Google Research. So this paper, on a high level, proposes to replace the quadratic attention mechanism in transformers by a mix of random attention, windowed attention, and selective global attention, thereby achieving linear instead of quadratic memory requirements. And as a result of that, they can process longer sequences than traditional transformers like BERT and achieve better results in some NLP tasks, and they also evaluate on genomics tasks. So we'll go through this paper a bit and look a bit at the proof, because they give a theoretical kind of guarantee that their random attention mechanism can still be Turing complete and can still achieve the same things as a full attention mechanism, but we'll also look at the drawbacks. I sort of have mixed feelings about this paper, and I think I'll voice my concerns as we go through here. But first, let's look at the paper, let's look at the architecture, and I think this is actually a pretty cool paper for the empirical progression of the field to process longer sequences with transformers. As always, if you like content like this, feel free to share it around, leave a like, and tell me in the comments what you think about the paper and about what I think; just go nuts. Alright, so the basic premise right here is that transformers have been pretty impactful, especially in NLP. They say transformer-based models such as BERT have been one of the most successful deep learning models for NLP. Unfortunately, one of their core limitations is the quadratic dependency, mainly in terms of memory, on the sequence length, due to their full attention mechanism. So really briefly, the full attention mechanism: I've done numerous videos about attention mechanisms, BERT, Attention Is All You Need, and so on, so if you want a detailed explanation of what that is, just go look up the corresponding videos. But briefly, what you'll have in NLP is a sequence of tokens as an input, and you want to transform it layer after layer into sort of a higher-order representation of that same sequence. For that, you build these layers out of nodes, and you usually have as many nodes as you have tokens in the sequence. Each token is represented by a vector at the beginning, and each layer transforms the sequence, as I said, into sort of a higher-level representation. So you want the vector of this token right here to be a better representation than the vector was right here. And you do that by incorporating information from all the other tokens into that particular vector. Now, as I said, this is called an attention mechanism, and we don't actually have to go into how it works right here, but you can see pretty clearly that if you want to do this for every token, you need to have information routed from every token to every token, like from here to here, from here to here, and so on. And this is just one token, and then you need to do it for this token and for this token and for this token. So what you'll ultimately get, if n is your sequence length, is some n squared amount of computation and memory requirements. So this is a problem. And usually this means that the sequence length in BERT is limited to something like 512 tokens, which is okay for some applications.
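To see where the quadratic cost lives, here is a minimal sketch of full self-attention in plain NumPy; this is my own toy version, with projections, heads, and masking all left out. The n-by-n score matrix in the middle is the thing that blows up.

```python
import numpy as np

def full_attention(Q, K, V):
    """Q, K, V: (n, d) arrays. The scores matrix is (n, n): quadratic in n."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                    # every token vs. every token
    w = np.exp(scores - scores.max(-1, keepdims=True))
    w /= w.sum(-1, keepdims=True)                    # row-wise softmax
    return w @ V                                     # (n, d) new representations

n, d = 512, 64
out = full_attention(*(np.random.randn(3, n, d)))   # materializes a 512 x 512 score matrix
```

Doubling n quadruples that score matrix, which is why 512 tokens is roughly where BERT stops.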
But if you want to summarize entire articles, entire books even, or do question answering with lots of context, it's not really enough. So people have been thinking about how to scale this input up. And of course, the main culprit is this quadratic attention mechanism, because if you double the 512, you need four times the amount of compute and memory. So how does this paper go about reducing that quadratic dependency? The goal right here is, of course, to get this to some O of n, because then, as we double the input length, we simply need to double the compute requirements, and that would be fantastic. And that's what this paper does, and it does so without sacrificing the properties of the transformer. So here's the architecture that Big Bird proposes. By the way, Big Bird is another character from Sesame Street; I guess they'll continue the naming here after Elmo and BERT. You know, I'm waiting for the model that's the Count. Yeah, that's going to be a fun model. So Big Bird basically has three different types of attention, and here these are shown as adjacency matrices of the attention mechanism. So here is the input layer, and the output layer is right here. That basically means that node i right here would be connected (sorry, that's not a straight line) to this particular node and also to this particular node. So if we have node i right here, we're now trying not to connect it to all of these nodes; we'll just select some at random and then connect it to those. This is what's called random attention. And you can pretty clearly see that if you connect each of the n nodes to r equals 2, to two random nodes, then you don't have n squared anymore, but you have something like O of r times n, which, if r is a constant, is an O of n attention mechanism. So the main idea behind the random attention mechanism is that for each query you select random tokens that you attend to, and that number is fixed, not dependent on the sequence length. Now, the paper is a little bit unclear about whether those random connections are the same for every sequence or are switched up, or the same for every layer or are switched up. But they formulate all of this in terms of a random graph. So they formulate the attention mechanism in the form of a graph: if we transform all of these nodes into a graph, a full attention mechanism would mean that each node is connected to each of the other nodes, a fully connected graph. And then they say, well, if we just have random connections between these things, then there are theorems from graph theory that say this graph is going to mix pretty quickly, so I can get from each node to each other node by a random walk in logarithmic time. And a step of this random walk, which basically means you go from here to here, would be one layer of the transformer; if you then want to go from here to here, you would have to do that in the next layer. So this formulation as a random graph leads me to believe that, layer after layer, the random attention pattern is going to be the same, but also the formulation of the paper leads me to believe that this random attention differs from sequence to sequence.
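Since the paper leaves the sampling policy a bit open, here is a rough sketch of how such a pattern could be drawn; fixing the seed once per sequence and reusing the pattern across layers is my reading, not something the paper states outright.

```python
import numpy as np

def random_attention_indices(n, r, seed=0):
    """For each of the n queries, pick r random key positions.
    Stored as (n, r) indices, so memory is O(n * r), not O(n^2)."""
    rng = np.random.default_rng(seed)       # one draw per sequence (my assumption)
    return np.stack([rng.choice(n, size=r, replace=False) for _ in range(n)])

idx = random_attention_indices(n=4096, r=3)
print(idx.shape)                            # (4096, 3): three random keys per query
```

Each query then only ever touches r keys, which is exactly where the linear memory comes from.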
So I believe what's happening is that they get a new sequence, then they decide on this pattern once, and then they use the same pattern layer after layer. So you can see that in the traditional attention, information can basically flow from each of the nodes to each other node in one single step, because each node is connected to each other node; you see this in the graph right here. However, if we only select a subset, then, as I said, if I want to go from here to here, I need to do it in two steps, and therefore I need two layers. And that's going to be the culprit of this method. And while it is mentioned in the paper, it's, I feel, at least that's my assessment of this paper, kind of swept under the rug a little bit. I mean, they do have a theorem that clearly says we can construct an example of a task that in the full attention setting can be solved in a single step, so a single layer, but that in the sparse attention setting needs a lot of layers, a lot of steps. But the rest of the paper is sort of shaky on this point. Nevertheless, you can see how the random attention can, if you have enough layers, do the same information routing as the full attention. However, this is not unique to the random attention, and we'll see this in the next ingredient right here. So the next ingredient that this paper uses is window attention. And you can see over here that Big Bird is ultimately going to be a combination of the three types of attention we are looking at here. Window attention basically means that each token at position i is going to attend to itself, of course, so here is i, but it is also going to attend to its neighbors, so here is i minus one and here is i plus one. And this is a window size w, which is a parameter but also a constant, and therefore you again go from n squared to w times n, which is O of n if w is a constant. This might be familiar to you, because we've already seen this in the Longformer paper. We made a video, or I think even two videos, on the Longformer, which used exactly this window attention in combination with the global attention, and if you want to know more about that, go watch those videos. The new thing in Big Bird right here is the addition of the random attention. Again, the window attention has exactly the same property as the random attention: instead of a fully connected graph, you have a sparsely connected graph. With random attention, the sparsely connected graph is like the one on the right; with windowed attention, it is not randomly connected, but each node is connected to its neighbors, like this. And you can also see that if I want to go from this node to this node right here, I can't do it in one step, but I can do it in two steps: I go here and I go here. So in terms of the attention layers, if I want to go from node one to node three, I have to do it in two steps, because each node is only connected to its neighbors. So the connection patterns would sort of look like this: I have to go from one to two, and then in the next layer from two to three. So the paper basically makes up for the lack of full attention by adding layers. And you might also recognize this from a convolution operation.
This is basically a convolution operation: in a convolution, each node only aggregates input from its neighbors for the next layer. And we know that as we go up the layers, the de facto window that each node looks at grows, kind of like a cone. So this is very similar to how a convolutional neural network works, and the reasoning is very similar, because the reasoning is, well, in a sentence, the most important words for any given word are probably going to be its neighbors, the words around it, and as you go up the layers, you branch out more and more. But ultimately, this neighborhood principle holds in NLP as well. Again, we already saw this in the Longformer, but that's the reason behind the window attention, and that's the second ingredient. And then the third ingredient is the global attention. Now, the global attention consists of selected tokens, fixed by the developers, that are so important that they are connected to everything else. For example, in these transformers you often have this kind of CLS token. This is a special token that you prepend to some piece of text, and the output of this token is going to be your classification output, because if you need to classify the entire sequence, you don't want to bind that decision to one particular word. What you want to do is have an extra token, and that's this CLS token, which kind of aggregates information from all of this. So layer after layer, you'll have this one special node, and in each step, every single other node is able to send information to this node and receive information from this node. Now, as a result of this, as you may be able to see, every single path has a maximum length of two, because if I want to go from any node to any other node, I can simply send information to this global node, and then the global node, in the next step, can send information to whatever other node. And that is a property that they use in their proof that this attention mechanism is as powerful as the classic full attention mechanism; we'll go through that in one second. But first, I hope it's clear that this combination of random attention, window attention, and global attention is what is called Big Bird. They have some engineering tricks that go along with this, but in concept, you can imagine Big Bird as being Longformer plus this random attention right here. And as an NLP engineer, that makes kind of total sense. I totally believe that the addition of these random attention patterns can absolutely help your classification or whatever your NLP task is, because more attention is generally better. And I am also completely willing to believe that while the full attention matrix is of course more accurate, it won't hurt too much to leave some of that attention away, because essentially all the path lengths just become two, or, even with the random attention, are really short, logarithmic, to route information from one node to some other node. So the loss that you incur is kind of on a logarithmic scale in terms of performance, while the gain that you make is on a much larger scale: you go from quadratic to linear. And that seems to me like a good empirical trade-off.
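If you want to see the whole pattern in one place, here is a hedged sketch that assembles the three ingredients into a single boolean mask. Taking the first g positions as the global, CLS-style tokens is my choice for illustration, and storing the mask densely is only for clarity; a real implementation would never materialize the full n-by-n matrix.

```python
import numpy as np

def bigbird_mask(n, w=1, r=2, g=1, seed=0):
    """w: neighbors on each side (window), r: random keys per query,
    g: number of global tokens. Returns an (n, n) boolean mask."""
    rng = np.random.default_rng(seed)
    mask = np.zeros((n, n), dtype=bool)
    for i in range(n):
        mask[i, max(0, i - w):i + w + 1] = True                   # window attention
        mask[i, rng.choice(n, size=r, replace=False)] = True      # random attention
    mask[:g, :] = True    # global tokens attend to everything...
    mask[:, :g] = True    # ...and everything attends to them
    return mask

m = bigbird_mask(n=16)
print(m.sum())            # far fewer surviving entries than 16 * 16 = 256
```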
Alright, however, the proofs here, the proofs of how these things are constructed, are a little bit, I don't know. So in the proof that this construction is a universal approximator (people have already shown that full attention mechanisms are universal approximators, and they show here that this sparse attention mechanism is also a universal approximator), they make big use of star graphs. What they say is, okay, if we have a star graph, which is one node connected right here to every other node, we can achieve the same thing as with a full graph, where every node is connected to every other node. But as I already said, what they need for this is multiple layers of this star graph. And that has to do with the fact that if I want to route information, I basically have to go via this middle node right here. And there is an additional complication, because this middle node in our case is only one node: I can't have this routing right here at the same time that I have this routing right here, like going from here to here, because I only have one middle node. This is very informal math, but maybe you have to imagine that there is one memory slot, and you can only use that one memory slot at a time for one of these things. So essentially, you'll have to do the green thing first, and then in the next step the blue thing. These are now pairwise routings between nodes, but ultimately what an attention mechanism does is route everything to everything in a single layer; it routes information from all the nodes to all the other nodes. To achieve that here, you need multiple rounds, and it turns out that in the worst case you actually need n rounds. So you trade off: you go from n squared to n memory and compute requirements in a single layer, but in the worst case you need n layers to recover the power of the full transformer. And that is the last one of their theoretical results right here. So first they prove universal approximation, and second they prove Turing completeness; these two properties have been proven for full attention mechanisms. And third, they prove that there are tasks where you actually do need n layers to solve them with their limited attention. So, you know, I'm not sure, but I feel you can make any sort of polynomial algorithm into a "linear" algorithm like this. Like, say I have a cool sorting algorithm. If this is my sequence that I want to sort, what I can do is simply take a random subset of the elements, like this, this, and this, send them to the global memory, sort them there, and put them back. If I do this for enough rounds, in the worst case I need n rounds to sort, or log n rounds if I do it smartly, but the single step is just O of n. So now I have an "O of n" sorting algorithm. I'm a bit wary of expressing things like that. But, you know, from an empirical standpoint, I absolutely believe that this is enough.
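By the way, you can check these path-length claims numerically: treat each attention pattern as an adjacency matrix and count after how many layers information from every token could, in principle, have reached every other token. A small sketch:

```python
import numpy as np

def layers_to_full_reach(mask, max_layers=64):
    """Number of attention layers until information from every token
    can have reached every other token under the given mask."""
    n = len(mask)
    reach = np.eye(n, dtype=bool) | mask   # reachability after one layer
    hop = reach.copy()
    for k in range(1, max_layers + 1):
        if reach.all():
            return k
        hop = hop.astype(int) @ mask.astype(int) > 0   # one more layer of routing
        reach |= hop
    return None

n = 32
full = np.ones((n, n), dtype=bool)
star = np.zeros((n, n), dtype=bool); star[0, :] = star[:, 0] = True
window = np.eye(n, dtype=bool)
for i in range(n - 1):
    window[i, i + 1] = window[i + 1, i] = True

print(layers_to_full_reach(full))     # 1: full attention routes in one step
print(layers_to_full_reach(star))     # 2: everything goes via the hub
print(layers_to_full_reach(window))   # 31, i.e. about n: one neighbor per layer
```

Which is exactly the picture the theory paints: the sparse patterns recover full routing, but only by spending layers.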
Now my second quarrel right here is that if you look at the proof, first of all, what it makes use of is this star graph, and the star graph corresponds to the global attention. So that doesn't have much to do with the random attention; they do use the random attention in their proof, but I at least believe that it would be possible with the global attention only.

The second thing is the parameters that they use for the experiments, and I've already said this in the Longformer video. In the Longformer video, it turned out that if you look at how big this window attention is, you're still, well: the original BERT attended to 512 tokens, and the window was still 512 tokens. It's just that the global attention came on top, so ultimately they ended up using more memory than the original BERT. And here, if I look at the parameters of their model, and they have multiple experiments right here, I believe this is the base version (they also have a large version); this is the 12-layer version. You can see they have this block length, and we'll get into the block length in one second. But you can also see that their window size is three times the block length, the number of random tokens is three times the block length, and the number of global tokens is two times the block length. So that results in eight times the block length B, and eight times 64 is, can I calculate this or am I stupid, 512; yes, I actually calculated this before. So this is 512 tokens. You go from BERT, which has 512 tokens and attends to 512 tokens, to also attending to 512 tokens. Of course, the advantage here is that they now have a sequence length of 4096, so they have the freedom to not attend to as many tokens as they have in the input length. But to put it in perspective, this here uses more memory and more compute, on its face, than BERT, because BERT attends to just as many tokens but has a smaller input sequence (I'll put a quick back-of-the-envelope version of this arithmetic below). There's sort of a thing where, in order to make these sparse attention schemes work, you have to go pretty high in the number of things you attend to; you can leave some away, but it's not like you can scale your input sequence length up by orders of magnitude. So the promise of linear attention is kind of fulfilled, but not quite there yet.

The other thing I would like to point out is that in a lot of cases, the number of random tokens is actually set to zero, so they are really making use, I believe, of the global tokens. That seems a bit strange, in that they continuously refer to their random attention mechanism, but then in a lot of experiments they don't actually have a random attention mechanism. I believe they have to do that because that's kind of what makes them different from the Longformer in principle, but still.
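Here is that back-of-the-envelope arithmetic, reading the base configuration from their table as I understand it (block length 64, sequence length 4096); treat the exact numbers as an illustration rather than a precise cost model.

```python
# Back-of-the-envelope comparison of attention "cells" (query-key pairs).
block = 64
window, rand, glob = 3 * block, 3 * block, 2 * block
attended = window + rand + glob            # 8 * 64 = 512 keys per query

seq_bigbird, seq_bert = 4096, 512
bigbird_cells = seq_bigbird * attended     # sparse: n * (w + r + g)
bert_cells = seq_bert * seq_bert           # dense:  n * n

print(attended)                            # 512, same count BERT attends to
print(bigbird_cells / bert_cells)          # 8.0: more raw attention entries than BERT
```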
So the last novelty, let's say, is an engineering novelty: they don't consider single random attention tokens, for example; they always consider these in blocks. And that's because our current hardware is really bad at sparse stuff, really bad at single indexing, at gathering single things. If you can do everything in blocks, you basically get these blocks almost for free: it takes only marginally longer to retrieve a full two-by-two block than it would to retrieve a single entry. Of course, that means you still use four times more memory, but it is not four times slower than the original thing. So you can use these blocks for the random attention, and you can do it for the window attention as well, as you can see here, where you break the window pattern a little bit into blocks. And that makes it a lot faster; you get the speedup almost for free.

Then they make another approximation in the way they do this windowing, and I'll just go over it really briefly. You can see right here that it would be very cumbersome to gather what we need (the dotted thing right here is a bit confusing, so let's just focus on this part). You want to attend to these things, and those you can get out with a matrix slice, really easy. But then you also want to attend to this kind of blocky, banded thing right here from the window attention, and this is hard to get out, because you'd have to index each row individually, and that's very slow. So what they do is this matrix roll operation, where you sort of roll the axis around: you take this thing right here and put it to the left, and you take, for example, this thing right here and put it to the right (or no, it's up and down, but in essence that's what you do), and you can fold all of this blue stuff into a rectangular matrix, as you can see right here. So you roll this back, roll this back, roll this forward, and you replace whatever is missing by these.

Now this again gives you some inaccuracies, because this block right here was never intended to be attended to, and all of a sudden you see you have the K6 in here. So it gives you a bit of inaccuracy at the edges of the sequence, but you can take that hit for the increased performance you gain by now having a rectangular matrix: TPUs are really efficient at dense rectangular operations and much less efficient at scattered indexing. Then the only thing that's really slow is gathering the random blocks, but by having the same number of random blocks per input token, you end up with just one of these columns right here, or R of these columns, and that again gives you a rectangular matrix. So this thing right here you can process very, very efficiently on a TPU, and the mistakes you make are basically this thing right here and this thing right here, because those weren't intended and are at the edges of the sequence.
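Here is a tiny numpy sketch of that roll trick as I understand it; the names and shapes are mine, and I deliberately keep the wrap-around behavior that causes the edge inaccuracies mentioned above.

```python
import numpy as np

n, d, w = 8, 4, 3   # tokens, feature dim, window size (odd)
K = np.arange(n * d, dtype=float).reshape(n, d)   # toy key matrix

# One rolled copy of K per window offset: band entry [i, k] is the key at
# position i + k - w // 2, wrapping around at the sequence edges (this
# wrap-around is exactly the "K6 shows up where it shouldn't" inaccuracy).
bands = np.stack(
    [np.roll(K, shift=-(k - w // 2), axis=0) for k in range(w)],
    axis=1,
)                                                  # shape (n, w, d): rectangular

Q = np.random.randn(n, d)
scores = np.einsum('nd,nwd->nw', Q, bands)         # each query scores only w keys
print(scores.shape)                                # (8, 3)
```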
So these were the tricks of Big Bird. To quickly summarize: Big Bird is basically taking a transformer and saying, well, why do we need all of this full attention? Maybe we only need some of it and can already do a good job, especially considering that the attention mechanism goes over multiple layers. We don't need routing from each token to each token; we can make up for not having a fully connected graph by simply running multiple layers. So their sparsity is, first of all, this random attention, which I believe changes from sequence to sequence but stays the same across the layers for a given sequence; then the window attention; and then the global attention. The reasoning behind the random attention is that if you have a randomly connected graph, the path lengths are on average logarithmic, so you can route information efficiently. The reasoning behind the window attention is that neighbor information is probably very important, and that has been shown empirically. And the reasoning behind the global attention is that some tokens, fixed by the developers, are so important that it's very beneficial that every other node is connected to them and that they are connected to every other node. The result of all that is the Big Bird attention mechanism, which is basically the Longformer, which already had the latter two, plus the random attention.

This achieves linear complexity in terms of memory and compute, though "linear" has to be qualified a bit, because it's modified by the window size, by the number of random attention tokens and by the number of global tokens, and in practice it often ends up being fairly large. Also, the theoretical guarantees now come with the fact that you need multiple layers; in the worst case, you need a sequence-length number of layers, which would put you right back at a quadratic requirement for memory and compute. They do some engineering tricks right here, and their results are pretty good.

So let's look at the results on some of the tasks. These are dev set results using base-size models, for example, where you can see that they do outperform basic RoBERTa models and they outperform the Longformer, which may mean that the random attention is useful; but in these things it may also just mean that more compute was thrown at it. At least, I'm not really focused on whether they outperform those models, because, as you can see right here, when they compare to the state of the art (and granted, these are models that have been trained specifically for these tasks and are crafted and engineered), Big Bird manages to hold its own against them in a lot of tasks and even gets state of the art on some. What I'm more interested in is that it can reach good numbers. It doesn't necessarily have to be state of the art, but reaching good numbers tells me that, okay, the empirical hit I take by not having the full attention is probably justifiable by the speedup and memory savings I do get. Especially when you see results mixed like this, where sometimes the other model is good and sometimes Big Bird is good, on different variations and so on, I would not make a big deal out of the fact that it is state of the art. I get that the authors have to do that, and I would do so as well, but don't think that this is now the best thing; it's very probable they also just threw a lot of compute at it.

What is cool is that they do some genomics experiments. So not only do they have NLP state of the art, but they also go into genomics and experiment with data there. I don't want to go into that, because ultimately it's another task, and I believe the paper is about the architecture.

All right, so that was Big Bird. I hope you enjoyed this video and learned something; I certainly learned something. If you want to check out the proofs, they're actually pretty entertaining to read. And yeah, I'll see you next time. Bye bye.
[ { "end": 6.88, "start": 0, "text": " Hi there, today we'll look at Big Bird Transformers for Longer Sequences by Manil Zahir and Gurugar" }, { "end": 9.84, "start": 6.88, "text": " Uganesh et al. of Google Research." }, { "end": 14.5, "start": 9.84, "text": " So this paper on a high level proposes to replace the quadratic attention mechanism" }, { "end": 23.32, "start": 14.5, "text": " in transformers by a mix of random attention, windowed attention, and selective global attention," }, { "end": 29.54, "start": 23.32, "text": " therefore achieving a complexity of linear memory requirement instead of quadratic memory" }, { "end": 31.2, "start": 29.54, "text": " requirement." }, { "end": 36.2, "start": 31.2, "text": " And as a result of that, they can process longer sequences than traditional transformers" }, { "end": 43.04, "start": 36.2, "text": " like BERT and achieve better results in some NLP tasks, and they also evaluate on genomics" }, { "end": 44.04, "start": 43.04, "text": " tasks." }, { "end": 49.46, "start": 44.04, "text": " So we'll go through this paper a bit, look a bit at the proof because they give a theoretical" }, { "end": 57.24, "start": 49.46, "text": " kind of guarantee that their random attention mechanism can still be Turing complete and" }, { "end": 63.28, "start": 57.24, "text": " can still achieve the same things as a full attention mechanism, but we'll also look at" }, { "end": 64.28, "start": 63.28, "text": " the drawbacks." }, { "end": 70.24000000000001, "start": 64.28, "text": " I sort of have mixed feelings about this paper, and I think I'll voice my concerns as we go" }, { "end": 71.24000000000001, "start": 70.24000000000001, "text": " through here." }, { "end": 75.6, "start": 71.24000000000001, "text": " But first, let's look at the paper, let's look at the architecture, and I think this" }, { "end": 82.28, "start": 75.6, "text": " is actually a pretty cool paper for the empirical progression of the field to process longer" }, { "end": 85.08, "start": 82.28, "text": " sequences with transformers." }, { "end": 90, "start": 85.08, "text": " As always, if you like content like this, feel free to share it around, leave a like" }, { "end": 96.44, "start": 90, "text": " and tell me in the comments what you think about the paper and about what I think, whatever," }, { "end": 100.28, "start": 96.44, "text": " you just, just go nuts." }, { "end": 108.6, "start": 100.28, "text": " Alright, so the basic premise right here is that the transformers, they've been pretty" }, { "end": 111.04, "start": 108.6, "text": " impactful, especially in NLP." }, { "end": 115.2, "start": 111.04, "text": " So they say transformer based models such as BERT have been one of the most successful" }, { "end": 117.76, "start": 115.2, "text": " deep learning models for NLP." }, { "end": 123.28, "start": 117.76, "text": " Unfortunately, one of their core limitations is the quadratic dependency, mainly in terms" }, { "end": 127.92, "start": 123.28, "text": " of memory, on the sequence length due to their full attention mechanism." }, { "end": 133.44, "start": 127.92, "text": " So really briefly, the full attention mechanism, and I've done numerous videos about attention" }, { "end": 136.28, "start": 133.44, "text": " mechanism BERT, attention is all you need, and so on." }, { "end": 141.96, "start": 136.28, "text": " So if you want a detailed explanation of what that is, just go look up the corresponding" }, { "end": 142.96, "start": 141.96, "text": " videos." 
}, { "end": 149.6, "start": 142.96, "text": " But briefly, what you'll have in NLP is a set of tokens, a sequence of tokens as an input," }, { "end": 157.62, "start": 149.6, "text": " and you want to transform them layer after layer into sort of a higher order representation" }, { "end": 159.6, "start": 157.62, "text": " of that same sequence." }, { "end": 164.88, "start": 159.6, "text": " And for that, you build these layers out of nodes and you have as many nodes usually as" }, { "end": 166.96, "start": 164.88, "text": " you have tokens in the sequence." }, { "end": 174.74, "start": 166.96, "text": " And the next set of, so each token is represented by a vector at the beginning, and each layer" }, { "end": 179.04000000000002, "start": 174.74, "text": " transforms the sequence, as I said, into sort of a higher level representation." }, { "end": 186.32, "start": 179.04000000000002, "text": " So you want the vector of this token right here to be a better representation than the" }, { "end": 188.56, "start": 186.32, "text": " vector was right here." }, { "end": 195.12, "start": 188.56, "text": " And you do that by incorporating information from all the other tokens into that particular" }, { "end": 196.12, "start": 195.12, "text": " vector." }, { "end": 200.72, "start": 196.12, "text": " Now, as I said, this is called an attention mechanism, and we don't actually have to go" }, { "end": 205.92, "start": 200.72, "text": " into how it works right here, but you can see pretty clearly that if you want to do" }, { "end": 212.92, "start": 205.92, "text": " this for every token, you need to have information routed from every token to every token, like" }, { "end": 216.64, "start": 212.92, "text": " from here to here, from here to here, and so on." }, { "end": 220.6, "start": 216.64, "text": " And this is just one token, and then you need to do it for this token and for this token" }, { "end": 222.06, "start": 220.6, "text": " and for this token." }, { "end": 226.66, "start": 222.06, "text": " So what you'll ultimately get, if n is your sequence length, you'll get some n squared" }, { "end": 231.16, "start": 226.66, "text": " amount of computation and memory requirements for this." }, { "end": 232.45999999999998, "start": 231.16, "text": " So this is a problem." }, { "end": 237.35999999999999, "start": 232.45999999999998, "text": " And usually, this means that, you know, this sequence length in BERT, this is limited to" }, { "end": 243.72000000000003, "start": 237.36, "text": " something like 512 tokens, which is okay for some applications." }, { "end": 249.36, "start": 243.72000000000003, "text": " But if you want to summarize, you know, entire articles, entire books, even, or do question" }, { "end": 253.28, "start": 249.36, "text": " answering with lots of context, it's not really enough." }, { "end": 259.5, "start": 253.28, "text": " So people have been thinking about how to scale this input, how to scale this." }, { "end": 264.88, "start": 259.5, "text": " And of course, the main culprit is this quadratic attention mechanism, because if you, you know," }, { "end": 270.76, "start": 264.88, "text": " scale the 512, you need, you know, four times the amount of compute and memory." }, { "end": 275.71999999999997, "start": 270.76, "text": " So how does this paper go about reducing that quadratic dependency?" }, { "end": 281.12, "start": 275.71999999999997, "text": " The goal right here is, of course, to get this to some O of n, right?" 
}, { "end": 287.28, "start": 281.12, "text": " Because then, as we double the input length, we simply need to double the compute requirements," }, { "end": 288.71999999999997, "start": 287.28, "text": " and that would be fantastic." }, { "end": 290.74, "start": 288.71999999999997, "text": " And that's what this paper does." }, { "end": 296.2, "start": 290.74, "text": " And it does so without sacrificing the properties of the transformer." }, { "end": 300.84000000000003, "start": 296.2, "text": " So here's the architecture that Big Bird proposes." }, { "end": 306.86, "start": 300.84000000000003, "text": " By the way, Big Bird, another character from Sesame Street, I guess, will continue the" }, { "end": 309.84000000000003, "start": 306.86, "text": " naming here after Elmo and BERT." }, { "end": 315.8, "start": 309.84000000000003, "text": " You know, I'm waiting for the model that's the count." }, { "end": 319.40000000000003, "start": 315.8, "text": " Yeah, that's going to be a fun model." }, { "end": 323.96, "start": 319.4, "text": " So Big Bird basically has three different types of attention." }, { "end": 327.76, "start": 323.96, "text": " And here, these are adjacency matrices in this attention mechanism." }, { "end": 333.53999999999996, "start": 327.76, "text": " So here is the input layer, and the output layer is right here." }, { "end": 337.84, "start": 333.53999999999996, "text": " So that basically means that node i right here would be connected." }, { "end": 343.52, "start": 337.84, "text": " Sorry, that's not a straight line, would be connected to this particular node and also" }, { "end": 345.34, "start": 343.52, "text": " to this particular node." }, { "end": 353.44, "start": 345.34, "text": " So we're now trying, if we have node i right here, we're now trying to not connect it to" }, { "end": 359.84, "start": 353.44, "text": " all of these nodes, but we'll say, we'll just select some at random and then connect it" }, { "end": 360.84, "start": 359.84, "text": " to that." }, { "end": 363.64, "start": 360.84, "text": " Okay, this is what we call random attention." }, { "end": 371.15999999999997, "start": 363.64, "text": " And you can pretty clearly see if you connect each of the i nodes to r equals 2, to two" }, { "end": 376.40000000000003, "start": 371.16, "text": " random nodes, then you don't have an n squared anymore." }, { "end": 383.56, "start": 376.40000000000003, "text": " But you'll have a like an O of r times n, which you know, if r is a constant is an O" }, { "end": 386.32000000000005, "start": 383.56, "text": " of n attention mechanism." }, { "end": 394, "start": 386.32000000000005, "text": " Okay, so the main goal between the random attention mechanism is that for each query," }, { "end": 401.44, "start": 394, "text": " basically, you select random tokens that you attend to, and that random number is a fixed" }, { "end": 405.2, "start": 401.44, "text": " number that's not dependent on the sequence length." }, { "end": 412.32, "start": 405.2, "text": " And the paper is a little bit unclear about whether or not those random ones are the same" }, { "end": 418.28, "start": 412.32, "text": " for every sequence or are switched up, or the same for every layer or are switched up." }, { "end": 423.82, "start": 418.28, "text": " But they formulate all of this as sort of in sort of a graph in sort of a random graph." }, { "end": 428.96, "start": 423.82, "text": " So they're, they formulate the attention mechanism in form of a graph." 
}, { "end": 435, "start": 428.96, "text": " So if we transform all of these nodes into a graph, a full attention mechanism would" }, { "end": 441.36, "start": 435, "text": " mean that each graph, each node is connected to each of the other nodes, right, fully connected" }, { "end": 447.02, "start": 441.36, "text": " graph, I don't, maybe that's it." }, { "end": 448.88, "start": 447.02, "text": " So that would be a full attention mechanism." }, { "end": 456.32, "start": 448.88, "text": " And then they say, well, if we just have random connections between these things, then there" }, { "end": 462.64, "start": 456.32, "text": " are some theorems from graph theory that say that each random walk in this graph is going" }, { "end": 466.15999999999997, "start": 462.64, "text": " to, so this graph is going to mix pretty quickly." }, { "end": 473.56, "start": 466.15999999999997, "text": " So I can get from each node to each other node by a random walk in a logarithmic time." }, { "end": 478.28, "start": 473.56, "text": " And this random walk, which basically means that you go from here to here, this would" }, { "end": 481.65999999999997, "start": 478.28, "text": " be one layer of the transformer." }, { "end": 485.91999999999996, "start": 481.65999999999997, "text": " And then if you want to go from here to here, that you would have to do that in the next" }, { "end": 486.91999999999996, "start": 485.91999999999996, "text": " layer." }, { "end": 493.03999999999996, "start": 486.91999999999996, "text": " So this formulation as a random graph leads me to believe that layer after layer, the" }, { "end": 497.03999999999996, "start": 493.03999999999996, "text": " random attention pattern is going to be the same." }, { "end": 503.76, "start": 497.03999999999996, "text": " But also the formulation of the paper leads me to believe that the this random attention" }, { "end": 505.91999999999996, "start": 503.76, "text": " differs from sequence to sequence." }, { "end": 512.96, "start": 505.92, "text": " So I believe what's happening is that they get a new sequence, then they decide on this" }, { "end": 519.28, "start": 512.96, "text": " pattern right here once and then they use this layer after layer, the same pattern again." }, { "end": 527.84, "start": 519.28, "text": " So you can see that in the traditional attention, information can basically flow from each of" }, { "end": 532.08, "start": 527.84, "text": " the nodes to each other node in one single step, right?" }, { "end": 534.48, "start": 532.08, "text": " Because each node is connected to each other node." }, { "end": 536.9200000000001, "start": 534.48, "text": " You see this in the graph right here." }, { "end": 545.5600000000001, "start": 536.9200000000001, "text": " However, if we only select a subset, then you know, it needs to if I want to go from," }, { "end": 550.12, "start": 545.5600000000001, "text": " as I said, from here to here, then I need to do it in two steps." }, { "end": 552.4, "start": 550.12, "text": " And therefore I need two layers." }, { "end": 555.28, "start": 552.4, "text": " And that's going to be the culprit of this method here." }, { "end": 562.04, "start": 555.28, "text": " And while it is mentioned in the paper, it's sort of I feel at least that's my my assessment" }, { "end": 566.12, "start": 562.04, "text": " of this paper, it's kind of swept under the rug a little bit." 
}, { "end": 573.24, "start": 566.12, "text": " I mean, they do have a theorem that clearly says we can construct an example of a task" }, { "end": 577.2199999999999, "start": 573.24, "text": " that in the full attention setting can be solved with a single step." }, { "end": 585.8399999999999, "start": 577.2199999999999, "text": " So a single layer that in our random attention setting needs a lot of layers, a lot of steps." }, { "end": 592.08, "start": 585.84, "text": " But you know, the rest of the paper is sort of shaky on on this thing." }, { "end": 598.64, "start": 592.08, "text": " But nevertheless, you can see how the random attention can, if you have enough layers," }, { "end": 602.24, "start": 598.64, "text": " do the same information routing as the full attention." }, { "end": 603.24, "start": 602.24, "text": " Okay." }, { "end": 607.52, "start": 603.24, "text": " However, this is not a property of the random attention." }, { "end": 609.82, "start": 607.52, "text": " And we'll see this in the next thing right here." }, { "end": 614.4200000000001, "start": 609.82, "text": " So the next ingredient that this paper uses is window attention." }, { "end": 618.92, "start": 614.42, "text": " And you can see over here that Big Bird is ultimately going to be a combination of the" }, { "end": 623.28, "start": 618.92, "text": " three types of attention, which will, which we are looking at here." }, { "end": 630.4, "start": 623.28, "text": " So window attention basically means that each each i each token at the i of position is" }, { "end": 633.5799999999999, "start": 630.4, "text": " going to attend to itself, of course." }, { "end": 639.02, "start": 633.5799999999999, "text": " So here is i, but it is also going to attend to its neighbors." }, { "end": 642.68, "start": 639.02, "text": " So here is i minus one and here is i plus one." }, { "end": 649.5, "start": 642.68, "text": " And this is a you know, this is a window size w that you can that is a parameter, but also" }, { "end": 657.06, "start": 649.5, "text": " it is a constant and therefore you again go from n squared to w times n, which you know" }, { "end": 661.2199999999999, "start": 657.06, "text": " is o of n if w is a constant." }, { "end": 665.8, "start": 661.2199999999999, "text": " And this might be familiar to you, because we've already seen this in the long former" }, { "end": 666.8, "start": 665.8, "text": " paper." }, { "end": 674.4399999999999, "start": 666.8, "text": " We made a video or I think even two videos on the long former, which used exactly the" }, { "end": 678.54, "start": 674.4399999999999, "text": " window attention in combination with the global attention." }, { "end": 682.0999999999999, "start": 678.54, "text": " And if you want to know more about that, go watch these videos." }, { "end": 688.92, "start": 682.0999999999999, "text": " But the new thing in Big Bird right here is this addition of the random attention." }, { "end": 698.54, "start": 688.92, "text": " Again, the the window here is is has exactly the same properties as the random attention." }, { "end": 704.3, "start": 698.54, "text": " So you have instead of a fully connected graph, you have a sparsely connected graph." }, { "end": 710.42, "start": 704.3, "text": " Now if you have random attention, the sparsely connected graph is like like the one on the" }, { "end": 711.42, "start": 710.42, "text": " right." 
}, { "end": 717.38, "start": 711.42, "text": " But if you have a windowed attention, you can it is kind of not randomly connected," }, { "end": 721.12, "start": 717.38, "text": " but each node is connected to its neighbors like this." }, { "end": 726.58, "start": 721.12, "text": " And you can also see that if I want to go from this node to this node right here, I" }, { "end": 729.98, "start": 726.58, "text": " can't do it in one step, but I can do it in two steps." }, { "end": 732.5, "start": 729.98, "text": " I go here and I go here." }, { "end": 741.42, "start": 732.5, "text": " So in the terms of the attention layers, if I want to go from node one to node three," }, { "end": 745.58, "start": 741.42, "text": " I have to do it in two steps because each node is only connected to its neighbors." }, { "end": 750.98, "start": 745.58, "text": " So the connection patterns would sort of look like this." }, { "end": 757.3000000000001, "start": 750.98, "text": " So I have to go from one to two and then in the next layer from two to three." }, { "end": 764.82, "start": 757.3000000000001, "text": " So the paper basically makes up for the lack of full attention by adding layers." }, { "end": 769.22, "start": 764.82, "text": " And you also might recognize this from a convolution operation." }, { "end": 775.5400000000001, "start": 769.22, "text": " This basically because it is a convolution operation, right in a convolution, each node" }, { "end": 780.54, "start": 775.54, "text": " only aggregates input from its neighbors for the next layer." }, { "end": 786.3399999999999, "start": 780.54, "text": " And then we know that as we go up the layers, the de facto window that each node looks at" }, { "end": 790.0999999999999, "start": 786.3399999999999, "text": " is going to be like a cone kind of like this." }, { "end": 794.38, "start": 790.0999999999999, "text": " So this is very similar to how a convolutional neural network works." }, { "end": 799.38, "start": 794.38, "text": " And the reasoning is very similar because the reasoning is, well, in a sentence, the" }, { "end": 804.74, "start": 799.38, "text": " most important words for any given word are probably going to be its neighbors, like the" }, { "end": 806.42, "start": 804.74, "text": " words around it." }, { "end": 809.98, "start": 806.42, "text": " And as you go up the layers, you branch out more and more." }, { "end": 815.9, "start": 809.98, "text": " But ultimately, this neighborhood principle holds in NLP as well." }, { "end": 821.54, "start": 815.9, "text": " So again, we already saw this in the long former, but that's the reason behind the window" }, { "end": 823.82, "start": 821.54, "text": " attention and that's the second ingredient." }, { "end": 827.46, "start": 823.82, "text": " And then the third ingredient is this global attention." }, { "end": 836.0600000000001, "start": 827.46, "text": " Now the global attention is selected tokens that are so important and that's fixed by" }, { "end": 843.14, "start": 836.0600000000001, "text": " the developers that are so important that they are connected to everything else." }, { "end": 851.1, "start": 843.14, "text": " So for example, in these transformers, you often have what's this kind of CLS token." 
}, { "end": 857.78, "start": 851.1, "text": " So this is a special token that you prepend to some piece of text and the output of this" }, { "end": 863.74, "start": 857.78, "text": " token is going to be your classification output because you don't want to bind your classification" }, { "end": 866.3000000000001, "start": 863.74, "text": " if you need to classify the entire sequence." }, { "end": 870.6, "start": 866.3000000000001, "text": " You don't want to bind that decision to one particular word." }, { "end": 875.4200000000001, "start": 870.6, "text": " What you want to do is you want to have an extra token and that's this CLS token that" }, { "end": 879.0600000000001, "start": 875.4200000000001, "text": " kind of aggregates information from all of this." }, { "end": 885.3, "start": 879.06, "text": " So layer after layer, layer after layer, you'll have, so if we go here, layer after layer," }, { "end": 887.8599999999999, "start": 885.3, "text": " we have this one special node." }, { "end": 895.14, "start": 887.8599999999999, "text": " And in each step, every single other node is able to send information right here to" }, { "end": 900.7399999999999, "start": 895.14, "text": " this node and receive information from this node." }, { "end": 911.78, "start": 900.74, "text": " So now, as a result of this, as you may be able to see, every single path is kind of" }, { "end": 916.5, "start": 911.78, "text": " a maximum length of two because if I want to go from any node to any other node, I can" }, { "end": 922.54, "start": 916.5, "text": " simply send information to this global node and then the global node in the next step" }, { "end": 926.5600000000001, "start": 922.54, "text": " can send information to whatever other node." }, { "end": 933.5799999999999, "start": 926.56, "text": " And that is a property that they use in their proof that this attention mechanism is as" }, { "end": 937.2199999999999, "start": 933.5799999999999, "text": " sort of as powerful as the classic full attention mechanism." }, { "end": 940.3399999999999, "start": 937.2199999999999, "text": " And we'll go through that in one second." }, { "end": 944.9, "start": 940.3399999999999, "text": " But first, I hope this was clear that this combination of random attention, window attention" }, { "end": 952.8199999999999, "start": 944.9, "text": " and global attention is what is called Big Bird." }, { "end": 957.38, "start": 952.82, "text": " They have some engineering tricks that go along with this, but in concept, you can imagine" }, { "end": 963.1400000000001, "start": 957.38, "text": " Big Bird being long former plus these random attention right here." }, { "end": 968.46, "start": 963.1400000000001, "text": " And as an engineer, as an NLP engineer, that makes kind of total sense." }, { "end": 976.36, "start": 968.46, "text": " I totally believe that the introduction, the addition of these random attention of these" }, { "end": 983.82, "start": 976.36, "text": " random attention patterns can absolutely help your classification or whatever your NLP tasks" }, { "end": 986.9, "start": 983.82, "text": " because more attention, better." 
}, { "end": 993.38, "start": 986.9, "text": " And I also am completely willing to believe that using the full attention matrix, while" }, { "end": 999.54, "start": 993.38, "text": " it is, of course, more accurate, it won't hurt too much to leave some of that attention" }, { "end": 1005.46, "start": 999.54, "text": " away because essentially all the path lengths are just becoming two or even with the random" }, { "end": 1011.58, "start": 1005.46, "text": " attention are really short or logarithmic to route information from a node to some other" }, { "end": 1012.58, "start": 1011.58, "text": " node." }, { "end": 1019.7800000000001, "start": 1012.58, "text": " So the loss that you incur is kind of in a logarithmic scale in terms of performance," }, { "end": 1025.02, "start": 1019.7800000000001, "text": " while the gain that you make is sort of in a in a quadratic or like a linear scale, you" }, { "end": 1027.76, "start": 1025.02, "text": " go from quadratic to linear." }, { "end": 1031.3400000000001, "start": 1027.76, "text": " And that seems to me like a good empirical trade off." }, { "end": 1042.98, "start": 1031.34, "text": " All right, however, the the proofs here, the proof of of how how these how these things" }, { "end": 1046.3799999999999, "start": 1042.98, "text": " are constructed are a little bit." }, { "end": 1047.3799999999999, "start": 1046.3799999999999, "text": " I don't know." }, { "end": 1054.8999999999999, "start": 1047.3799999999999, "text": " So what they do in the proof that this function can sort of is a universal approximator." }, { "end": 1060.8, "start": 1054.8999999999999, "text": " People have already shown that full attention mechanisms are universal approximators." }, { "end": 1066.1399999999999, "start": 1060.8, "text": " So they show here that this sparse attention mechanism is also a universal approximator." }, { "end": 1068.5, "start": 1066.1399999999999, "text": " They make big use of star graphs." }, { "end": 1073.8999999999999, "start": 1068.5, "text": " What they say is, OK, if we have a star graph, which is one node connected right here to" }, { "end": 1077.48, "start": 1073.8999999999999, "text": " every other node, this is a star graph." }, { "end": 1084.22, "start": 1077.48, "text": " If we have a star graph, we can achieve the same thing than with a full graph." }, { "end": 1087.94, "start": 1084.22, "text": " A full graph is where every node is connected to every other node." }, { "end": 1093.98, "start": 1087.94, "text": " But as I already said, what they need for this is multiple layers of this star graph." }, { "end": 1099.7, "start": 1093.98, "text": " So and that has to do with the fact that if I want to route information, I basically have" }, { "end": 1103.4, "start": 1099.7, "text": " to go via this middle node right here." }, { "end": 1107.74, "start": 1103.4, "text": " And there is an additional complication because this middle node in our case right here is" }, { "end": 1110.18, "start": 1107.74, "text": " only one node." }, { "end": 1116.74, "start": 1110.18, "text": " I can't route information at the same like I can't have this routing right here at the" }, { "end": 1123.34, "start": 1116.74, "text": " same time that I have this routing right here, like going from here to here, because I only" }, { "end": 1125.04, "start": 1123.34, "text": " have one middle node." }, { "end": 1129.42, "start": 1125.04, "text": " And I kind of this is not how that like this is very dumb math." 
}, { "end": 1135.54, "start": 1129.42, "text": " But maybe you have to imagine that there is one memory slot." }, { "end": 1141.02, "start": 1135.54, "text": " And you can only use that one memory slot at the same time for one of these things." }, { "end": 1145.8, "start": 1141.02, "text": " So essentially, what you'll have to do is you'll have to do the green thing first." }, { "end": 1150.26, "start": 1145.8, "text": " And then in the next step, you'll have to do the blue thing second." }, { "end": 1154.58, "start": 1150.26, "text": " And then so these are now pairwise routing between nodes." }, { "end": 1159.56, "start": 1154.58, "text": " But ultimately, what an attention mechanism does is it does everything to everything right" }, { "end": 1164.3799999999999, "start": 1159.56, "text": " in a single layer, it routes information from all the nodes to all the other nodes." }, { "end": 1168.6599999999999, "start": 1164.3799999999999, "text": " And to achieve that, so you need multiple rounds of this." }, { "end": 1173.9199999999998, "start": 1168.6599999999999, "text": " And it turns out that in the worst case, you actually need n rounds of this." }, { "end": 1181.94, "start": 1173.92, "text": " So you know, you trade off your you go from n squared to n memory and compute requirements" }, { "end": 1183.5800000000002, "start": 1181.94, "text": " in a single layer." }, { "end": 1190.48, "start": 1183.5800000000002, "text": " But in the worst case, you need n layers to recover the power of the full of the full" }, { "end": 1192.02, "start": 1190.48, "text": " transformer." }, { "end": 1196.46, "start": 1192.02, "text": " And that is the last one of their theoretical results right here." }, { "end": 1200.22, "start": 1196.46, "text": " So first, they prove universal approximations." }, { "end": 1203.0600000000002, "start": 1200.22, "text": " And second, they prove Turing completeness." }, { "end": 1207.3799999999999, "start": 1203.06, "text": " These two properties have been proven for full attention mechanisms." }, { "end": 1213.4199999999998, "start": 1207.3799999999999, "text": " And third, they prove that there are tasks where you actually do need n layers to solve" }, { "end": 1217.54, "start": 1213.4199999999998, "text": " them with their limited attention." }, { "end": 1227.94, "start": 1217.54, "text": " So you know, I'm not sure but I feel you can make any sort of polynomial algorithm into" }, { "end": 1229.6, "start": 1227.94, "text": " a linear algorithm like this." }, { "end": 1232.84, "start": 1229.6, "text": " Like I have a I have like a cool sorting algorithm, right?" }, { "end": 1238.78, "start": 1232.84, "text": " So if this is my sequence that I want to sort, what I can do is I can simply, you know, take" }, { "end": 1245.78, "start": 1238.78, "text": " a random subset of them, like this, this and this and then kind of go and sort them and" }, { "end": 1252.06, "start": 1245.78, "text": " then put them like I send them to the to the global memory like this, I sort them, and" }, { "end": 1255.78, "start": 1252.06, "text": " then I put them back, right?" }, { "end": 1262.4199999999998, "start": 1255.78, "text": " And if I do this for enough, if I do this for enough rounds, okay, you know, if I do" }, { "end": 1267.66, "start": 1262.42, "text": " this for enough rounds, you know, at the worst case, I need n rounds to sort my or log n" }, { "end": 1269.54, "start": 1267.66, "text": " rounds if I do it smartly." 
}, { "end": 1276.42, "start": 1269.54, "text": " But you know, in, you know, the single step here is the single step is just O of n." }, { "end": 1280.14, "start": 1276.42, "text": " So I have now an O of n sorting algorithm." }, { "end": 1287.26, "start": 1280.14, "text": " I you know, I have my sort of a bit of worry to express things like that." }, { "end": 1296.42, "start": 1287.26, "text": " And yeah, but you know, it is from an empirical standpoint, I absolutely believe that this" }, { "end": 1298.82, "start": 1296.42, "text": " this is enough." }, { "end": 1304.74, "start": 1298.82, "text": " Now my second coral right here is that if you look at the proof, first of all, what" }, { "end": 1310.3799999999999, "start": 1304.74, "text": " it makes use is this star graph, and the star graph corresponds to the global attention." }, { "end": 1314.46, "start": 1310.3799999999999, "text": " So that's not much to do with the random attention, though they use the random attention in their" }, { "end": 1323.74, "start": 1314.46, "text": " proof, but I at least believe that it would be possible with the global attention only." }, { "end": 1330.74, "start": 1323.74, "text": " And then the second thing is if you look at the parameters that they use for the for the" }, { "end": 1334.8600000000001, "start": 1330.74, "text": " experiments, and I've already said this in the long former video." }, { "end": 1340.14, "start": 1334.8600000000001, "text": " So in the long form of video, it turned out that if you look at how big these window attention" }, { "end": 1347.9, "start": 1340.14, "text": " is, it turns out that it you're still well, you know, the original BERT attended to 512" }, { "end": 1348.9, "start": 1347.9, "text": " tokens." }, { "end": 1353.3400000000001, "start": 1348.9, "text": " And then you look at the window and the window was still 512 tokens." }, { "end": 1357.5400000000002, "start": 1353.3400000000001, "text": " It's just that the global attention was even more so ultimately they ended up using more" }, { "end": 1360.14, "start": 1357.5400000000002, "text": " memory than the original BERT." }, { "end": 1367.6200000000001, "start": 1360.14, "text": " And here, if I look at the parameters of their thing, and they have multiple experiments" }, { "end": 1371.26, "start": 1367.62, "text": " right here, and I believe this is the the base version." }, { "end": 1376.1, "start": 1371.26, "text": " So this is the base version, they also have this large version." }, { "end": 1380.86, "start": 1376.1, "text": " But here, this is the 12 layer version." }, { "end": 1383.86, "start": 1380.86, "text": " And you can see they have this block length." }, { "end": 1388.1999999999998, "start": 1383.86, "text": " And we'll get into the block length in one second." }, { "end": 1393.6799999999998, "start": 1388.1999999999998, "text": " But then you can see that their window size is three times the block length, the number" }, { "end": 1397.98, "start": 1393.68, "text": " of random tokens is three times the block length, and the number of global tokens is" }, { "end": 1399.5800000000002, "start": 1397.98, "text": " two times the block length." }, { "end": 1411.5800000000002, "start": 1399.5800000000002, "text": " So that results in eight times B. So eight times 64 is, you know," }, { "end": 1413.46, "start": 1411.5800000000002, "text": " Can I calculate this?" }, { "end": 1415.66, "start": 1413.46, "text": " Or am I stupid?" }, { "end": 1416.66, "start": 1415.66, "text": " It's 512." 
}, { "end": 1420.42, "start": 1416.66, "text": " Yes, actually calculated this before." }, { "end": 1423.3400000000001, "start": 1420.42, "text": " So this is 512 tokens." }, { "end": 1432.1799999999998, "start": 1423.34, "text": " So you know, you you go from from BERT that has 512 tokens and attends to 512 tokens to" }, { "end": 1435.1, "start": 1432.1799999999998, "text": " also attending to 512 tokens." }, { "end": 1442.4199999999998, "start": 1435.1, "text": " Of course, the advantage here is that they now have 4096 sequence length." }, { "end": 1450.58, "start": 1442.4199999999998, "text": " So they have the freedom to not attend to as many tokens as they have in the input length." }, { "end": 1458.54, "start": 1450.58, "text": " But you know, to put it in perspective, this here uses more memory and more compute on" }, { "end": 1466.26, "start": 1458.54, "text": " on its face than BERT, because BERT attends to as many tokens but has a smaller input" }, { "end": 1469.4199999999998, "start": 1466.26, "text": " sequence." }, { "end": 1475.8799999999999, "start": 1469.4199999999998, "text": " And you know, I, there's sort of a thing where in order to make these sparse attention things" }, { "end": 1482.0200000000002, "start": 1475.88, "text": " work, you have to go pretty, pretty, you know, high in the number of things you attend to," }, { "end": 1487.7, "start": 1482.0200000000002, "text": " you can leave away some but it's not like you can scale up orders of magnitude of your" }, { "end": 1489.88, "start": 1487.7, "text": " input sequence length." }, { "end": 1495.3000000000002, "start": 1489.88, "text": " So that's the this promise of linear attention is sort of it's kind of fulfilled but not" }, { "end": 1496.3000000000002, "start": 1495.3000000000002, "text": " there yet." }, { "end": 1501.8600000000001, "start": 1496.3000000000002, "text": " The second thing I would like to point out is that in a lot of cases, the number of random" }, { "end": 1504.7, "start": 1501.8600000000001, "text": " tokens is actually set to zero." }, { "end": 1511.18, "start": 1504.7, "text": " So really making use, I believe, of these of the of the global of the number of global" }, { "end": 1512.76, "start": 1511.18, "text": " tokens." }, { "end": 1520.02, "start": 1512.76, "text": " So it that seems a bit strange in that they continuously refer to their random attention" }, { "end": 1521.6200000000001, "start": 1520.02, "text": " mechanism." }, { "end": 1527.22, "start": 1521.6200000000001, "text": " But then in a lot of experiments, they don't actually have a random attention mechanism." }, { "end": 1530.98, "start": 1527.22, "text": " I believe they have to do that because that's kind of what makes them different from the" }, { "end": 1537.06, "start": 1530.98, "text": " long former in principle, but still, yeah." }, { "end": 1544.76, "start": 1537.06, "text": " So the last novelty, let's say is an engineering novelty in that they now always consider not" }, { "end": 1549.66, "start": 1544.76, "text": " single, for example, they don't consider single random attention, they always consider these" }, { "end": 1550.66, "start": 1549.66, "text": " in blocks." }, { "end": 1556.14, "start": 1550.66, "text": " And that's because our current hardware is really bad at sparse stuff." }, { "end": 1559.88, "start": 1556.14, "text": " Really bad at single indexing, gathering single things." 
}, { "end": 1566.0200000000002, "start": 1559.88, "text": " So if you can do everything in blocks, you basically get you get these blocks almost" }, { "end": 1567.0200000000002, "start": 1566.0200000000002, "text": " for free." }, { "end": 1572.48, "start": 1567.0200000000002, "text": " So it takes only marginally longer to retrieve this full two by two block right here than" }, { "end": 1576.46, "start": 1572.48, "text": " it would to retrieve the single instance right here." }, { "end": 1582.38, "start": 1576.46, "text": " Of course, that means you have, you know, four times you still use four times more memory," }, { "end": 1585.9, "start": 1582.38, "text": " but it is not four times slower than the original thing." }, { "end": 1589.94, "start": 1585.9, "text": " So you can use these blocks right here." }, { "end": 1593.26, "start": 1589.94, "text": " You can do it for the random attention, you can do it for the window attention, as you" }, { "end": 1594.26, "start": 1593.26, "text": " can see here." }, { "end": 1598.22, "start": 1594.26, "text": " So you break this window pattern a little bit into blocks." }, { "end": 1601.02, "start": 1598.22, "text": " And that makes it a lot faster." }, { "end": 1605.64, "start": 1601.02, "text": " So that speeds up, get the speed up almost for free." }, { "end": 1613.02, "start": 1605.64, "text": " And then they make another approximation in that the way they do this windowing is, and" }, { "end": 1615.98, "start": 1613.02, "text": " I just go really briefly." }, { "end": 1624.06, "start": 1615.98, "text": " So you can see right here that it would be very cumbersome to gather." }, { "end": 1629.5, "start": 1624.06, "text": " So what we need, we're just going to focus this this dotted thing right here is a bit" }, { "end": 1630.5, "start": 1629.5, "text": " confusing." }, { "end": 1634.86, "start": 1630.5, "text": " So you want to attend to these things." }, { "end": 1639.16, "start": 1634.86, "text": " And these you can just get out with a matrix slice really easy." }, { "end": 1644.9, "start": 1639.16, "text": " But then you want to attend to this kind of blocky thing right here from the window attention," }, { "end": 1646.92, "start": 1644.9, "text": " right, like this thing." }, { "end": 1653.3400000000001, "start": 1646.92, "text": " And this is hard to get out because you'd have to kind of index each row individually." }, { "end": 1654.8000000000002, "start": 1653.3400000000001, "text": " And that's very slow." }, { "end": 1659.5600000000002, "start": 1654.8000000000002, "text": " So what they do, there is this matrix roll operation, where you can sort of roll the" }, { "end": 1661.1000000000001, "start": 1659.5600000000002, "text": " axis around." }, { "end": 1665.8000000000002, "start": 1661.1000000000001, "text": " So what you'll do is you'll take this thing right here, and you put it to the left right" }, { "end": 1670.98, "start": 1665.8, "text": " here, and you'll take, for example, this thing right here, and you'll put it to the right" }, { "end": 1674.78, "start": 1670.98, "text": " or no, like it's, it's up and down." }, { "end": 1677.12, "start": 1674.78, "text": " But in essence, that's what you do." }, { "end": 1683.6599999999999, "start": 1677.12, "text": " And you can you can fold all of this blue stuff into a rectangular matrix." }, { "end": 1687.26, "start": 1683.6599999999999, "text": " If you know if you can see right here." 
}, { "end": 1693.1, "start": 1687.26, "text": " So you kind of roll this back, roll this back, roll this forward, and you replace whatever" }, { "end": 1695.62, "start": 1693.1, "text": " is missing by these." }, { "end": 1702.4199999999998, "start": 1695.62, "text": " Now this again gives you some inaccuracies because this block right here was never intended" }, { "end": 1704.76, "start": 1702.4199999999998, "text": " to be attended to." }, { "end": 1708.82, "start": 1704.76, "text": " And all of a sudden you see you have the K6 in here." }, { "end": 1713.6599999999999, "start": 1708.82, "text": " So it gives you a bit of inaccuracies at the edges of the sequence." }, { "end": 1718.6399999999999, "start": 1713.6599999999999, "text": " But you can take that, you know, you can take that hit for the increased performance that" }, { "end": 1721.62, "start": 1718.6399999999999, "text": " you gain by now having a rectangular matrix." }, { "end": 1727.1399999999999, "start": 1721.62, "text": " TPUs are really efficient at this, not as efficient at this." }, { "end": 1733.06, "start": 1727.1399999999999, "text": " And then the only thing that's really slow is gathering these random blocks right here." }, { "end": 1738.78, "start": 1733.06, "text": " But also by having the same amount of random blocks per input token, what you'll do is" }, { "end": 1745.3, "start": 1738.78, "text": " you'll end up with just one of these columns right here, or you know, R of these columns." }, { "end": 1747.6599999999999, "start": 1745.3, "text": " And that again gives you a rectangular matrix." }, { "end": 1753.3400000000001, "start": 1747.66, "text": " So this thing right here you can process very, very efficiently using a TPU." }, { "end": 1759.3000000000002, "start": 1753.3400000000001, "text": " And you know, the mistakes you make are basically this thing right here and this thing right" }, { "end": 1764.88, "start": 1759.3000000000002, "text": " here, because those weren't intended and are at the edges of the sequence." }, { "end": 1771.3400000000001, "start": 1764.88, "text": " So these were the tricks of Big Bird to quickly summarize." }, { "end": 1779.6599999999999, "start": 1771.34, "text": " Big Bird is basically taking a transformer saying, well, why do we need all of this attention," }, { "end": 1784.78, "start": 1779.6599999999999, "text": " all of this full attention, maybe we only need some of that and can already do a big" }, { "end": 1789.98, "start": 1784.78, "text": " job, a good job, especially, you know, considering the attention mechanism goes over multiple" }, { "end": 1791.6599999999999, "start": 1789.98, "text": " layers." }, { "end": 1797.86, "start": 1791.6599999999999, "text": " So we don't need a routing from each token to each token, we can make up for not having" }, { "end": 1801.8799999999999, "start": 1797.86, "text": " a fully connected graph by simply running multiple layers." }, { "end": 1809.1, "start": 1801.8799999999999, "text": " So their sparsity is first of all, you have this random attention, which I believe changes" }, { "end": 1815.28, "start": 1809.1, "text": " from sequence to sequence, but stays within or among the layers of the same sequence." }, { "end": 1818.82, "start": 1815.28, "text": " Then you have the window attention with the reasoning." 
}, { "end": 1822.74, "start": 1818.82, "text": " So the reasoning behind the random attention is that if you have a randomly connected graph," }, { "end": 1826, "start": 1822.74, "text": " the path lengths are on average logarithmic." }, { "end": 1828.52, "start": 1826, "text": " So you can route information efficiently." }, { "end": 1834.14, "start": 1828.52, "text": " The reasoning behind the window attention is that probably neighbor information is very" }, { "end": 1837.5, "start": 1834.14, "text": " important and that has been shown empirically." }, { "end": 1841.66, "start": 1837.5, "text": " And then the global attention, the reasoning behind this is that some of the tokens that" }, { "end": 1848.42, "start": 1841.66, "text": " are fixed by the developers are so important that it's very beneficial that each other" }, { "end": 1853.46, "start": 1848.42, "text": " node is connected to them and that they are connected to each other node." }, { "end": 1859.54, "start": 1853.46, "text": " The result of that is the Big Bird attention mechanism, which is basically long former," }, { "end": 1864.4, "start": 1859.54, "text": " which already had these two plus the random attention." }, { "end": 1872.46, "start": 1864.4, "text": " This achieves a linear complexity in terms of memory and compute, though linear has to" }, { "end": 1878.78, "start": 1872.46, "text": " be qualified a bit because it's modified by the window size, by the number of random attention" }, { "end": 1885.86, "start": 1878.78, "text": " tokens, by the number of global tokens, and in practice often ends up being fairly large" }, { "end": 1888.66, "start": 1885.86, "text": " ish." }, { "end": 1896.8999999999999, "start": 1888.66, "text": " And also the theoretical guarantees now come with the fact that you need multiple layers." }, { "end": 1902.02, "start": 1896.8999999999999, "text": " In the worst case, you need sequence length amount of layers, which in the worst case" }, { "end": 1907.8999999999999, "start": 1902.02, "text": " would result right back into a quadratic requirement for memory and compute." }, { "end": 1916.8600000000001, "start": 1907.9, "text": " They do some engineering, some engineering tricks right here, and their results are pretty" }, { "end": 1917.8600000000001, "start": 1916.8600000000001, "text": " good." }, { "end": 1923.0600000000002, "start": 1917.8600000000001, "text": " So the results in various tasks and we'll, we'll look at some of the tasks right here." }, { "end": 1928.7, "start": 1923.0600000000002, "text": " So these are def set results using base size models." }, { "end": 1935.18, "start": 1928.7, "text": " For example, where you can see they do outperform basic Roberta models, they outperform long" }, { "end": 1940.6200000000001, "start": 1935.18, "text": " former, which may mean that the random attention is useful, but you know, in these things," }, { "end": 1947.46, "start": 1940.6200000000001, "text": " it's also always may just mean that you've thrown more compute at it." 
}, { "end": 1951.5800000000002, "start": 1947.46, "text": " At least I'm not really looking that they outperform the models because as you can see" }, { "end": 1955.7, "start": 1951.5800000000002, "text": " right here, if they compare to state of the art and you know, granted, these are models" }, { "end": 1962.38, "start": 1955.7, "text": " that have been trained specifically for these tasks and are crafted and engineered and Big" }, { "end": 1969.2600000000002, "start": 1962.38, "text": " Bird manages to Big Bird manages to hold itself against them in a lot of tasks and even get" }, { "end": 1971.6200000000001, "start": 1969.2600000000002, "text": " state of the art on some." }, { "end": 1976.7, "start": 1971.6200000000001, "text": " What I'm more interested in is that it, you know, it can reach good numbers." }, { "end": 1981.38, "start": 1976.7, "text": " It doesn't necessarily have to be state of the art, but it can reach good numbers, which" }, { "end": 1989.0200000000002, "start": 1981.38, "text": " tells me that, okay, probably the, the empirical hit that I take by not having the full attention" }, { "end": 1996.58, "start": 1989.02, "text": " is, you know, is justifiable by the speed up and memory savings I do get." }, { "end": 2001.58, "start": 1996.58, "text": " Yeah, especially when result, when you see results mixed like this, you know, sometimes" }, { "end": 2007.62, "start": 2001.58, "text": " the other model is good and sometimes the Big Bird is good on different variations and" }, { "end": 2008.62, "start": 2007.62, "text": " so on." }, { "end": 2012.48, "start": 2008.62, "text": " I would not, you know, I would not make a big deal out of the fact that it is state" }, { "end": 2013.48, "start": 2012.48, "text": " of the art." }, { "end": 2014.9, "start": 2013.48, "text": " I get that the authors have to do that." }, { "end": 2022.9, "start": 2014.9, "text": " I would do so as well, but you know, you don't, don't think that this is the, like the best" }, { "end": 2023.9, "start": 2022.9, "text": " thing now." }, { "end": 2025.22, "start": 2023.9, "text": " It's very probable." }, { "end": 2028.76, "start": 2025.22, "text": " They just thrown also a lot of compute at it." }, { "end": 2032.5400000000002, "start": 2028.76, "text": " What is cool is they do some genomics experiments." }, { "end": 2039.3400000000001, "start": 2032.5400000000002, "text": " So not only do they have NLP state of the art, but also they go into genomics and experiment" }, { "end": 2040.3400000000001, "start": 2039.3400000000001, "text": " with data there." }, { "end": 2045.3799999999999, "start": 2040.34, "text": " I don't want to go into that because ultimately it's another task and I believe the paper" }, { "end": 2046.86, "start": 2045.3799999999999, "text": " is about the architecture." }, { "end": 2047.9399999999998, "start": 2046.86, "text": " All right." }, { "end": 2050.98, "start": 2047.9399999999998, "text": " So that was Big Bird." }, { "end": 2054.74, "start": 2050.98, "text": " I hope you enjoyed this video and learned." }, { "end": 2056.62, "start": 2054.74, "text": " I learned something." }, { "end": 2058.34, "start": 2056.62, "text": " Certainly." }, { "end": 2065.2999999999997, "start": 2058.34, "text": " If you want to check out the proofs, they're actually pretty entertaining to read and yeah," }, { "end": 2067.2999999999997, "start": 2065.2999999999997, "text": " I'll see you next time." }, { "end": 2071.02, "start": 2067.3, "text": " Bye bye." } ]
q7PjrmGNx5A
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Self-training with Noisy Student improves ImageNet classification (Paper Explained)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "ssl", "semi-supervised", "transfer learning", "cnn", "resnet", "efficientnet", "noise", "augmentation", "data augmentation", "randaugment", "dropout", "stochastic depth", "google", "distillation", "self-training", "knowledge distillation", "imagenet", "unsupervised", "unlabeled", "unlabelled", "jft" ]
The abundance of data on the internet is vast. Especially unlabeled images are plentiful and can be collected with ease. This model investigates a new method for incorporating unlabeled data into a supervised learning pipeline. First, a teacher model is trained in a supervised fashion. Then, that teacher is used to label the unlabeled data. Next, a larger student model is trained on the combination of all data and achieves better performance than the teacher by itself. OUTLINE: 0:00 - Intro & Overview 1:05 - Semi-Supervised & Transfer Learning 5:45 - Self-Training & Knowledge Distillation 10:00 - Noisy Student Algorithm Overview 20:20 - Noise Methods 22:30 - Dataset Balancing 25:20 - Results 30:15 - Perturbation Robustness 34:35 - Ablation Studies 39:30 - Conclusion & Comments Paper: https://arxiv.org/abs/1911.04252 Code: https://github.com/google-research/noisystudent Models: https://github.com/tensorflow/tpu/tree/master/models/official/efficientnet Abstract: We present Noisy Student Training, a semi-supervised learning approach that works well even when labeled data is abundant. Noisy Student Training achieves 88.4% top-1 accuracy on ImageNet, which is 2.0% better than the state-of-the-art model that requires 3.5B weakly labeled Instagram images. On robustness test sets, it improves ImageNet-A top-1 accuracy from 61.0% to 83.7%, reduces ImageNet-C mean corruption error from 45.7 to 28.3, and reduces ImageNet-P mean flip rate from 27.8 to 12.2. Noisy Student Training extends the idea of self-training and distillation with the use of equal-or-larger student models and noise added to the student during learning. On ImageNet, we first train an EfficientNet model on labeled images and use it as a teacher to generate pseudo labels for 300M unlabeled images. We then train a larger EfficientNet as a student model on the combination of labeled and pseudo labeled images. We iterate this process by putting back the student as the teacher. During the learning of the student, we inject noise such as dropout, stochastic depth, and data augmentation via RandAugment to the student so that the student generalizes better than the teacher. Models are available at this https URL. Code is available at this https URL. Authors: Qizhe Xie, Minh-Thang Luong, Eduard Hovy, Quoc V. Le Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://discord.gg/4H8xxDF BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher Parler: https://parler.com/profile/YannicKilcher LinkedIn: https://www.linkedin.com/in/yannic-kilcher-488534136/ If you want to support me, the best thing to do is to share out the content :) If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this): SubscribeStar (preferred to Patreon): https://www.subscribestar.com/yannickilcher Patreon: https://www.patreon.com/yannickilcher Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2 Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n
Hi there, today we'll look at Self-training with Noisy Student improves ImageNet classification by Qizhe Xie, Minh-Thang Luong, Eduard Hovy and Quoc V. Le. So this paper takes an ImageNet classifier that's been trained on the ImageNet data set and uses that classifier as a teacher model to label a whole bunch of unlabeled images, and then it trains a student model that is larger than the original teacher model on those teacher-labeled images, and that turns out to improve the classification on the ImageNet validation set. Now there are a couple of things that make this all work, and today we're going to explore how this paper does it and what they say is important. If you enjoy content like this, as always, don't hesitate to share it out there, tell your friends about it, and if you're not subscribed yet, then do so. I would appreciate that and you'll get more content, so win-win. So this paper is about semi-supervised learning, in effect. It's actually at the intersection of semi-supervised learning, knowledge distillation and transfer learning. So what do we mean by semi-supervised learning? Usually in supervised learning you'll have some sort of data set, and the data set will contain — let's say it's ImageNet, an image data set — so the data set will contain images. This is an image with like some sort of cat on it, and it will contain the labels according to that. So: cat. Now in semi-supervised learning — so that was supervised learning — in semi-supervised learning you assume that only part of your data set has the labels. So like only this part down here has the labels, and the upper part does not have the labels. So that's semi-supervised learning. It's often the case when it's very expensive to get labels, so you can only get labels for a couple of images in your data set. But very often in semi-supervised learning you still assume it's the same data set. There is a slightly different setup that's called transfer learning. In transfer learning what you'll have is your data set that has the labels, but it's very small. So you'll notice I've drawn it smaller; that means you have very little labeled data. That is also the case when it's very expensive to get labels, but also when it's expensive to get the data itself. This is often the case, say, in medical data, where not only is it expensive to get labels for, like, a CT scan, it's actually expensive to get the CT scan. So the goal in transfer learning is to say: well, I only have this small data set, but I do have this giant other data set over here. Now it's not exactly the same data — maybe these are CT scans and those over there are X-rays, right? They're fairly similar, similar technology; if you slice the CT it'll give you sort of an X-ray. Can I, you know, pre-train my model on the X-ray data and then fine-tune it on the CT data? That's what is usually called transfer learning. Now this can be done with or without labels. So it can be that for the X-ray data set you do have the labels, or you don't have the labels; there are techniques for all of those. What we're going to look at today is kind of the situation right here: it's the transfer learning situation where you do not have the labels for this X-ray data set. But other than in this X-ray example, what we're going to look at is that the small data set is going to be our ImageNet database. So our original picture-with-label database.
So you'll see immediately the difference here is that in the transfer learning setting we usually assume that the data set we want to train on is fairly small. Here, you know, ImageNet is already sizable. But what we have is a much larger database of unlabeled images that we can just get from the internet. So we can scrape the internet for any kind of pictures, and that will be our unlabeled data set. Now what we'll try to do is somehow incorporate this unlabeled data set into the training process to get better on the ImageNet data set. So this is the problem statement: you have the ImageNet data set, and you have a second, much larger data set of unlabeled images, and you somehow want to make use of them. So I hope you see how this is sort of connected to the others. It's essentially a transfer, semi-supervised learning setting, but with the exception that usually in transfer learning you assume that the labeled data set is super small, which is not the case here, and that's going to result in us being able to apply a different technique. This different technique is called the noisy student. Now usually what you might do in a transfer learning setting is you might want to start with that big data set, because that's the data set that's sizable enough to allow you to train a really big model on it, and then you fine-tune and you sort of hope that the information transfers over. Here, on the other hand, what we want to do is we start with the ImageNet data set. So first we train this in a supervised learning fashion into our model. This model is going to be called the teacher model. We know how to do this, we know how to train ImageNet models, so we can train this into a teacher model that has a reasonable accuracy on the ImageNet data set. Step two, we're going to take that big data set over here and use the teacher model to label the unlabeled images. So for each image coming in here, the teacher will say: that's a cat. So that gives you the big data set where now you have images along with labels — just the labels aren't true labels, they're generated by the teacher. And then in the third step you train on this big data set, and that's what you call your student model. And then, in this paper, we'll see how we can make it such that the student is then better at the original ImageNet task than the teacher ever was. Which seems counterintuitive at first, because all of the information that the student is trained from is basically what the teacher already knows — all the labels here come from the teacher, therefore the student shouldn't be able to outperform the teacher. But in this case the student will be able to outperform the teacher, and their argument here is that this is mainly due to the fact that you use noise in this training procedure. So when you train the student, you'll use noise, and one of the types of noise is that you severely augment this data right here in order to train the student. Now we've known for a long time that data augmentation, for example in the frameworks of self-supervised learning and so on, can have a very large benefit to training. And here the fact that we incorporate this extra data and we use noise and augmentations on it is going to result in a student that can sort of learn more about the data than the teacher knew.
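To make the pipeline concrete before we get to the results, here's a toy, runnable sketch of the noisy-student loop in Python. It uses scikit-learn on synthetic data, with simple Gaussian input jitter standing in for RandAugment, dropout and stochastic depth — an illustration of the recipe, not the paper's EfficientNet code:

# A toy, runnable sketch of the noisy-student loop; the paper trains
# EfficientNets on ImageNet, this just illustrates the recipe.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_lab, y_lab, X_unlab = X[:200], y[:200], X[200:]  # small labeled set

def noised(X):
    # Gaussian jitter as a stand-in for RandAugment / dropout / stochastic depth.
    return X + rng.normal(scale=0.3, size=X.shape)

# Step 1: train the teacher on (noised) labeled data.
teacher = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                        random_state=0).fit(noised(X_lab), y_lab)

for it in range(3):
    # Step 2: the teacher pseudo-labels CLEAN unlabeled data (hard labels here;
    # the paper found soft labels to work slightly better).
    pseudo = teacher.predict(X_unlab)

    # Step 3: train an equal-or-larger student on labeled + pseudo-labeled data,
    # with noise injected during training.
    X_all = np.vstack([X_lab, X_unlab])
    y_all = np.concatenate([y_lab, pseudo])
    student = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500,
                            random_state=it).fit(noised(X_all), y_all)

    # Step 4: the student becomes the teacher of the next iteration.
    teacher = student

The essential ingredients — clean teacher inference, noised student training, an equal-or-larger student — carry over directly to the real setup.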
Okay, this is basically it, and as you can see, these are kind of their main final results, where on ImageNet their top-1 accuracy increases, and even on these kind of subsets of ImageNet — these are sort of corrupted versions of ImageNet — they make even more substantial improvements, as you can see here. Now we'll go into what these corrupted subsets are, but just for now: these are very difficult variants of ImageNet, which can be severely corrupted or distorted and so on, and you can see that the model improves severely over the previous state of the art, which basically means that this model is more robust, and that's a direct consequence of the noise. Now one last thing I should say is that the student here is also larger than the teacher, so that's also one thing that makes the student better. So the student model is larger than the teacher model as an architecture. In combination with the noise, that means the student model is probably able to capture more of the variance of the data: it's larger, it has more parameters, it can learn more about the data, and together with the noise it can probably be more robust. That's what makes it generalize better, and we'll also see, as here, that it's more robust to these transformations, and it's also going to be more robust to adversarial perturbations. So the technique, again, is illustrated here; as we said, it's pretty simple. Step one, train the teacher model with labeled data, as you would. Step two, you infer the pseudo labels on unlabeled data. Step three — we'll put step three over here — train an equal or larger student model with combined data and noise injected. So they use the original labeled data here and the pseudo-labeled data right here in order to train the student. But still, the student doesn't have more label information than the teacher had; it simply has this teacher-labeled unlabeled data to also train on. Now the crucial part here is, first of all, that the student can be larger, and second of all, that there is noise, and the noise comes in three different forms. First of all, you use data augmentation, which we've already seen — this is sort of like random cropping or mild rotations, color jitter, whatever; they use RandAugment here, which is a specific technique to apply these augmentations. They use dropout, which is a fairly old technique where, in the student model that you train, you randomly drop out connections, which makes it more robust and more generalizing. And then you also use stochastic depth. Now stochastic depth is a technique where, when you train a model, instead of always passing your data forward through the layers like this, you use some sort of dropout, but with entire layers. So what you'll do is you'll pass your data forward, and then randomly you'll skip a layer, and then pass it forward again. Now this might seem weird at first, but if you know that most models, especially computer vision models, nowadays are residual networks — which means that their layers look like this: you have the input, you have some computation, then you have the output, and there is already a residual connection that basically adds the original signal to the result of the computation. So all you do in this stochastic layer dropout, or stochastic depth, is you basically disable that computation branch, and all the signal has to flow through the skip connection. If you read the original ResNet paper, they make it pretty clear why the residual connection is a good idea: basically, if you have a very deep network, each layer only has to do a little bit of computation, which can be bypassed fairly efficiently for a lot of data points. So it's not that hurtful to bypass a layer, and in this case they actually use that to just bypass some of these small computations and inject some more robustness into the student model.
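Since stochastic depth sounds exotic but is only a few lines, here's a minimal sketch of a residual block with stochastic depth in PyTorch — my illustration, not the paper's actual EfficientNet blocks:

# A minimal sketch of stochastic depth in a residual block (PyTorch).
import torch
import torch.nn as nn

class StochasticDepthBlock(nn.Module):
    def __init__(self, channels, survival_prob=0.8):
        super().__init__()
        self.survival_prob = survival_prob
        self.f = nn.Sequential(                      # the residual computation
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
        )

    def forward(self, x):
        if self.training:
            # During training, randomly drop the whole computation branch;
            # the signal then flows only through the identity connection.
            if torch.rand(1).item() < self.survival_prob:
                return x + self.f(x)
            return x
        # At test time, keep the branch but scale it by its survival probability.
        return x + self.survival_prob * self.f(x)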
So with these three strategies to bring noise into the training process — one is on the data and two are on the student model itself — they train the student model. And then fourth, and this is what we didn't have before: make the student a new teacher. So now you can iterate: you can use the student model that you just trained to again label the unlabeled data, and then you can use another student model, again under the influence of noise, to train from that student model, and so on. And they do up to three iterations of this, where they always take the student as the new teacher and then use a new student model to train from that teacher, and they get better and better as they do this. Of course there are diminishing returns, but it's pretty impressive that this even works, right? The new students in fact aren't even larger than the old students; it's just that the students are larger than the original teacher model in most of these cases. So here's the algorithm written down. You'll require labeled images right here and unlabeled images, which are the ones with the tilde. So first you learn the teacher model, which minimizes the cross entropy on labeled images. This we already know, right? This is the label, this is the image according to the label, and you train the teacher model, which is this thing here — and you can see here: noised. So already in the teacher training process you want to introduce this noise, you want to introduce these data augmentations. As I said, these are standard techniques to make models more robust and therefore more generalizable; we know from these self-supervised papers that these augmentations are very powerful. And the way you design them: one of these augmentations is a random crop, which means if you have an image, you randomly crop out part of that image, and then that's your training sample, and not the entire thing. So by doing this, you're basically teaching the model to ignore the exact location and scale of things on an image. And you can do this because you as a human know that I can zoom in, I can zoom out of something, and it won't change what's on the picture. So you use these augmentations to kind of heuristically tell the model what it should be invariant to, and that is a very powerful technique to regularize, basically to robustify, these deep methods, and it is used the same here. So already in the teacher model we train with this noise. And then step two: use a normal, i.e. not noised, teacher model to generate soft or hard pseudo labels for the clean, i.e. not distorted, unlabeled images. And this is important — they stress this here — that when you label the unlabeled images, you want to use the model without the noise, and you do it on the clean, undistorted unlabeled images.
So when you infer the labels, it's very important that you have clean, accurate labels without any sort of noise in them — label noise is not something that they have found to help in this case; no label noise on the teacher, that is. So you can see right here: on the unlabeled images we'll use that teacher model without the noise to infer the labels. Now they say these can be hard labels or soft labels. So what does that mean? If we generate hard pseudo labels, that means that the y here is simply going to be either 0 or 1 or 2 or 3 and so on — just the index of the class, whichever class is most likely, that's going to be our label. This is exactly how the supervised data sets come, right? So this is what you will think of first when you see that. However, soft pseudo labels means that the y will be a distribution: instead of being class 0, it will be, let's say, 90% class 0, but also 5% class 1 and 5% class 2. So you'll output the distribution instead of just the label, and they have found that the soft pseudo labels work slightly better than the hard pseudo labels. So they use the soft pseudo labels here because they work slightly better, but you can do it with hard or soft labels; the important thing is that you use the teacher to generate as accurate as possible labels for your unlabeled data. Then third — we've already seen this — learn an equal or larger student model which minimizes the cross entropy loss on labeled images and unlabeled images, with noise added to the student model. So as you can see: labeled images and unlabeled images — we're in this semi-supervised learning setting right now, you take in both, together with noise. And noise here is in bold, which means they stress it again: this is important. So you can see that the loss is composed of two different things: these are the true images of your original data set — and the noising here means you noise the student model, where that noise can be on the data or in the model itself — and here also the unlabeled images that you have labeled with the teacher; you do the exact same thing. So you train on both of these data sets. And step four is: if you want to do iterative training, use the student as a teacher and go back to step two. Now they have some more tricks when they do this iterative training — they also up the batch size during the iterative training, and so on. So they do a lot of things to make the student learn something better than the teacher, and I think — the paper doesn't state it explicitly — but I think everything they do here is to kind of force, or allow, the student to become better than the teacher: by giving more noise, by making the student larger, by making the batch size for the student larger, and so on. You want to sort of inject as much invariance as you can, and that will make the student learn more.
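To make the hard-versus-soft pseudo-label distinction from above concrete, here's a small PyTorch sketch; the teacher network and the helper names are placeholders, not the paper's code:

# Hard vs. soft pseudo labels from a teacher's logits -- illustrative sketch.
import torch
import torch.nn.functional as F

@torch.no_grad()
def pseudo_label(teacher, clean_images, soft=True):
    teacher.eval()                      # no dropout / stochastic depth here:
    logits = teacher(clean_images)      # the teacher labels CLEAN images
    if soft:
        # Soft labels: the full class distribution (found to work slightly better).
        return F.softmax(logits, dim=-1)
    # Hard labels: just the index of the most likely class.
    return logits.argmax(dim=-1)

# Training the student against soft labels then uses a cross entropy between
# distributions, e.g.:
def soft_cross_entropy(student_logits, soft_targets):
    return -(soft_targets * F.log_softmax(student_logits, dim=-1)).sum(-1).mean()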
So they say here: noising student — when the student is deliberately noised, it is trained to be consistent with the teacher, which is not noised when it generates the pseudo labels. In our experiments, we use two types of noise: input noise and model noise. All right. First, data augmentation is an important noising method in noisy student training, because it forces the student to ensure prediction consistency across augmented versions of an image; specifically, in our method, the teacher produces high-quality pseudo labels by reading in clean images, while the student is required to reproduce those labels with augmented images as an input. Second, when dropout and stochastic depth function are used as noise, the teacher behaves like an ensemble at inference time, when it generates pseudo labels, whereas the student behaves like a single model; in other words, the student is forced to mimic a more powerful ensemble model — we present an ablation study, and so on. Now, it's a bit weird what they say here; don't be confused: you use the dropout and the stochastic depth on the student model, and they say that if you do this, the teacher behaves like an ensemble at inference time, whereas the student behaves like a single model. And yeah, it's a bit of a weird formulation, but it's true: the teacher will produce the same label for different pathways through the student if you use dropout and stochastic depth, and therefore the student is kind of required to approximate that — each forward pass is a different pass through the layers, through the connections, with dropout, and it's forced to approximate that teacher label with all of these different configurations. So you see that you put in a lot of techniques. And they have even other techniques; there is one additional trick — and it's not just one, actually they have so many tricks, and if you look at their experimental setup, it's crazy: they describe exactly, we reduce the learning rate like this and the batch size like this, and so on. So to get state of the art on ImageNet, it's not enough to just have a good idea of a new thing to do; you have to have the good idea and then execute it really well, because you have to regard all of these additional tricks that people have figured out over the years. In any case, they say it works better with an additional trick: data filtering and balancing. Specifically, we filter images that the teacher model has low confidence on, since they are usually out-of-domain images. So that goes to the point where — see, we have this ImageNet labeled data set, right, and we have the larger data set. Now the larger data set simply contains images, and there is no guarantee that the images are actually of the classes that we have in the ImageNet data set. Right here we have a thousand classes; there's no guarantee that these images fit into any of those classes, yet we still ask the teacher model to put them into some of these classes. Now you can filter out part of those images if you look at the teacher model's confidence. So when it outputs a distribution — if there are just two labels, let's say — if it outputs a distribution like this, that's wildly different than if it outputs a distribution like this: both are class-one labels, but one is much more confident than the other. So what you want to do is filter out these low-confidence labels, because the model isn't really sure, but it has to assign a class, and that's usually an indication that it is an out-of-domain image. So if they filter these, it works better. And then also, to ensure that the distribution of the unlabeled images matches that of the training set, we also need to balance the number of unlabeled images for each class, as all classes in ImageNet have a similar number of labeled images. For this purpose, we duplicate images in classes where there are not enough images; for classes where we have too many images, we take the images with the highest confidence. Okay, so this is just another technique; it has basically nothing to do with their core idea, but it's just another thing where they say: okay, we can treat this big thing that we scrape from the internet, we can somehow filter and balance it smartly, and that will work even better.
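Here's a hypothetical sketch of what that filtering and balancing could look like; the threshold, names and parameters are my own placeholders, not values from the paper:

# Confidence filtering + class balancing of the pseudo-labeled pool (numpy).
import numpy as np

def filter_and_balance(images, soft_labels, per_class, threshold=0.3):
    """images: array of N items; soft_labels: (N, C) teacher distributions."""
    confidence = soft_labels.max(axis=1)   # teacher's top probability
    classes = soft_labels.argmax(axis=1)

    keep = confidence >= threshold         # drop likely out-of-domain images
    images, classes, confidence = images[keep], classes[keep], confidence[keep]

    balanced = []
    for c in range(soft_labels.shape[1]):
        idx = np.where(classes == c)[0]
        if len(idx) == 0:
            continue
        if len(idx) >= per_class:
            # Too many images: keep only the most confident ones.
            idx = idx[np.argsort(-confidence[idx])[:per_class]]
        else:
            # Too few images: duplicate until the class is full.
            idx = np.resize(idx, per_class)
        balanced.append(idx)
    return images[np.concatenate(balanced)]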
All right, so let's go into the experiments. So what they do — I think, where is the graphic — what they do is they take an EfficientNet right here, and they first train a smaller EfficientNet, as we said, to be the teacher, and then they train a larger EfficientNet for the student. The best model in our experiments is a result of three iterations of putting back the student as a new teacher. We first train an EfficientNet-B7 on ImageNet as the teacher model. So you can see in the table right here what the B7 achieves. The EfficientNet-B7 here, you can see, has 66 million parameters, which is fairly small compared to these other, previous state-of-the-art methods on ImageNet. So they first train this, and that will achieve something like an 85% accuracy. Now if you just train a larger model — this EfficientNet-L2 right here, which has, you can see, 480 million parameters, so a lot more parameters — but you just train it on the same data set, on ImageNet, you will get a 0.5% improvement. And you can see here that with noisy student training, with the exact same model — so it has the same amount of parameters — you'll actually get an 88.4, so more than a 3% improvement. And that's the same model, just with this different training procedure and inputting these 300 million unlabeled images that you have lying around. But all the label information comes from the ImageNet data set and from this EfficientNet-B7 teacher model. So it's a testament that out of this 85 you can make this 88 just by smartly using the information that this model has learned about the data and transferring it to new data. So they train an EfficientNet-B7 — that's the small model — as a teacher model. Then, by using the B7 model as the teacher, we trained an EfficientNet-L2 model with the unlabeled batch size set to 14 times the labeled batch size. And they stress that it's important that you up the batch size; that's another thing that makes the student learn more than the teacher. By the way, this 14 times can also be done because now you have more data, right, so you can also up the batch size. Then we trained a new EfficientNet-L2 model with the EfficientNet-L2 model as the teacher. Lastly, we iterated again and used an unlabeled batch size of 28 times the labeled batch size. The detailed results of the three iterations — okay, so you can see that it's a fairly complicated procedure, but you can gain and gain and gain by simply upping, or iterating on, this procedure, and I think they have it somewhere here — yes. So as you can see, in iteration one you train the EfficientNet-L2 — you started with the B7 — you train the EfficientNet-L2 with a batch size 14 times larger, and you gain significantly; this gains about 2% over the original EfficientNet. Then you iterate again with the same batch size and you get about a 0.5% improvement, and you iterate again with an even larger batch size and you get a 0.3% improvement. So there are diminishing returns, but still you can see that with the introduction of noise, with the introduction of the larger model, with the
introduction of the larger batch size — these are all things that help the student basically become better than the teacher. All right, so they do a bunch of other experiments. Their main comparison is right here, where they say: look, even if we train the same model with this noisy student training, we can make pretty large gains over the same model where we do not train it with this noisy student training. So this really seems to help, due to the noise, due to the additional data. They do a lot of ablation studies, so that's pretty interesting, and they also do these studies on these special ImageNet data sets, for example ImageNet-C. You can see that there are quite a few distortions right here — I don't even know if you can see it in this video, but this is a swing; the swing right here is something like this, but you almost can't see it. And you see that the bold label on the left is always the prediction of their model, while the thing on the right is the prediction of the original model. So this model, they claim, is significantly more robust to these kinds of perturbations, and they do an analysis where they show that, yes, in fact it is. I think we've already seen at the beginning that the noisy student is significantly more robust to these perturbations. And they also test this against adversarial perturbations. So right here you can see that the original model drops pretty quickly as you increase the epsilon — the epsilon is kind of the strength of the adversarial perturbation — the original model drops very quickly to fairly low accuracy, whereas the noisy student training drops much, much less quickly. Now this is another testament to — I think what's happening is: you have your data space, right, and you have your data points in it. Now when you do normal data augmentation, you not only force the model to predict those points correctly, but you sort of make a bit of a cloud around them, and you force the model to predict that cloud correctly. Now if you introduce more data and you do even more noise, you'll make these clouds kind of larger, and that means the model is more robust to any sort of perturbations in these clouds, and that means it's probably also going to be more robust to adversarial perturbations. So that's sort of how you can think of this introduction of noise to make it more generalizable. How does this generalize better? So if you think of this data point right here: if I'm looking to generalize, that means I have this IID data set, so probably my test data is going to be related to the training data — I might get a data point that's fairly close to that training data point, and generalizing means I classify it correctly. Now if this cloud is very small, like it is here, my decision boundary could be like here, right, and even though the test data point is fairly close to the original training data point, it will be classified incorrectly. However, if my original cloud during training is larger, you can see that if I train a model, it can maybe put the decision boundary here, and then my test data point will be included on that same side. So that's kind of the idea behind generalizing better — of course, that's a vast simplification. Also, to be clear, this here is an FGSM attack, so this is kind of the weakest attack in the adversarial perturbation spectrum.
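For reference, FGSM is a single-step attack that's simple enough to sketch in a few lines — a generic PyTorch illustration, not the paper's evaluation code:

# A generic FGSM (fast gradient sign method) sketch.
import torch

def fgsm(model, images, labels, epsilon):
    images = images.clone().detach().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(images), labels)
    loss.backward()
    # Step by epsilon in the direction that maximally increases the loss.
    adversarial = images + epsilon * images.grad.sign()
    return adversarial.clamp(0, 1).detach()   # keep a valid pixel range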
They do say: under a stronger attack, PGD — which is a fairly strong attack with 10 iterations at epsilon equals 16 — noisy student training improves EfficientNet-L2 accuracy from 1.1% to 4.4%. Now, 1.1% really means the model is almost dead; that's basically random performance, and 4.4% is still only a bit above random performance. You could probably get there by simply using any sort of noise in that case. But still, you can see that it is more robust, especially to natural distortions, and therefore it generalizes better. As I said, they do quite a bit of ablation studies to figure out where exactly the performance comes from, and the answer is: it pretty much comes from all the things that they've described. So here you can see the effect of that extra data set, and you can see that with that extra data set pretty much all the situations improve. Here you can see what happens when you do not augment the student: when you do not data augment, you can immediately see that the accuracy drops; then, when you do not augment and also don't use these model noises, the performance drops again; and lastly, when you use the teacher but you noise the teacher, you can see also here that the performance drops from the original quite a bit. So all of these things contribute. And they do many more ablations, and they have listed their findings here. First, using a large teacher model with better performance leads to better results — so as the original teacher, you should use as good a teacher model as you can find. Second, a large amount of unlabeled data is necessary for better performance — so if you want to do this, you'd better get a large amount of extra data, because that's one thing that makes the student perform better. Third, soft pseudo labels work better than hard pseudo labels for out-of-domain data in certain cases. Fourth, a large student model is important to enable the student to learn a more powerful model. This setup is usually called knowledge distillation — using a teacher model to train a student model — and it is often used when the student model is smaller than the teacher, because you want to become more efficient: the teacher is large, you make the student small, and you usually sacrifice some accuracy. Here they say: if you want to gain some accuracy, you need a large student model; it can't be a small one. Number five, data balancing is useful for small models. Number six, joint training on labeled data and unlabeled data outperforms the pipeline that first pre-trains with unlabeled data and then fine-tunes on labeled data — so this is in contrast to what people have done before in self-supervised learning and so on, where it's always kind of pre-training then fine-tuning, or the transfer learning setting. Seven, using a large ratio between unlabeled batch size and labeled batch size enables models to train longer on unlabeled data to achieve a higher accuracy — okay, we've already seen that they have used that. And number eight, training the student from scratch is sometimes better than initializing the student with the teacher, and the student initialized with the teacher still requires a large number of training epochs to perform well. This is fairly interesting, because it kind of alludes to the minima in weight space: if — and this is of course the case — the student model is the same as the teacher model, so in like iteration two or three or whatnot, it means that, you know,
in weight space, if we look at it, you might want to start the student here, and the minimum is right here, and you might think that if I learn the same thing, then the minima are fairly close together, right? So the teacher's minimum might be here and the student's minimum might be fairly close, so it might be beneficial if I start not over here but actually start at the teacher's minimum. But this doesn't always seem to be the case, and that is a fairly interesting observation, because it kind of means that we're talking about different minima here; we're talking about the student model learning different things. And that's what we've discussed already: the student model kind of learns to be robust, and that's probably a minimum that's fairly far away in weight space — at least in a sort of energy-landscape view of weight space, it might be the case that it needs to actually overcome kind of a hill here, even though the minimum might be close. There's lots of research on how minima are distributed in these weight spaces, which I don't want to go into right here, but it is a fairly interesting observation that it's not always helpful to initialize the student at the teacher's optimum. Okay, so this was the paper, and this is the type of research where I do appreciate these large labs taking it on, because they have the resources to do all of these ablations, all of these different models, cross them with these giant data sets, and so on, which I guess university labs just would not have. And this is a fairly thorough paper, really investigating which parts of the pipeline do something and which ones don't. Usually I'm fairly critical of pipelines that have like 50 billion tricks, because you never know where the improvement exactly is coming from, but you can sort of mitigate that criticism by doing all of these kinds of ablations on the different parts and really showing: look, this is important, and this is also important, and this is also important. So yeah, that was my two cents on this paper. I hope you enjoyed this, and I'll see you next time. Bye bye.
[ { "end": 4.08, "start": 0, "text": " Hi there, today we'll look at self-training with noisy student improves" }, { "end": 10.84, "start": 4.08, "text": " ImageNet classification by Qi Zi Sie, Min Tan Luong, Edward Hovey and Quoc Vy Lue." }, { "end": 16.14, "start": 10.84, "text": " So this paper takes an ImageNet classifier that's been trained on the" }, { "end": 21.68, "start": 16.14, "text": " ImageNet data set and uses that classifier as a teacher model to label" }, { "end": 27.240000000000002, "start": 21.68, "text": " a whole bunch of unlabeled images and then it trains a student model that is" }, { "end": 32.04, "start": 27.24, "text": " larger than the original teacher model on those teacher labeled images and that" }, { "end": 38.239999999999995, "start": 32.04, "text": " turns out to improve the classification on the ImageNet validation set. Now that" }, { "end": 44.879999999999995, "start": 38.239999999999995, "text": " there is a couple of things that make this all work and today we're going to" }, { "end": 52.08, "start": 44.879999999999995, "text": " explore how this paper does it and what they say is important. If you enjoy" }, { "end": 56.519999999999996, "start": 52.08, "text": " content like this as always don't hesitate to share it out there tell your" }, { "end": 61.720000000000006, "start": 56.52, "text": " friends about it and if you're not subscribed yet then do so. I would" }, { "end": 68.68, "start": 61.720000000000006, "text": " appreciate that and you'll get more content so win-win. So this this paper is" }, { "end": 74.92, "start": 68.68, "text": " about semi-supervised learning in effect. So it's at the intersection" }, { "end": 79.08, "start": 74.92, "text": " actually of semi-supervised learning, knowledge distillation and transfer" }, { "end": 83.08000000000001, "start": 79.08, "text": " learning. So what do we mean by semi-supervised learning? Usually in" }, { "end": 87.48, "start": 83.08, "text": " supervised learning you'll have some sort of data set and the data set will" }, { "end": 91.96, "start": 87.48, "text": " contain, let's say it's an ImageNet, it's image data set. So the data set will" }, { "end": 99.52, "start": 91.96, "text": " contain images. This is an image with like some sort of cat on it and it will" }, { "end": 106.88, "start": 99.52, "text": " contain the labels according to that. So cat. Now in semi-supervised learning you" }, { "end": 112.02, "start": 106.88, "text": " assume that, so this is supervised learning, in semi-supervised learning you" }, { "end": 117.64, "start": 112.02, "text": " assume that only part of your data set has the labels. So like only this part" }, { "end": 123.52, "start": 117.64, "text": " down here has the labels and the upper part does not have the labels. So that's" }, { "end": 127.92, "start": 123.52, "text": " semi-supervised learning. It's often the case when it's very expensive to get" }, { "end": 132.6, "start": 127.92, "text": " labels so you can only get labels for a couple of images in your data set. But" }, { "end": 136.76, "start": 132.6, "text": " very often in semi-supervised learning you still assume it's the same data set." }, { "end": 141.2, "start": 136.76, "text": " There is a slightly different setup here that's called transfer learning." }, { "end": 146.67999999999998, "start": 141.2, "text": " So in transfer learning what you'll have is you'll have your data set that has" }, { "end": 151.67999999999998, "start": 146.67999999999998, "text": " the labels but it's very small. 
So you'll notice I've drawn it smaller. That means" }, { "end": 156.56, "start": 151.67999999999998, "text": " you have very little. That is also the case when it's very expensive to get" }, { "end": 161.6, "start": 156.56, "text": " labels but also it's expensive to get the data itself. This is often the case" }, { "end": 167.12, "start": 161.6, "text": " like say in medical data where not only is it expensive to get labels for like a" }, { "end": 174.24, "start": 167.12, "text": " CT scan it's actually expensive to get the CT scan. So what the goal in transfer" }, { "end": 180.20000000000002, "start": 174.24, "text": " learning is is to say well I do have only this small data set but I do have" }, { "end": 186.24, "start": 180.20000000000002, "text": " this giant other data set over here. Now can't I... it's not the same." }, { "end": 191.4, "start": 186.24, "text": " Maybe they're not CT so these are CT scans. Maybe these are X-rays right?" }, { "end": 197.92000000000002, "start": 191.4, "text": " They're fairly similar. Similar technology. If you slice the CT it'll" }, { "end": 203, "start": 197.92000000000002, "text": " give you sort of an X-ray. Can I you know train my model, pre-train my model on" }, { "end": 211, "start": 203, "text": " X-ray data and then fine-tune it on the CT data? So that's called transfer" }, { "end": 216.12, "start": 211, "text": " learning usually. Now this can be done with or without labels. So it can be that" }, { "end": 220.52, "start": 216.12, "text": " for the X-ray data set you do have the labels or you don't have the labels." }, { "end": 227.52, "start": 220.52, "text": " There are techniques for all of those. Now what we're going to look at today is" }, { "end": 232.36, "start": 227.52, "text": " kind of the situation right here. It's the transfer learning situation where" }, { "end": 239.68, "start": 232.36, "text": " you do not have the labels for this X-ray data set. But other than in this" }, { "end": 244.12, "start": 239.68, "text": " X-ray example what we're going to look at is the small data set is going to be" }, { "end": 251.76, "start": 244.12, "text": " our ImageNet database. So our original picture with label database. So you'll" }, { "end": 255.64000000000001, "start": 251.76, "text": " see immediately the difference here is that in the transfer learning setting we" }, { "end": 261.48, "start": 255.64000000000001, "text": " usually assume that the data set we want to train on is fairly small. Here you" }, { "end": 269.32, "start": 261.48, "text": " know ImageNet is already sizable. But what we have is we have a much larger" }, { "end": 274.36, "start": 269.32, "text": " database of unlabeled images that we can just get from the internet. So we can" }, { "end": 279.15999999999997, "start": 274.36, "text": " scrape the internet for any kind of pictures and that will be our unlabeled" }, { "end": 283.96, "start": 279.15999999999997, "text": " data set. Now what we'll try to do is somehow incorporate this unlabeled data" }, { "end": 289.71999999999997, "start": 283.96, "text": " set here into the training process to get better on the ImageNet data set." }, { "end": 293.56, "start": 289.71999999999997, "text": " So this is the problem statement is you have the ImageNet data set and you" }, { "end": 298.48, "start": 293.56, "text": " have a second much larger data set of unlabeled images and you somehow want to" }, { "end": 302.84000000000003, "start": 298.48, "text": " make use of them. 
So I hope you see how this is sort of connected to the others." }, { "end": 309.32, "start": 302.84000000000003, "text": " It's essentially sort of a transfer semi-supervised learning setting but with" }, { "end": 313.76, "start": 309.32, "text": " the exception that usually in transfer learning you assume that the" }, { "end": 319.04, "start": 313.76, "text": " label data set is like super small. Which is not the case here and that's going to" }, { "end": 323.36, "start": 319.04, "text": " result in us being able to apply a different technique. So this different" }, { "end": 328.72, "start": 323.36, "text": " technique is called the noisy student. Now usually what you might do in a" }, { "end": 332.8, "start": 328.72, "text": " transfer learning setting is you might want to start with that big data set" }, { "end": 337.84000000000003, "start": 332.8, "text": " because that's the data set that's sizable enough to allow you to train a" }, { "end": 341.64, "start": 337.84000000000003, "text": " really big model on it and then you fine-tune and you sort of hope that the" }, { "end": 346.36, "start": 341.64, "text": " information transfers over. Here on the other hand what we want to do is we" }, { "end": 352.28000000000003, "start": 346.36, "text": " start with the ImageNet data set. So first we train this in a supervised" }, { "end": 356.91999999999996, "start": 352.28, "text": " learning fashion into our model. Now this model is going to be called the teacher" }, { "end": 362.23999999999995, "start": 356.91999999999996, "text": " model. We know how to do this, we know how to train ImageNet models. So we can" }, { "end": 367.84, "start": 362.23999999999995, "text": " train this into a teacher model that has a reasonable accuracy on the ImageNet" }, { "end": 374.71999999999997, "start": 367.84, "text": " data set. Step two we're going to take that big data set over here and use the" }, { "end": 383.36, "start": 374.72, "text": " teacher model to label the unlabeled images. So for each image coming" }, { "end": 394.52000000000004, "start": 383.36, "text": " in here the teacher will say that's a cat. So that gives you the big data" }, { "end": 401.12, "start": 394.52000000000004, "text": " set where now you have images along with labels. Just the labels aren't true" }, { "end": 407.56, "start": 401.12, "text": " labels, they're generated by the teacher. And then in the third step you train" }, { "end": 416, "start": 407.56, "text": " this big data set, you train on this big data set and that's what you call your" }, { "end": 421.76, "start": 416, "text": " student model. And then the student model in this paper will see how can we make" }, { "end": 426.7, "start": 421.76, "text": " it such that the student is then better at the original ImageNet task than the" }, { "end": 430.92, "start": 426.7, "text": " teacher ever was. Which seems counterintuitive at first because all of" }, { "end": 434.16, "start": 430.92, "text": " the information that the student is trained from is basically what the" }, { "end": 439.52000000000004, "start": 434.16, "text": " teacher already knows. All the labels here come from the teacher, therefore the" }, { "end": 446.96000000000004, "start": 439.52000000000004, "text": " student shouldn't be able to outperform the teacher. 
But in this case the student" }, { "end": 451.20000000000005, "start": 446.96000000000004, "text": " will be able to outperform the teacher and their argument here is that this is" }, { "end": 457.08000000000004, "start": 451.20000000000005, "text": " mainly due to the fact that you use noise in this training procedure. So when" }, { "end": 462.15999999999997, "start": 457.08, "text": " you train the student what you'll do is you'll use noise and one of the types of" }, { "end": 468.68, "start": 462.15999999999997, "text": " noise is that you severely augment this data right here in order to train the" }, { "end": 473.56, "start": 468.68, "text": " student. Now we've known for a long time that data augmentation, for example in" }, { "end": 478.2, "start": 473.56, "text": " the frameworks of self-supervised learning and so on, can have a very large" }, { "end": 483.84, "start": 478.2, "text": " benefit to training. And here the fact that we incorporate this extra data" }, { "end": 489.91999999999996, "start": 483.84, "text": " and we use noise and augmentations on it is going to result in a student that can" }, { "end": 499.91999999999996, "start": 489.91999999999996, "text": " sort of learn more about the data than the teacher did know. Okay this is" }, { "end": 505.28, "start": 499.91999999999996, "text": " basically it and as you can see this is kind of their main final results where" }, { "end": 510.52, "start": 505.28, "text": " they say on ImageNet our top one accuracy sort of increases right here" }, { "end": 516.92, "start": 510.52, "text": " and even on these kind of subsets of ImageNet or these are sort of corrupted" }, { "end": 522.12, "start": 516.92, "text": " sets of ImageNet they make even more substantial improvements as you can see" }, { "end": 527.1999999999999, "start": 522.12, "text": " here. Now we'll go into what these corrupted subsets are but you know just" }, { "end": 533.48, "start": 527.1999999999999, "text": " for now these here are very difficult variants of ImageNet. They can be" }, { "end": 539.36, "start": 533.48, "text": " severely corrupted or distorted and so on and you can see that the model" }, { "end": 543.6800000000001, "start": 539.36, "text": " improves severely over the previous state of the art which basically means" }, { "end": 548.44, "start": 543.6800000000001, "text": " that this model is more robust and that's a direct consequence of the noise." }, { "end": 554, "start": 548.44, "text": " Now one last thing I should say is that the student here is also larger than the" }, { "end": 558.48, "start": 554, "text": " teacher so that's also one thing that makes the student better. So what you" }, { "end": 564.1800000000001, "start": 558.48, "text": " will make is the student model is larger than the teacher model as a model as the" }, { "end": 571.92, "start": 564.18, "text": " architecture. So in combination with the noise right here with the noise in" }, { "end": 576.8, "start": 571.92, "text": " combination that means the student model is probably able to capture more of the" }, { "end": 580.56, "start": 576.8, "text": " variance of the data. 
It's larger it has more parameters it can learn more about" }, { "end": 587.4799999999999, "start": 580.56, "text": " the data together with the noise it can probably be a more robust and that's" }, { "end": 591.8399999999999, "start": 587.4799999999999, "text": " what makes it generalize better and we'll also see as we see here it's more" }, { "end": 595.84, "start": 591.84, "text": " robust to these transformations and it's also going to be more robust to" }, { "end": 602.84, "start": 595.84, "text": " adversarial perturbations. So the technique again is illustrated here as" }, { "end": 609.32, "start": 602.84, "text": " as we said it's pretty simple. First so step one step one train the teacher" }, { "end": 616.8000000000001, "start": 609.32, "text": " model with labeled data as you would. Step two you infer the pseudo labels on" }, { "end": 624.3599999999999, "start": 616.8, "text": " unlabeled data. Step three you make a student you make sorry we'll step three" }, { "end": 631.7199999999999, "start": 624.3599999999999, "text": " over here train an equal or a larger student model with combined data and" }, { "end": 636.9399999999999, "start": 631.7199999999999, "text": " noise injected. So they don't they use the original labeled data here and the" }, { "end": 642.12, "start": 636.9399999999999, "text": " pseudo labeled data right here in order to train the student but still this the" }, { "end": 645.8399999999999, "start": 642.12, "text": " student doesn't have more information more label information than the teacher" }, { "end": 653.6, "start": 645.84, "text": " had it simply has this teacher labeled teacher labeled unlabeled data also to" }, { "end": 659.44, "start": 653.6, "text": " train on. Now the crucial part here is well first of all that the student can" }, { "end": 663.72, "start": 659.44, "text": " be larger and second of all that there can be noise and the noise comes in" }, { "end": 668.9200000000001, "start": 663.72, "text": " three different forms. So first of all you use data augmentation which we've" }, { "end": 674.2, "start": 668.9200000000001, "text": " already seen this is sort of like random cropping or mild rotations color jitter" }, { "end": 678.6, "start": 674.2, "text": " whatever they use a rand augment here which is a specific technique to apply" }, { "end": 683.6800000000001, "start": 678.6, "text": " these augmentations they use dropout which is a fairly old technique where" }, { "end": 688.48, "start": 683.6800000000001, "text": " you in the student model that you train you randomly drop out connections which" }, { "end": 693.2800000000001, "start": 688.48, "text": " makes it more robust and more generalizing and then you also use" }, { "end": 698.44, "start": 693.2800000000001, "text": " stochastic depth. Now stochastic depth is a technique when you train a model what" }, { "end": 703.1600000000001, "start": 698.44, "text": " you'll do during training instead of always passing your data forward through" }, { "end": 709.04, "start": 703.16, "text": " the layers like this you use some sort of a dropout but with entire layers so" }, { "end": 714.64, "start": 709.04, "text": " what you'll do is you'll pass your data forward and then randomly you'll skip a" }, { "end": 719.92, "start": 714.64, "text": " layer and then pass it forward again. 
Now these these might seem weird first" }, { "end": 727.1999999999999, "start": 719.92, "text": " because yeah it might seem weird but in if you know that most models especially" }, { "end": 732.36, "start": 727.1999999999999, "text": " computer vision models nowadays are residual networks which means that their" }, { "end": 737.2, "start": 732.36, "text": " layers look like so you have the input and you have some computation and then" }, { "end": 742.36, "start": 737.2, "text": " you have the output and then there is already a residual connection that" }, { "end": 746.48, "start": 742.36, "text": " basically adds the original signal together to the result of the" }, { "end": 752.16, "start": 746.48, "text": " computation. So all you do in this stochastic layer dropout or this" }, { "end": 757.88, "start": 752.16, "text": " stochastic depth right here is you basically disable you you disable this" }, { "end": 763.12, "start": 757.88, "text": " connection right here and all the signal has to flow through here. If you read the" }, { "end": 767.48, "start": 763.12, "text": " residual the ResNet original ResNet paper they make it pretty clear why the" }, { "end": 772.4399999999999, "start": 767.48, "text": " residual connection is a good idea basically they say these computations" }, { "end": 777.36, "start": 772.4399999999999, "text": " here they if you have a very deep network each layer only has to basically" }, { "end": 785.56, "start": 777.36, "text": " do very a little bit of computation that that can be bypassed fairly efficiently" }, { "end": 790.5999999999999, "start": 785.56, "text": " for a lot of data points so it's not that hurtful to bypass a layer and in" }, { "end": 795.7199999999999, "start": 790.5999999999999, "text": " this case they actually use it to just bypass some of these small computations" }, { "end": 801.28, "start": 795.7199999999999, "text": " and inject some more robustness into the student model. 
So with these three" }, { "end": 805.9599999999999, "start": 801.28, "text": " strategies to bring noise into the training process one is on the data and" }, { "end": 812.52, "start": 805.9599999999999, "text": " two is on the student model itself they train the student model and then fourth" }, { "end": 819.6, "start": 812.52, "text": " and this is what we didn't have before four or maybe we put four here make the" }, { "end": 824.28, "start": 819.6, "text": " student a new teacher so now you can iterate you can use the student model" }, { "end": 829.4399999999999, "start": 824.28, "text": " that you just trained to again label the unlabeled data and then you can use" }, { "end": 834.88, "start": 829.4399999999999, "text": " another student model again under the influence of noise to train from that" }, { "end": 838.88, "start": 834.88, "text": " student model and so on and you can go on and they do up to like three" }, { "end": 843.36, "start": 838.88, "text": " iterations of this where they always take the new the student as the new" }, { "end": 852, "start": 843.36, "text": " teacher and then use a new student model to train from that teacher and they get" }, { "end": 855.58, "start": 852, "text": " better and better as they do this of course there's like a diminishing" }, { "end": 861.2, "start": 855.58, "text": " returns but it's pretty impressive that this even works right the new students" }, { "end": 866.2, "start": 861.2, "text": " in fact aren't even larger than the old students it's just that the students are" }, { "end": 871.08, "start": 866.2, "text": " larger than the original teacher model in most of these cases so here's the" }, { "end": 877.2, "start": 871.08, "text": " algorithm written down you'll require labeled images right here and unlabeled" }, { "end": 882.6, "start": 877.2, "text": " images which are the ones with the tilde so first you learn the teacher model" }, { "end": 886.5200000000001, "start": 882.6, "text": " which minimizes the cross entropy on labeled images this we already know this" }, { "end": 892.96, "start": 886.5200000000001, "text": " right this is the label this is the image according to the label and you" }, { "end": 898.24, "start": 892.96, "text": " train the teacher model which is this thing here and you can see here noised so" }, { "end": 902.2800000000001, "start": 898.24, "text": " already in the teacher training process you want to introduce this noise you" }, { "end": 905.44, "start": 902.2800000000001, "text": " want to introduce these data augmentations these are as I said these" }, { "end": 909.2800000000001, "start": 905.44, "text": " are standard techniques to make models more robust and therefore more" }, { "end": 916.24, "start": 909.2800000000001, "text": " generalizable yeah we know from these from these self-supervised papers that" }, { "end": 922.2800000000001, "start": 916.24, "text": " these augmentations are very powerful and the way you design them basically if" }, { "end": 926, "start": 922.28, "text": " you one of these augmentations is a random crop which means if you have an" }, { "end": 931.12, "start": 926, "text": " image you randomly crop out like part of that image and then that's your training" }, { "end": 938.36, "start": 931.12, "text": " sample and not the entire thing so by doing this you basically teaching the" }, { "end": 943.8, "start": 938.36, "text": " model to ignore the exact location and scale of things on an image and you can" }, { "end": 947.4399999999999, "start": 943.8, "text": " do this because 
you as a human know that you know I can zoom in I can zoom out" }, { "end": 953.6800000000001, "start": 947.44, "text": " into something and it won't change what's on the picture and so that's you" }, { "end": 957.32, "start": 953.6800000000001, "text": " use these augmentations to kind of heuristically tell the model what it" }, { "end": 962.9200000000001, "start": 957.32, "text": " should be invariant to and that is that is a very powerful technique to" }, { "end": 969.24, "start": 962.9200000000001, "text": " regularize basically to to robustify these deep methods and this is used" }, { "end": 975.6, "start": 969.24, "text": " the same here so already in the teacher model we train with this noise and then" }, { "end": 981.16, "start": 975.6, "text": " step two use a normal ie not noise teacher model to generate soft or hard" }, { "end": 985.88, "start": 981.16, "text": " pseudo labels for the clean ie not distorted unlabeled images and this is" }, { "end": 991.48, "start": 985.88, "text": " important they stress this here that when you when you label the unlabeled" }, { "end": 997.6, "start": 991.48, "text": " images you want to use the model that is without the noise and you do it on the" }, { "end": 1002.8000000000001, "start": 997.6, "text": " not distorted unlabeled images so when you infer the labels it's very important" }, { "end": 1008.5999999999999, "start": 1002.8, "text": " that you have clean accurate labels without any sort of noise in them so" }, { "end": 1013.7199999999999, "start": 1008.5999999999999, "text": " label noise is not something that they have found to help in this case so not" }, { "end": 1019, "start": 1013.7199999999999, "text": " label noise on the teacher that is so you can see right here on the unlabeled" }, { "end": 1024.72, "start": 1019, "text": " images will use that teacher model without the noise to infer the labels" }, { "end": 1030.3999999999999, "start": 1024.72, "text": " now they say these can be hard model hard labels or soft labels so what does" }, { "end": 1036.2, "start": 1030.4, "text": " that mean if we generate hard pseudo labels that means that the y here is" }, { "end": 1041.72, "start": 1036.2, "text": " simply going to be either 0 or 1 or 2 or 3 and so on so just the index of the" }, { "end": 1045.7, "start": 1041.72, "text": " class whichever class is most likely that's going to be our label this is" }, { "end": 1051.2800000000002, "start": 1045.7, "text": " exactly how the supervised datasets come right so this is what you will think" }, { "end": 1056.96, "start": 1051.2800000000002, "text": " first when you see that however soft pseudo labels means that the y will be a" }, { "end": 1065.04, "start": 1056.96, "text": " distribution so instead of being of class 0 it will be sort of let's say 90%" }, { "end": 1073.8, "start": 1065.04, "text": " of class 0 but also 5% class 1 and 5% class 2 right so you'll output the" }, { "end": 1079.92, "start": 1073.8, "text": " distribution instead of the just the label and they have found that the soft" }, { "end": 1087.04, "start": 1079.92, "text": " pseudo labels work slightly slightly better than the hard pseudo labels okay" }, { "end": 1095.28, "start": 1087.04, "text": " thanks so that they use the soft pseudo labels here because they work slightly" }, { "end": 1099.3200000000002, "start": 1095.28, "text": " better but you can do it with hard or soft labels the important thing is that" }, { "end": 1105.1200000000001, "start": 1099.3200000000002, "text": " you use the teacher to generate as 
accurate as possible labels for your" }, { "end": 1111.2399999999998, "start": 1105.12, "text": " unlabeled data then third we've already seen this learn an equal or larger" }, { "end": 1115.6799999999998, "start": 1111.2399999999998, "text": " student model which minimizes the cross entropy loss on labeled images and" }, { "end": 1121.9599999999998, "start": 1115.6799999999998, "text": " unlabeled images with noise added to the student model so as you can see labeled" }, { "end": 1127.3, "start": 1121.9599999999998, "text": " images and unlabeled images so we're in this semi semi supervised learning" }, { "end": 1133, "start": 1127.3, "text": " setting right now you take in both together with noise and noise here is in" }, { "end": 1137.88, "start": 1133, "text": " bold which means they stress it again this is important so you can see that" }, { "end": 1143.68, "start": 1137.88, "text": " the loss is composed of two different things these are the true images of your" }, { "end": 1151, "start": 1143.68, "text": " original model and you use that and this means you noise the student model and" }, { "end": 1157.08, "start": 1151, "text": " that that noise can be on the data or in the model itself and here also the" }, { "end": 1161.6, "start": 1157.08, "text": " unlabeled images that you have labeled with the teacher you do the exact same" }, { "end": 1167.12, "start": 1161.6, "text": " thing so you train on both of these data sets and step four is if you want to do" }, { "end": 1173.8799999999999, "start": 1167.12, "text": " iterative training use the student as a teacher and go back to step two now they" }, { "end": 1179.1, "start": 1173.8799999999999, "text": " have some more tricks when they do this iterative training they also up the" }, { "end": 1183.8799999999999, "start": 1179.1, "text": " batch size during the iterative training and so on so they do a lot of things to" }, { "end": 1189.48, "start": 1183.8799999999999, "text": " make the student learn something more something better than the teacher and I" }, { "end": 1194.4, "start": 1189.48, "text": " think this the whole paper it doesn't it doesn't state it explicitly but I think" }, { "end": 1200.08, "start": 1194.4, "text": " the whole paper everything they do here is to kind of force or allow the student" }, { "end": 1205.1200000000001, "start": 1200.08, "text": " to become better than the teacher by by giving more noise by making the student" }, { "end": 1210.96, "start": 1205.1200000000001, "text": " larger by making the batch size for the student larger and so on so you you want" }, { "end": 1217.84, "start": 1210.96, "text": " to sort of inject as much invariance as you can and that will make the student" }, { "end": 1226.6399999999999, "start": 1217.84, "text": " learn more so they say here noising student when the student is deliberately" }, { "end": 1232.52, "start": 1226.6399999999999, "text": " noised in its it is trained to be consistent to the teacher that is not" }, { "end": 1237.6, "start": 1232.52, "text": " noised when it generates the pseudo labels in our experiments we use two" }, { "end": 1247.12, "start": 1237.6, "text": " types of noise input noise and model noise all right first data augmentation" }, { "end": 1251.1599999999999, "start": 1247.12, "text": " is an important noising method in noisy student training because it forces the" }, { "end": 1256.56, "start": 1251.1599999999999, "text": " student to ensure prediction consistency across augmented versions of an image" }, { "end": 1260.6399999999999, 
"start": 1256.56, "text": " specifically in our method the teacher produces high quality pseudo labels by" }, { "end": 1264.76, "start": 1260.6399999999999, "text": " reading in clean images while the student is required to produce to" }, { "end": 1272.04, "start": 1264.76, "text": " reproduce those labels with augmented images as an input second when dropout" }, { "end": 1278.04, "start": 1272.04, "text": " and stochastic depth function are used as noise the teacher behaves like an" }, { "end": 1282.12, "start": 1278.04, "text": " ensemble at inference time when it generates pseudo labels whereas the" }, { "end": 1286.72, "start": 1282.12, "text": " student behaves like a single model in other words the student is forced to" }, { "end": 1292.1599999999999, "start": 1286.72, "text": " mimic a more powerful ensemble model we present an ablation study so this it's a" }, { "end": 1297.52, "start": 1292.1599999999999, "text": " bit weird what they say here don't be confused you use the dropout and the" }, { "end": 1303.6399999999999, "start": 1297.52, "text": " stochastic depth on the student model and they they say here if you do this" }, { "end": 1309.36, "start": 1303.6399999999999, "text": " the teacher behaves like an ensemble at inference time whereas the student" }, { "end": 1314.32, "start": 1309.36, "text": " behaves like a single model and yeah it's it's a bit of a weird formulation" }, { "end": 1320.04, "start": 1314.32, "text": " but it's it's true like the teacher the teacher will produce these same the" }, { "end": 1325.72, "start": 1320.04, "text": " label for different pathways through the students if you use dropout and kind of" }, { "end": 1330.16, "start": 1325.72, "text": " stochastic depth and therefore the student is kind of required to" }, { "end": 1335.1200000000001, "start": 1330.16, "text": " approximate each time each forward pass has a different forward pass through the" }, { "end": 1338.76, "start": 1335.1200000000001, "text": " layers through the connections with dropout and it's forced to approximate" }, { "end": 1345.08, "start": 1338.76, "text": " that teacher label with all of these different things so you see that you you" }, { "end": 1351.52, "start": 1345.08, "text": " put in a lot of a lot of techniques so they have even other techniques there is" }, { "end": 1356.48, "start": 1351.52, "text": " one additional trick and it's not and it's not one actually they have so many" }, { "end": 1360.16, "start": 1356.48, "text": " tricks and if you look at their experimental setup that it's crazy like" }, { "end": 1363.96, "start": 1360.16, "text": " they describe exactly we reduce the learning rate like this and the batch" }, { "end": 1368.4, "start": 1363.96, "text": " size like this and so on so to get state-of-the-art on image net it's not" }, { "end": 1374.24, "start": 1368.4, "text": " enough to just have a good idea of a new thing to do what you you you have to" }, { "end": 1380.2, "start": 1374.24, "text": " have the good idea and then execute it almost like really well because you have" }, { "end": 1385.04, "start": 1380.2, "text": " to regard all of these additional tricks that people have figured out over the" }, { "end": 1389.88, "start": 1385.04, "text": " years in any case they say it works better with an additional trick data" }, { "end": 1395.2, "start": 1389.88, "text": " filtering and balancing specifically we filter images that the teacher model has" }, { "end": 1400.04, "start": 1395.2, "text": " low confidence on since they are usually out of 
domain images so that goes to a" }, { "end": 1405.44, "start": 1400.04, "text": " point where if you see we have this image net label data set right and we" }, { "end": 1411.04, "start": 1405.44, "text": " have the larger data set now the larger data set simply contains images and" }, { "end": 1415.72, "start": 1411.04, "text": " there is no guarantee that the images are actually of the classes that we have" }, { "end": 1420.48, "start": 1415.72, "text": " in the image net data set right here we have a thousand classes here there's no" }, { "end": 1426.1200000000001, "start": 1420.48, "text": " guarantee that these images fit into any of those classes yet we still ask the" }, { "end": 1432.3200000000002, "start": 1426.1200000000001, "text": " teacher model to put them in some of these classes now you can filter out" }, { "end": 1438.9199999999998, "start": 1432.32, "text": " part of those images if you can look at the teacher model and you look at its" }, { "end": 1442.8799999999999, "start": 1438.9199999999998, "text": " confidence so when it outputs a distribution if if there's just two" }, { "end": 1446.72, "start": 1442.8799999999999, "text": " labels let's say if it outputs a distribution like this that's wildly" }, { "end": 1451.72, "start": 1446.72, "text": " different than if it outputs a distribution like this both are class" }, { "end": 1456.3999999999999, "start": 1451.72, "text": " one labels but one is much more confident than the other so what you" }, { "end": 1461.36, "start": 1456.3999999999999, "text": " want to do is you want to filter out these low confidence labels because you" }, { "end": 1465.8, "start": 1461.36, "text": " know the model isn't really sure but it has to assign a class but that's usually" }, { "end": 1471.4399999999998, "start": 1465.8, "text": " an indication that it is an out of domain image so if they filter this it" }, { "end": 1476.6399999999999, "start": 1471.4399999999998, "text": " works better and then also to ensure that the distribution of the unlabeled" }, { "end": 1481.4799999999998, "start": 1476.6399999999999, "text": " images match that of the training set we also need to balance the number of" }, { "end": 1485.6799999999998, "start": 1481.4799999999998, "text": " unlabeled images for each class as all classes in image net have a similar" }, { "end": 1489.8, "start": 1485.6799999999998, "text": " number of labeled images for this purpose we duplicate images in classes" }, { "end": 1494.28, "start": 1489.8, "text": " where there are not enough images for classes where we have too many images we" }, { "end": 1501, "start": 1494.28, "text": " take the images with the highest confidence okay so this is just another" }, { "end": 1505.3999999999999, "start": 1501, "text": " technique this has basically nothing to do with their core idea but this is just" }, { "end": 1512.32, "start": 1505.3999999999999, "text": " another thing where they say okay we can treat this big thing that we scrape from" }, { "end": 1516.48, "start": 1512.32, "text": " the internet you know we can somehow filter and balance it smartly and that" }, { "end": 1528.3600000000001, "start": 1516.48, "text": " will work even better alright so let's go into the experiments of course there" }, { "end": 1534.84, "start": 1528.3600000000001, "text": " so what they do I think where is the graphic what they do is they take an" }, { "end": 1543.3600000000001, "start": 1534.84, "text": " image net sorry they take an efficient net right here and they trade they first" }, { "end": 
1549.6399999999999, "start": 1543.36, "text": " train an efficient net a smaller efficient net as we said for to be the" }, { "end": 1559.6799999999998, "start": 1549.6399999999999, "text": " teacher and then they train a larger efficient net for the student the best" }, { "end": 1564.32, "start": 1559.6799999999998, "text": " model in our experiments is a result of three iterations of putting back the" }, { "end": 1569.28, "start": 1564.32, "text": " student as a new teacher we first train an efficient net B7 on image net as the" }, { "end": 1575.36, "start": 1569.28, "text": " teacher model so you can see in the table right here what the B7 achieves the" }, { "end": 1579.92, "start": 1575.36, "text": " efficient net B7 here you can see it has 66 million parameters which is fairly" }, { "end": 1584.04, "start": 1579.92, "text": " small compared to these other kind of previous state-of-the-art methods on" }, { "end": 1589.24, "start": 1584.04, "text": " image net right so they first train this and that will achieve something like an" }, { "end": 1596.04, "start": 1589.24, "text": " 85% accuracy now if you just train a larger model this efficient net L2 right" }, { "end": 1600.36, "start": 1596.04, "text": " here that has you can see 480 million parameters so a lot of more million" }, { "end": 1605.04, "start": 1600.36, "text": " parameters but you just train it on the same data set on image net you will get" }, { "end": 1612.52, "start": 1605.04, "text": " a 0.5% improvement and you can see that here with noisy student training with" }, { "end": 1616.84, "start": 1612.52, "text": " the exact same model so it has the same amount of parameters you'll actually get" }, { "end": 1623.84, "start": 1616.84, "text": " an 88.4 so I like a more than a 3% improvement and that's what the same" }, { "end": 1628.8799999999999, "start": 1623.84, "text": " model just with this different training procedure and inputting these 300" }, { "end": 1634.04, "start": 1628.8799999999999, "text": " million unlabeled images that you have laying around but the all the" }, { "end": 1639.48, "start": 1634.04, "text": " information about all the label information comes from the image net" }, { "end": 1645.1999999999998, "start": 1639.48, "text": " data set and comes from this efficient net B7 teacher model so that's" }, { "end": 1651.32, "start": 1645.1999999999998, "text": " basically you can it's a testament that out of this out of this 85 you can make" }, { "end": 1657.12, "start": 1651.32, "text": " this 88 just by smartly using the information that the model that this" }, { "end": 1663.1599999999999, "start": 1657.12, "text": " model has learned about the data and transferring it to new data so they" }, { "end": 1668.46, "start": 1663.1599999999999, "text": " train an efficient net B7 that's the small model as a teacher model then by" }, { "end": 1673.76, "start": 1668.46, "text": " using the B7 model as the teacher we trained an efficient net L2 model with" }, { "end": 1679.6399999999999, "start": 1673.76, "text": " the unlabeled batch size set to 14 times the labeled batch size and they stressed" }, { "end": 1683.5200000000002, "start": 1679.64, "text": " that it's important that you up the batch size that's another thing that" }, { "end": 1689.5400000000002, "start": 1683.5200000000002, "text": " makes the student learn more than the teacher then we trained a new efficient" }, { "end": 1694.48, "start": 1689.5400000000002, "text": " net so by the way these 14 times it's also it can be done because now you" }, { 
"end": 1700.64, "start": 1694.48, "text": " have more data right so you can also up the batch size then we trained a new" }, { "end": 1705.64, "start": 1700.64, "text": " efficient net L2 model with the efficient net L2 model as the teacher" }, { "end": 1710.6000000000001, "start": 1705.64, "text": " lastly we iterated again and used an unlabeled batch size of 28 times the" }, { "end": 1714.88, "start": 1710.6000000000001, "text": " labeled batch size the detailed result of the three iterations and so okay so" }, { "end": 1718.72, "start": 1714.88, "text": " you can see that it's a fairly complicated procedure but you can gain" }, { "end": 1727.6000000000001, "start": 1718.72, "text": " and gain and gain by simply up upping the by simply upping the or iterating on" }, { "end": 1733.48, "start": 1727.6000000000001, "text": " this procedure and I think they have it somewhere here yes so as you can see if" }, { "end": 1740.08, "start": 1733.48, "text": " iteration one you train the efficient net L2 you started with the B7 you" }, { "end": 1745.16, "start": 1740.08, "text": " train the efficient at a two with a batch size 14 times larger and you gain" }, { "end": 1750.46, "start": 1745.16, "text": " significantly right this gains about 2% over the original efficient net then you" }, { "end": 1758.32, "start": 1750.46, "text": " iterate again with the same batch size and you get like a 5.5% improvement and" }, { "end": 1761.56, "start": 1758.32, "text": " you iterate again with an even larger batch size and you get a point three" }, { "end": 1765.34, "start": 1761.56, "text": " percent improvement so there is diminishing returns but still you can" }, { "end": 1768.9199999999998, "start": 1765.34, "text": " see that you know the more with the introduction of noise with the" }, { "end": 1772.08, "start": 1768.9199999999998, "text": " introduction of the larger model with the introduction of the larger batch" }, { "end": 1777.12, "start": 1772.08, "text": " size these are all things that help the student basically become better than the" }, { "end": 1782.9199999999998, "start": 1777.12, "text": " teacher all right so they do a bunch of other experiments so their main" }, { "end": 1791.32, "start": 1782.9199999999998, "text": " comparison is right here where they say look if we if even if we train the" }, { "end": 1796.52, "start": 1791.32, "text": " same model with this noisy student training we can make you know pretty" }, { "end": 1802.6799999999998, "start": 1796.52, "text": " large gains over the model over the same model where we do not train it with this" }, { "end": 1808.3999999999999, "start": 1802.6799999999998, "text": " noisy student training so this really seems to help you know due to the noise" }, { "end": 1815.2, "start": 1808.3999999999999, "text": " due to the additional data they do a lot of ablation studies so that's pretty" }, { "end": 1820.6399999999999, "start": 1815.2, "text": " interesting and they also do these studies on this special image net data" }, { "end": 1824, "start": 1820.64, "text": " set for example image net see you can see that there are quite a bit of" }, { "end": 1827.92, "start": 1824, "text": " distortions right here I don't even see if you can see it on this video but this" }, { "end": 1835.5600000000002, "start": 1827.92, "text": " is a swing so the swing right here is like something like this but you almost" }, { "end": 1840.3200000000002, "start": 1835.5600000000002, "text": " can't see it and you see that the bold on the left is always the prediction 
of" }, { "end": 1844.96, "start": 1840.3200000000002, "text": " their model while the thing on the right is the prediction of the original model" }, { "end": 1850.24, "start": 1844.96, "text": " so this model they claim is significantly more robust to these kinds" }, { "end": 1857.24, "start": 1850.24, "text": " of perturbations and they do an analysis of this where they show yes in fact it" }, { "end": 1864.84, "start": 1857.24, "text": " is so I think we've already seen this at the beginning that the noisy student is" }, { "end": 1869.52, "start": 1864.84, "text": " significantly more robust to these perturbations and they also test this to" }, { "end": 1874.36, "start": 1869.52, "text": " adversarial perturbations so right here you can see that the original model" }, { "end": 1878.8, "start": 1874.36, "text": " drops pretty quickly as you increase the epsilon the epsilon is kind of the" }, { "end": 1884.44, "start": 1878.8, "text": " strength of the adversarial perturbation and the noisy the original model drops" }, { "end": 1891.04, "start": 1884.44, "text": " very quickly to you know fairly low accuracy while as the noisy student" }, { "end": 1899.1399999999999, "start": 1891.04, "text": " training drops much much less quickly now this is another testament to the" }, { "end": 1904.12, "start": 1899.1399999999999, "text": " fact that what you do I think what's happening is you have your data space" }, { "end": 1911.08, "start": 1904.12, "text": " right and you have your data points in it now when you do the like normal data" }, { "end": 1915.3999999999999, "start": 1911.08, "text": " augmentation what you'll do is you not only force the model to predict those" }, { "end": 1920.04, "start": 1915.3999999999999, "text": " points correctly but you'll sort of make a bit of a cloud around them and you" }, { "end": 1927.32, "start": 1920.04, "text": " force the model to predict that cloud correctly now if you introduce more data" }, { "end": 1934, "start": 1927.32, "text": " and you do even more noise what you do is you'll make these clouds kind of" }, { "end": 1939.52, "start": 1934, "text": " larger and that means the model is more robust to any sort of perturbations in" }, { "end": 1943.8, "start": 1939.52, "text": " these clouds right and and that means it's probably also going to be more" }, { "end": 1949, "start": 1943.8, "text": " robust to adversarial perturbations so that's sort of how you can think of this" }, { "end": 1953.6, "start": 1949, "text": " this introduction of noise to make it more generalizable how does this" }, { "end": 1957.84, "start": 1953.6, "text": " generalize better so if you think of this data point right here if I'm" }, { "end": 1962.84, "start": 1957.84, "text": " looking to generalize that means you know I have this IID data set so" }, { "end": 1968.04, "start": 1962.84, "text": " probably my test data is going to be related to the training data so I might" }, { "end": 1974.56, "start": 1968.04, "text": " get a data point that's fairly close to that data point and generalizing means I" }, { "end": 1979.8, "start": 1974.56, "text": " classify it correctly now if this cloud is very small like it is here my decision" }, { "end": 1985.96, "start": 1979.8, "text": " boundary could be like here right and even though the terrestres data set is" }, { "end": 1991.56, "start": 1985.96, "text": " fairly close to the original training data point it's it won't be classified" }, { "end": 1997.44, "start": 1991.56, "text": " incorrectly however if my original cloud during 
training is larger you can see if" }, { "end": 2002.52, "start": 1997.44, "text": " I train a model it can maybe put the decision boundary here and then my test" }, { "end": 2008.12, "start": 2002.52, "text": " data point will be included in on that same side so that's kind of the idea" }, { "end": 2012.84, "start": 2008.12, "text": " behind generalizing better of course that's a vast simplification and also" }, { "end": 2018.6, "start": 2012.84, "text": " to say that this here is an FGSM attack so this is kind of the weakest attack in" }, { "end": 2025.7199999999998, "start": 2018.6, "text": " the adversarial perturbation spectrum they do say under a stronger attack" }, { "end": 2031, "start": 2025.7199999999998, "text": " PGD which is a fairly strong attack with 10 iterations at epsilon equals 16" }, { "end": 2037.2399999999998, "start": 2031, "text": " noisy student training improves efficient netl2 accuracy from 1.1% to 4.4%" }, { "end": 2046.36, "start": 2037.24, "text": " now this I'm like you know 1.1% really means the model is almost like dead" }, { "end": 2053.32, "start": 2046.36, "text": " this is lower this is like random performance and 4.4% is still a bit above" }, { "end": 2059.76, "start": 2053.32, "text": " random performance but yeah you could probably you could probably get there by" }, { "end": 2066, "start": 2059.76, "text": " simply using any sort of noise in that case but still you can see that it is" }, { "end": 2072.6, "start": 2066, "text": " more robust to especially to natural distortions and therefore it generalizes" }, { "end": 2079.52, "start": 2072.6, "text": " better as I said they do quite a bit of drop sorry not drop out at ablation" }, { "end": 2086.04, "start": 2079.52, "text": " studies to figure out where exactly the performance comes from and the answer is" }, { "end": 2090.84, "start": 2086.04, "text": " it pretty much comes from all the things that they've described so here you can" }, { "end": 2096.76, "start": 2090.84, "text": " see the effect of that extra data set and you can see pretty much with that" }, { "end": 2102.36, "start": 2096.76, "text": " extra data set all the all the situations improve here you can see what" }, { "end": 2108.52, "start": 2102.36, "text": " do you what is happening when you do not augment the student when you do not data" }, { "end": 2113.44, "start": 2108.52, "text": " augment you can immediately see that the accuracy drops and then when you do not" }, { "end": 2118.6400000000003, "start": 2113.44, "text": " augment and also don't use these model noises then the performance drops again" }, { "end": 2124.16, "start": 2118.64, "text": " and lastly when you use the teacher but you noise the teacher you can see also" }, { "end": 2130.4, "start": 2124.16, "text": " here the performance is dropping from the original quite a bit so all of these" }, { "end": 2135.2599999999998, "start": 2130.4, "text": " things kind of contribute and they do much more ablations and they have listed" }, { "end": 2141.12, "start": 2135.2599999999998, "text": " their findings here so using a large teacher model with better performance" }, { "end": 2146.4, "start": 2141.12, "text": " leads to better result so you know as the original teacher you should use as" }, { "end": 2153.48, "start": 2146.4, "text": " good as possible a teacher model you can find second a large amount of unlabeled" }, { "end": 2161.44, "start": 2153.48, "text": " data is necessary for better performance okay so if you want to do this you better" }, { "end": 2167.28, 
"start": 2161.44, "text": " get a large large amount of extra data because that's one thing that makes the" }, { "end": 2172.1600000000003, "start": 2167.28, "text": " student perform better soft pseudo labels work better than hard pseudo" }, { "end": 2178.08, "start": 2172.16, "text": " labels for out of the main data in certain cases fourth a large student" }, { "end": 2183.72, "start": 2178.08, "text": " model is important to enable the student to learn a more powerful model okay so" }, { "end": 2189.44, "start": 2183.72, "text": " because usually this knowledge distillation is what it this is usually" }, { "end": 2193.96, "start": 2189.44, "text": " called knowledge distillation if you use a teacher model to train a student model" }, { "end": 2198.08, "start": 2193.96, "text": " and it is often used when the student model is smaller than the teacher" }, { "end": 2201.7599999999998, "start": 2198.08, "text": " because you want to kind of become more efficient to you from so the teacher is" }, { "end": 2207.92, "start": 2201.76, "text": " large or make the student small and you usually sacrifice some accuracy and here" }, { "end": 2212, "start": 2207.92, "text": " they say if you want to gain some accuracy you need a large student model" }, { "end": 2219.48, "start": 2212, "text": " it can't be like a small one number five data balancing is useful for small" }, { "end": 2224.76, "start": 2219.48, "text": " models number six joint training on labeled data and unlabeled data out" }, { "end": 2229.1200000000003, "start": 2224.76, "text": " performs the pipeline that first pre trains with unlabeled data and then" }, { "end": 2234.24, "start": 2229.12, "text": " fine-tunes on labeled data so this is in contrast to like what people have done" }, { "end": 2239.3599999999997, "start": 2234.24, "text": " before in the self supervised learning and so on where it's always kind of" }, { "end": 2244.56, "start": 2239.3599999999997, "text": " pre training then fine-tuning or in the in the transfer learning setting seven" }, { "end": 2249.2, "start": 2244.56, "text": " using a large ratio between unlabeled batch size and label batch size enables" }, { "end": 2256.24, "start": 2249.2, "text": " models to train longer on unlabeled data to it to achieve a higher accuracy okay" }, { "end": 2260.56, "start": 2256.24, "text": " we've already seen that they have used that and number eight training the" }, { "end": 2265, "start": 2260.56, "text": " student from scratch is sometimes better than initializing the student with the" }, { "end": 2269.4399999999996, "start": 2265, "text": " teacher and the student initialized with the teacher still requires a large" }, { "end": 2274.4799999999996, "start": 2269.4399999999996, "text": " number of training epochs to perform well this is fairly interesting because" }, { "end": 2281.04, "start": 2274.4799999999996, "text": " it kind of alludes to the fact that the minima in weight space if so if this is" }, { "end": 2285.56, "start": 2281.04, "text": " of course the case if the student model is the same as the teacher model so in" }, { "end": 2292.2799999999997, "start": 2285.56, "text": " like iteration two or three or whatnot it means that you know in weight space" }, { "end": 2297.52, "start": 2292.2799999999997, "text": " if we look at you know you might want to start the student here and the minimum" }, { "end": 2304.2, "start": 2297.52, "text": " is right here and you might want to think that if I learn the same thing then" }, { "end": 2308.9, "start": 2304.2, 
"text": " the minima are fairly close together right so the the teachers minima might" }, { "end": 2313.32, "start": 2308.9, "text": " be here and the student minima might be fairly close so it might be beneficial" }, { "end": 2318.52, "start": 2313.32, "text": " if I if I start not over here but actually start at the teachers minimum" }, { "end": 2322.6000000000004, "start": 2318.52, "text": " but this doesn't always seem to be the case and that is a fairly interesting" }, { "end": 2326.52, "start": 2322.6000000000004, "text": " observation because it kind of means that we're talking about different" }, { "end": 2331.28, "start": 2326.52, "text": " minima here we're talking about the student model learning different things" }, { "end": 2335.6800000000003, "start": 2331.28, "text": " and that's what we've discussed already the student model kind of learns to be" }, { "end": 2341.6800000000003, "start": 2335.6800000000003, "text": " robust and that's probably a minimum that's fairly far away in weight space" }, { "end": 2346.68, "start": 2341.68, "text": " at least in in a sort of energy landscape weight space might be the case" }, { "end": 2351.96, "start": 2346.68, "text": " that it needs to actually overcome kind of a hill here even though the minimum" }, { "end": 2356.44, "start": 2351.96, "text": " might be close there's lots of research in like how minima are distributed in" }, { "end": 2361.7999999999997, "start": 2356.44, "text": " these weight spaces which I don't want to go into right here but it is a fairly" }, { "end": 2365.7599999999998, "start": 2361.7999999999997, "text": " interesting observation that it's not always helpful to initialize the" }, { "end": 2374.28, "start": 2365.76, "text": " teacher sorry the student at the teachers optimum okay so this was the" }, { "end": 2379.6400000000003, "start": 2374.28, "text": " paper and you know this is this is the type of research where I do appreciate" }, { "end": 2384.28, "start": 2379.6400000000003, "text": " kind of the these large labs taking it on because they have the resources to do" }, { "end": 2388.36, "start": 2384.28, "text": " all of these ablations all of these different models cross them with these" }, { "end": 2394.5600000000004, "start": 2388.36, "text": " giant data sets and so on which I guess university labs just would not have and" }, { "end": 2400.04, "start": 2394.56, "text": " this is a fairly thorough paper really investigating which parts of the" }, { "end": 2406.48, "start": 2400.04, "text": " pipeline you know do something and which ones don't and usually I I'm fairly" }, { "end": 2411.72, "start": 2406.48, "text": " critical of pipelines that have like 50 billion tricks because you never know" }, { "end": 2416.68, "start": 2411.72, "text": " where the improvement exactly is coming from but you can sort of mitigate that" }, { "end": 2421.7599999999998, "start": 2416.68, "text": " criticism by doing all of these kind of ablations on the different parts and" }, { "end": 2425.1600000000003, "start": 2421.76, "text": " really showing look this is important but this is also important but this is" }, { "end": 2430.28, "start": 2425.1600000000003, "text": " also important but this is also important so yeah that was my two cents" }, { "end": 2452.6400000000003, "start": 2430.28, "text": " to this paper I hope you enjoyed this and I'll see you next time bye bye" } ]
rFwQDDbYTm4
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
[Classic] Playing Atari with Deep Reinforcement Learning (Paper Explained)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "dqn", "deep q learning", "deep q networks", "q learning", "qlearning", "rl", "drl", "deep rl", "deep reinforcement learning", "deepmind", "david silver", "atari", "pong", "breakout", "space invaders", "agent", "cnn", "convolutional neural network", "bellman" ]
#ai #dqn #deepmind After the initial success of deep neural networks, especially convolutional neural networks on supervised image processing tasks, this paper was the first to demonstrate their applicability to reinforcement learning. Deep Q Networks learn from pixel input to play seven different Atari games and outperform baselines that require hand-crafted features. This paper kicked off the entire field of deep reinforcement learning and positioned DeepMind as one of the leading AI companies in the world. OUTLINE: 0:00 - Intro & Overview 2:50 - Arcade Learning Environment 4:25 - Deep Reinforcement Learning 9:20 - Deep Q-Learning 26:30 - Experience Replay 32:25 - Network Architecture 33:50 - Experiments 37:45 - Conclusion Paper: https://arxiv.org/abs/1312.5602 Abstract: We present the first deep learning model to successfully learn control policies directly from high-dimensional sensory input using reinforcement learning. The model is a convolutional neural network, trained with a variant of Q-learning, whose input is raw pixels and whose output is a value function estimating future rewards. We apply our method to seven Atari 2600 games from the Arcade Learning Environment, with no adjustment of the architecture or learning algorithm. We find that it outperforms all previous approaches on six of the games and surpasses a human expert on three of them. Authors: Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, Martin Riedmiller Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://discord.gg/4H8xxDF BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher Parler: https://parler.com/profile/YannicKilcher LinkedIn: https://www.linkedin.com/in/yannic-kilcher-488534136/ If you want to support me, the best thing to do is to share out the content :) If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this): SubscribeStar (preferred to Patreon): https://www.subscribestar.com/yannickilcher Patreon: https://www.patreon.com/yannickilcher Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2 Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n
Hi there. Today we'll look at Playing Atari with Deep Reinforcement Learning by Volodymyr Mnih et al. of DeepMind. So this is another one in our series of impactful past papers. This paper right here kicked off an entire revolution in reinforcement learning. Specifically, it sort of started the deep reinforcement learning hype. Before that, reinforcement learning was kind of this weird field of Markov decision processes and so on. Now, I know there were successes and stuff was happening, but this really made a lot of waves because it brought the power of deep neural networks to reinforcement learning, and with a pretty simple application of convolutional networks it managed to solve these reinforcement learning games where previous algorithms really couldn't, or were heavily reliant on hand-engineered features. So we'll take a look at what people did back then, what the state of the art was, and what they are telling us about it, and kind of set it in relation to today. Alright, if you do like papers like this, commentary like this, share it out, leave a like and tell me in the comments what you think. So let's dive in. They say: we present the first deep learning model to successfully learn control policies directly from high-dimensional sensory input using reinforcement learning. The model is a convolutional neural network trained with a variant of Q-learning, whose input is raw pixels and whose output is a value function estimating future rewards. We apply our method to seven Atari 2600 games from the Arcade Learning Environment, with no adjustment of the architecture or learning algorithm. We find that it outperforms all previous approaches on six of the games and surpasses a human expert on three of them. So there's a lot packed into this. First of all, I wanted to recognize the absolute LaTeX savagery right here. Yeah, you know, it's just something. I'm not OCD about that kind of stuff. I think we should ditch LaTeX, honestly. But so there's a lot of information packed in this abstract right here. They say this is the first deep learning model to learn control policies, so that's the task of these reinforcement learning algorithms, directly from high-dimensional sensory input using reinforcement learning. So what do they mean? This Arcade Learning Environment, if you don't know what it is, is basically these old games right here; you can kind of emulate them and run them. And the inputs are always sort of the same. So you have one joystick, I believe, and it can go into various directions like left, right, up, down, and then also the intermediate directions. And then, I think, you also have a button that you can push. And that gives you a total of somewhere around 16 or 20 actions in each of the games. So the good thing about this environment is that the actions are always the same, but of course they mean different things in different games. So the games here, for example Pong, or this Breakout, are really low-pixel games, as you can see, and they come in the form of an image. So this is an image, something like 210 by 160 pixels. And the task here is to learn a policy, which means which buttons and directions you need to push depending on the observation right here, these pixels, in order to achieve the maximum amount of reward. So reward is given in each game differently as well.
For example, in this Pong game, the reward is every time you score kind of a goal against your opponent; in Breakout, you get a reward every time you manage to hit one of these blocks, and so on. So the reward is different, but your objective is always to maximize the reward. In a formal framework, you have an agent and an environment. The environment will always give you an observation, which in this case is one of these images, and the agent will give back an action. So the action in this case would be which button to press or which direction to move the joystick in. And then the environment will give back a reward. So the reward could be, you know, you scored a goal; so that's zero most of the time, and sometimes it's one, or nine, or it could be like how long you're alive, and so on. This is very, very variable. So the difficulty of reinforcement learning very often is that these episodes can go on for a while. This whole process here will repeat over time, and it can go on for hundreds of steps or thousands of steps until you're done playing a game like this right here. And the reward can be very sparse. So you might only get a reward at the very end of the game. Most often in these games you get some in between, but still there can be multiple time steps where you don't get a reward. And your task is to figure out which of the actions were the good ones. This is known as the credit assignment problem. And to do credit assignment just from pixels alone, that was unheard of at the time this paper came out. That's why they say: we are the first deep learning model to successfully learn directly from high-dimensional sensory input. Okay, so the power of deep learning, they argue, is that a deep neural network, a convolutional neural network, can extract these high-level features by itself. However, at this time, people only knew that it could do so for supervised learning: basically, for every input image you had a label, and that's how you trained these convolutional neural networks. Here it's very different. Here you will get maybe a thousand of those images, and you'll simply be told, well, you got a score of 1100, and somehow you need to figure out which ones of these were the ones that gave you the good score, and how to generalize that. So there are various difficulties in applying convolutional neural networks to this problem, and they detail right here how they did it. So they say the model is a convolutional neural network, which, you know, had been demonstrated; so this is after things like AlexNet, though before ResNet, trained with a variant of Q-learning, whose input is raw pixels and whose output is a value function estimating future rewards. So we'll get into Q-learning. Q-learning is a reinforcement learning algorithm that had been around, you know, for a long time, but just not combined with deep neural networks. And yeah, the last cool thing here is that they apply it to seven games with no adjustment of the architecture or learning algorithm. So they apply the same algorithm, the same hyperparameters, to all of the seven games, and the model learns all of the seven games. Not as a single model, but as seven different models; however, they all have the same hyperparameters.
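Circling back to the agent-environment loop described above, here is a minimal, purely illustrative Python sketch of that interaction. The ToyEnv class, its reward rule, and the random action choice are all made up for the example; this is not the Arcade Learning Environment or the paper's agent.

```python
import random

class ToyEnv:
    """Stand-in for the environment: hands out an observation, a reward
    for the last action, and a done flag. The reward rule here is made up."""
    def reset(self):
        self.t = 0
        return self.t                          # the observation / state
    def step(self, action):
        self.t += 1
        reward = 1.0 if action == 0 else 0.0   # fabricated reward rule
        return self.t, reward, self.t >= 100

env = ToyEnv()
state, total_reward, done = env.reset(), 0.0, False
while not done:
    action = random.randrange(18)              # roughly the size of the joystick action set
    state, reward, done = env.step(action)     # environment returns observation + reward
    total_reward += reward                     # this sum is what the agent must maximize
print(total_reward)
```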
So they don't need to tune them, which is an additional benefit. Like, this would have been a cool paper even if they had had to tune the algorithm for each of the seven games. But they didn't, which makes it even more impressive, and that is kind of their point here: that this deep reinforcement learning can be some sort of a general learning mechanism. Of course, later there has been a giant amount of development since this, and people have come up with all kinds of giant architectures and whatnot: policy corrections, sim-to-real, continuous control. Most of it is a derivation of this work right here. And this work reads surprisingly simply, I have to say; it's almost like they had little idea what problems were to be tackled later in RL, because it kind of reads like: we have this thing and you can learn general things. So yeah, we're still not done with reinforcement learning; I guess we've just started. Okay, so in a bit of a more formal setting, what you have in reinforcement learning are these rewards. At each time step you are in a state and you perform an action. As we said, you get back an observation, and we call the observation the state. Now, these two things aren't entirely the same thing. The observation is what you get back directly from the environment, and the state can be something more: if you remember something from the last observation, that can be part of your state. The state is basically what you base your decision on, and the observation is the pure thing you get from the environment. Now, in this case they do some processing, but essentially we'll regard them as the same thing. So the state is what you see of the environment. Then in each of these steps you perform an action; we'll call that this a right here. And in each step you also get a reward for the last action that you took, and the reward is going to be lowercase r. Now, what you want to do formally is maximize the total reward, where r at time t prime is the reward you get at step t prime. If you play an episode, you're here, and you perform an action, you go here, you perform an action, you go here, you perform an action, you go here, and then your episode is done. For each action you'll get a reward: reward one, reward two, reward three. What you want to do is maximize the total, the sum of all of these rewards. So over the course of your episode, you want to collect as much reward as you can. There is a discount factor right here, which is sort of saying that rewards that are very far in the future are not as important as rewards right now; however, you can set this to one if you want. So you see, you want to maximize the sum of future rewards, which is this thing right here. Okay. So how do you do this? There are two main methods in reinforcement learning. The first one is called a policy gradient method. Very briefly, a policy, we'll call it pi, takes in a state s and gives you back an action. And I mean, this is the same for Q-learning, but in a policy gradient method, which is not this paper but is a little bit easier to understand, I believe, we'll simply say: well, we'll simply train a neural network to do that.
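Written out, the quantity being maximized above is the discounted return from the paper; with the discount factor gamma set to one, this reduces to the plain sum of future rewards until the episode ends at time T:

```latex
R_t = \sum_{t' = t}^{T} \gamma^{\,t' - t} \, r_{t'}
```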
And there's this policy gradient trick where you can backpropagate even though the reward itself isn't differentiable, and so on. You can simply say: we'll learn a neural network to give us the action that's best. So we'll have a neural network, the state goes into the neural network, and then we'll just have as many outputs as there are actions, like action one, action two, action three, action four, and we'll treat it as a classification problem. So you simply train the network to pick the action that is best in this case, given that you somehow know which action is best right now. In Q-learning, you do something else, namely you train this thing called the Q function. The Q function is a function that takes in a state and an action, and it gives you what the reward is going to be in the future if you are in this state and perform this action. So you are in a given state, and you have three actions at your disposal: action one, action two, and action three, and you are in state S. If you had a perfect Q function, it could give you the reward. You would call the Q function three times. You would call the Q function first with A1, say: what's Q of S and A1? And the Q function would maybe say: that's seven. And here you'd say: what's the Q function of S and A2? And it would maybe say: that's four. And the same thing for A3, and it would maybe say: that's one. Then you would know: aha, if I take action one, my reward, not only for this step, but from here on until the end of the episode, is going to be seven. That is, if you had a perfect Q function. Now the Q function is, of course, always conditioned on a policy right here. So what it basically says is: if I take action A1 right now, and after that I follow policy pi, then I'm going to get the reward of seven. It's a bit of a multi-layered reasoning approach, but ultimately you don't have to worry much about this being conditioned on a policy. Ultimately, the Q function says: if you take this action right now, what will the reward be for the entire rest of the episode? So if you had a perfect Q function, you could simply ask it about all the actions, as we did here, and then pick the action with the highest number. Then you're guaranteed to do well. Because there could be a situation where your reward in a single step is going to be very high here, like a hundred, and here it's zero, zero. You would be tempted to take that action right here, but after that it's just going to be zero, zero, zero, so your total reward is going to be 100. But here, even though it's zero now, it could be that after that it's 50, and then 40, and then 2000, and so on, so your total reward is going to be much, much more. If you were simply to train a function that tells you what the reward in the next step is, then you would lose, because that function would not be able to look ahead sufficiently. What we're trying to do with the Q function is to train a function that will tell us not only what the reward in the next step is, but what the reward is in all the steps to come from here. Of course, conditioned on all the decisions we make in the future, but that's this policy pi right here. I hope it's somewhat clear what a Q function is. Interestingly, we can take the same network architecture for this.
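As a tiny sketch of this "ask the Q function about every action and take the argmax" step, here it is in Python, with a fabricated Q function that just returns the toy values from the example above:

```python
def q_function(state, action):
    # Fabricated Q values matching the example: Q(S,A1)=7, Q(S,A2)=4, Q(S,A3)=1
    return {"A1": 7.0, "A2": 4.0, "A3": 1.0}[action]

def greedy_action(state, actions=("A1", "A2", "A3")):
    # Query Q(s, a) for every available action and pick the highest-scoring one
    return max(actions, key=lambda a: q_function(state, a))

print(greedy_action("S"))  # -> "A1", the action with Q = 7
```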
Interestingly, we can take the same network architecture for this. What you would do naively is build a neural network where you say: a Q function takes a state and an action, so I'll put both into the network, and out comes this estimate of the reward, which we usually call Q. So the value Q is the estimate of the future reward if you take this action in this state. The disadvantage is that we have to call this network once for each action in every state we're in; if there are 10 actions, that's 10 forward passes. What we can do instead is take the same neural network we had for the initial policy method: we input the state and train the network to output the Q for state s and action one, the Q for state s and action two, and so on. So there is going to be a kind of shared encoder that encodes the state into a latent space and then predicts, for each action, how valuable that particular action would be in that state. This is called a deep Q network: a network that takes in a state and gives you back the Q values.

Now, here's the problem. We said that if we had a perfect Q function, one that was always right, the problem would be solved, because we could just ask it what to do. Of course, we don't have a perfect Q function; we need to train it. So how do we train a Q function? The answer is surprisingly simple. You want to train your Q network, so you simply play an episode according to the Q function you currently have. Maybe you play this episode right here: you go here and collect all of this reward, and this entire trajectory goes into your data set. Then you have a sample: I was in this state s, I took action one, and I got 2090 as a total reward. That is going to be your labeled sample: state s, action a1, total reward 2090. Then on to the next episode: you keep playing, maybe you go down here, and you get the next training example. You keep restarting the episode, so you can get into the same state multiple times: I was in state s, I performed a3, and I got only 100 reward. That's another training sample. These training samples are what you use to train your Q function. This is called online reinforcement learning: you play the game at the same time as you train your neural network, and you use the improved network to play more games. And there are well-known theorems around Q-learning that say that if you do this iteratively, your Q function will converge to the optimal Q function, under some assumptions, which are of course not given if this is a deep neural network, but you know, who cares?
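For concreteness, a network of this shared-encoder shape might look like the following generic sketch (sizes and names are my own assumptions; the paper's exact architecture comes later):

```python
import torch
import torch.nn as nn

class QNetwork(nn.Module):
    """Shared state encoder followed by one Q value per action."""
    def __init__(self, state_dim: int, n_actions: int, hidden: int = 128):
        super().__init__()
        self.encoder = nn.Sequential(          # shared encoder for the state
            nn.Linear(state_dim, hidden),
            nn.ReLU(),
        )
        self.q_head = nn.Linear(hidden, n_actions)  # one output per action

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.q_head(self.encoder(state))     # shape: (batch, n_actions)

net = QNetwork(state_dim=8, n_actions=4)
q_values = net(torch.randn(1, 8))   # one forward pass gives all Q values
action = q_values.argmax(dim=1)     # greedy action
```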
So formally, your Q function, as you can see right here, has this Bellman recurrence property. Say I am in a state s and I'm asking what Q of s and an action a is, with respect to the star policy, which is the policy where we always select the action with the highest Q value. We basically say: we're in state s, we select action a, and after that we always select whatever the highest-scoring action is. Right now, a might not be the highest-scoring action, but we take it anyway, and from then on we always take the best one according to the Q function. That's Q star: the Q function conditioned on the policy where, after the first action a, we always act greedily with respect to Q. So we're in state s, we perform a, and s prime is the state we end up in, which is a function of your environment. In s prime we always take the maximum action, and r is the reward of the next step. So you can see the recurrence right here: Q star can be framed in terms of Q star, roughly Q*(s, a) = E[r + gamma * max over a' of Q*(s', a')]. The Q star of this state depends on the Q star of the next state.

And you can use that fact, which one can prove (we've basically already sketched why), to train your neural network. The loss function says: this here is my network's Q value for state s and action a, telling me how much that's worth, and this here is the label. You have to think in terms of classic supervised learning: this is your f of x, this is your y, and we take the squared loss between the two, except that your input x is which state you're in and which action you're taking, and your label is bootstrapped by your own Q function. Your label is the reward you got. Remember, this comes from a replay buffer: we already played that game, so we already know what happened after we performed this action. What happened is that we got this reward and we got into this next state. So we can simply ask our own Q function again: what's the best action to take in that state, and what reward would we get? And then we have our label.

I was pretty confused when I learned this the first time, so I'm going to assume some of you are confused as well. Your Q function is supposed to tell you the reward from here until the end of the episode. You can decompose that into the reward you get from this very next action, plus the sum from t plus one until the end of the episode. Pretty simple: the total reward from now until the end is the reward now plus the reward from then until the end. The first part we know, because we've played the episode and recorded what happened. The second part we can simply ask our Q function about, because we also know what state we got into, and as you can see, it is very much the same quantity, just one step later. So we ask our own Q function, which might be imperfect, but is certainly a good guess, and we say: the reward from now on should equal the reward we got plus whatever reward we get later. And yes, you might be astounded that we are using our own neural network, albeit with the parameters from one step ago, to produce our label. But that is exactly what these Q-learning theorems are about.
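In code, that bootstrapped label is only a couple of lines; here is a hedged sketch (tensor shapes and names are my own assumptions):

```python
import torch

def td_target(q_net, rewards, next_states, dones, gamma=0.99):
    """Label y = r + gamma * max_a' Q(s', a'); terminal states use y = r.

    rewards and dones are float tensors of shape (batch,),
    next_states has shape (batch, state_dim).
    """
    with torch.no_grad():                  # the label is not backpropagated into
        next_q = q_net(next_states).max(dim=1).values
    return rewards + gamma * (1.0 - dones) * next_q
```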
These theorems basically say: under some assumptions, if you do this and you iterate, then this converges to the optimal Q function. So, as you can see right here, this is the gradient of the loss. It's astounding that back then they still wrote down the gradient of the loss; almost no one does this anymore, you just put it into TensorFlow and go. They make some remarks here, namely that this algorithm is model-free: there's no model of the environment, you simply learn a function that, for each state, tells you the Q value of each action. All of the logic has to live within the neural network itself, which is pretty cool. And they say it's also off-policy: it learns about the greedy strategy while following a behavior distribution that ensures adequate exploration of the state space. While training, they use this epsilon-greedy strategy: with probability one minus epsilon you follow the greedy strategy, always taking the action with the maximum Q value, and with probability epsilon you select a random action. So while you gather experience, you mostly follow your Q function and ask it what the best thing to do is, but that alone gets you into too much exploitation, so an epsilon fraction of the time you do a bit of exploration and just take a random action.

Alright, so that's basically the algorithm. They have some tricks to get it to work, and the biggest trick is the so-called replay buffer, this experience replay. Think about what happens if you play a game of Atari, Pong specifically: you're here, your opponent is here, and the ball is here; in the next frame, you are here again, your opponent has moved a bit up, and the ball is here; and so on. These samples are all very, very correlated, one after another. If you build a minibatch out of consecutive samples, that minibatch has almost no variability in it, and with something like batch norm this would be terrible, because the data samples are correlated. In supervised learning, we make a pretty big deal out of shuffling our data set and the data points being i.i.d. So what they say is: rather than using the samples as we collect them, we put them into a big replay buffer, and from that buffer we sample at random. That means some samples can be used multiple times and others might never be sampled, because the buffer has a fixed size and new samples kick out the oldest ones. We can also learn, say, four times as fast as we sample, and then every sample will on average be used four times. This experience replay proved very, very important for the algorithm to work; that's why they call it deep Q-learning with experience replay.
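A minimal toy version of such a buffer (my own sketch, not their implementation) could look like this:

```python
import random
from collections import deque

class ReplayBuffer:
    """Fixed-size FIFO of transitions, sampled uniformly to decorrelate batches."""
    def __init__(self, capacity: int = 100_000):
        self.buffer = deque(maxlen=capacity)   # oldest transitions get kicked out

    def store(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size: int = 32):
        batch = random.sample(self.buffer, batch_size)
        return tuple(map(list, zip(*batch)))   # states, actions, rewards, ...
```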
So they have this replay memory D right here with capacity N, and you initialize your Q function with random weights, as you do with a neural network. Then you play these episodes: for each episode, you start out with the state s1 and you do preprocessing. In the preprocessing they have some more tricks: they downscale the image, and they concatenate four frames in a row, because Atari games have these flickering sprites, and also because from four frames in a row you can, for example, tell in which direction the ball is moving. So you give the network a little bit of history, and one sample is technically four frames. They also do things like sticky actions, and so on; all of these things that are almost defaults in today's emulators show up right here.

Then, for the time steps within the episode: with probability epsilon you select a random action; otherwise you ask your Q function, what should I do right here, give me the best action in this particular state. You execute that action and observe a reward and the next state, the next image right here, and you form the next state from this transition. In general the state could contain more than the image, like the previous state and the action you took, but right here I believe it's purely the last four frames. Then you store that transition in the replay buffer. After that, you sample a random minibatch of transitions from the replay buffer; this is where the inputs get decorrelated, because if we simply used our last transition for learning, we would run into the correlation problem. This minibatch is going to be your input, your x, for the supervised learning of the deep neural network. And what's going to be your label y? If you're at the end of the episode, it's simply the reward you got, because there's no more reward coming. However, if you're not at the end, it's the reward you got from this last step plus all of the reward you're going to get in the future. You aren't in the future yet, but you can ask your Q function what that future reward is most likely going to be. If your Q function gets better, this estimate gets better, so your labels get better, so your Q function gets better, and so on, in a big circle. Then you perform a gradient descent step on the L2 loss between the label and your prediction. Note that, in a deep learning framework, there is effectively a stop-gradient on the label right here, so backpropagation only happens with respect to the prediction, which makes sense: this is your x, your input, and f of x is what we usually backpropagate into.

There's no notion yet of a second target Q network and so on, which proved very valuable later in this line of work. This paper simply applied kind of the most basic version, and they got it to work; they just got deep neural networks to work with reinforcement learning. And there's a big chance this was due to experience replay, which I believe they did not invent, it has of course been around before, but they were the ones to realize its importance and combine everything. It's also pretty interesting that the neural network they actually used was super duper small.
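Before getting to that network: putting the algorithmic pieces together, one step of the loop might look roughly like this sketch (it reuses the toy QNetwork and ReplayBuffer from above; everything here is my own hedged pseudocode, not the paper's code):

```python
import random
import torch
import torch.nn.functional as F

def act_epsilon_greedy(q_net, state, n_actions, epsilon=0.1):
    """With probability epsilon explore; otherwise act greedily on Q."""
    if random.random() < epsilon:
        return random.randrange(n_actions)            # explore
    with torch.no_grad():
        return int(q_net(state).argmax())             # exploit

def train_step(q_net, optimizer, states, actions, rewards, next_states, dones,
               gamma=0.99):
    """One gradient descent step on the L2 loss between label and prediction."""
    with torch.no_grad():                             # stop-gradient on the label
        target = rewards + gamma * (1 - dones) * q_net(next_states).max(1).values
    # Prediction: Q(s, a) for the actions that were actually taken.
    pred = q_net(states).gather(1, actions.unsqueeze(1)).squeeze(1)
    loss = F.mse_loss(pred, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```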
The network they used is described as follows: the input to the neural network consists of an 84 by 84 by 4 image produced by the preprocessing. The first hidden layer convolves 16 8 by 8 filters with stride 4 over the input image and applies the rectifier nonlinearity, so the ReLU. The second hidden layer convolves 32 4 by 4 filters with stride 2, again followed by a rectifier nonlinearity. The final hidden layer is fully connected and consists of 256 rectifier units. The output layer is a fully connected linear layer with a single output for each valid action, and the number of valid actions varied between 4 and 18 across the games they considered. As you can see, that neural network is pretty small: it's two conv layers, and, as was in fashion back then, you had big filters, you know, big filters like in AlexNet, but fewer of them than today. Today the trend is toward deeper networks with more filters, but the filters themselves are smaller, often just three by three. Pretty interesting how they did it back then; interesting also that there's no max pooling and so on. So pretty cool.
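For reference, that exact architecture is only a few lines in a modern framework; here's a sketch (variable names are mine, and the spatial sizes in the comments follow from the strides):

```python
import torch
import torch.nn as nn

dqn = nn.Sequential(
    nn.Conv2d(4, 16, kernel_size=8, stride=4),   # 4x84x84 -> 16x20x20
    nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=4, stride=2),  # 16x20x20 -> 32x9x9
    nn.ReLU(),
    nn.Flatten(),                                # 32 * 9 * 9 = 2592 features
    nn.Linear(32 * 9 * 9, 256),
    nn.ReLU(),
    nn.Linear(256, 18),                          # one Q value per valid action
)

q_values = dqn(torch.randn(1, 4, 84, 84))        # shape: (1, 18)
```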
Here they go into the experiments. They show that the average reward in these games is kind of noisy but improves over time, and, especially, that the average Q value of the max action continuously goes up during training, so this is really successful training. Especially nice is this investigative experiment they did, where you can see one example of what the Q function says. Remember, the Q function estimates the future reward, and here we always look at the max action. In the first frame, an enemy has just appeared, and you can see a spike in the Q value from here to here, because you can shoot enemies, and that gives you reward. The enemy isn't shot yet; by the mere appearance of the enemy, the Q function already jumps in value, because it anticipates a future reward. Then the agent shoots, and when the shot is about to land on the enemy, the Q function is very sure that a high reward is coming. But once the enemy is shot, there is no more enemy to be shot, and the Q value drops drastically, because it no longer sees a future reward as likely as at the beginning, when there was a new enemy to shoot. So that's pretty interesting, and you can see quite directly that there is a correlation between what's happening in the game and this learned Q function.

If you compare this to other methods, they point out that most of those have some kind of very special feature engineering. Their method just takes RGB, but the other methods exploit the fact that in these Atari games there are often unique colors for things; you know, the enemies are all green, say, so they make unique channels for those green enemies, or they even have handcrafted object detectors that tell the algorithm where the objects are. So the comparison really isn't fair, and yet the DQN outperforms these others almost everywhere. They also evaluated against a human, and I don't actually know who; they just say an expert human. I have no idea, maybe they just put David Silver in front of the computers: okay, David, here you go. What happened in Pong? Come on, David. But you can see there were still games where the humans were vastly superior, and they mainly attribute this to the difficulty of the problem.

It could also be because, for example, in Breakout there's this most famous example where the agent figured out the strategy of shooting a hole into the wall that you have to break and then shooting the ball up above it, so the ball bounces up and down and you basically win; from then on you just watch the ball go, and the agent does nothing anymore. The deep Q network figured out that strategy, and you need to pull it off very precisely, which of course the computer can do very well. So it sometimes achieves these super high scores by pulling something off precisely. But in games where you have to plan ahead for longer, it kind of fails, and we know that this long-horizon planning was about to be a problem for years to come; it's still not solved. Even methods like Go-Explore, which can solve these kinds of long-exploration games, remain highly controversial, and those are still just games. So we are very much further along than they were in this paper, but we are also basically nowhere yet, if I'm allowed to say that.

I enjoyed reading this paper; it's very well written, provided you already know how to think about reinforcement learning, like what this Q function means and why you would learn it in this way. That part is not super well described; it requires a bit of prior knowledge, not of RL itself, but of how to think about RL. Apart from that, everything is written incredibly well, easy and straightforward, and this was just a nice work for its time. I appreciate it for that. Alright, I'll see you next time, and I appreciate your time too. Bye.
[ { "end": 5.36, "start": 0, "text": " Hi there. Today we'll look at playing Atari with deep reinforcement learning by Vladimir" }, { "end": 13.44, "start": 5.36, "text": " Mn. et al. of DeepMind. So this is another one of our series of impactful past papers." }, { "end": 19.92, "start": 13.44, "text": " This paper right here kicked off an entire revolution in reinforcement learning. Specifically," }, { "end": 25.96, "start": 19.92, "text": " it sort of started the deep reinforcement learning hype. Before that, reinforcement" }, { "end": 31.520000000000003, "start": 25.96, "text": " learning was kind of this weird field of Markov decision processes and so on. Now, I know there" }, { "end": 39.6, "start": 31.520000000000003, "text": " were successes and all and stuff was happening. But this really made a lot of waves because it" }, { "end": 46.400000000000006, "start": 39.6, "text": " brought the power of deep neural networks to reinforcement learning. And with a pretty simple" }, { "end": 54.040000000000006, "start": 46.400000000000006, "text": " application of convolutional networks, managed to solve these reinforcement learning games where" }, { "end": 61.08, "start": 54.04, "text": " previous algorithms really couldn't either or were heavily reliant on hand engineered features." }, { "end": 68.56, "start": 61.08, "text": " So we'll take a look here and what people did back then, what was the state of the art and" }, { "end": 76, "start": 68.56, "text": " what they are telling us about that. Kind of set it in relation to today. Alright, if you do like" }, { "end": 82.12, "start": 76, "text": " papers like this, commentary like this, share it out, leave a like and tell me in the comments what" }, { "end": 89.88000000000001, "start": 82.12, "text": " you think. So let's dive in. They say we present the first deep learning model to successfully learn" }, { "end": 97.16000000000001, "start": 89.88000000000001, "text": " control policies directly from high dimensional sensory input using reinforcement learning. The" }, { "end": 103.36000000000001, "start": 97.16000000000001, "text": " model is a convolutional neural network trained with a variant of Q learning whose input is raw" }, { "end": 110.02000000000001, "start": 103.36000000000001, "text": " pixels and whose output is a value value function estimating future rewards. We apply our method to" }, { "end": 116.8, "start": 110.02, "text": " seven Atari 2600 games from the arcade learning environment with no adjustment of the architecture" }, { "end": 123.44, "start": 116.8, "text": " or learning algorithm. We find that it outperforms all approaches on six of the games and surpasses a" }, { "end": 129.44, "start": 123.44, "text": " human expert on three of them. So there's a lot packed into this. First of all, I wanted to" }, { "end": 141.96, "start": 129.44, "text": " recognize the absolute LaTeX savagery right here. Yeah, you know, it's just just something. I'm not" }, { "end": 148.64, "start": 141.96, "text": " OCD about that kind of stuff. I think we should ditch LaTeX honestly. But so there's a lot of" }, { "end": 155.78, "start": 148.64, "text": " information packed in this abstract right here. So they say this is the first deep learning model" }, { "end": 163, "start": 155.78, "text": " to learn control policies. So that's the task of these reinforcement learning algorithms directly" }, { "end": 169.84, "start": 163, "text": " from high dimensional sensory input using using reinforcement learning. 
So what do they mean this" }, { "end": 174.32, "start": 169.84, "text": " are called learning environment if you don't know what it is, it's basically these old games right" }, { "end": 181.32, "start": 174.32, "text": " here, you can kind of emulate them and run them. And the inputs are always sort of the same. So you" }, { "end": 186.72, "start": 181.32, "text": " have one joystick, I believe. So you have kind of this joystick and it can go into various directions" }, { "end": 192.68, "start": 186.72, "text": " like left, right, up, down, and then also the intermediate directions. And then you have a I" }, { "end": 200.32, "start": 192.68, "text": " think also a button that you can push. And that gives you a total of some somewhere around 16 or" }, { "end": 205.92, "start": 200.32, "text": " 20 actions in each of the games. So the good thing about this environment is that the actions are" }, { "end": 210.95999999999998, "start": 205.92, "text": " always the same. But of course, they mean different things in different games. So the games here," }, { "end": 220.36, "start": 210.96, "text": " you know, for example, pong, or this breakout, these games are kind of really low pixel games," }, { "end": 226.08, "start": 220.36, "text": " as you can see, and they come in form of an image, right. So this is an image, this is like 180" }, { "end": 233.72, "start": 226.08, "text": " pixels, and this is like 150 pixels. And the task here is to learn a policy, which means which" }, { "end": 240.84, "start": 233.72, "text": " buttons and directions you need to push, depending on the observation right here, these pixels," }, { "end": 247.28, "start": 240.84, "text": " and achieve the maximum amount of reward. So reward is given in each game differently as well. For" }, { "end": 254.72, "start": 247.28, "text": " example, in this pong game, the reward is every time you score kind of a goal against your opponent," }, { "end": 261.56, "start": 254.72, "text": " in breakout, you get a reward every time you manage to hit or kill a one of these blocks and" }, { "end": 268.76, "start": 261.56, "text": " so on. So reward is different, but your objective is always to maximize the reward. In a formal" }, { "end": 275.68, "start": 268.76, "text": " framework, you have an agent and an environment. And the environment would always give you an" }, { "end": 283.96, "start": 275.68, "text": " observation, which in this case, the observation is one of these images. And the agent will give" }, { "end": 293.2, "start": 283.96, "text": " back an action. So the action in this case, would be the which button to press or which direction to" }, { "end": 300.4, "start": 293.2, "text": " move the joystick into. And then the environment would get back give back a reward. So the reward" }, { "end": 307.84, "start": 300.4, "text": " is could be, you know, it could be this, you scored a goal. So that's zero most of the time. And" }, { "end": 314.24, "start": 307.84, "text": " sometimes it's one or, or nine. Or it could be like how long you're alive and so on. This is very," }, { "end": 322.8, "start": 314.24, "text": " very, very variable. So the difficulty of reinforcement learning very often is that these" }, { "end": 330.6, "start": 322.8, "text": " episodes can go for a while. So this whole process here will repeat over time. 
And the this can go" }, { "end": 336.52000000000004, "start": 330.6, "text": " on for hundreds of steps or 1000s of steps until you're done, you know, playing a game like this," }, { "end": 344.40000000000003, "start": 336.52000000000004, "text": " like this right here. And the reward can be very sparse. So you might only get a reward at the very" }, { "end": 349.76, "start": 344.40000000000003, "text": " end of the game. Sometimes, most often in these games, you get one in between, but still there" }, { "end": 355, "start": 349.76, "text": " can be multiple time steps where you don't have a reward. And your task is to figure out which of" }, { "end": 362.08, "start": 355, "text": " the actions were the good ones. This is known as the credit assignment problem. And to do the credit" }, { "end": 368.48, "start": 362.08, "text": " assignment problem just from pixels alone, that was unheard of at the time this paper came out." }, { "end": 376.03999999999996, "start": 368.48, "text": " That's why they say we are the first deep learning model to successfully learn the directly from high" }, { "end": 382.44, "start": 376.04, "text": " dimensional sensory inputs. Okay, so the power of deep learning, they argue is that a deep neural" }, { "end": 388.84000000000003, "start": 382.44, "text": " network, a convolutional neural network can extract these high level features by itself. However," }, { "end": 396, "start": 388.84000000000003, "text": " at this time, people only knew that it could do so for supervised learning, basically for every" }, { "end": 401.84000000000003, "start": 396, "text": " input of an image, you had a label. And that's how you train these convolutional neural network." }, { "end": 408.52, "start": 401.84, "text": " Here, it's very different. Here, you will get maybe 1000 of those images. And you'll simply say," }, { "end": 416.2, "start": 408.52, "text": " well, you got a score of 1100. And somehow you need to figure out which ones of these were the" }, { "end": 422.52, "start": 416.2, "text": " ones that gave you the good score. And how to generalize that. So there are various difficulties" }, { "end": 428.84, "start": 422.52, "text": " here to apply convolutional neural networks to this problem. And they have detail right here," }, { "end": 435.28, "start": 428.84, "text": " how they did it. So they say the model is a convolutional neural network, which you know," }, { "end": 442.67999999999995, "start": 435.28, "text": " have been demonstrated. So this is after things like Alex net, though before resnet trained with" }, { "end": 448.52, "start": 442.67999999999995, "text": " a variant of Q learning, whose input is raw pixels, and whose output is a value function" }, { "end": 453.2, "start": 448.52, "text": " estimating the future. So we'll get into Q learning, Q learning is a reinforcement learning" }, { "end": 460.36, "start": 453.2, "text": " algorithm that has been around, you know, for a long time, but just not combined with deep neural" }, { "end": 468.56, "start": 460.36, "text": " networks. And yeah, then they say, the last cool thing here is that they apply them to seven games" }, { "end": 476.15999999999997, "start": 468.56, "text": " with no adjustment of the architecture or learning algorithm. So that they apply the same algorithm," }, { "end": 481.96, "start": 476.15999999999997, "text": " the same hyper parameters to all of the seven games. 
And the model learns all of the seven" }, { "end": 486.68, "start": 481.96, "text": " games, not as a single model, but as seven different models. However, they all have the" }, { "end": 491.2, "start": 486.68, "text": " same hyper parameters. So they don't need to, to tune them, which is an additional benefit," }, { "end": 497.12, "start": 491.2, "text": " like this would have been a cool paper, even if they had to sort of tune the algorithm to each" }, { "end": 503, "start": 497.12, "text": " of the seven games. But you know, they didn't, which makes it even more impressive is kind of" }, { "end": 509.91999999999996, "start": 503, "text": " their point here that these that this reinforcement, deep reinforcement learning can be some sort of a" }, { "end": 516.88, "start": 509.92, "text": " general learning mechanism. Of course, you know, later, there has been like giant amount of" }, { "end": 523.8000000000001, "start": 516.88, "text": " development since this, and people have come up with all kinds of giant architectures and whatever" }, { "end": 531.96, "start": 523.8000000000001, "text": " corrections of policy corrections seem to real continuous control. This is, this is most of it" }, { "end": 540.36, "start": 531.96, "text": " is a derivation of this work right here. And this work, it reads surprisingly simple, I have to say," }, { "end": 548.8000000000001, "start": 540.36, "text": " and it's almost like they had little idea what problems were to be tackled later in RL, because" }, { "end": 553.5600000000001, "start": 548.8000000000001, "text": " it kind of reads like, you know, we have this thing and you can learn general things. So yeah," }, { "end": 560.1600000000001, "start": 553.5600000000001, "text": " that was, we're still not done with reinforcement learning. I guess we we've just started. Okay," }, { "end": 567.68, "start": 560.16, "text": " so in a bit of more formal setting, what you will have in reinforcement learning are these rewards," }, { "end": 573.7199999999999, "start": 567.68, "text": " right? So at each time step, you perform an action, you're in a state. So the we said you" }, { "end": 581.0799999999999, "start": 573.7199999999999, "text": " get back an observation, and we call the observation the state. Now, these two things aren't a sorry," }, { "end": 586.6, "start": 581.0799999999999, "text": " these two things aren't entirely the same thing. So the observation is what you get back directly" }, { "end": 593.48, "start": 586.6, "text": " from the environment. And then the state, it can be something more like if you remember something" }, { "end": 598.5600000000001, "start": 593.48, "text": " from the last observation, that can be part of your state, the state is basically what you base" }, { "end": 603.28, "start": 598.5600000000001, "text": " your decision on. And the observation is the pure thing you get from the environment. Now," }, { "end": 610.48, "start": 603.28, "text": " in this case, they do some processing, but essentially, we'll, we'll regard them as the" }, { "end": 616.76, "start": 610.48, "text": " same thing. So the state is what you see of the environment. Then in each of these steps," }, { "end": 622.36, "start": 616.76, "text": " you perform an action, we'll call that this a thing right here. And in each step, you also" }, { "end": 629.64, "start": 622.36, "text": " get a reward for the last action that you took. And the reward is going to be lowercase r. 
Now," }, { "end": 638.8000000000001, "start": 629.64, "text": " what you want to do formally is you want to maximize the the reward that you get in time t." }, { "end": 649.12, "start": 638.8, "text": " Sorry, you want to, that's the reward you get at time t prime, you what you want to do, if you play" }, { "end": 653.4799999999999, "start": 649.12, "text": " an episode, you're here, and you perform an action, you go here, you perform an action," }, { "end": 658.68, "start": 653.4799999999999, "text": " you go here, you perform an action, you go here, and then your episode is done. For each action," }, { "end": 664.3199999999999, "start": 658.68, "text": " you'll get a reward reward one reward, two reward three, what you want to do is you want to maximize" }, { "end": 670.2800000000001, "start": 664.32, "text": " the total the sum of all of these rewards. So over the course of your episode, you want to collect" }, { "end": 676.9200000000001, "start": 670.2800000000001, "text": " as much reward as you can. There is a discount factor right here, which is sort of saying that" }, { "end": 682.36, "start": 676.9200000000001, "text": " rewards that are very far in the future, they're not as important as rewards right now. However," }, { "end": 690.7600000000001, "start": 682.36, "text": " you can set this to one if you want. So you see you want to maximize the future, the sum of future" }, { "end": 697.4, "start": 690.76, "text": " rewards, which is this thing right here. Okay. So how do you how do you do this? There are two main" }, { "end": 705.56, "start": 697.4, "text": " methods in reinforcement learning. The the first one is called a policy gradient method. And very" }, { "end": 715.28, "start": 705.56, "text": " briefly, a policy, we'll call it pi takes in a state s, and it gives you back an action. Okay." }, { "end": 721.9599999999999, "start": 715.28, "text": " And I mean, this is the same for for Q learning, but in a policy gradient method, which is not" }, { "end": 727.0799999999999, "start": 721.9599999999999, "text": " this paper, but it's like a little bit easier to understand, I believe, we'll simply say, well," }, { "end": 734.04, "start": 727.0799999999999, "text": " we'll simply train a neural network to do that, right. And there is this, there's this policy" }, { "end": 740.64, "start": 734.04, "text": " gradient trick where you can back propagate even though the reward isn't back propagatable, and so" }, { "end": 745.72, "start": 740.64, "text": " on. You can simply say we'll learn a neural network to give us the action that's best, right. So we'll" }, { "end": 751.68, "start": 745.72, "text": " have a neural network, the state goes in neural network, and then we'll just have as many outputs" }, { "end": 758.48, "start": 751.68, "text": " as there are actions like action one, action two, action three, action four, and we'll treat it as a" }, { "end": 765.48, "start": 758.48, "text": " classification problem, right. So you simply train the network to pick the action that is best in" }, { "end": 771.84, "start": 765.48, "text": " this case, and you are, regardless of how you know which action is best right now. In Q learning," }, { "end": 779.32, "start": 771.84, "text": " you do something else, namely, you train this thing called the Q function. 
The Q function is" }, { "end": 789.48, "start": 779.32, "text": " a function that takes in a state and an action, and it gives you what the reward is going to be in" }, { "end": 796.24, "start": 789.48, "text": " the future if you are in this state and perform this action, okay. So you are in a given state," }, { "end": 803.5600000000001, "start": 796.24, "text": " right, and you have three actions at your proposal, okay. You have action one, action two," }, { "end": 811.16, "start": 803.5600000000001, "text": " and action three, and you are in state S. What you would do is you would call the Q function," }, { "end": 816.8000000000001, "start": 811.16, "text": " if you had a perfect Q function, it could give you the reward. You would call the Q function three" }, { "end": 824.7199999999999, "start": 816.8, "text": " times. You would call the Q function first with A1, say what's Q of S and A1, and the Q function" }, { "end": 831.68, "start": 824.7199999999999, "text": " would maybe say that's seven, and here you say what's the Q function of S and A2, and it would" }, { "end": 838.04, "start": 831.68, "text": " maybe say that's four, and here the same thing for A3, and it maybe say that's one. Then you would" }, { "end": 846.28, "start": 838.04, "text": " know, aha, if I take action one, my reward from not only the reward for this step, but my reward" }, { "end": 854.6, "start": 846.28, "text": " from here on until the end of the episode is going to be seven. That is if you had a perfect Q function." }, { "end": 861.48, "start": 854.6, "text": " Now the Q function is always of course conditioned on a policy right here, so there's what it basically" }, { "end": 870.9599999999999, "start": 861.48, "text": " says if I take action A1 right now, and after that I follow policy pi, then I'm going to get the reward" }, { "end": 878.44, "start": 870.96, "text": " of seven. It's a bit of a multi-layered reasoning approach, but ultimately you don't have to" }, { "end": 887.0400000000001, "start": 878.44, "text": " worry much about this being conditioned on a policy. Ultimately the Q function says if you take" }, { "end": 894.32, "start": 887.0400000000001, "text": " this action right now, what will the reward be for the entire rest of the episode? So if you had a" }, { "end": 900.2800000000001, "start": 894.32, "text": " perfect Q function, you could simply ask it about all the actions as we did here, and then pick the" }, { "end": 906.04, "start": 900.28, "text": " action with the highest number. Then you're guaranteed, because there could be a situation" }, { "end": 915.24, "start": 906.04, "text": " where your reward in a single step is going to be very high here, like a hundred, and here is zero," }, { "end": 922.36, "start": 915.24, "text": " zero. You would be tempted to take that action right here, but after that it's just going to be" }, { "end": 929.92, "start": 922.36, "text": " zero, zero, zero, so your total reward is going to be 100. But here, even though it's zero now, it" }, { "end": 938.52, "start": 929.92, "text": " could be that after that it's 50, and then 40, and then 2000, and so on. So your total reward is going" }, { "end": 943.76, "start": 938.52, "text": " to be much, much more. If you were simply to train a function that tells you what's the reward in the" }, { "end": 949.92, "start": 943.76, "text": " next step, then you would lose, because that function would not be able to look ahead sufficiently." 
}, { "end": 954.8199999999999, "start": 949.92, "text": " What we're trying to do with the Q function is we're trying to predict to train a function that" }, { "end": 959.88, "start": 954.8199999999999, "text": " will tell us not only what's the reward in the next step, but what is the reward in all the steps" }, { "end": 966.56, "start": 959.88, "text": " to come from here. Of course, conditioned on all the decisions we make in the future, but that's" }, { "end": 973.32, "start": 966.56, "text": " this policy pi right here. I hope it's somewhat clear what a Q function is. Interestingly, we can" }, { "end": 979.6, "start": 973.32, "text": " take the same network architecture for this. So what you would do naively is you would build a" }, { "end": 984.32, "start": 979.6, "text": " neural network where you say, okay, a Q function takes a state and an action, so I'll put those" }, { "end": 990.9200000000001, "start": 984.32, "text": " into a neural network, and then out comes this estimation of the reward, which we usually call" }, { "end": 999.08, "start": 990.9200000000001, "text": " Q. So the value Q is this estimation of the future reward if you take this action in this state. Now," }, { "end": 1005.5200000000001, "start": 999.08, "text": " the disadvantage here is that we have to call this neural network once for each action in every state" }, { "end": 1010, "start": 1005.5200000000001, "text": " that we're in. So that's like, if there's 10 actions, that's like 10 forward passes. What we" }, { "end": 1016.28, "start": 1010, "text": " could do is we could simply take the same neural network we had for our initial or very initial" }, { "end": 1024.52, "start": 1016.28, "text": " policy method, and we use that and we simply input state and we'll train it to output the Q for" }, { "end": 1033.64, "start": 1024.52, "text": " action one and the Q sorry for the state s and action one, the Q for the state s and action two," }, { "end": 1042.8000000000002, "start": 1033.64, "text": " and so on. So there is going to be this kind of shared encoder. And then that's, it's basically" }, { "end": 1048.5600000000002, "start": 1042.8000000000002, "text": " going to encode the state into a latent space and then classify for each of the actions how" }, { "end": 1057.96, "start": 1048.5600000000002, "text": " valuable this particular action would be in that state. So the this here is called a deep Q network." }, { "end": 1066.56, "start": 1057.96, "text": " Okay, it's a network that takes in a state and gives you back the Q value. Now the problem right" }, { "end": 1073.3600000000001, "start": 1066.56, "text": " here is we, you know, here we said if we had a perfect Q function, a Q function that was always" }, { "end": 1078.8, "start": 1073.3600000000001, "text": " right, then the problem would be solved because we could just ask the Q function what to do. Of" }, { "end": 1084, "start": 1078.8, "text": " course, we don't have a perfect Q function, we need to train it. So how do we train a Q function?" }, { "end": 1091.2, "start": 1084, "text": " And the answer is surprisingly simple. So what you want to do is you want you are in this you're in" }, { "end": 1098.32, "start": 1091.2, "text": " this state. 
And you want to estimate right what your Q value is you want to train your Q network," }, { "end": 1104.2, "start": 1098.32, "text": " what you can do is you can simply play an episode according to the Q function you have, and you'll" }, { "end": 1110.64, "start": 1104.2, "text": " maybe play this episode right here, right? Like you go here and you collect all of this reward. So" }, { "end": 1119.88, "start": 1110.64, "text": " this entire thing now goes into your data set. And then you have a sample, you know, I was here in" }, { "end": 1129.68, "start": 1119.88, "text": " this state s I took action one, and I got in total 2090 as a reward. So that is going to be your" }, { "end": 1136.2, "start": 1129.68, "text": " labeled sample, right? Your labeled sample is going to be s, I was in s, I did a one. And now I" }, { "end": 1145.0800000000002, "start": 1136.2, "text": " have I then got 2090 reward. Cool. And into the next episode, so you're going on playing, and you" }, { "end": 1151.48, "start": 1145.0800000000002, "text": " maybe go down here, and then you get a next training example, I was in state s, I so you keep" }, { "end": 1156.76, "start": 1151.48, "text": " restarting the episode, so you can get into the same state multiple times, I performed a three," }, { "end": 1163.68, "start": 1156.76, "text": " and I got only 100 reward. So that's another training sample. So these training samples right" }, { "end": 1169.2, "start": 1163.68, "text": " here, you can use to train your Q function. This is called online reinforcement learning, you play" }, { "end": 1175.92, "start": 1169.2, "text": " the game at the same time as you train your neural network. And you use that improved neural network" }, { "end": 1184.64, "start": 1175.92, "text": " to play more games. And with time, there is this well known, there is a there are theorems around" }, { "end": 1191.76, "start": 1184.64, "text": " Q learning that say if you do that iteratively, then your Q function will converge to the optimal" }, { "end": 1195.96, "start": 1191.76, "text": " Q functions under some assumptions, which of course not given if this is a deep neural network," }, { "end": 1204.4, "start": 1195.96, "text": " but you know, who cares? Yeah, so formally, your Q function, as you can see right here, is going to" }, { "end": 1214.32, "start": 1204.4, "text": " be there is this Bellman recurrence kind of recurrence property of the Q function. So if I am if I" }, { "end": 1226.4399999999998, "start": 1214.32, "text": " am in a state s, and I'm wondering, what is my Q of my state s and my and an action a. And I said" }, { "end": 1231.76, "start": 1226.4399999999998, "text": " with respect to a policy, which the star policy is going to be the policy where we always select" }, { "end": 1239.04, "start": 1231.76, "text": " the highest Q function. So we'll basically say, we're in state s, we select action a, and after" }, { "end": 1244.8799999999999, "start": 1239.04, "text": " that, we'll just always select whatever the highest scoring action is right like right now, action a" }, { "end": 1250.04, "start": 1244.8799999999999, "text": " might not be the highest scoring, but we'll take a right now. And after that, the highest scoring," }, { "end": 1256.44, "start": 1250.04, "text": " that's Q star, it's a Q function conditioned on the policy where after we perform the first action," }, { "end": 1263.04, "start": 1256.44, "text": " which is a will take always the best one according to the Q function. Right, that's right here. 
So" }, { "end": 1270.1599999999999, "start": 1263.04, "text": " we're in state s, we perform a, and s prime is going to be the state that we are going to. So" }, { "end": 1276.8, "start": 1270.1599999999999, "text": " we're in s, we perform a, we get to s prime. So in s prime, which is a function of your environment," }, { "end": 1282.52, "start": 1276.8, "text": " we're always going to take the maximum action, and r is going to be the reward of the next step. So" }, { "end": 1289.3999999999999, "start": 1282.52, "text": " you can see this recurrence equation right here that Q star can be framed in terms of Q star. So" }, { "end": 1295.8000000000002, "start": 1289.4, "text": " the Q star of this state is going to depend on the Q star of the next state. And you can use that" }, { "end": 1300.92, "start": 1295.8000000000002, "text": " fact and you can, you know, prove that pretty, we've already done it, basically, you can use" }, { "end": 1309.88, "start": 1300.92, "text": " that fact to now train your neural network. So your neural network loss function is going to be the" }, { "end": 1321.2, "start": 1309.88, "text": " following. It's going to say, look, this here is the Q function for state s and action a, that's my," }, { "end": 1327.72, "start": 1321.2, "text": " and this is my neural network telling me how much that's worth. And this is the label, right? So here" }, { "end": 1333.8000000000002, "start": 1327.72, "text": " you have to think in terms of back classic supervised learning, this here is going to be your F of X," }, { "end": 1341.84, "start": 1333.8, "text": " and this here is going to be your Y, and we'll take the squared loss between the two, except your" }, { "end": 1348.6399999999999, "start": 1341.84, "text": " input X is going to be which state am I in and which action am I taking, and your label is going" }, { "end": 1357.3999999999999, "start": 1348.6399999999999, "text": " to be bootstrapped by your own Q function. So your label is going to be the reward you got. Remember," }, { "end": 1364.92, "start": 1357.4, "text": " this comes from a replay buffer, we already played that game. And we already know what happened after" }, { "end": 1371.92, "start": 1364.92, "text": " we performed this action, right? And what happened is we got this reward, and we got into this state." }, { "end": 1379.4, "start": 1371.92, "text": " So we can simply ask our own Q function again, what's the best action to take in this state? And" }, { "end": 1389.76, "start": 1379.4, "text": " what reward would we get? And then we have our label, right? So our label Y is going to be, yeah," }, { "end": 1395.1200000000001, "start": 1389.76, "text": " I was I was I was pretty confused when I learned this the first time. So I'm going to assume some" }, { "end": 1401.3600000000001, "start": 1395.1200000000001, "text": " of you are confused as well. So your Q function is supposed to tell you what's going to be the" }, { "end": 1412.04, "start": 1401.36, "text": " reward from here until the end of the episode, okay? That you can decompose in the reward that" }, { "end": 1419.6399999999999, "start": 1412.04, "text": " you get from this very next action plus the sum from then, so t plus one, until the end of the" }, { "end": 1427.52, "start": 1419.6399999999999, "text": " episode, okay? So t prime, so that's t prime equals t plus one. All right, so pretty simple. 
The total" }, { "end": 1434.12, "start": 1427.52, "text": " reward from now until the end, you can decompose in the reward now plus the reward after that until" }, { "end": 1442.68, "start": 1434.12, "text": " the end. Now, this here, we know we've played the episode, we know what happened. This here, we can" }, { "end": 1449.68, "start": 1442.68, "text": " simply ask our Q function again, because we also know what state we got into. And this, as you can" }, { "end": 1456.6399999999999, "start": 1449.68, "text": " see, is very much this, but just one step later. So we can simply ask our own Q function, which might" }, { "end": 1466.24, "start": 1456.64, "text": " be imperfect, right? But it's certainly a good guess. We say, okay, this reward from now should" }, { "end": 1475.24, "start": 1466.24, "text": " be equal to the reward we got plus whatever reward we get later. And yes, you might be astounded by" }, { "end": 1482.44, "start": 1475.24, "text": " the fact that we are using our own neural network, though be with the parameters one time step ago," }, { "end": 1488.56, "start": 1482.44, "text": " in order to produce our label. But that is exactly what these Q learning theorems are about. They" }, { "end": 1495.56, "start": 1488.56, "text": " basically say under some assumptions, if you do this, and you iterate, then this will converge to" }, { "end": 1503.44, "start": 1495.56, "text": " the optimal Q function. So as you can see right here, this is the this is the gradient of the loss." }, { "end": 1508.3200000000002, "start": 1503.44, "text": " It's astounding that back then, they still wrote down the gradient of the loss, like almost no one" }, { "end": 1516.24, "start": 1508.32, "text": " does this. Now, you just say, put this into TensorFlow and go. Yeah, so they make some remarks" }, { "end": 1523.12, "start": 1516.24, "text": " here, namely that this algorithm is model free, right? There's no model of the environment, you" }, { "end": 1530.6399999999999, "start": 1523.12, "text": " simply learn a function that for each state tells you the Q value for each action. That's, that's" }, { "end": 1537.8, "start": 1530.6399999999999, "text": " all everything, everything that all the logic needs to be within the neural network itself. So that's" }, { "end": 1546.08, "start": 1537.8, "text": " pretty cool. And they say it's also off policy, it learns about the greedy strategy while following" }, { "end": 1551.44, "start": 1546.08, "text": " a behavior distribution that ensures adequate exploration of the state space. So while while" }, { "end": 1557.32, "start": 1551.44, "text": " training, they do this epsilon greedy strategy that follows the greedy strategy, which is where" }, { "end": 1563.2, "start": 1557.32, "text": " you always take the maximum with one minus epsilon selects a random action with probability epsilon." }, { "end": 1569.72, "start": 1563.2, "text": " So while you do your experience, you follow your Q function, you always ask the Q function, what's" }, { "end": 1576.72, "start": 1569.72, "text": " the best thing to do right here. But you know, that's, that gets you into too much of exploitation." }, { "end": 1582.68, "start": 1576.72, "text": " So in epsilon amount of time, you want to do a bit of exploration and just take a random action." }, { "end": 1590.68, "start": 1582.68, "text": " Alright, so that's basically the algorithm. So the algorithm is right here. And they have some tricks" }, { "end": 1597.76, "start": 1590.68, "text": " to get it to work. 
And the biggest trick they got it to work is the so called replay buffer, this" }, { "end": 1604.72, "start": 1597.76, "text": " experience replay, because what happens if you play a game of Atari, right, of pong, specifically," }, { "end": 1610, "start": 1604.72, "text": " then, you know, you have this and you're here and your opponent is here and the ball is here. And" }, { "end": 1618.1200000000001, "start": 1610, "text": " then the next frame, you are here again, your opponent might be a bit up, and the ball is here." }, { "end": 1625.84, "start": 1618.12, "text": " Okay, and so on. So these samples here, they are all very, very correlated, right, the ones after" }, { "end": 1630.76, "start": 1625.84, "text": " another, especially if you now build mini batch, let's say, or mini batch sizes to this mini batch" }, { "end": 1636.52, "start": 1630.76, "text": " has almost no variability in it. So if you've had something like batch norm or whatnot, this, this" }, { "end": 1642.4799999999998, "start": 1636.52, "text": " will be like terrible, because these data samples are correlated. And we in supervised learning," }, { "end": 1647.6, "start": 1642.4799999999998, "text": " we make a big, pretty big deal out of, you know, shuffling our data set and all of the data points" }, { "end": 1655.1999999999998, "start": 1647.6, "text": " being ID and so on. So what they say is, rather than using the data samples, as we collect them," }, { "end": 1662.08, "start": 1655.1999999999998, "text": " we put them into a big, big buffer, a big replay buffer. And from that replay buffer, we basically" }, { "end": 1670.1599999999999, "start": 1662.08, "text": " sample at random. Okay, so that means that, you know, some samples can be used multiple times," }, { "end": 1676.4399999999998, "start": 1670.1599999999999, "text": " other samples can be never sampled, because there is a fixed size, and the new ones will always kick" }, { "end": 1680.96, "start": 1676.44, "text": " out the oldest ones. So some samples might not be used, some samples might be used twice or three" }, { "end": 1687.16, "start": 1680.96, "text": " times, we can also learn, you know, four times as fast as we sample, and then every sample on" }, { "end": 1693.88, "start": 1687.16, "text": " average will be used four times. So this, this experience replay proved very, very important for" }, { "end": 1699.56, "start": 1693.88, "text": " this algorithm to work. That's why they say deep Q learning with experience replay. So they have" }, { "end": 1707.04, "start": 1699.56, "text": " this replay memory D right here to capacity n. And you initialize your Q function with random" }, { "end": 1715.56, "start": 1707.04, "text": " weights as you do with a neural network. And then you play these episodes for each episode," }, { "end": 1722.24, "start": 1715.56, "text": " you start out with s one, the state one, and you do pre processing. So in pre process, they have" }, { "end": 1730.64, "start": 1722.24, "text": " some more tricks where they downscale the image, they concatenate four images in a row, because" }, { "end": 1735.44, "start": 1730.64, "text": " sometimes in Atari get these flicker things. And also, if you concatenate four things in a row," }, { "end": 1741.56, "start": 1735.44, "text": " you, for example, can tell it in which direction the ball is moving, and so on. So give a little" }, { "end": 1747.8, "start": 1741.56, "text": " bit of history. 
So one sample technically would be four frames, they also do sticky actions, and so" }, { "end": 1753.72, "start": 1747.8, "text": " on all of these things that you can find today in these emulators that are almost default now," }, { "end": 1761.72, "start": 1753.72, "text": " like sticky actions, they invented right here. So for the time steps within the episode, we want to" }, { "end": 1768.56, "start": 1761.72, "text": " we've probability epsilon select a random action. Otherwise, just ask your Q function, what should I" }, { "end": 1774.52, "start": 1768.56, "text": " do right here? Give me the best action in this particular state, then you would execute that" }, { "end": 1783.8799999999999, "start": 1774.52, "text": " action and observe a reward and the next state. So the next image right here, you would set the" }, { "end": 1790.48, "start": 1783.8799999999999, "text": " next state to this transition. Okay, so in the state, there can be more, as I said, there can be" }, { "end": 1795.6399999999999, "start": 1790.48, "text": " more than the image like the previous state, and the action you took, but right here, I believe it's" }, { "end": 1805.6000000000001, "start": 1795.64, "text": " like purely the current last four frames. And then you store that transition in the replay buffer." }, { "end": 1811.3200000000002, "start": 1805.6000000000001, "text": " After that, you sample a random mini batch of transitions from the replay buffer. So here," }, { "end": 1818.5600000000002, "start": 1811.3200000000002, "text": " you can see this here is where we de correlate the inputs, because if we simply were to use our last" }, { "end": 1826.72, "start": 1818.56, "text": " transition for learning, then we would run into a problem. But right here, we sample from that" }, { "end": 1833.84, "start": 1826.72, "text": " replay buffer. So this is going to be your input, this is going to be your X for your supervised" }, { "end": 1839.72, "start": 1833.84, "text": " learning of the deep neural network, what's going to be your sorry, without the reward, of course," }, { "end": 1846.24, "start": 1839.72, "text": " what's going to be your y, your y, if you're at the end of the episode, it's simply the reward" }, { "end": 1850.56, "start": 1846.24, "text": " that you got, because there's no more reward coming. However, if you're not at the end," }, { "end": 1857.08, "start": 1850.56, "text": " it's the reward that you got from this last step, plus all of the reward that you're going to get" }, { "end": 1863.72, "start": 1857.08, "text": " in the future. Now, you aren't in the future yet, but you can ask yourself, you can ask your Q" }, { "end": 1869.92, "start": 1863.72, "text": " function, what that reward is most likely going to be. If your Q function gets better, and this" }, { "end": 1874.92, "start": 1869.92, "text": " estimate gets better, and your labels get better, then your Q function gets better, and so on in a" }, { "end": 1882.96, "start": 1874.92, "text": " big circle. And then you perform a gradient descent step on this L2 loss between the label and your" }, { "end": 1891.3600000000001, "start": 1882.96, "text": " prediction. Note that there, if you are in a deep learning framework, there is like a stop gradient" }, { "end": 1897.5600000000002, "start": 1891.3600000000001, "text": " on this label right here. So the back propagation only happens with respect to this right here," }, { "end": 1903.8000000000002, "start": 1897.5600000000002, "text": " which makes sense, right? 
So this is your X, this is your input, and f of X is usually what we back" }, { "end": 1912.04, "start": 1903.8, "text": " propagate into. Okay, there's no notion yet of like a second Q network and so on, which proved very" }, { "end": 1918.6, "start": 1912.04, "text": " valuable in the future of this paper. This paper simply applied kind of the most basic version of" }, { "end": 1926.24, "start": 1918.6, "text": " this, and they simply got it to work. They just got deep neural networks to work with reinforcement" }, { "end": 1934.56, "start": 1926.24, "text": " learning. And yeah, there's a big chance that this was due to this experience replay, which I believe" }, { "end": 1943.8, "start": 1934.56, "text": " they did not invent. I mean, this has, of course, been around before, but they were the ones to" }, { "end": 1950.52, "start": 1943.8, "text": " realize and combine and do that. It's also pretty interesting, the neural network that they actually" }, { "end": 1960.44, "start": 1950.52, "text": " used was like super duper small. The input to the neural network consists of 84 by 84 by 4 image" }, { "end": 1967.62, "start": 1960.44, "text": " produced by this. So this is the pre-processing. The first hidden layer convolves 16 8 by 8 filters" }, { "end": 1974.8799999999999, "start": 1967.62, "text": " with stride 4 with the input image and applies the rectifier non-linearity. So the ReLU. The" }, { "end": 1979.92, "start": 1974.8799999999999, "text": " second hidden layer convolves 32 4 by 4 filters with stride 2, again followed by rectifier" }, { "end": 1986.3200000000002, "start": 1979.92, "text": " non-linearity. The final layer hidden layer is fully connected and consists of 256 rectifier" }, { "end": 1991.4, "start": 1986.3200000000002, "text": " units. The output layer is a fully connected linear layer with single output for each valid" }, { "end": 1999.6000000000001, "start": 1991.4, "text": " action. Number of valid actions is vary between 4 and 18 on the games we considered. Okay, as you" }, { "end": 2005.76, "start": 1999.6000000000001, "text": " can see that neural network is pretty small, it's two conv layers. And as was in fashion back then," }, { "end": 2012.72, "start": 2005.76, "text": " you had like big filters. So you know big filters from like Alex net. Big filters," }, { "end": 2018.72, "start": 2012.72, "text": " but fewer than today. So today, the trend is more like deeper layers, more filters," }, { "end": 2024.96, "start": 2018.72, "text": " but they are not as big. They're like three by three filters today only. Yeah, pretty interesting" }, { "end": 2033.84, "start": 2024.96, "text": " how they did it back then. Interesting also no max pooling and so on. So pretty cool. And here" }, { "end": 2040.48, "start": 2033.84, "text": " they go into experiments. So they show that their average reward in these games is kind of noisy," }, { "end": 2046.9599999999998, "start": 2040.48, "text": " but it improves over time, especially also if you look at the average queue of the max action," }, { "end": 2055.2799999999997, "start": 2046.9599999999998, "text": " it continuously goes up during training. So this is really a successful training, especially this" }, { "end": 2061, "start": 2055.2799999999997, "text": " investigative experiment they did right here, where you can see one example of how the queue" }, { "end": 2068.96, "start": 2061, "text": " function, what the queue function says. 
Remember, the queue function gives us the whatever the future" }, { "end": 2074.96, "start": 2068.96, "text": " reward is going to be. Okay. And here we always look at the max action. So in this first frame," }, { "end": 2083.68, "start": 2076, "text": " you can see this enemy had just appeared. And you can see that from here to here, there's a spike in" }, { "end": 2090.72, "start": 2083.68, "text": " the queue value because you can shoot enemies. And that gives you reward. The A this is already so" }, { "end": 2096.7999999999997, "start": 2090.72, "text": " the enemy isn't shot yet by the simple appearance of the enemy, the queue function also like already" }, { "end": 2105.68, "start": 2096.7999999999997, "text": " jumps in value, because it anticipates a future reward, right, then the the agent shoots. And" }, { "end": 2112.3999999999996, "start": 2106.3999999999996, "text": " you can see here the shot is about to land at the enemy. And that's when we're here. So this now the" }, { "end": 2118.3199999999997, "start": 2112.3999999999996, "text": " queue function is very sure that in the future, there's going to be a high reward. But then once" }, { "end": 2128, "start": 2118.32, "text": " the once the enemy is shot, then there is no more enemy to be shot. And the queue function drops" }, { "end": 2135.36, "start": 2128, "text": " drastically, because it doesn't see a future reward as being as likely as at the beginning when there" }, { "end": 2140.88, "start": 2135.36, "text": " was this new enemy to be shot. So that's, you know, pretty interesting. And you can see pretty" }, { "end": 2147.04, "start": 2140.88, "text": " directly that there is a correlation between what's happening in the game and this learned queue" }, { "end": 2156.16, "start": 2147.04, "text": " function. If you compare this to other methods, and they really say that these other methods," }, { "end": 2163.84, "start": 2156.16, "text": " most of them have some kind of very special feature engineered, like, so their method just takes RGB," }, { "end": 2168.08, "start": 2163.84, "text": " but the other methods recognize that, oh, in these Atari games, most of the time, you know," }, { "end": 2173.52, "start": 2168.08, "text": " there are unique colors for the things. So you know, the enemies are all like green, like, and they" }, { "end": 2180.08, "start": 2173.52, "text": " make unique channels for those green enemies, or they even have handcrafted object detectors," }, { "end": 2186.56, "start": 2180.08, "text": " and tell the algorithm where these objects are. So the comparison really isn't fair. Yet," }, { "end": 2195.52, "start": 2187.6, "text": " the DQN outperform these others like almost everywhere. And they also evaluated against a" }, { "end": 2203.44, "start": 2195.52, "text": " against a human. And I don't actually know they just say an expert human. I have no idea. Maybe" }, { "end": 2210.8, "start": 2203.44, "text": " just put David Silver in front of computers like, okay, David, here you go. And you can you can," }, { "end": 2219.36, "start": 2211.36, "text": " like what happened in Pong? Like, come on, David. But you can see there, there were still problems" }, { "end": 2225.84, "start": 2219.36, "text": " where the humans were vastly superior. And they mainly attribute this to the difficulty of the" }, { "end": 2232.8, "start": 2225.84, "text": " problem. 
And it could also be because for example, in breakout, there's this this kind of the most" }, { "end": 2240.96, "start": 2232.8, "text": " famous example, where the agent kind of figured out this strategy of shooting the ball, shooting" }, { "end": 2246.96, "start": 2240.96, "text": " like a hole into this wall that you have to break, and then shooting the ball up here. So the ball" }, { "end": 2252.8, "start": 2246.96, "text": " bounces up and down. And basically, you win. From then on, you just watch the ball go. And the agent" }, { "end": 2258, "start": 2252.8, "text": " does nothing anymore. So this deep Q networks figured out that strategy, and you need to pull" }, { "end": 2265.04, "start": 2258, "text": " it off very precisely, which of course, the the computer can do very well. So it sometimes achieves" }, { "end": 2271.04, "start": 2265.04, "text": " these super high scores by pulling something off precisely. But in games where they say where you" }, { "end": 2278.96, "start": 2271.04, "text": " have to plan ahead for longer, it it kind of fails. And we know that this long planning was about to" }, { "end": 2287.44, "start": 2278.96, "text": " be a problem for years to come. And it's still not solved. So still, go explore is highly controversial" }, { "end": 2293.92, "start": 2287.44, "text": " that can solve these kind of long exploration games. And those are still games, right? So we are" }, { "end": 2301.84, "start": 2293.92, "text": " basically not we are very much further than they were in this paper. But also, we are basically no" }, { "end": 2310.08, "start": 2302.4, "text": " nowhere yet. Yeah, if I'm if I'm allowed to say that. So I enjoyed reading this paper, I" }, { "end": 2317.2, "start": 2310.08, "text": " this is it's very it's very well written. If you somehow know how to think about reinforcement" }, { "end": 2323.84, "start": 2317.2, "text": " learning, like this, this Q function, what the Q function means, and why you would learn it in this" }, { "end": 2330.56, "start": 2323.84, "text": " way. I find this is not super well described, this kind of requires a bit of a knowledge of not of" }, { "end": 2338, "start": 2330.56, "text": " RL, but just of how to think of RL. But apart from this, everything else is written incredibly" }, { "end": 2344.72, "start": 2338, "text": " well, easy, straightforward. And you know, this was just a nice work of its time. And I appreciate" }, { "end": 2371.52, "start": 2344.72, "text": " it for that. Alright, I'll see you next time. And I appreciate your time too. Bye." } ]
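A minimal sketch of the deep Q-learning loop described in the transcript above, with epsilon-greedy acting, a fixed-size replay buffer, and the bootstrapped target behind a stop-gradient. This is not the paper's code: the environment is stubbed out with random transitions, and the little network and all hyperparameters here are illustrative assumptions, not the paper's CNN or settings.

```python
# Hedged sketch of deep Q-learning with experience replay (illustrative, not the paper's code).
import random
from collections import deque

import torch
import torch.nn as nn

n_actions, obs_dim, gamma, eps = 4, 16, 0.99, 0.1

# A small stand-in Q-network; the paper used a CNN over four stacked frames.
q_net = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, n_actions))
opt = torch.optim.Adam(q_net.parameters(), lr=1e-3)

replay = deque(maxlen=10_000)  # fixed capacity: new transitions kick out the oldest

def act(state):
    # Epsilon-greedy: random action with probability eps, else argmax of Q.
    if random.random() < eps:
        return random.randrange(n_actions)
    with torch.no_grad():
        return q_net(state).argmax().item()

def train_step(batch_size=32):
    if len(replay) < batch_size:
        return
    batch = random.sample(replay, batch_size)  # random sampling decorrelates the inputs
    s, a, r, s2, done = map(torch.stack, zip(*batch))
    # Bootstrapped target: r + gamma * max_a' Q(s', a'); no gradient flows through it.
    with torch.no_grad():
        target = r + gamma * q_net(s2).max(dim=1).values * (1 - done)
    pred = q_net(s).gather(1, a.long().unsqueeze(1)).squeeze(1)
    loss = nn.functional.mse_loss(pred, target)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Dummy interaction loop: random transitions stand in for the Atari emulator.
for _ in range(200):
    s = torch.randn(obs_dim)
    a = act(s)
    r, s2, done = torch.randn(()), torch.randn(obs_dim), torch.zeros(())
    replay.append((s, torch.tensor(float(a)), r, s2, done))
    train_step()
```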
Nq3auVtvd9Q
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
[Classic] ImageNet Classification with Deep Convolutional Neural Networks (Paper Explained)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "classic", "alexnet", "hinton", "geoff hinton", "imagenet", "convolution", "convolutional neural network", "architecture", "dropout", "data augmentation", "cnns", "computer vision", "image classification", "object recognition", "classifier", "max pool", "pretraining", "deep neural networks" ]
#ai #research #alexnet AlexNet was the start of the deep learning revolution. Up until 2012, the best computer vision systems relied on hand-crafted features and highly specialized algorithms to perform object classification. This paper was the first to successfully train a deep convolutional neural network on not one, but two GPUs and managed to outperform the competition on ImageNet by an order of magnitude. OUTLINE: 0:00 - Intro & Overview 2:00 - The necessity of larger models 6:20 - Why CNNs? 11:05 - ImageNet 12:05 - Model Architecture Overview 14:35 - ReLU Nonlinearities 18:45 - Multi-GPU training 21:30 - Classification Results 24:30 - Local Response Normalization 28:05 - Overlapping Pooling 32:25 - Data Augmentation 38:30 - Dropout 40:30 - More Results 43:50 - Conclusion Paper: http://www.cs.toronto.edu/~hinton/absps/imagenet.pdf Abstract: We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0% which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully-connected layers we employed a recently-developed regularization method called “dropout” that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry. Authors: Alex Krizhevsky, Ilya Sutskever, Geoffrey E. Hinton Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://discord.gg/4H8xxDF BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher Parler: https://parler.com/profile/YannicKilcher LinkedIn: https://www.linkedin.com/in/yannic-kilcher-488534136/ If you want to support me, the best thing to do is to share out the content :) If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this): SubscribeStar (preferred to Patreon): https://www.subscribestar.com/yannickilcher Patreon: https://www.patreon.com/yannickilcher Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2 Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n
Hi there! Today we'll look at ImageNet Classification with Deep Convolutional Neural Networks by Alex Krizhevsky, Ilya Sutskever and Geoffrey E. Hinton. This paper is another installment of our historical paper overview, where we go through kind of old papers that were, or weren't, very impactful and see what people knew at the time already, how this developed and so on. Of course this paper here, also known as AlexNet, was the one that started the deep learning revolution, so to say, or at least contributed in large part to it. It was the first paper that showed that you could train these very deep neural networks, and very deep here is a relative term, but the first one that showed that you could actually use CUDA GPUs to train those large networks efficiently, and it won the ImageNet competition that year, and it did so by a very, very large margin. So it kind of shook the world, because previously computer vision was still doing hand-engineered features and then using some kind of classifiers on top of those. This paper basically changed everything. So we'll go through the paper and we'll see what was already known, and, what I always enjoy with these papers, how the choices that people made back then pulled through to today: which arbitrary choices that Alex Krizhevsky made right here are we still making today, and what have we learned since then. The paper is written relatively straightforwardly, I have to say. It's a good read if you want to read it, and it sort of gives you a little bit of an intuition of how much work must have gone into this, which is, I guess, a lot. So they start off by saying that current approaches to object recognition make essential use of machine learning methods. This was also new, right? Object recognition wasn't always learned. You could build object recognizers in entirely different ways, like matching templates and so on. Machine learning was still just one of the methods used, and of course today it's the method used. To improve their performance we can collect larger datasets, learn more powerful models, and use better techniques for preventing overfitting. Until recently, datasets of labeled images were relatively small, on the order of tens of thousands of images. So this is especially NORB, or here CIFAR-10 or CIFAR-100; these are relatively small datasets with relatively small images as well, like CIFAR-10 images are 32 by 32 pixels. So they're saying that these small datasets you can solve with classical computer vision models, but if you have larger datasets, and especially more realistic datasets with bigger resolution and so on, you need bigger models. So they say: but objects in realistic settings exhibit considerable variability; to learn to recognize them, it is necessary to use much larger training sets. And they say that this ImageNet dataset is one of those larger datasets; it consists of 15 million labeled high-resolution images in over 22,000 categories. People keep forgetting this, and I am included in that group of people, that the ImageNet dataset is actually much larger than the one we mean when we talk of ImageNet. When we speak of ImageNet we think of the ImageNet that has a thousand classes and about one or one and a half million images. However, that's only a subset of the much, much larger ImageNet dataset with many, many more categories.
It's just that the ImageNet competitions were performed on this subset, because I guess people thought, well, a thousand classes and a million images is already plenty, so we'll do that. So that's, I guess, how that came to be. So their argument is right here: to learn about thousands of objects from millions of images, we need a model with a large learning capacity. However, the immense complexity of the object recognition task means that this problem cannot be specified even by a dataset as large as ImageNet, so our model should also have lots of prior knowledge to compensate for all the data we don't have. So their main argument for using neural networks is that the size of the dataset is so large, therefore we need a large model. Granted, they already recognize the inherent connection between large models and a lot of complex data, but in the opposite direction they say, well, even if we have that much data, the task we are trying to solve, object recognition, is way more complicated than the amount of data we have, so our model should also have lots of prior knowledge to compensate for all the data we don't have. Remember, at this time convolutional neural networks weren't really known to do anything. I guess they were used for handwritten digit recognition and so on, and were kind of on par with other methods, but it wasn't obviously clear that you would use them for image recognition. So here they have to make an argument to convince people that, okay, we can use neural networks for this task because they have such a high capacity. However, plain feed-forward neural networks are already too powerful: they don't know anything about the data, everything's connected to everything, and they argue right here that our model should have lots of prior knowledge to compensate for all the data we don't have. So they allude to that: convolutional neural networks constitute one such class of models. Their capacity can be controlled by varying their depth and breadth, and they also make strong and mostly correct assumptions about the nature of images, namely stationarity of statistics and locality of pixel dependencies. So their argument here is that the convolutional operation is such a strong prior, and one mostly consistent with what we know about images, that CNNs are very well suited to computer vision. Again, something that was not as abundantly clear at the time as it is right now. It's interesting to see how they get to this point where they say we need lots of capacity, but we also need a model with lots of prior knowledge, and of course CNNs fit that very well.
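To make the stationarity and locality point concrete, here's a tiny numpy sketch of a 2D convolution (strictly, a cross-correlation); this is just an illustration, not anything from the paper: one small shared kernel slides over the whole image (stationarity of statistics), and every output pixel only ever sees a local patch (locality of pixel dependencies).

```python
# Minimal 2D cross-correlation to illustrate the two CNN priors.
import numpy as np

image = np.random.rand(8, 8)    # toy grayscale image
kernel = np.random.rand(3, 3)   # ONE shared 3x3 filter, reused everywhere

out = np.zeros((6, 6))          # "valid" output size: (8-3+1) x (8-3+1)
for i in range(6):
    for j in range(6):
        patch = image[i:i + 3, j:j + 3]     # locality: only a 3x3 neighborhood
        out[i, j] = np.sum(patch * kernel)  # stationarity: same weights at every location

print(out.shape)  # (6, 6)
```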
So then they go into the problems of CNNs: despite their attractive qualities, and despite the relative efficiency of their local architecture, they are still prohibitively expensive to apply at large scale to high-resolution images. Luckily, current GPUs, paired with a highly optimized implementation of 2D convolution, are powerful enough to facilitate the training of interestingly large CNNs, and recent datasets such as ImageNet contain enough labeled examples to train such models without severe overfitting. So overfitting was also still very much at the forefront of people's minds back then. Right now we don't really care about overfitting that much anymore. Basically, we figured out that if we just build large enough models, we don't overfit, which is strange in itself, like this double descent phenomenon and so on. But overfitting was still very much at the forefront of people's minds, and they do a lot of things here to prevent overfitting, which gives them kind of a boost in test accuracy, even though it might actually not have been overfitting that they were combating. So they do, for example, data augmentation already in this paper, and they always allude to how this is there to prevent overfitting. However, we know nowadays that it might not be overfitting that's combated by data augmentation; it might actually have something to do with regularizing your function, making it more smooth, and so on. So you just see how, coming from a classical machine learning perspective, overfitting was one of the number one problems in classical machine learning, in SVMs and things like this. So it's safe to say that they thought, if we build these large models, we're going to have a huge overfitting problem, and that's why this pulls through right here. Also, I guess one of the main contributions of this paper is to show that you can combine CNN training with GPUs. Also not at all clear at the time: it was known that you could do computation on GPUs, but the fact that these are, you know, very capable for training these CNNs, or generally neural networks, wasn't something that was widely known at the time. So this paper basically showed that if you use a GPU, you can train that much faster, and that makes it possible to train these big neural networks. Again, right here: the size of our network made overfitting a significant problem, even with 1.2 million labeled training examples, so we used several effective techniques for preventing overfitting, and we'll look at those. And at the end they say: the network's size is limited mainly by the amount of memory available on current GPUs and by the amount of training time that we are willing to tolerate. Our network takes between five and six days to train on two GTX 580 GPUs. All of our experiments suggest that our results can be improved simply by waiting for faster GPUs and bigger datasets to become available. And I mean, that proved to be absolutely true. We don't necessarily have bigger datasets right now, though we do, but certainly with faster and bigger GPUs these networks became better simply by increasing their depth, and as you know, then ResNets came along, increasing the depth by an order of magnitude, and that gave another boost to computer vision. Alright, so they talk about the ImageNet dataset here, and the main point about the ImageNet dataset right here is the fact that the images are plentiful. There are over a million training images in this subset, with a thousand classes, which was, you know, very big: CIFAR-10 had 10 classes, CIFAR-100 had a hundred classes, and that was already a lot. A thousand classes, that is like unheard of before this dataset; I guess not unheard of, but yeah. And a million training images, completely crazy. And not only was it a lot of images, their resolution was really big too, on the order of 256 by 256, whereas previous methods all worked on things like 32 by 32. So it's definitely a challenging dataset, even today. Alright, so the architecture.
The architecture. There's this famous graphic right here of the AlexNet architecture. So briefly, they describe these convolutional layers right here; as you can see, there's max pooling already here; they have dense layers at the end; and they generally increase the number of feature maps while decreasing the resolution with max pooling. All of this has sort of, you know, been kept until today. I guess they also took it from earlier work on convolutional neural networks that generally found this to be a good idea. And the important part here that is kind of special to AlexNet is, you can see, there are these two different pipelines. And Alex cut off this part of the figure right here; I mean, you just know how it went: this has to fit eight pages, we have like three lines too much, how can we fit the three lines, we've already cropped everything, let's just cut off the top half here, it's essentially the same as the bottom. Yeah, so space constraints and PDFs for conference submissions ruining yet another paper. Alright, but you can see there is this two-column architecture right here. This network was so large that it didn't fit on one GPU, so they had to split it onto two GPUs with occasional intercommunication. You can see here there is intercommunication between the two GPUs, and there is also no intercommunication right here on this layer. This was very intricate, and it's one thing that really didn't hold until today; I guess until now, with things like, I don't know, GShard or so, where you have different weights on different GPUs. Again, I guess the invention of bigger GPUs made that sort of superfluous. But just imagine the amount of code they had to write. There was no TensorFlow at this point; I don't think there was even Caffe around; there was just CUDA. And yeah, just this cross-GPU memory writing, I imagine this to be so, so ugly, and big respect for writing all of this code. Alright, so they go through a number of important things, and most of the things here aren't their invention, let's say, but they cleverly combine things that were already known about neural networks, and things that were maybe developed somewhere else, that they found to work really well. So the first one is the ReLU non-linearity. Now, of course, ReLU is nowadays abundant, everyone uses ReLU non-linearities, but at that time it was still very much in fashion to use something like the sigmoid right here, or the hyperbolic tangent. And why is that? Because the neural networks were still kind of inspired by real neurons, where you had the soma of the neuron and then the dendrites with the input axons, and you would sum up all the incoming signals. So in the true neuron you have this kind of curve where, if the input rises above this border right here, the action potential (maybe, I don't know what the English term is), then the neuron starts to spike, and if it's below that, it doesn't. So people wanted to approximate this using some sort of differentiable function that's very similar to this step function, and that ultimately led to something like a sigmoid or a hyperbolic tangent. So people trying to stay close to biological neurons did this, but that gives you the problem that in this region and in this region right here you have almost no gradient to learn from.
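Just to put numbers on the saturation argument, a toy sketch (my own illustration, not from the paper): the gradient of tanh collapses once the pre-activation is even moderately large, while the ReLU gradient stays at 1 for any positive input.

```python
import numpy as np

x = np.array([0.5, 2.0, 5.0, 10.0])

tanh_grad = 1.0 - np.tanh(x) ** 2   # derivative of tanh saturates toward 0
relu_grad = (x > 0).astype(float)   # derivative of ReLU is 1 for all positive inputs

print(tanh_grad)  # roughly [0.79, 0.071, 0.00018, 8e-09]: vanishing gradient
print(relu_grad)  # [1. 1. 1. 1.]
```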
So you can see that they argue that, in terms of training time with gradient descent, the saturating non-linearities, so the hyperbolic tangent and the sigmoid, are much slower than the non-saturating non-linearity, this one. Following Nair and Hinton, they refer to neurons with this non-linearity as rectified linear units. So, taken from that other paper, they say, okay, we use these ReLUs, these rectified linear units, which are not exactly like real biological neurons, but they train much faster. And of course ReLUs are used until this day. So you can see right here, this is on CIFAR-10, and they measure the time to reach 25% training error; this here is with the ReLUs and this here is with the hyperbolic tangent, and it takes much longer with the hyperbolic tangent, about six times longer than with the ReLUs. And they say that's one of the main components that allows them to learn this fast, to even experiment with these big networks, because their entire training time is six days, right? But they probably didn't train it only once; they experimented with it and saw what works. So if you have a couple of months of time, and it takes you a week to train one of these things, you can't afford a six-times slowdown, because that would mean you can only train like two models in the entire course of research, and that would severely hinder your progress. Now we are at the point where that becomes true again with these giant transformer language models, where people can train them only once; you know, like GPT-3, where they say, oh, we discovered a bug halfway through, and we've kind of fixed it, but we're not sure, and we couldn't restart because it was too expensive. I'm still saying we're waiting for the ResNet moment in transformers. But yeah, ReLUs: you know, not introduced here, but used here, and they have been prevailing until today. Training on multiple GPUs: something, as I said, that didn't really carry forward from here, at least not this kind of GPU training. If we train on multiple GPUs today, what we mean is that we have our model, right, and we replicate it onto multiple GPUs like this, and then we take a mini-batch from the training data and simply split it up, let each GPU do its thing on its subset of the mini-batch, and at the end calculate the loss, back-propagate the gradients, and synchronize the gradients between the GPUs. So we have one model that is on both GPUs. Here, they distribute one model across two GPUs. And I'm also thinking that with frameworks like GShard this could potentially have a revival, this kind of distributing your model, especially within the same layer, across many GPUs, and then having cross-communication only at some points. So their argument is: the GPU has only three gigabytes of memory, which limits the maximum size of the networks that can be trained on it. It turns out that 1.2 million training examples are enough to train networks which are too big to fit on one GPU; therefore we spread the net across two GPUs. Current GPUs are particularly well suited to cross-GPU parallelization, as they are able to read from and write to one another's memory directly, without going through host machine memory. So they say: the parallelization scheme that we employ essentially puts half the kernels, or neurons, on each GPU, with one additional trick: the GPUs communicate only in certain layers. That means that, for example, the kernels of layer 3 take input from all kernel maps in layer 2; however, the kernels in layer 4 take input only from the kernel maps in layer 3 which reside on the same GPU.
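One way to mimic that connectivity pattern on a single modern device, as a sketch rather than their actual CUDA code, is grouped convolution: with groups=2 the filters split into two halves that only ever see their own half of the input channels, which is the "no communication in this layer" case, while groups=1 is the "communicate" case.

```python
import torch
import torch.nn as nn

x = torch.randn(1, 96, 27, 27)  # e.g. a 96-channel feature map, split 48 + 48 across "GPUs"

# "Communicating" layer: every output filter sees all 96 input channels.
talk = nn.Conv2d(96, 256, kernel_size=5, padding=2, groups=1)

# "Non-communicating" layer: two towers of 128 filters, each seeing only its own 48 channels.
no_talk = nn.Conv2d(96, 256, kernel_size=5, padding=2, groups=2)

print(talk(x).shape, no_talk(x).shape)  # both: torch.Size([1, 256, 27, 27])
print(sum(p.numel() for p in talk.parameters()),
      sum(p.numel() for p in no_talk.parameters()))  # grouped version has about half the weights
```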
So, a very, very interesting choice right here. And they justify it; they report the results: this scheme reduces our top-1 and top-5 error rates by 1.7% and 1.2% respectively, as compared with a net with half as many kernels in each convolutional layer trained on one GPU. The two-GPU net also takes slightly less time to train than the one-GPU net. So first of all, I have to say, big respect right here. Like, I can imagine they did this, you know, with the ReLUs and stuff, and they were already better than previous approaches. Because, just to jump to the results, they beat the error rates of previous models by a ginormous amount. So this is what they knew right here: this is on the 2010 ImageNet split, the previous best ones were at around 25 to 28 percent, and here their best one is at 17 percent top-5 error rate. I'm going to imagine that they trained it first and were already better than the 25 percent, and I guess lots of people would just call it a day, would be like, oh cool, we have this entirely new method, not only did we show that we can train it, we actually showed that it's better, and bada boom, I have a 0.1 percent better error rate, and everything else can be a separate paper. No: they stuck with it, and they pushed it. So for each of these things right here they say, oh, this reduces the error rate by 1%, this reduces the error rate by 2%, and really they went about it asking, how far can we push this with everything? I mean, just imagine, you come and you train a network; I'm pretty sure they first trained on one GPU, right, and then they thought, oh, you know, maybe we can train an even bigger network by using two GPUs, and then they realized it's going to take a crap ton of dumb code to cross-synchronize and keep them in lockstep and blah blah blah. Like, it's not even easy to write multi-GPU code today, with all the frameworks; just imagine that back then. And for them, having already observed that their network does better than everything there was previously, to sit down and do the cross-GPU thing, to experiment with, okay, when do we cross-communicate and whatnot: that is very, very respectable right here. So maybe a lesson to be learned, or just the mentality of the people; maybe they just had more time, they were like, okay, it's still like two months out to this competition deadline, I don't know. But this is not something that I see today very often, this kind of persistence and additional pushing and reporting of what works in these kinds of things. I mean, some papers do it, but most papers do it because only with all the tricks can they get that 0.1 percent improvement, and this one already had the improvement and did it anyway. Okay. But multi-GPU training, like splitting the model across GPUs, didn't really stick around, mainly because, I guess, the GPUs got larger in memory pretty quickly, so it wasn't that necessary, but also, I guess, because the frameworks were just too clunky. And now, maybe with GShard, this is coming back, so worth another shot, I guess. Next one: local response normalization. This also didn't really stick around; it got kind of dumped in favor of things like batch normalization, but with the resurfacing of things like layer normalization, it comes back to this a little bit. So what they say is that they want to kind of normalize the responses of these ReLUs.
So what they do is: each response, this a here, is normalized by the following quantity, and it's the responses of the other kernels around it. You can see the sum is over this weird quantity right here. So what does it mean? They have a bunch of convolutional filters, and these are the activations, so these are the feature maps after the convolution. If I have, like, 10 convolutional filters in my layer, this is going to be the output. The way they normalize is, they normalize each output channel by dividing, you see here, by the aggregated squared response of the channels around it. So let's maybe say five channels: the two channels in front of it and the two channels behind it; you take the aggregate across those, and then for another channel right here, for this one, you would take the aggregate of the five around that one. This isn't really something that stuck around, I guess mainly because of this really dynamic windowing right here. What people do today is they have things like layer normalization, which simply averages across all of the channels, or group normalization, which predefines these groups, like here there are two groups, and we only normalize within a group, and the groups are always the same. This kind of dynamic normalization across neighboring filters, as I said, didn't really stick around, and I'm not really sure why; I guess the alternatives were just easier to implement, or they simply worked better. Again, here they say how it was motivated, right: this scheme bears some resemblance to the local contrast normalization scheme of earlier work, but ours would be more correctly termed brightness normalization, since we do not subtract the mean activity. And, oh, they make a connection to biological neurons, where is it: this sort of response normalization implements a form of lateral inhibition inspired by the type found in real neurons, creating competition for big activities amongst neuron outputs computed using different kernels. Okay, so kind of inspired by real neurons, but also kind of inspired by other people doing some kind of normalization. So people already knew that normalization was helpful at times, and this is what they employed right here. Again, reducing the top error rates, by 1.4 and 1.2 percent respectively. So not a big improvement, but still an improvement. The last thing: overlapping pooling. Again, a thing that didn't really stick around that much, where they say, okay, instead of having a pooling layer, so if this is your image, instead of pooling 2x2 with a stride of 2 like we do today, and, you know, pooling it down to a smaller image, what we can do instead is pool with overlapping windows. In that case they pool with a 3x3 window but still use a stride of 2, so they have these overlaps right here, resulting in the same output size, but each output pixel now carries some overlapping information from the pixels around it. Again, they say it reduces the top-1 and top-5 error rates, by 0.4 percent and 0.3 percent. Maybe this didn't stick around because, I'm not sure, maybe because people found it doesn't work in other problems, who knows.
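For reference, here is the normalization written out in numpy, as a sketch using the constants reported in the paper (k=2, n=5, alpha=1e-4, beta=0.75): each channel is divided by a power of the summed squared activations of its neighboring channels. PyTorch also ships a version of this as nn.LocalResponseNorm.

```python
import numpy as np

def local_response_norm(a, k=2.0, n=5, alpha=1e-4, beta=0.75):
    """a: activations with shape (channels, height, width)."""
    C = a.shape[0]
    b = np.empty_like(a)
    for i in range(C):
        # window of n neighboring channels, clipped at the edges
        lo, hi = max(0, i - n // 2), min(C - 1, i + n // 2)
        denom = k + alpha * np.sum(a[lo:hi + 1] ** 2, axis=0)
        b[i] = a[i] / denom ** beta
    return b

acts = np.random.rand(96, 55, 55).astype(np.float32)
print(local_response_norm(acts).shape)  # (96, 55, 55)
```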
So the overall architecture, as we said, is described in this picture right here. You have the input image, which you can see has three channels, and they use convolutional filters with a stride of four at the beginning to reduce the size. So at the beginning it's 224 by 224, and then it's 55 by 55, that thing here, 55 by 55 with 48 feature maps per GPU. You can already see, as we said before, the number of feature maps keeps increasing while the resolution of the image keeps decreasing. The stride-4 convolution here is already employed in order to downsample the image at the same time as convolving it; nowadays a lot of architectures will simply not do max pooling at all, but always use this kind of strided convolution to downsample the image while convolving it. What you also see here is that they thought the filter size should also be large at the beginning and then decrease, which is a reasonable assumption, right, because if you have higher-resolution images, you're probably going to need larger filters at first. This didn't really come through until today; as you know, most architectures today just go with, like, three-by-three kernels from the very start and don't really bother shrinking their filters. I don't really know why, whether it's just more convenient, or fewer parameters, or whether there's really something to having small filters; I just know that large filters at the beginning is something that didn't hold over time. Also, you can see right here they have multiple dense layers at the end; I believe most architectures today simply go with two of those instead of three, so one hidden layer and then one classification layer. But it's very close to the architectures of today, right? It hasn't changed that much: the difference between this and the VGG-16 or VGG-19 network is just depth, and then the difference between those and the ResNets is just these skip connections right here, and that's where we are today. So it honestly hasn't changed that much. They also allude to the fact that, even though it doesn't look like it, most parameters are here in these dense layers; those hold most parameters of the network. This right here, a convolutional layer, is like 1% of the parameters, even though it takes up a lot of space in the drawing. So maybe the reduction in the number of classification layers at the end also has something to do with the fact that that's where most parameters are: if you get rid of one of those dense layers, you can get many, many more convolutional layers.
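Here is roughly that architecture in modern PyTorch, as a hedged sketch: I use the paper's filter counts, groups=2 as a stand-in for the layers where the two GPUs don't communicate, a 227 by 227 input (which is what makes the stride-4 arithmetic actually give 55 by 55), and I leave out the response normalization for brevity.

```python
import torch
import torch.nn as nn

# Sketch of AlexNet; groups=2 stands in for the layers where the two GPUs don't talk.
alexnet = nn.Sequential(
    nn.Conv2d(3, 96, kernel_size=11, stride=4), nn.ReLU(),  # 227 -> 55
    nn.MaxPool2d(kernel_size=3, stride=2),                  # overlapping pooling: 55 -> 27
    nn.Conv2d(96, 256, kernel_size=5, padding=2, groups=2), nn.ReLU(),
    nn.MaxPool2d(kernel_size=3, stride=2),                  # 27 -> 13
    nn.Conv2d(256, 384, kernel_size=3, padding=1), nn.ReLU(),            # cross-GPU layer
    nn.Conv2d(384, 384, kernel_size=3, padding=1, groups=2), nn.ReLU(),
    nn.Conv2d(384, 256, kernel_size=3, padding=1, groups=2), nn.ReLU(),
    nn.MaxPool2d(kernel_size=3, stride=2),                  # 13 -> 6
    nn.Flatten(),
    nn.Dropout(0.5), nn.Linear(256 * 6 * 6, 4096), nn.ReLU(),
    nn.Dropout(0.5), nn.Linear(4096, 4096), nn.ReLU(),
    nn.Linear(4096, 1000),                                  # one output per class
)

x = torch.randn(1, 3, 227, 227)
print(alexnet(x).shape)                              # torch.Size([1, 1000])
print(sum(p.numel() for p in alexnet.parameters()))  # about 60 million parameters
```

Counting the parameters of this sketch also confirms the point above: the three linear layers at the end account for the overwhelming majority of the roughly 60 million weights.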
Alright, so the last part here is on reducing overfitting. Again, they didn't really investigate whether or not their network was actually overfitting, like really establishing the overfitting; I think maybe they did, and maybe it was actually overfitting. But we now don't care about overfitting too much anymore, maybe because we already use these augmentations naturally, but also because we build these deep models and somehow have the sense that they generalize naturally. I'm not sure whether they were only worried about it that much because of the history of machine learning, or whether they actually did see that everything was overfitting constantly. Okay, they say: our neural network architecture has 60 million parameters. Although the thousand classes make each training example impose 10 bits of constraint on the mapping from image to label, this turns out to be insufficient to learn so many parameters without considerable overfitting. Below, we describe the two primary ways in which we combat overfitting. Again, no one today makes this argument anymore, this: oh, we have this many parameters and there are only that many images; we have 60 million parameters, we have 1.2 million images, a thousand classes, how many parameters per sample is that, how many bits of constraint, and so on. We don't care; we're fine with having like a billion times more parameters than training samples; we don't worry about it anymore. So the first thing they do is data augmentation. Already, I mean, this was already known; again, lots of these things here were already known, but the combination is just so cool in this paper. So first of all, they say the transformed images are generated in Python code on the CPU while the GPU is training on the previous batch of images, so these data augmentation schemes are, in effect, computationally free. Again, this code must have been ugly. The first form of data augmentation consists of generating image translations and horizontal reflections. We do this by extracting random 224 by 224 patches, and their horizontal reflections, from the 256 by 256 images. Okay, so these are still the most valuable data augmentations that we have today: random horizontal flipping is still used in every pipeline of computer vision, except if you want to read text, I guess, and random cropping is still the most powerful data augmentation technique for images today. It's crazy that this was already discovered. I don't know whether they say right here how much this particular thing improves; I don't think they have a stat on how much this improves, they just say how much the next thing improves, but I'm going to guess this was one of the vital things for pushing the performance, because now we know cropping is very important. I guess they thought that translation was the important part, and so they focused on generating image translations, and to generate an image translation from a single image, naturally, you have to crop it. However, we now focus much more on the fact that we crop it and kind of get different sub-images of the same image; especially in, you know, self-supervised learning and things like this, we know that cropping is, like, the workhorse of these methods. So the fact that they extract random patches right here means that their network only operates on these sub-patches, and then they compensate at test time: the network makes a prediction by extracting five patches, the four corner patches and the center patch, as well as their horizontal reflections, and averaging the predictions made by the network's softmax layer on the ten patches. I also believe that people don't do this too much nowadays; most of the time they simply rescale the test images or something like this, or fine-tune at the end on appropriately scaled training images; there are various techniques for doing this. But random cropping and horizontal flipping: already employed right here. Also color jittering, a form of color jittering, a very special form: altering the intensities of the RGB channels in training images. Specifically, we perform PCA on the set of RGB pixel values throughout the training set. To each training image, we add multiples of the found principal components, with magnitudes proportional to the corresponding eigenvalues times a random variable drawn from a Gaussian with zero mean and standard deviation 0.1. This, I believe, has gone out of fashion; people do color jitter and kind of brightness jitter and so on, but I don't think they particularly do this kind of PCA-based image augmentation anymore. They say this scheme reduces the top-1 error rate by over 1%.
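The crop and flip are one-liners in any framework today, but the PCA color trick is worth writing out. Here's a numpy sketch of how I read that paragraph, with made-up pixels standing in for the training set: eigendecompose the 3 by 3 RGB covariance once over the training set, then per image add a random multiple of each principal component.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "training set": pretend pixels with shape (num_pixels, 3) for R, G, B.
pixels = rng.random((100_000, 3))

# PCA over the RGB values of the whole training set (computed once, offline).
cov = np.cov(pixels, rowvar=False)      # 3x3 covariance of the color channels
eigvals, eigvecs = np.linalg.eigh(cov)  # principal components of RGB space

def fancy_pca(image, std=0.1):
    """Shift all pixels along the RGB principal components (one draw per image)."""
    alpha = rng.normal(0.0, std, size=3)  # Gaussian with mean 0, std 0.1
    shift = eigvecs @ (alpha * eigvals)   # eigenvalue-weighted sum of components
    return image + shift                  # broadcasts over height x width

img = rng.random((256, 256, 3))
print(fancy_pca(img).shape)  # (256, 256, 3)
```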
I wonder why this isn't used anymore; maybe because you need these statistics over the entire data set, and the other augmentations may work equivalently well while you can simply apply them without knowing your principal components. Okay, next thing: dropout. Dropout has been, you know, one of the things that was very important throughout the early stages of deep learning; it isn't that important anymore. Some people still use it, but most people, I think, don't use dropout anymore. It's very interesting to see, but it definitely was a technique that was used a lot, from AlexNet to basically, like, the last very few years. So they say: combining the predictions of many different models is a very successful way to reduce test errors, but it appears to be too expensive for big neural networks that already take several days to train. There is, however, a very efficient version of model combination that only costs about a factor of two during training. So there's this technique called dropout, and then they explain it: set to zero the output of each hidden neuron with probability 0.5. Again, people didn't know about dropout as they do now, but they introduced it right here. I'm not sure they say by how much this alone reduces the error, but they do say: we use dropout in the first two fully connected layers; without dropout, our network exhibits substantial overfitting; dropout roughly doubles the number of iterations required to converge. So, okay, they did actually find concrete evidence of overfitting, and saw that dropout reduces it. And I wonder why this doesn't happen nowadays; maybe because we have fewer of these fully connected layers, but I can't really imagine that's all of it; maybe because we do more augmentation; I don't know. Or maybe dropout is still used and I just don't know it and don't see it.
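Mechanically, the version they describe is simple enough to write in a few lines. A numpy sketch of that reading: zero each unit with probability 0.5 at training time, and at test time keep all units but halve their outputs so the expected activation matches (modern implementations usually do the inverse and scale up at training time instead).

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout_train(h, p=0.5):
    # Zero each hidden unit independently with probability p.
    mask = rng.random(h.shape) >= p
    return h * mask

def dropout_test(h, p=0.5):
    # Keep every unit, but scale down so the expected value matches training.
    return h * (1.0 - p)

h = rng.random(8)
print(dropout_train(h))  # roughly half the entries are zeroed
print(dropout_test(h))   # all entries, halved
```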
Yeah, so they use momentum to train this, and then they do some qualitative analysis. So first of all, they say, okay, they shatter all of the previous approaches. Then they also build kind of ensemble methods, and they already do transfer learning: they pre-train on ImageNet 2011 and then fine-tune on ImageNet 2012 to reduce that error even further, like pulling all the tricks. All these things are still around; very cool. And then they look into what their network learned. They find that there are a number of these kinds of filters, you see these 11 by 11 filters in the first layer, where they show, okay, and this was kind of already known, that these neural networks extract filters like this, like color gradients or edge detectors in various forms and directions, and it's cool to see that this one does so too. This one here is also a very cool investigation, where they look at examples, and the red bar, the red one, is always the correct label, and the bars are basically what their model says are the top five things. And it's cool to look at: so for example here you have mite as the top one, but then also black widow, cockroach, tick, starfish; the top labels are usually also very, very good labels. You can see here a grille, and it assigns convertible, which, you know, by all means is correct, it's just not the class that the annotators assigned to this particular image. As well as here, Dalmatian was the highest prediction of the network, where the label was actually cherry, and this is quite debatable, right? So you can see that a lot of the mistakes the network makes are, you know, forgivable, let's say, and you can see that when the network doesn't make mistakes, not only is the top label good, but a lot of the top five labels are also very, very adequate. Lastly, they take a given training set image, these are the training set images right here, and they look at the last layer's feature vector and the nearest neighbors in Euclidean space over the entire training data set, and here's what you come up with: you can see, for the elephant, the nearest neighbors are all other elephants, and note that they are in different poses, right, they don't always look the same way, these elephants, and also these dogs right here. So it's pretty cool to see that the network actually learns some invariances across a class and puts images with the same label into the same area in the embedding space. Yeah, so that's their paper. They already allude to the fact that depth is very important: it is notable that our network's performance degrades if a single convolutional layer is removed; for example, removing any of the middle layers results in a loss of about 2% in the top-1 performance of the network; so the depth really is important for achieving our results. And as you know, this spurred an entire area of trying to build deeper and deeper networks, until ResNets came along and built ultra-deep networks. They also say: we did not use any unsupervised pre-training, even though we expect that it will help, especially if we obtain enough computational power to significantly increase the size of the network without obtaining a corresponding increase in the amount of labeled data. Thus far, our results have improved as we have made our network larger and trained it longer, but we still have many orders of magnitude to go in order to match the inferotemporal pathway of the human visual system. Ultimately, we would like to use very large and deep convolutional nets on video sequences, where the temporal structure provides very helpful information that is missing, or far less obvious, in static images. So, already previewing future research here, with the self-supervised learning, with the many more layers, and so on; astounding, this kind of foresight, and of course all of it proved to be a very, very adequate prediction. And yeah, so this was the paper right here, the paper that kicked off deep learning. I enjoy reading kind of these old papers, especially looking back at what was already known and what still is around, which turns out to be a lot; a lot is still around, and the choices that people made back then, some of them defined our modern field. So that was it for AlexNet. Let me know what you think in the comments, and I'll see you next time. Bye.
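That nearest-neighbor experiment is easy to replicate in spirit. A small sketch with made-up feature vectors standing in for the last layer's activations: rank the training set by Euclidean distance to the query image's features.

```python
import numpy as np

rng = np.random.default_rng(0)

features = rng.random((10_000, 4096))  # stand-in last-layer features of the training set
query = features[42]                   # the image we're probing

dists = np.linalg.norm(features - query, axis=1)  # Euclidean distance to every image
nearest = np.argsort(dists)[1:6]                  # five nearest neighbors, skipping itself
print(nearest)
```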
[ { "end": 5.1000000000000005, "start": 0, "text": " Hi there! Today we'll look at ImageNet classification with deep convolutional" }, { "end": 10.9, "start": 5.1000000000000005, "text": " neural networks by Alex Kruschevsky, Ilya Sutskever and Jeffrey E. Hinton." }, { "end": 15.24, "start": 10.9, "text": " This paper is another one in the installment of our historical paper" }, { "end": 20.28, "start": 15.24, "text": " overview, where we go through kind of old papers that were or weren't very" }, { "end": 26.6, "start": 20.28, "text": " impactful and see what people knew at the time already, how this developed and" }, { "end": 31.76, "start": 26.6, "text": " so on. Of course this paper here, also known as AlexNet, was the one that" }, { "end": 37.64, "start": 31.76, "text": " started the deep learning revolution, so to say, or at least contributed in large" }, { "end": 42.56, "start": 37.64, "text": " part to it. It was the first paper that showed that you could train these very" }, { "end": 49.44, "start": 42.56, "text": " deep neural networks, and very deep in here is a relative term, but the first" }, { "end": 55.68000000000001, "start": 49.44, "text": " one that showed that you could actually use CUDA, GPUs, to train those large" }, { "end": 60.76, "start": 55.68, "text": " networks efficiently, and it won the ImageNet competition that year, and it" }, { "end": 67.92, "start": 60.76, "text": " did so by a very very large margin. So it kind of shook the world, because" }, { "end": 72.6, "start": 67.92, "text": " previously computer vision was still doing like hand engineered features and" }, { "end": 78.44, "start": 72.6, "text": " then using some kind of classifiers on top of those. This paper basically" }, { "end": 83.88, "start": 78.44, "text": " changed everything. So we'll go through the paper and we'll see what was already" }, { "end": 90.16, "start": 83.88, "text": " known, and especially I always enjoy with these papers how did the choices that" }, { "end": 95.8, "start": 90.16, "text": " people make back then, how did they pull through to today, sort of what arbitrary" }, { "end": 100.75999999999999, "start": 95.8, "text": " choices that Alex Kruschevsky made right here are we still doing today and what" }, { "end": 105.32, "start": 100.75999999999999, "text": " have we learned since then. So the paper is written relatively" }, { "end": 110.3, "start": 105.32, "text": " straightforward, I have to say. It's a good read if you want to read it, and you" }, { "end": 114.52, "start": 110.3, "text": " know straightforward, and sort of gives you a little bit of an intuition of how" }, { "end": 122.12, "start": 114.52, "text": " much work must have gone into this, which is I guess a lot. So they start off" }, { "end": 127.96, "start": 122.12, "text": " by saying that that current approaches to object recognition make essential use" }, { "end": 132, "start": 127.96, "text": " of machine learning methods. This was also new, right? Object recognition" }, { "end": 138.64, "start": 132, "text": " wasn't always learned. The object recognizers, you could even do it in" }, { "end": 143.44, "start": 138.64, "text": " the indifferent way, like matching templates and so on. Machine learning" }, { "end": 150.04, "start": 143.44, "text": " was still one of the methods used, and of course today it's the method used. 
To" }, { "end": 154.48, "start": 150.04, "text": " improve their performance we can collect larger datasets, learn more powerful" }, { "end": 159.44, "start": 154.48, "text": " models, and use better techniques for preventing overfitting. Until recently" }, { "end": 163.27999999999997, "start": 159.44, "text": " datasets of labeled images were relatively small, on the orders of tens" }, { "end": 169.32, "start": 163.28, "text": " of thousands of images. So this especially at NORP, or here the C410," }, { "end": 174.32, "start": 169.32, "text": " or C4100, these are relatively small datasets with relatively small" }, { "end": 181.64, "start": 174.32, "text": " images as well, like C410 is 32 by 32 pixels. So they're saying that" }, { "end": 186.88, "start": 181.64, "text": " in these small datasets you can solve it with classical computer" }, { "end": 191.92000000000002, "start": 186.88, "text": " vision models, but if you have larger datasets, and especially more realistic" }, { "end": 196.6, "start": 191.92, "text": " datasets like bigger resolution and so on, you need bigger models. So they say" }, { "end": 201.27999999999997, "start": 196.6, "text": " but objects in realistic settings exhibit considerable variability to" }, { "end": 208.32, "start": 201.27999999999997, "text": " learn to recognize them, it is necessary to use much larger training sets. So" }, { "end": 214.64, "start": 208.32, "text": " they say that this ImageNet dataset is one of those larger datasets, consists of" }, { "end": 219.64, "start": 214.64, "text": " 15 million labeled high resolution images in over 22,000 categories." }, { "end": 225.51999999999998, "start": 219.64, "text": " People keep forgetting this, and I am included in that group of people, that" }, { "end": 230.64, "start": 225.51999999999998, "text": " the ImageNet dataset is actually much larger than we know, than we when we" }, { "end": 235.67999999999998, "start": 230.64, "text": " talk of ImageNet. When we speak of ImageNet we think of the ImageNet that" }, { "end": 240.56, "start": 235.67999999999998, "text": " has a thousand classes and about one or one and a half million images. However" }, { "end": 246.92, "start": 240.56, "text": " that's only a subset of the much much larger ImageNet dataset within many" }, { "end": 252.72, "start": 246.92, "text": " many more categories. It's just that the ImageNet competitions were performed on" }, { "end": 256.47999999999996, "start": 252.72, "text": " this subset, because I guess people thought well a thousand classes and" }, { "end": 262.36, "start": 256.47999999999996, "text": " a million images is already plenty, so we'll do that. So that's I guess how that" }, { "end": 268.96, "start": 262.36, "text": " came to be. So their argument is right here, to learn about thousands of objects" }, { "end": 274, "start": 268.96, "text": " from millions of images we need a model with a large learning capacity. However" }, { "end": 277.04, "start": 274, "text": " the immense complexity of object recognition task means that this" }, { "end": 282.12, "start": 277.04, "text": " problem cannot be specified even by a dataset as large as ImageNet. So" }, { "end": 286.12, "start": 282.12, "text": " our model should also have lots of prior knowledge to compensate for all the" }, { "end": 291.32, "start": 286.12, "text": " data we don't have. 
So their main argument for using neural" }, { "end": 297.08, "start": 291.32, "text": " networks is that the size of the dataset is so large, therefore we need a large" }, { "end": 304.76, "start": 297.08, "text": " model. Granted they already recognize the inherent connection" }, { "end": 310.82, "start": 304.76, "text": " between large models and a lot of complex data, but in the opposite they" }, { "end": 316.46, "start": 310.82, "text": " say well even if we have that much data the task we are trying to solve, object" }, { "end": 322.82, "start": 316.46, "text": " recognition, is way more complicated than the amount of data we have. So our model" }, { "end": 328.28, "start": 322.82, "text": " should also have lots of prior knowledge to compensate for all the data we don't" }, { "end": 334.48, "start": 328.28, "text": " have. Remember at this time convolutional neural networks weren't really" }, { "end": 338.32, "start": 334.48, "text": " known to do anything. I guess they were used for handwritten" }, { "end": 342.03999999999996, "start": 338.32, "text": " digit recognition and so on and were kind of on par with other methods." }, { "end": 346.8, "start": 342.03999999999996, "text": " However it wasn't like obviously clear that you would use them for image" }, { "end": 352.03999999999996, "start": 346.8, "text": " recognition. So here they have to make like a argument to convince" }, { "end": 357.48, "start": 352.04, "text": " people that okay we can use neural networks for this task because they have" }, { "end": 363.72, "start": 357.48, "text": " such a high capacity. However neural networks, feed-forward neural" }, { "end": 367.76000000000005, "start": 363.72, "text": " networks, are already too powerful. They don't know anything about the data." }, { "end": 372.8, "start": 367.76000000000005, "text": " Everything's connected to everything and they argue right here our model should" }, { "end": 377.16, "start": 372.8, "text": " have lots of prior knowledge to compensate for all the data we don't have." }, { "end": 382.8, "start": 377.16, "text": " So they allude to the convolutional neural networks constitute one such class" }, { "end": 386.68, "start": 382.8, "text": " of models. Their capacity can be controlled by varying the depth and" }, { "end": 391.28000000000003, "start": 386.68, "text": " breadth and they also make strong and mostly correct assumptions about the" }, { "end": 396.40000000000003, "start": 391.28000000000003, "text": " nature of images, namely stationarity of statistics and locality of pixel" }, { "end": 401.72, "start": 396.40000000000003, "text": " dependencies. So their argument here is that the convolutional operation is such" }, { "end": 407.12, "start": 401.72, "text": " a strong prior that is mostly consistent with what we know about images that" }, { "end": 411.44, "start": 407.12, "text": " they are very well suited to computer vision. Again something that was not" }, { "end": 417.2, "start": 411.44, "text": " abundantly clear at the time as it is right now. It's interesting to see how" }, { "end": 420.92, "start": 417.2, "text": " they get to this point where they say we need lots of capacity but we also need" }, { "end": 429.36, "start": 420.92, "text": " a model with lots of prior knowledge and of course CNNs fit that very well." 
}, { "end": 435.6, "start": 429.36, "text": " So they go into the problems of CNN despite the attractive qualities" }, { "end": 439.68, "start": 435.6, "text": " and despite the relative efficiency of their local architecture they are" }, { "end": 444.44, "start": 439.68, "text": " prohibitively expensive to apply in large-scale high-resolution images." }, { "end": 448.64000000000004, "start": 444.44, "text": " Luckily current GPUs paired with a highly optimized implementation of 2D" }, { "end": 453.08000000000004, "start": 448.64000000000004, "text": " convolution are powerful enough to facilitate the training of interestingly" }, { "end": 457.92, "start": 453.08000000000004, "text": " large CNNs and recent data sets such as ImageNet contain enough labeled example" }, { "end": 462.64000000000004, "start": 457.92, "text": " to train such model without severe overfitting. So overfitting was also" }, { "end": 467.36, "start": 462.64, "text": " still like very much at the forefront of people's minds back then. Right now we" }, { "end": 471.52, "start": 467.36, "text": " don't really care about overfitting that much anymore. Basically we figured out" }, { "end": 477.76, "start": 471.52, "text": " that if we just build large enough models we don't overfit which is strange" }, { "end": 482.8, "start": 477.76, "text": " in itself like this double descent phenomenon and so on but overfitting was" }, { "end": 489.88, "start": 482.8, "text": " still very much at the forefront of people's minds and they do a lot of" }, { "end": 496.52, "start": 489.88, "text": " things here to prevent overfitting which gives them kind of a boost in the test" }, { "end": 501.44, "start": 496.52, "text": " accuracy which might actually not have been the overfitting that they're" }, { "end": 505.71999999999997, "start": 501.44, "text": " combating. So they do for example in data augmentation already in this paper" }, { "end": 511, "start": 505.71999999999997, "text": " and they always allude to how this is to prevent overfitting. However we know" }, { "end": 517.68, "start": 511, "text": " nowadays that it might not be the overfitting that's combated by data" }, { "end": 522.28, "start": 517.68, "text": " augmentation. It might actually have something to do with regularizing" }, { "end": 528.68, "start": 522.28, "text": " your function making it more smooth and so on. So you just see how" }, { "end": 533.8, "start": 528.68, "text": " coming from a classical machine learning perspective overfitting was like the" }, { "end": 537.3599999999999, "start": 533.8, "text": " number one or one of the number one problems in classical machine learning" }, { "end": 545.16, "start": 537.3599999999999, "text": " in SVMs and things like this. So it's safe to say that they thought" }, { "end": 549.24, "start": 545.16, "text": " if we built these large models we're gonna have a huge overfitting problem" }, { "end": 556.04, "start": 549.24, "text": " and yeah so that's why this pulls through right here. Also I guess" }, { "end": 561.4, "start": 556.04, "text": " one of the main contributions of this paper is to show to combine this CNN" }, { "end": 566.56, "start": 561.4, "text": " training with GPUs. 
Also not very non-clear at the time like it was known" }, { "end": 573.04, "start": 566.56, "text": " that you could do computation on GPUs but the fact that these are you know very" }, { "end": 578.24, "start": 573.04, "text": " capable for training these CNNs or generally neural networks wasn't" }, { "end": 584.48, "start": 578.24, "text": " something that was you know known at the time. So this paper basically showed that" }, { "end": 591.48, "start": 584.48, "text": " if you use a GPU you can get that much faster and that makes it" }, { "end": 597.92, "start": 591.48, "text": " possible to train these big neural networks. Again right here the size of" }, { "end": 602.28, "start": 597.92, "text": " our network made overfitting a significant problem even with 1.2" }, { "end": 606.56, "start": 602.28, "text": " million labeled training examples so we use several effective techniques for" }, { "end": 613.48, "start": 606.56, "text": " preventing overfitting and we'll look at those. And the end they say the" }, { "end": 618.36, "start": 613.48, "text": " network's size is limited mainly by the amount of memory available on current" }, { "end": 622.48, "start": 618.36, "text": " GPUs and by the amount of training time that we are willing to tolerate. Our" }, { "end": 629.52, "start": 622.48, "text": " network takes between five and six days to train on two GTX 580 GPUs. All of our" }, { "end": 633.6, "start": 629.52, "text": " experiments suggest that our results can be improved by simply waiting for faster" }, { "end": 638.24, "start": 633.6, "text": " GPUs and bigger data sets to become available. And I mean that proved to be" }, { "end": 642.56, "start": 638.24, "text": " absolutely true. We don't necessarily have bigger data sets right now though" }, { "end": 651.88, "start": 642.56, "text": " we do but certainly with faster GPUs and bigger GPUs this became a this became" }, { "end": 657.18, "start": 651.88, "text": " these networks became better simply by increasing their depth and as you know" }, { "end": 662.5999999999999, "start": 657.18, "text": " then ResNets came along increasing the depth by an order of magnitude and that" }, { "end": 668.8399999999999, "start": 662.5999999999999, "text": " gave another boost to computer vision. Alright so they talk about the ImageNet" }, { "end": 675.28, "start": 668.8399999999999, "text": " data set here and the main point in the ImageNet data set right here is the fact" }, { "end": 681.56, "start": 675.28, "text": " that the images are plenty so there are over a million training images in this" }, { "end": 688.28, "start": 681.56, "text": " subset with a thousand classes which was you know a very big that was that was on" }, { "end": 692.4399999999999, "start": 688.28, "text": " like CIFAR 10 had 10 classes, CIFAR 100 had a hundred classes that was already a" }, { "end": 700.16, "start": 692.4399999999999, "text": " lot. A thousand classes that is like unheard of before this data set. I guess" }, { "end": 706.28, "start": 700.16, "text": " not unheard of but yeah and a million training images. Completely crazy and" }, { "end": 713.24, "start": 706.28, "text": " also not only was it a lot of images they were resolution was really big so" }, { "end": 721.04, "start": 713.24, "text": " in the order of 256 by 256 whereas previous methods all were like 32 by 32" }, { "end": 728.24, "start": 721.04, "text": " so definitely challenging data set even today it's a challenging data set. 
Alright" }, { "end": 733.04, "start": 728.24, "text": " so the architecture. The architecture and there's this famous graphic right" }, { "end": 739.64, "start": 733.04, "text": " here of the AlexNet architecture so briefly they described these" }, { "end": 744.52, "start": 739.64, "text": " convolutional layers right here as you can see there's max pooling already" }, { "end": 750.68, "start": 744.52, "text": " here they have dense layers at the end they do generally increase the number" }, { "end": 755.9599999999999, "start": 750.68, "text": " of feature maps right here while decreasing the resolution with max" }, { "end": 761.78, "start": 755.9599999999999, "text": " pooling so all of this has sort of you know kept until today I guess they also" }, { "end": 765.56, "start": 761.78, "text": " took it from earlier work on convolutional neural networks that" }, { "end": 772.72, "start": 765.56, "text": " generally found this to be a good idea and the important part here that is kind" }, { "end": 776.3199999999999, "start": 772.72, "text": " of special to AlexNet is you can see there are these two different pipelines" }, { "end": 784, "start": 776.3199999999999, "text": " and Alex for cutting off this part right here I mean you just know like this has" }, { "end": 788.88, "start": 784, "text": " the eight pages we need to like we have like three lines too much how can we fit" }, { "end": 793.16, "start": 788.88, "text": " the three lines we've already cropped everything let's just cut off the top" }, { "end": 799.4399999999999, "start": 793.16, "text": " half here it's essentially the same as the bottom yeah so space constraints and" }, { "end": 806.24, "start": 799.4399999999999, "text": " PDFs for conference submissions ruining yet another paper alright but you can" }, { "end": 811.32, "start": 806.24, "text": " see there is this two this this two column architecture right here so this" }, { "end": 817.8, "start": 811.32, "text": " network was so large that it didn't fit on one GPU so they had to split it onto" }, { "end": 823.8, "start": 817.8, "text": " two GPUs with the occasional intercommunication right you can see" }, { "end": 828.7199999999999, "start": 823.8, "text": " here there is intercommunication between the two GPUs and there is also no" }, { "end": 833.8599999999999, "start": 828.7199999999999, "text": " intercommunication right here on this layer this was very intricate that was" }, { "end": 838.4799999999999, "start": 833.8599999999999, "text": " one thing that really didn't hold until today I guess until now with things like" }, { "end": 843.7199999999999, "start": 838.4799999999999, "text": " I don't know G shard or so where you have different weights on different GPUs" }, { "end": 849.5600000000001, "start": 843.72, "text": " again I guess the the invention of bigger GPUs made that sort of super" }, { "end": 854.1600000000001, "start": 849.5600000000001, "text": " fluid but just imagine the amount of code they had to write there was no" }, { "end": 858.72, "start": 854.1600000000001, "text": " tensor flow at this point there I don't think there was even cafe around there" }, { "end": 867.32, "start": 858.72, "text": " was just CUDA and yeah just this cross GPU memory writing I just imagined this" }, { "end": 873.8000000000001, "start": 867.32, "text": " to be so so ugly and big respect for writing all of this all of this code" }, { "end": 879.84, "start": 873.8000000000001, "text": " alright so they they go through a number of important things and most of the" }, { 
"end": 886.72, "start": 879.84, "text": " things here aren't their invention let's say but they cleverly combine things" }, { "end": 890.3000000000001, "start": 886.72, "text": " that were already known about neural networks and things that were maybe" }, { "end": 894.44, "start": 890.3000000000001, "text": " developed somewhere that they have found to work really well so the first one is" }, { "end": 899.48, "start": 894.44, "text": " the relu non-linearity now of course relu is nowadays all like abundant" }, { "end": 904.72, "start": 899.48, "text": " everyone uses relu's non-linearities but at that time it was still very much in" }, { "end": 908.96, "start": 904.72, "text": " fashion to use something like the sigmoid right here or the hyperbolic" }, { "end": 912.7600000000001, "start": 908.96, "text": " tangent and why is that because the neural networks were still kind of" }, { "end": 917.6400000000001, "start": 912.7600000000001, "text": " inspired by the neurons where you had the soma of the neuron and then the" }, { "end": 924.08, "start": 917.6400000000001, "text": " input dendrites sorry the dendrites with the input axons and then you would sum" }, { "end": 930.2, "start": 924.08, "text": " up all the incoming signals and then that would go over so in the true neuron" }, { "end": 937.32, "start": 930.2, "text": " you have this this this kind of curve where if the input rises above this" }, { "end": 943.96, "start": 937.32, "text": " border right here the action potential maybe I don't know what the the English" }, { "end": 950, "start": 943.96, "text": " term is then if it rise above that then the neuron would start to spike right" }, { "end": 955.88, "start": 950, "text": " and if it's below that it wouldn't so people wanted to approximate this using" }, { "end": 961.52, "start": 955.88, "text": " some sort of a a kind of differentiable but something that's very similar to" }, { "end": 967.32, "start": 961.52, "text": " this step function and that ultimately led to something like a sigmoid or an" }, { "end": 974.68, "start": 967.32, "text": " hyperbolic tangent so people trying to stay close to biological neurons did" }, { "end": 979.84, "start": 974.68, "text": " this but that gives you the problem that in this region and in this region right" }, { "end": 985.76, "start": 979.84, "text": " here you have almost no gradient to learn from so you can see that they" }, { "end": 993.9200000000001, "start": 985.76, "text": " argue that in terms of training time with gradient descent the saturating" }, { "end": 998.48, "start": 993.9200000000001, "text": " non-linearity so the hyperbolic tangent and the sigmoid are much slower than the" }, { "end": 1003.76, "start": 998.48, "text": " non saturating lean non-linearity this one following Narendt Hinton we refer to" }, { "end": 1009.1600000000001, "start": 1003.76, "text": " neurons with this non-linearity as rectified linear units so taken from" }, { "end": 1015.52, "start": 1009.16, "text": " this this other paper they say okay we use these relu's these rectified linear" }, { "end": 1021.36, "start": 1015.52, "text": " units which are not exactly like real biological neurons but they train much" }, { "end": 1029.36, "start": 1021.36, "text": " faster right and of course relu's are used until this day so you can see right" }, { "end": 1035.56, "start": 1029.36, "text": " here that a this is on a C for 10 and they measure the time to reach 25% of" }, { "end": 1041.36, "start": 1035.56, "text": " the training error and this here is with the 
relu's and this here is with the" }, { "end": 1046.48, "start": 1041.36, "text": " hyperbolic tangent and it takes much longer to reach the hyperbolic tangent" }, { "end": 1054.72, "start": 1046.48, "text": " especially it takes six times faster to with the relu's and they say that's one" }, { "end": 1060, "start": 1054.72, "text": " of the main components that allows them to learn this fast to even experiment" }, { "end": 1064.8799999999999, "start": 1060, "text": " with these big networks because their entire training time is six days right" }, { "end": 1069.48, "start": 1064.88, "text": " but they probably didn't train it only once they experimented with it and saw" }, { "end": 1074.72, "start": 1069.48, "text": " what works so if you have a couple of months of time and he takes you a week" }, { "end": 1080.0800000000002, "start": 1074.72, "text": " to train one of these things you know you don't you can't afford a six times" }, { "end": 1085.8000000000002, "start": 1080.0800000000002, "text": " slowdown because that would mean you can only train like two models in the entire" }, { "end": 1092.0800000000002, "start": 1085.8000000000002, "text": " course of research and that would severely hinder your progress now we are" }, { "end": 1097.12, "start": 1092.08, "text": " at the point where that becomes true again with these giant giant transformer" }, { "end": 1103.1999999999998, "start": 1097.12, "text": " language models where people can train it once and then you know like GPT-3" }, { "end": 1107.36, "start": 1103.1999999999998, "text": " they say oh we made we discovered a bug halfway through and we've kind of fixed" }, { "end": 1111.96, "start": 1107.36, "text": " it but we're not sure we couldn't restart because it was too expensive" }, { "end": 1115.8999999999999, "start": 1111.96, "text": " yeah maybe we're waiting for a moment I'm still saying we're waiting for the" }, { "end": 1123.1200000000001, "start": 1115.9, "text": " resonant moment in the transformers but yeah relu's in you know here in not" }, { "end": 1129.5600000000002, "start": 1123.1200000000001, "text": " introduced here but used here and have been prevailing until today training on" }, { "end": 1135.3600000000001, "start": 1129.5600000000002, "text": " multiple GPUs something as I said that didn't didn't really get forward from" }, { "end": 1140.76, "start": 1135.3600000000001, "text": " here especially the kind of GPU training so if we train on multiple GPUs today" }, { "end": 1147.08, "start": 1140.76, "text": " what we mean is that we have our model right and then we distribute that to" }, { "end": 1153.44, "start": 1147.08, "text": " multiple GPUs like this and then we take a mini batch from the training data and" }, { "end": 1159.16, "start": 1153.44, "text": " we simply split it up let each GPU do its thing on its subset of the mini batch" }, { "end": 1164.32, "start": 1159.16, "text": " and then at the end kind of calculate the loss and then back propagate the" }, { "end": 1169.12, "start": 1164.32, "text": " gradients and synchronize the gradients between that so we have one model that" }, { "end": 1175.9599999999998, "start": 1169.12, "text": " is on both GPUs here they distribute a model to two GPUs and I'm also thinking" }, { "end": 1182.56, "start": 1175.9599999999998, "text": " that with frameworks like G shard this could potentially have a revival right" }, { "end": 1187.6799999999998, "start": 1182.56, "text": " here this kind of distributing your models especially within the same layer" }, { "end": 
1194.32, "start": 1187.6799999999998, "text": " across many GPUs and then having cross communication only at some points so" }, { "end": 1199.3999999999999, "start": 1194.32, "text": " their argument is this only has three gigabytes of memory which limits the" }, { "end": 1204.48, "start": 1199.3999999999999, "text": " maximum size of networks can be trained on it turns out that 1.2 train million" }, { "end": 1208.04, "start": 1204.48, "text": " training samples are enough to train networks which are too big to fit on one" }, { "end": 1213.72, "start": 1208.04, "text": " GPU therefore we spread the net across two GPUs current GPUs are particularly" }, { "end": 1218.48, "start": 1213.72, "text": " well suited to cross GPU parallelization as they're able to read from and write" }, { "end": 1223.84, "start": 1218.48, "text": " to one another's memory directly without going through the host machine okay so" }, { "end": 1232.12, "start": 1223.84, "text": " this means that for so sorry here they say the parallelization scheme that we" }, { "end": 1237.48, "start": 1232.12, "text": " employ essentially puts half the kernels or neurons on each GPU with one" }, { "end": 1242.6799999999998, "start": 1237.48, "text": " additional trick the GPUs communicate only in certain layers that means that" }, { "end": 1246.8, "start": 1242.6799999999998, "text": " for example the kernels of layer 3 take input from all kernel maps in layer 2" }, { "end": 1250.84, "start": 1246.8, "text": " however the kernels in layer 4 take input only from the kernel maps in layer" }, { "end": 1256.72, "start": 1250.84, "text": " 3 which reside on the same GPU so very very interesting choice right here and" }, { "end": 1264.6, "start": 1256.72, "text": " they they justify this here or they say the results this scheme reduces our top" }, { "end": 1269.56, "start": 1264.6, "text": " one top five error rates by 1.7 and 1.2 percent respectively as compared with a" }, { "end": 1273.84, "start": 1269.56, "text": " net with half as many kernels in each computational layer in each" }, { "end": 1279.06, "start": 1273.84, "text": " convolutional layer on one GPU the two GPU net takes slightly less time to" }, { "end": 1284.52, "start": 1279.06, "text": " train than the one a GPU net so first of all I have to say big respect right here" }, { "end": 1289.96, "start": 1284.52, "text": " like like I can imagine they did this you know with the relu's and stuff and" }, { "end": 1293.8799999999999, "start": 1289.96, "text": " they were already better than previous because they're so just to go to the" }, { "end": 1301.52, "start": 1293.8799999999999, "text": " results the pre they beat the error rates of previous models by ginormous" }, { "end": 1307.52, "start": 1301.52, "text": " amount so this is what they knew right here this is on the 2010 image net split" }, { "end": 1314.24, "start": 1307.52, "text": " so the previous best ones were like at around 28 25 percent and here their best" }, { "end": 1320.12, "start": 1314.24, "text": " one is at 17 percent top five error rate I'm gonna imagine that they trained it" }, { "end": 1324.8, "start": 1320.12, "text": " first and we're already better than the 25 percent and I guess lots of people" }, { "end": 1328.6399999999999, "start": 1324.8, "text": " would just call it a day would be like oh cool we have this entirely new method" }, { "end": 1332.6, "start": 1328.6399999999999, "text": " not only did we show that we can train it we actually showed that it's better" }, { "end": 1338.1599999999999, 
"start": 1332.6, "text": " and bad a boom I have point one percent better error rate and everything else" }, { "end": 1342.9599999999998, "start": 1338.1599999999999, "text": " can be a separate paper no they stuck with it and they pushed it each so each" }, { "end": 1347.48, "start": 1342.9599999999998, "text": " of these things right here they say oh this reduces the error rate by 1% this" }, { "end": 1354.04, "start": 1347.48, "text": " reduces the error rate by 2% and you know really they they went about it how" }, { "end": 1359.1599999999999, "start": 1354.04, "text": " far can we push this with everything I mean just imagine you come and you train" }, { "end": 1365.28, "start": 1359.16, "text": " a network I'm pretty sure they first trained on one GPU right and and then" }, { "end": 1370.2, "start": 1365.28, "text": " they thought oh you know maybe we can train an even bigger network by using" }, { "end": 1375.6000000000001, "start": 1370.2, "text": " two GPUs and then they realized what it's gonna take like a crap ton amount" }, { "end": 1380.8000000000002, "start": 1375.6000000000001, "text": " of dumb code to cross synchronize and keep them in lockstep and blah blah blah" }, { "end": 1385.2, "start": 1380.8000000000002, "text": " like it's not even easy to write multi GPU code today with all the frameworks" }, { "end": 1391.1200000000001, "start": 1385.2, "text": " just imagine that and for them to having already observed that their network does" }, { "end": 1396.4, "start": 1391.1200000000001, "text": " better than everything that was previously to sit down and do the cross" }, { "end": 1402.64, "start": 1396.4, "text": " GPU thing experiment with okay when do we cross communicate and whatnot that is" }, { "end": 1411.48, "start": 1402.64, "text": " very very respectable right here so maybe a lesson to be learned or or just" }, { "end": 1415.6, "start": 1411.48, "text": " the mentality of the people maybe they just had more time they were like okay" }, { "end": 1420.32, "start": 1415.6, "text": " it's still like two months out this competition deadline I don't know but" }, { "end": 1426.88, "start": 1420.32, "text": " you know I'm this this is not something that I see today very often this this" }, { "end": 1432.2, "start": 1426.88, "text": " kind of persistence and additional pushing and reporting of what works in" }, { "end": 1436.46, "start": 1432.2, "text": " these kinds of things I mean some some papers do it but most papers do it" }, { "end": 1441.1200000000001, "start": 1436.46, "text": " because only with all the tricks they can get that point one percent improvement" }, { "end": 1446.72, "start": 1441.1200000000001, "text": " and this one already had the improvement and did it anyway okay but multi GPU" }, { "end": 1451.2, "start": 1446.72, "text": " training didn't really it's like splitting the models across GPUs didn't" }, { "end": 1457.68, "start": 1451.2, "text": " really didn't really stick around mainly because I guess the GPUs got larger in" }, { "end": 1463.08, "start": 1457.68, "text": " memory pretty quickly so it wasn't that necessary but also I guess because the" }, { "end": 1467.48, "start": 1463.08, "text": " frameworks were just too clunky and now maybe with G-shard this is coming back" }, { "end": 1473.76, "start": 1467.48, "text": " so worth another shot I guess next one local response normalization this also" }, { "end": 1479.1599999999999, "start": 1473.76, "text": " didn't really stick around I cut kind of dumped in favor of things like batch" }, { 
"end": 1484.96, "start": 1479.1599999999999, "text": " normalization but with the resurfacing of things like layer normalization this" }, { "end": 1493.4, "start": 1484.96, "text": " it comes back to this thing here again a little bit so what they say is that what" }, { "end": 1498.56, "start": 1493.4, "text": " they want to do is they want to kind of normalize the response of these of these" }, { "end": 1504.16, "start": 1498.56, "text": " relu's so what they do is each response which is this alpha they are these a" }, { "end": 1511.92, "start": 1504.16, "text": " here is normalized by the following quantity and it's the all the responses" }, { "end": 1517.24, "start": 1511.92, "text": " of the other neurons around them or of the other kernels around them and you can" }, { "end": 1523.3600000000001, "start": 1517.24, "text": " see the sum is over this weird quantity right here so what does it mean if they" }, { "end": 1528.44, "start": 1523.3600000000001, "text": " have a bunch of convolutional filters and these are the activation so these are" }, { "end": 1534.3600000000001, "start": 1528.44, "text": " the feature maps after the convolution and yeah so if I have like 10" }, { "end": 1539.16, "start": 1534.3600000000001, "text": " convolutional filters in my layer this is going to be the output the way they" }, { "end": 1547.92, "start": 1539.16, "text": " normalizes they normalize each filter sorry each output channel by averaging" }, { "end": 1556.88, "start": 1547.92, "text": " by you see here dividing by the average response of the channels around them" }, { "end": 1561.1200000000001, "start": 1556.88, "text": " right so let's maybe say the five channels though two channels in front of" }, { "end": 1565.2, "start": 1561.1200000000001, "text": " them and two channels behind them this is going to be they take the average" }, { "end": 1570.6000000000001, "start": 1565.2, "text": " across this one and then for another channel right here for this one you" }, { "end": 1575.04, "start": 1570.6000000000001, "text": " would take the average of the five around that this isn't really something" }, { "end": 1580.8, "start": 1575.04, "text": " that stuck around I guess mainly because of the really dynamic situation right" }, { "end": 1587.48, "start": 1580.8, "text": " here what people do today is they have things like layer normalization that" }, { "end": 1592.24, "start": 1587.48, "text": " simply averages across all of the channels or they have group normalization" }, { "end": 1598.8, "start": 1592.24, "text": " that pre defines these groups like here is there's two groups and we only" }, { "end": 1603.84, "start": 1598.8, "text": " normalize within this group and within this group also always the same this" }, { "end": 1610.36, "start": 1603.84, "text": " kind of dynamic normalization on across neighboring filters as I said didn't" }, { "end": 1617.72, "start": 1610.36, "text": " really stick around not really sure why but I guess it was just easier to" }, { "end": 1624.84, "start": 1617.72, "text": " implement it otherwise or it just worked better again here they say this this it" }, { "end": 1629, "start": 1624.84, "text": " was motivated well right this scheme bears some resemblance to the local" }, { "end": 1632.92, "start": 1629, "text": " contrast normalization scheme of that but ours would be more correctly termed" }, { "end": 1638.52, "start": 1632.92, "text": " brightness normalization since we do not subtract the mean activity and oh they" }, { "end": 1646.16, "start": 1638.52, 
"text": " make it connection to biological neurons where is it this sort of response" }, { "end": 1650.64, "start": 1646.16, "text": " normalization implements a form of lateral inhibition inspired by type" }, { "end": 1655, "start": 1650.64, "text": " found in real neurons creating competition for big activities amongst" }, { "end": 1661.52, "start": 1655, "text": " neuron outputs computed using different kernels okay so kind of inspired by real" }, { "end": 1666.72, "start": 1661.52, "text": " neurons but also kind of inspired by other people doing also some kind of" }, { "end": 1670.76, "start": 1666.72, "text": " normalization so people already knew that normalization was helpful at some" }, { "end": 1676.16, "start": 1670.76, "text": " times and this is what they employed right here again reducing the top error" }, { "end": 1683.44, "start": 1676.16, "text": " rates by 1.4 and 1.2 percent respectively so not a big improvement but still an" }, { "end": 1688, "start": 1683.44, "text": " improvement the last thing overlapping pooling again a thing that didn't really" }, { "end": 1694.32, "start": 1688, "text": " stick around that much where they say okay instead of having a pooling layer" }, { "end": 1701.6799999999998, "start": 1694.32, "text": " so if this is your image and instead of pooling 2x2 in the stride of 2 like" }, { "end": 1706.84, "start": 1701.6799999999998, "text": " we do today and you know pull it down to a smaller image what we can do instead" }, { "end": 1713.76, "start": 1706.84, "text": " is we can pool with overlapping windows so in that case they pool with a 3x3" }, { "end": 1719.48, "start": 1713.76, "text": " window but they do always do stride of 2 so they have like these overlaps right" }, { "end": 1725.56, "start": 1719.48, "text": " here resulting in the same size but then each pixel right here has some sort of" }, { "end": 1731.8, "start": 1725.56, "text": " overlapping information from the pixels around it again they say it reduces the" }, { "end": 1737.96, "start": 1731.8, "text": " top one and top five error rates by 0.4 percent and 0.3 percent maybe this this" }, { "end": 1743.92, "start": 1737.96, "text": " didn't stick around because I'm not sure maybe because people found it doesn't" }, { "end": 1750.24, "start": 1743.92, "text": " work in other problems who knows so the overall architecture as we said is" }, { "end": 1755.64, "start": 1750.24, "text": " described in this picture right here so you have the input image which you can" }, { "end": 1761.96, "start": 1755.64, "text": " see has three channels and they use convolutional filters with a here with a" }, { "end": 1766.1200000000001, "start": 1761.96, "text": " stride of four at the beginning to reduce the size so at the beginning it's" }, { "end": 1776.1599999999999, "start": 1766.12, "text": " 224 by 224 and then it's 48 by sorry it's 55 by 55 that thing here 55 by 55" }, { "end": 1781.2399999999998, "start": 1776.1599999999999, "text": " 48 feature maps you can already see as we said before the feature maps keep" }, { "end": 1787.6, "start": 1781.2399999999998, "text": " increasing while the number of the dimension the resolution of the image" }, { "end": 1794.6799999999998, "start": 1787.6, "text": " keeps decreasing the stride of four convolution here already employed in" }, { "end": 1799.8, "start": 1794.68, "text": " order to down sample the image at the same time as convolving it nowadays a" }, { "end": 1805.3200000000002, "start": 1799.8, "text": " lot of architectures will simply not do 
max pooling at all but always use the" }, { "end": 1811.2, "start": 1805.3200000000002, "text": " kind of strided convolution to down sample image while convolving it what" }, { "end": 1818.76, "start": 1811.2, "text": " you also see here is that they thought that the feature map size should be" }, { "end": 1823.5600000000002, "start": 1818.76, "text": " should also be large at the beginning and then decrease which is a reasonable" }, { "end": 1827.56, "start": 1823.56, "text": " assumption right because if you have higher resolution images you're" }, { "end": 1832.44, "start": 1827.56, "text": " probably going to need higher resolution feature maps this didn't really come" }, { "end": 1838.2, "start": 1832.44, "text": " through until today as you know most architectures today they just go with" }, { "end": 1844.36, "start": 1838.2, "text": " like three by three kernels from the very start and don't really care about" }, { "end": 1851.6399999999999, "start": 1844.36, "text": " you know also downsizing their their filters I don't really know why whether" }, { "end": 1857.2800000000002, "start": 1851.64, "text": " it's just more convenient or less parameters or whether there's really" }, { "end": 1862.68, "start": 1857.2800000000002, "text": " something to having small filters but I just know you know this is something the" }, { "end": 1867.44, "start": 1862.68, "text": " large filters at the beginning is something that didn't didn't hold over" }, { "end": 1875.5600000000002, "start": 1867.44, "text": " time also you can see right here they have multiple dense layers at the end I" }, { "end": 1880.88, "start": 1875.5600000000002, "text": " believe most architectures today simply go with two of those instead of three" }, { "end": 1886.2800000000002, "start": 1880.88, "text": " so one like hidden layer and then one classification layer but it's you know" }, { "end": 1891.2, "start": 1886.2800000000002, "text": " it's it's very close to the architectures today right there hasn't" }, { "end": 1896.6000000000001, "start": 1891.2, "text": " changed that much like the difference between this and the VGG 16 VGG 19" }, { "end": 1899.7600000000002, "start": 1896.6000000000001, "text": " network is just depth and then the difference between those and the" }, { "end": 1904.8400000000001, "start": 1899.7600000000002, "text": " ResNet is just the whatever these skip connections right here and that's where" }, { "end": 1911.24, "start": 1904.84, "text": " we are today so so there hasn't hasn't changed that much honestly they also" }, { "end": 1914.76, "start": 1911.24, "text": " allude to the fact that actually even though it doesn't look like it most" }, { "end": 1919.36, "start": 1914.76, "text": " parameters are here in these dense layers those are most parameters of the" }, { "end": 1924.32, "start": 1919.36, "text": " network this right here a convolution layer is like 1% of the parameters even" }, { "end": 1929.6799999999998, "start": 1924.32, "text": " though it takes up a lot of space in the in the drawing so maybe the reduction in" }, { "end": 1933.8799999999999, "start": 1929.6799999999998, "text": " the number of classification layers at the end also has something to do with" }, { "end": 1938.68, "start": 1933.88, "text": " the fact that that's where most parameters are so if you get rid of one" }, { "end": 1944.7600000000002, "start": 1938.68, "text": " of those dense layers you can like get many many more convolutional layers" }, { "end": 1953.88, "start": 1944.7600000000002, "text": " all 
right so the last part here is on reducing overfitting again they didn't" }, { "end": 1959.1200000000001, "start": 1953.88, "text": " really investigate whether or not really their network was overfitting like" }, { "end": 1963.24, "start": 1959.1200000000001, "text": " really establishing the overfitting it was I think maybe they did and maybe it" }, { "end": 1969.16, "start": 1963.24, "text": " was actually overfitting but we now we we don't care about overfitting too much" }, { "end": 1974.24, "start": 1969.16, "text": " anymore maybe because we already use these augmentations naturally but also" }, { "end": 1979.64, "start": 1974.24, "text": " because we built these deep models so we somehow have an idea that they" }, { "end": 1984.36, "start": 1979.64, "text": " generalize naturally I'm not sure whether they actually were only worried" }, { "end": 1988.04, "start": 1984.36, "text": " about it that much because of the history of machine learning or whether" }, { "end": 1995.36, "start": 1988.04, "text": " they actually did see that everything was overfitting constantly okay they say" }, { "end": 1999.56, "start": 1995.36, "text": " our neural network architecture has 60 million parameters although the thousand" }, { "end": 2003.62, "start": 1999.56, "text": " classes make each training example impose 10 bits of constraints on the" }, { "end": 2007.44, "start": 2003.62, "text": " mapping from image to label this turns out to be insufficient to learn many" }, { "end": 2011.2, "start": 2007.44, "text": " parameters without considerable overfitting below we describe two" }, { "end": 2015.24, "start": 2011.2, "text": " primary ways in which we combat overfitting again there's no one today" }, { "end": 2020.76, "start": 2015.24, "text": " no one today makes this argument anymore this oh we have this many parameters and" }, { "end": 2026.52, "start": 2020.76, "text": " there are that many images right we have 60 million parameters we have 1.2 million" }, { "end": 2033.24, "start": 2026.52, "text": " images a thousand classes how you know when how many parameters per sample is" }, { "end": 2040, "start": 2033.24, "text": " that and so on how many bits of constraint we don't care about we're fine" }, { "end": 2047.68, "start": 2040, "text": " with having like a billion times more parameters than training samples we we" }, { "end": 2052.24, "start": 2047.68, "text": " don't worry about it anymore so the first thing they do is data" }, { "end": 2058.48, "start": 2052.24, "text": " augmentation already I mean this was already known again like lots of these" }, { "end": 2063.04, "start": 2058.48, "text": " things here were already known but the combination is just so cool in this" }, { "end": 2071.04, "start": 2063.04, "text": " paper where so first of all again they say the transformed images are generating" }, { "end": 2076.68, "start": 2071.04, "text": " Python code on the CPU while the GPU is training on the previous batch of images" }, { "end": 2080.52, "start": 2076.68, "text": " so these data augmentation schemes are in effect computationally free again" }, { "end": 2086.84, "start": 2080.52, "text": " this code must have been ugly the first form of data augmentation consists of" }, { "end": 2092.2, "start": 2086.84, "text": " generating image translations and horizontal reflections we do this by" }, { "end": 2097.96, "start": 2092.2, "text": " extracting random 224 by 224 patches and their horizontal reflections from the" }, { "end": 2106.2799999999997, "start": 2097.96, "text": " 256 by 
256 images okay so random so this was already this these are the most" }, { "end": 2111.7999999999997, "start": 2106.2799999999997, "text": " valuable data augmentations that still we have today random horizontal flipping" }, { "end": 2116.3199999999997, "start": 2111.7999999999997, "text": " is still used in every pipeline of computer vision except if you want to" }, { "end": 2123.76, "start": 2116.32, "text": " read text I guess and random cropping is still the most powerful data" }, { "end": 2131.6800000000003, "start": 2123.76, "text": " augmentation technique for images today and the it's crazy that this was already" }, { "end": 2137.76, "start": 2131.6800000000003, "text": " discovered and I I don't know whether they say right here how much this" }, { "end": 2142.6400000000003, "start": 2137.76, "text": " particular thing improves I don't think they have a stat on how much this" }, { "end": 2147.2, "start": 2142.64, "text": " improves they just say how much this this next thing improves but I'm going" }, { "end": 2151.7599999999998, "start": 2147.2, "text": " to guess this was one of the vital things for pushing the performance" }, { "end": 2157.44, "start": 2151.7599999999998, "text": " because now we know cropping is very important I guess they thought that they" }, { "end": 2163.56, "start": 2157.44, "text": " they would you know translation was the important part and so they focused on" }, { "end": 2168.96, "start": 2163.56, "text": " generating image translations and to generate an image translation from a" }, { "end": 2175.6, "start": 2168.96, "text": " single image naturally you have to crop it however we we we now focus much more" }, { "end": 2180.52, "start": 2175.6, "text": " on the fact that we crop it and kind of have different sub images of the same" }, { "end": 2184.32, "start": 2180.52, "text": " image especially in you know self-supervised learning and things like" }, { "end": 2189.32, "start": 2184.32, "text": " this we know that cropping is what is like the the power horse of these" }, { "end": 2195.7200000000003, "start": 2189.32, "text": " methods so the fact that they extract random patches right here means that" }, { "end": 2200.12, "start": 2195.72, "text": " their network only operates on these sub patches and then they compensate by a" }, { "end": 2203.68, "start": 2200.12, "text": " test time the networks makes a prediction by extracting five patches" }, { "end": 2207.3999999999996, "start": 2203.68, "text": " the four corner patches and the center patch as well as their horizontal" }, { "end": 2212.04, "start": 2207.3999999999996, "text": " reflections and averaging the prediction made by the networks softmax layer on" }, { "end": 2217.08, "start": 2212.04, "text": " the ten patches I also believe that people don't do this too much nowadays" }, { "end": 2224.08, "start": 2217.08, "text": " they most of the time they simply rescale the test images or something" }, { "end": 2228.16, "start": 2224.08, "text": " like this or a fine-tune at the end on the kind of scale training images there" }, { "end": 2234.44, "start": 2228.16, "text": " are various techniques for doing this but random cropping and horizontal" }, { "end": 2240.08, "start": 2234.44, "text": " flipping already employed right here also color kind of color jittering a" }, { "end": 2245.24, "start": 2240.08, "text": " form of color jittering a very special form altering the intensities of RGB" }, { "end": 2250.7599999999998, "start": 2245.24, "text": " channels in training images specifically we 
perform PCA on the set of RGB pixel" }, { "end": 2254.44, "start": 2250.76, "text": " values throughout the image in a training set to each training image we" }, { "end": 2259.0800000000004, "start": 2254.44, "text": " add multiples of the found principal components with magnitudes proportional" }, { "end": 2263.92, "start": 2259.0800000000004, "text": " to the corresponding eigenvalues times a random variable drawn from a gauss with" }, { "end": 2270.4, "start": 2263.92, "text": " zero mean and standard deviation point one this is I believe this has gone out" }, { "end": 2276.0800000000004, "start": 2270.4, "text": " of fashion so people do color jitter and kind of brightness jitter and so on but" }, { "end": 2283.04, "start": 2276.08, "text": " I don't think they particularly do this kind of PCA based image image" }, { "end": 2288.56, "start": 2283.04, "text": " augmentation right here anymore they say this scheme reduces the top one error" }, { "end": 2298, "start": 2288.56, "text": " rate by over 1% I wonder why why this isn't or maybe because you need these" }, { "end": 2302.08, "start": 2298, "text": " stats over the entire data set and the other things may be working equivalently" }, { "end": 2306.58, "start": 2302.08, "text": " well but you you can simply apply them without knowing kind of your your" }, { "end": 2314.36, "start": 2306.58, "text": " principal components okay next thing dropout dropout has been you know one of" }, { "end": 2319.4, "start": 2314.36, "text": " the things that was very important throughout the early stages of deep" }, { "end": 2324.56, "start": 2319.4, "text": " learning isn't that important anymore now dropout some people still use it but" }, { "end": 2330.7999999999997, "start": 2324.56, "text": " most people I think don't use dropout anymore and it's very interesting to see" }, { "end": 2336.92, "start": 2330.8, "text": " but it definitely was a technique that was used a lot during like from Alex net" }, { "end": 2345.44, "start": 2336.92, "text": " to basically like now or like the last very few years so they say combining the" }, { "end": 2349, "start": 2345.44, "text": " predictions of many different models is a very successful way to reduce test" }, { "end": 2352.6400000000003, "start": 2349, "text": " errors but it appears to be too expensive for big neural networks that" }, { "end": 2357, "start": 2352.6400000000003, "text": " already take several days to train there is however a very efficient version of" }, { "end": 2361.72, "start": 2357, "text": " model combination that only costs about a factor of two during training so" }, { "end": 2365.88, "start": 2361.72, "text": " there's this take this technique called dropout then they explain it to set to" }, { "end": 2371.68, "start": 2365.88, "text": " zero the output of each hidden neuron with probability 0.5 again people" }, { "end": 2379.08, "start": 2371.68, "text": " didn't know about dropout as they do now but they introduced this right here and" }, { "end": 2386.28, "start": 2379.08, "text": " they say it reduces their not sure that they also don't say how they how much" }, { "end": 2390.76, "start": 2386.28, "text": " they by how much this reduces the training error but they say we use drop" }, { "end": 2394.2400000000002, "start": 2390.76, "text": " out in the first two fully connected layers without dropout our network" }, { "end": 2398.8, "start": 2394.2400000000002, "text": " exhibits substantial overfitting dropout roughly doubles the number of iterations" }, { "end": 
2404.7200000000003, "start": 2398.8, "text": " required to converge so okay so they did actually make sure or they did find the" }, { "end": 2411, "start": 2404.7200000000003, "text": " actual evidence of overfitting and saw that dropout reduces that and I wonder" }, { "end": 2416.0800000000004, "start": 2411, "text": " why this doesn't happen nowadays maybe because we have the we have less of" }, { "end": 2420.92, "start": 2416.08, "text": " these fully connected layers but I can't really imagine maybe because we do more" }, { "end": 2425.2, "start": 2420.92, "text": " augmentation I don't I don't know or maybe dropout is still used and I'm just" }, { "end": 2431.88, "start": 2425.2, "text": " I just don't know it and don't see it yeah so here they use momentum to train" }, { "end": 2440.44, "start": 2431.88, "text": " this and they do some qualitative analysis they do some qualitative" }, { "end": 2444, "start": 2440.44, "text": " analysis so first of all they say okay they shatter all of the previous" }, { "end": 2449.56, "start": 2444, "text": " approaches especially also then they build kind of ensemble methods and they" }, { "end": 2454.6, "start": 2449.56, "text": " pre-trained they already do transfer learning they already pre-trained on" }, { "end": 2462.08, "start": 2454.6, "text": " image net 2011 and fine-tune then on the image net 2012 right here the image net" }, { "end": 2468.92, "start": 2462.08, "text": " 2011 and then fine-tuning on the image net 2012 to reduce that error even" }, { "end": 2477.2000000000003, "start": 2468.92, "text": " further like pulling all the tricks all these things are around still very cool" }, { "end": 2482.2400000000002, "start": 2477.2000000000003, "text": " and then they look into what their network learned so they find that there" }, { "end": 2488.84, "start": 2482.2400000000002, "text": " are a number of these kind of filters you see these 11 by 11 filters in the" }, { "end": 2493.32, "start": 2488.84, "text": " first layer where they show okay this really and this was kind of already" }, { "end": 2499.56, "start": 2493.32, "text": " known that these neural networks extract filters like this like color gradients" }, { "end": 2504.6000000000004, "start": 2499.56, "text": " or edge detectors in various forms and directions and it's cool to see that" }, { "end": 2511.7200000000003, "start": 2504.6000000000004, "text": " this one also does so this one here is also a very cool investigation where" }, { "end": 2517.04, "start": 2511.7200000000003, "text": " they look at examples and the red bar the red one is always the correct label" }, { "end": 2522.04, "start": 2517.04, "text": " and the bars are basically what their model says are the top five things and" }, { "end": 2527.6, "start": 2522.04, "text": " it's cool to look at so for example here you have might as the top one but then" }, { "end": 2535.72, "start": 2527.6, "text": " also black widow cockroach tick starfish but the top labels are usually also very" }, { "end": 2541.8, "start": 2535.72, "text": " very good labels you can see here grill and it assigns convertible which you" }, { "end": 2545.68, "start": 2541.8, "text": " know by all means is correct it's just not the class that the annotators" }, { "end": 2552.72, "start": 2545.68, "text": " assigned to this particular image as well as here Dalmatian was the highest" }, { "end": 2557.52, "start": 2552.72, "text": " prediction of the network where the label was actually cherry and this is" }, { "end": 2561.9199999999996, "start": 
2557.52, "text": " this is quite debatable right so you can see that a lot of the mistakes the" }, { "end": 2568.44, "start": 2561.9199999999996, "text": " network does is are are you know forgivable let's say and you can see" }, { "end": 2573.96, "start": 2568.44, "text": " that for when the network doesn't do mistakes the not only the top label is" }, { "end": 2583, "start": 2573.96, "text": " good but a lot of the top five labels are also very very adequate lastly they" }, { "end": 2587.84, "start": 2583, "text": " look at a given training set image which these are the training set images right" }, { "end": 2594.36, "start": 2587.84, "text": " here and they look at the last layers feature vector and the five nearest or" }, { "end": 2598.92, "start": 2594.36, "text": " the nearest neighbors in Euclidean space of the entire training data set and" }, { "end": 2603.92, "start": 2598.92, "text": " here's what you come up with so you can see for the elephant the nearest" }, { "end": 2608.96, "start": 2603.92, "text": " neighbors are all other elephants and regard that they are in different poses" }, { "end": 2613.96, "start": 2608.96, "text": " right they don't always look the same way these elephants also these dogs" }, { "end": 2619.32, "start": 2613.96, "text": " right here so it's pretty cool to see that the network actually learns some" }, { "end": 2625.16, "start": 2619.32, "text": " invariances across the class and puts images with the same label into the same" }, { "end": 2636.64, "start": 2625.16, "text": " area in the embedding space yeah so that's their that's their paper that they" }, { "end": 2642.16, "start": 2636.64, "text": " they already allude to the fact that depth is very important it is notable" }, { "end": 2646.68, "start": 2642.16, "text": " that our networks performance degrades if a single convolutional layer is" }, { "end": 2651.7599999999998, "start": 2646.68, "text": " removed for example removing any of the middle layers results in a loss of about" }, { "end": 2657.32, "start": 2651.76, "text": " 2% for the top one performance of the network so the depth really is important" }, { "end": 2664.28, "start": 2657.32, "text": " for achieving our results and as you know this spurred an area of this burden" }, { "end": 2671, "start": 2664.28, "text": " area of trying to build deeper and deeper networks until Resnets came along" }, { "end": 2676.6800000000003, "start": 2671, "text": " and built ultra deep networks they also say we did not use any unsupervised" }, { "end": 2680.76, "start": 2676.6800000000003, "text": " pre-training even though we expect that it will help especially if we obtain" }, { "end": 2684.76, "start": 2680.76, "text": " enough computational power to significantly increase the size of the" }, { "end": 2688.6000000000004, "start": 2684.76, "text": " network without obtaining a corresponding increase of the amount of" }, { "end": 2693.5600000000004, "start": 2688.6000000000004, "text": " labeled data thus far our results have improved as we have made our network" }, { "end": 2697.1600000000003, "start": 2693.5600000000004, "text": " larger and trained it longer but we still have many orders of magnitude to" }, { "end": 2701.2400000000002, "start": 2697.1600000000003, "text": " go in order to match the infrared temporal pathway of the human visual" }, { "end": 2706.1200000000003, "start": 2701.2400000000002, "text": " system ultimately with ultimately we would like to use very large and deep" }, { "end": 2710.36, "start": 2706.1200000000003, "text": " 
convolutional nets on video sequences where the temporal structure provides" }, { "end": 2715.48, "start": 2710.36, "text": " very helpful information that is missing or far less obvious in static images" }, { "end": 2720.2000000000003, "start": 2715.48, "text": " so already a preview of future research here with the self supervised" }, { "end": 2726.04, "start": 2720.2000000000003, "text": " with the many more layers and so on astounding that this kind of foresight" }, { "end": 2732.1600000000003, "start": 2726.04, "text": " and of course all of this proved to be you know very very adequate predictions" }, { "end": 2738.2000000000003, "start": 2732.1600000000003, "text": " right here and yeah so this was the paper right here the paper that kicked" }, { "end": 2745.2, "start": 2738.2, "text": " off deep learning I enjoy reading kind of these old papers especially looking" }, { "end": 2750.08, "start": 2745.2, "text": " back at what was already known what still is around which turns out to be a" }, { "end": 2756.9199999999996, "start": 2750.08, "text": " lot a lot is still around and the choices that people made back then some" }, { "end": 2763, "start": 2756.9199999999996, "text": " of them defined our modern field so that was it for AlexNet let me know what you" }, { "end": 2768.2, "start": 2763, "text": " think in the comments and I'll see you next time bye" } ]
a6v92P0EbJc
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Neural Architecture Search without Training (Paper Explained)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "nas", "nas-bench", "architecture search", "initialization", "untrained", "cifar10", "imagenet", "neural architecture search", "controller", "rnn", "correlation", "gradient", "jacobian", "linearization" ]
#ai #research #machinelearning Neural Architecture Search is typically very slow and resource-intensive. A meta-controller has to train many hundreds or thousands of different models to find a suitable building plan. This paper proposes to use statistics of the Jacobian around data points to estimate the performance of proposed architectures at initialization. This method does not require training and speeds up NAS by orders of magnitude. OUTLINE: 0:00 - Intro & Overview 0:50 - Neural Architecture Search 4:15 - Controller-based NAS 7:35 - Architecture Search Without Training 9:30 - Linearization Around Datapoints 14:10 - Linearization Statistics 19:00 - NAS-201 Benchmark 20:15 - Experiments 34:15 - Conclusion & Comments Paper: https://arxiv.org/abs/2006.04647 Code: https://github.com/BayesWatch/nas-without-training Abstract: The time and effort involved in hand-designing deep neural networks is immense. This has prompted the development of Neural Architecture Search (NAS) techniques to automate this design. However, NAS algorithms tend to be extremely slow and expensive; they need to train vast numbers of candidate networks to inform the search process. This could be remedied if we could infer a network's trained accuracy from its initial state. In this work, we examine how the linear maps induced by data points correlate for untrained network architectures in the NAS-Bench-201 search space, and motivate how this can be used to give a measure of modelling flexibility which is highly indicative of a network's trained performance. We incorporate this measure into a simple algorithm that allows us to search for powerful networks without any training in a matter of seconds on a single GPU. Code to reproduce our experiments is available at this https URL. Authors: Joseph Mellor, Jack Turner, Amos Storkey, Elliot J. Crowley Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://discord.gg/4H8xxDF BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher Parler: https://parler.com/profile/YannicKilcher LinkedIn: https://www.linkedin.com/in/yannic-kilcher-488534136/ If you want to support me, the best thing to do is to share out the content :) If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this): SubscribeStar (preferred to Patreon): https://www.subscribestar.com/yannickilcher Patreon: https://www.patreon.com/yannickilcher Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2 Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n
Hi there! Today we're looking at Neural Architecture Search Without Training by Joseph Mellor, Jack Turner, Amos Storkey and Elliot J. Crowley. On a high level, this paper performs neural architecture search by looking at the correlation matrices of the Jacobian of the data when you pass it through the network, and it does so at initialization. So you pass the data, look at the Jacobian, and if it's very correlated, then the network is bad; and if it's very uncorrelated, then the network is good. And by simply observing that, they can already achieve a very good score on a neural architecture search benchmark. All right, that was the high level, and maybe a bit too simplified, but that's sort of what's going on. Okay, let's dive in. So what's neural architecture search? Neural architecture search is the discipline where you are given a data set. Let's say here we have a data set, which could be something like CIFAR-10, which is an image data set. And you are given a training procedure, let's say Adam or SGD for 100,000 steps or something like this, with mini batches of size 64. And you're given a loss function L, which here could be the cross entropy between the outputs of the network and the label Y. And your task is now to find a neural network architecture that conforms to these specifications but gives the lowest possible loss, or in this case the highest possible validation accuracy. So this here would be the training, and then you'd have the test accuracy or the validation accuracy. Okay, so you could decide, well, I'm going to go with, you know, first three convolutional layers, each one having a ReLU non-linearity. But you could also say, well, I'm going to build a skip connection from here to here. You could also say that I'm going to downsample by two, you could have maybe a bigger stride, and so on; the kernel size of the convolution you can vary as well. Until now, people have done this by hand, right? In effect, we all use the same 10 to 20 different architectures. So if it's an image problem, we tend to go for a ResNet or a Wide ResNet, or a VGG-style architecture. Someone came up with each of those at some point and discovered that it works well, and we don't really do much exploration; we simply kind of use the same things over and over. And the truth is that there might be much better architectures that we're simply not exploring, right? There might be much better building plans for networks that we don't know of, that might perform a lot better with the same data and the same training. So neural architecture search is the process of automatically searching for these better architectures. Of course, that's a combinatorial problem. But the idea is that, you know, you can actually learn to construct good architectures, and by doing so you can sort of speed up this process that is otherwise manual. And the idea behind it is that there's some regularity in when an architecture is good; there's some high-level pattern that you as a human maybe cannot really grasp, but a machine can figure out which architectures are good and which ones aren't. So there have been a few inventions in this area, but they are mostly costly. That's what they say here: the time and effort involved in hand-designing deep neural networks is immense. This has prompted the development of neural architecture search techniques to automate this design. However, neural architecture search algorithms tend to be extremely slow and expensive; they need to train vast numbers of candidate networks to inform the search process.
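To make the problem concrete, here is a minimal sketch of the NAS objective in Python. The search space, the train_and_validate function and its fake accuracy formula are all hypothetical stand-ins invented for illustration, not anything from the paper; the point is only the shape of the problem, an argmax over architectures with a very expensive inner evaluation.

```python
# A toy sketch of the NAS problem: fixed data, training procedure and
# loss; find the architecture with the best validation accuracy.
# In reality each call to train_and_validate() is a full training run.
from itertools import product

def train_and_validate(n_layers, kernel_size, use_skip):
    # Stub: pretend to train for 100k steps and return val accuracy.
    # The formula below is made up purely so the script runs.
    return 0.5 + 0.05 * n_layers - 0.01 * kernel_size + (0.1 if use_skip else 0.0)

search_space = product([2, 3, 4], [3, 5, 7], [False, True])
best = max(search_space, key=lambda arch: train_and_validate(*arch))
print(best)  # the combinatorial search we would like to automate
```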
So what neural architecture search methods do is they'll have something like a controller. The controller itself, of course, is going to be a neural network. So there'll be this thing that is the controller, and the controller will emit a building plan. So the controller will emit a building plan for this network right here, and then you train the entire thing once through, for the entire 100,000 steps, and then you observe the final validation accuracy, which might be something like 80%. And then you know, okay, this is 80%. So you feed the 80% into your controller, and the controller outputs the next building plan that it thinks will score higher. And then you train the entire thing again, and you maybe observe 70% accuracy. You again feed that in, right, and the controller realizes, oh, I may have done something wrong, let me try something else, and so on. If this looks like reinforcement learning to you, that's because this is reinforcement learning. So the C here, the controller, would be the agent; the percentages here, the accuracies, would be the reward; and the building plans would basically be the actions, though sometimes they're treated as observations, and you need to score the different things. Okay. So the problem, of course, with this is that reinforcement learning requires a lot of data, it requires a lot of steps to converge, because the signal from the reward is just so weak: you simply get one number for your action, and you don't know what you can change to make it better, you simply have to try. So you need a lot of steps, but each step here is mighty slow, because each single step in your reinforcement learning procedure involves training an entire neural network for that many steps. Okay, so all of this is ginormously slow, and it's resource intensive. And that of course blocks a lot of research, because, you know, we started with the plan to automate this part right here, but automating it itself is super expensive.
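Here is a minimal sketch of that controller loop. The search space, the evaluate() stub and the mutate-the-best update rule are all hypothetical stand-ins (a real controller would be, for example, an RNN trained with the accuracy as reward); the sketch only shows why the loop is so expensive, since every single controller step hides a full training run.

```python
# Toy controller-based NAS loop. All names and rules are placeholders.
import random

SEARCH_SPACE = {
    "depth": [2, 4, 8],
    "width": [16, 32, 64],
    "skip": [True, False],
}

def evaluate(plan):
    # Placeholder for "train for 100k steps, return validation
    # accuracy": the expensive part. Here, a fake score.
    return random.random() + 0.1 * plan["depth"]

def propose(history):
    # Stand-in controller: mutate one choice of the best plan so far.
    if not history:
        return {k: random.choice(v) for k, v in SEARCH_SPACE.items()}
    best_plan, _ = max(history, key=lambda t: t[1])
    plan = dict(best_plan)
    key = random.choice(list(SEARCH_SPACE))
    plan[key] = random.choice(SEARCH_SPACE[key])
    return plan

history = []
for step in range(20):        # 20 RL steps = 20 full training runs
    plan = propose(history)
    acc = evaluate(plan)      # ginormously slow in reality
    history.append((plan, acc))
print(max(history, key=lambda t: t[1]))
```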
So they go for a different solution. They say this could be remedied if we could infer a network's trained accuracy from its initial state. Okay, it seems a bit out there, but let's give them the benefit of the doubt. In this work, we examine how the linear maps induced by data points correlate for untrained network architectures in the NAS-Bench-201 search space, and motivate how this can be used to give a measure of modeling flexibility which is highly indicative of a network's trained performance. We incorporate this measure into a simple algorithm that allows us to search for powerful networks without any training, in a matter of seconds on a single GPU. Okay, and they have the code available right here if you want to go and check that out. So let's go ahead and check this out. The claims are pretty big, and the reasoning behind the claims is the following observation. You can already sort of see it in this graphic right here; we'll go over what it means in one second. But what they do is they take different networks in this search space, and the search space in this case is given by this benchmark. So this benchmark basically has a long list of architectures that you could consider. Actually, it's a constructive list, so they don't give you the list itself, but they give you a way to construct architectures. And they took those architectures and they rank them by how well they score on CIFAR-10. So there are very good architectures, which are here, there are good ones, there are mediocre ones, and then the bad ones. Okay, and you can see that the histograms here of whatever they measure look quite different. So the histograms of the good ones all look kind of spiky around zero, and the histograms of the bad ones all sort of look spread out. So this is the measure that they're going to propose: they have some sort of number, some sort of histogram that they produce, and if the histogram is very spiky and close together around zero, then they conclude that this network is good; and if the histogram is very spread out like this, they conclude that the network is bad. Now these histograms, as you might expect, are computed not from the final trained network, but from the initial network. So here they show, at least in this case, that there seems to be a general correlation between the trained accuracy and how this histogram looks. And we're going to explore what they do. So it's essentially pretty easy: they compute the linear map around each data point. So what is that? If you imagine a neural network as a nonlinear function, which I guess you should, because it is one, let's imagine it as a nonlinear function from X to Y. What they'll do is simply look at a given training data point, which could be here, right? This could be the X and this could be the Y. And in fact, let's look at it in the loss landscape, so not even in Y, but in L, in terms of the loss, because we don't necessarily need a single label; this could be for unsupervised learning, this could be for anything. Okay, so it maps a data point to a loss. Now, what we'll do is we'll simply linearize the function around that point, which means we'll just freeze all the nonlinearities in place, and that will give us this linear function right here. Okay, we just observe that this linear function can exist. It's the tangent to the loss landscape, and it's at a particular data point, right? It's in data space, not in weight space.
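As a quick illustration, here is a minimal sketch of what "linearize around a data point" means, assuming a tiny placeholder MLP and random data: the gradient of the scalar output with respect to the input (not the weights) is the slope of the local tangent. Since ReLU networks are piecewise linear, the tangent is exact until an activation flips sign.

```python
# Linearizing a network around a single data point.
import torch

torch.manual_seed(0)
net = torch.nn.Sequential(
    torch.nn.Linear(10, 32), torch.nn.ReLU(),
    torch.nn.Linear(32, 1),
)

x0 = torch.randn(10, requires_grad=True)
y0 = net(x0).squeeze()
(g,) = torch.autograd.grad(y0, x0)   # slope of the tangent at x0

# Tangent: f(x) ~= f(x0) + g . (x - x0) for x near x0.
dx = 1e-3 * torch.randn(10)
linear_pred = y0.detach() + g @ dx
true_val = net(x0.detach() + dx).squeeze()
print(true_val.item(), linear_pred.item())  # nearly identical
```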
Then we look at a different data point. So we look at this data point right here, another data point. What's the linear function around this one? It's sort of like that, and then around this one it's like this. Okay, so this is one function. Now let's look at a different function right here, again loss over X, and we'll look at its linearizations. Okay, for some reason this one looks like this, and if we consider two data points, their linearizations are very similar. Now imagine that these two functions have been produced by the same sort of neural network; just the architecture is a little different, but they have the same number of parameters in the neural network. Which neural network would you prefer? Remember, by training the neural network, you can actually shape this loss function, you can kind of shape it around. So which one would you prefer? I personally would prefer the top one, because the top one already tells me that, hey, you know, I might have 10 parameters here, and this already sort of looks like each of the 10 parameters is doing something. So if I then go into my 10 parameters and I turn this knob right here, then I might, you know, raise this bump, or lower this bump, or do something with it. And the sort of frequency, the curvature, the randomness of the function, the way that it fluctuates, tells me that all of the different parameters must have some sort of effect, right? Because it's quite an expressive function. Whereas if I have the same number of parameters for a function like this, this sort of tells me, well, maybe only one of the weights is actually doing something, maybe only one of the dimensions is doing something. This seems odd, right? That even though I've initialized it randomly, a super regular function like this comes out. So maybe all of these parameters down here don't do anything, or somehow the signal doesn't get through. Now, they don't explicitly say it in these terms, but this is how I make sense of it. What they're saying is that you look at the linearizations of the functions, and you look at the angles right here: so the angle in this case is that, in this case that, and in this case that. So you look at the slope here, and the slope is basically the gradient of these linearized functions. And what you want to do is look at the correlation between those slopes at the different data points. So here we have three angles: one is very small, one is a bit larger, and one is even over 90 degrees. They are not correlated at all, right? They're all very different. However, the angles here are all quite the same, as you can see. So what they propose is the following. Let's take all the data points, or in this case all the data points in a particular mini batch, let's send them through the function, and let's calculate their linearizations. So the linearization is nothing else than: you send them through the network to obtain the f value for the x value, and then you calculate the gradient with respect to the input. Now you have to get used to this a bit, because usually we calculate the gradient with respect to the weights. But now we calculate the gradient with respect to the input, which, if this is a linear function, so if you have f of x equals W times x, then this gradient, del f del x, would just give you the W, the slope of the linear function; and the same holds in the neural network when you linearize it. All right, so we're going to obtain all these linearizations, and that gives us this matrix J right here. And what we can do then is observe the covariance matrix of J, of all these linearizations. The covariance matrix simply tells you how two data points vary with each other, and in fact they don't look at the covariance matrix, but at the correlation matrix, which is simply the scaled covariance matrix. So you have n data points, and this gives you a matrix that's n by n, and a particular entry here, like the entry i, j, would simply state how the angle of data point i correlates with the angle of data point j. Okay, that's the correlation matrix. And now the hypothesis is: if all of these data points are sort of independent, like in our very expressive function here, then these correlations should not be high; in fact, most data points should be rather uncorrelated. However, if the function is sort of degenerate, not very expressive, then all of these angles, all of these linearizations, should be highly correlated.
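Here is a minimal sketch of this computation, again with a toy MLP standing in for a NAS-Bench-201 network: one input gradient per item in the mini batch, stacked into J, then correlated across data points. Summing the outputs before backward is a shortcut that works here because the MLP processes each row of the batch independently.

```python
# Build J (one local linear map per data point) and correlate.
import torch

torch.manual_seed(0)
net = torch.nn.Sequential(
    torch.nn.Linear(64, 128), torch.nn.ReLU(),
    torch.nn.Linear(128, 1),
)

x = torch.randn(32, 64, requires_grad=True)  # mini batch, N = 32
net(x).sum().backward()   # one backward fills all per-sample grads
J = x.grad.flatten(1)     # N x D: one linearization per data point

corr = torch.corrcoef(J)  # N x N correlations between the slopes
print(corr.shape)         # off-diagonals near 0 => expressive net
```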
And that's what you see in this graph right here. This is the histogram of the correlations between local linear maps across all pairs of items in a mini batch of CIFAR-10 training data; each plot is for a single untrained NAS-Bench-201 architecture. So remember, the expressivity is important because we want to train that function, and therefore it's important that every parameter does something; and if it's degenerate, we can't train it well. And that, I find, is the reasoning. They sort of say this, but I might be making the wrong sense of it here; it seems to me like that's what's actually going on. So you can see these are simply the matrix values rolled out and then plotted as a histogram. So what does it mean when a histogram is super spread out like this? It means that there are a lot, and I think down here are the axes, yes, there are a lot of data points that correlate highly or anti-correlate highly with each other, which means that exactly this degeneracy happens. So a correlation that is too high or too negative means that they're very much kind of the same thing. If you have as many parameters as data points, and two data points are correlated by one or negative one, then one parameter can potentially serve both of those data points; you don't need two parameters, and therefore you have a lot of parameters doing nothing. Whereas over here, with the good networks, you can see that this spikes around zero, meaning that the data points, or the linearizations around the data points, are not correlated, and therefore you can sort of shape the function around each data point however you want. And we sort of know that neural networks are so over-expressive that they're actually able to shape the function around the data points without necessarily looking at other data points nearby. That expressivity is what you want, and that expressivity is what this in part measures.
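One way to turn that correlation matrix into the histogram, and into a single number, is sketched below. Note the scalar score here is my own simplification, not the exact formula from the paper; the video only establishes that "spiky around zero is good", so I simply penalize large correlations and anti-correlations between the local linear maps.

```python
# From correlation matrix to histogram and a (simplified) score.
import torch

def naswot_style_score(J):
    # J: (N, D) per-sample input gradients, as in the sketch above.
    corr = torch.corrcoef(J)
    n = corr.shape[0]
    off_diag = corr[~torch.eye(n, dtype=torch.bool)]
    return -off_diag.abs().sum().item()  # higher = less degenerate

J = torch.randn(32, 64)                  # placeholder J matrix
corr = torch.corrcoef(J)
# Histogram of the correlation entries, as in the paper's figure:
hist = torch.histc(corr.flatten(), bins=20, min=-1, max=1)
print(naswot_style_score(J), hist)
```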
Okay, so they have some experiments here where they validate this, for all the architectures in this benchmark. And maybe I should show you what the benchmark looks like. So the benchmark has this particular form: there's this skeleton, and in this skeleton there is this block, and it's always repeated. And your task is basically to determine what this block should be. So this block has an input node A and an output node D and two intermediate nodes, and what you have to do is determine these connections right here. There are six connections, and for each one you have the option of putting different things there. As you can see, you can put a convolution, you can put the identity function, which is a skip connection, or "zeroize", which I think is the zero function, so it basically means no connection; I'm not so sure, honestly. But you could technically put a convolution here and here, right, or different convolutions, or things like this. So there are these 15,625 possible cells; the NAS benchmark contains 15,625 possible architectures that you'll have to search. And they take these architectures and they plot, for each architecture, the validation accuracy after training (the training protocol is standardized, you don't have to care about that) against the score that they measure at the beginning of training. And what you can see is that there is a linear relationship, sort of. From these experiments, what you'll get is this sort of feeling. What they're going to propose is that you should take that score as a measure. And here again also, sort of: there is a clear trend, as you can see, right here, though, as you can see, it sort of spreads out. And the rightmost one is ImageNet, which is the most difficult one, of course, and this is CIFAR-100, which is more difficult than CIFAR-10. So we can see that this sort of relationship at the top doesn't really hold anymore if the task gets difficult. And what I think is happening, and this is kind of an interjection of my own opinion, is that this score that they discover allows them pretty efficiently to see which networks are just degenerate and cannot be trained. Like, if you try to train them, they just perform really poorly, okay? It's probably a very good score for weeding those out. And that would mean if you put a cutoff here somewhere, right, you could just discard a whole lot of this crap, or even here, right, you could just discard a whole lot of this crap, and also now here, just, you know, all of this crap. Whereas here, as you can see, sometimes this score is higher than for these ones, even though those perform better. And again, you could probably discard a lot of the crap, but it's not as distinctive for the well-performing networks, because these here are all not the degenerate kind, right? They don't have some fundamental flaw where the function lacks expressivity from the very start so that you can't train it. And so, you know, it's not a big deal there, and then probably other factors come into play, factors that you can't simply determine with this particular score. But, you know, there is this relationship, you can see that. And they do some ablations on this here. For example: is your score a proxy for the number of parameters? And they say no, the number of parameters works way worse than this particular score, which, you know, is a cool thing. Then: how important are the specific mini batch and the initialization? And they say, look right here, for some architectures we take different mini batches, and you can see that within each of those groups it doesn't vary too much; the mini batch hardly influences the score. This is, I believe, always the same architecture, an architecture that achieves, in this case for example (wow, that's not a straight line) 77% or so. And you can see, if you go for different mini batches, the score varies only minimally. Initialization is a bigger variance-inducing thing, but also here the scores don't vary too much. But it is interesting that different initializations do get you to different scores, because it would directly support my hypothesis that what's going on here is that you sort of measure initial degeneracies, and you can sometimes make up for these initial degeneracies in the architecture with a different initialization. So the different initializations give you differently performing networks. We already know from things like the lottery ticket hypothesis that the initialization can matter to some degree in these types of things.
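A sketch of that ablation: score one architecture under several random initializations and several mini batches and see how much the score moves. The model builder, the data and the score() function (the simplified proxy from the earlier sketch) are all placeholders.

```python
# Robustness of the score to mini batch and initialization.
import torch

def make_net(seed):
    torch.manual_seed(seed)
    return torch.nn.Sequential(
        torch.nn.Linear(64, 128), torch.nn.ReLU(),
        torch.nn.Linear(128, 1),
    )

def score(net, x):
    x = x.clone().requires_grad_(True)
    net(x).sum().backward()
    corr = torch.corrcoef(x.grad.flatten(1))
    n = corr.shape[0]
    return -corr[~torch.eye(n, dtype=torch.bool)].abs().sum().item()

for init_seed in range(3):      # initialization varies the score more
    net = make_net(init_seed)
    scores = [score(net, torch.randn(32, 64)) for _ in range(3)]
    print(init_seed, scores)    # than the choice of mini batch
```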
Now, that being said, they always seem to train to the same final accuracy, but their score varies; so I might be backwards here, or not correct. But in any case, the initialization here matters more, but also you can still see this linear relationship. And this is particularly interesting: this is even the case when you just input white noise. So instead of the data, you measure that score by just inputting noise, which I guess has the same magnitude as the data would have, but it's just noise. And you can still sort of see this linear relationship, which is very interesting. And that, I think, also shows that what you find is a property of the network itself, of the fact that it is initialized and built in such a way that it allows you to train it in a sort of benign manner, that it has no degeneracies. Okay. So in the last experiment, they go here and they say: we evaluated the score on initialized networks in the PyTorch CV library. So they go to this library that has a lot of these networks, but these networks are not the same as in this benchmark; this benchmark is specifically designed to do architecture search. Now, the networks in this library are all designed to perform really well. Some are designed to be quite small, some are designed to be quite fast, and so on, but in general their goal is to perform well, and they have been found by humans to perform well. So they take these networks on CIFAR-10 and they test them. As you can see here, here is the test accuracy again, and here is the score that they give it. And they say that this linear relationship still sort of holds. It doesn't hold super well, but you can still sort of see, if you squint hard, that it goes upward, though you really have to squint hard. Like, what are these things right here? And again, what's the case is that if the score is low, it's not going to be a good network. So what you can do is, if the score is low, you will sort of be able to cut off the worst-performing ones. But really at the top here, it doesn't seem like there is a particular relation between these networks and this initial score, which sort of strengthens my hypothesis that it just kind of weeds out the bad ones. But it's pretty cool, because you can weed out the bad ones without any training, right? You simply forward-prop, backward-prop, and there you have it. So, cool. Now, here is the experiment where they really do this NAS benchmark and compare with other methods. Some of these other methods are designed to do so-called weight sharing, which is basically a technique where you can speed up the algorithm compared to non-weight-sharing; and the non-weight-sharing is the kind we discussed initially, my initial example with the controller and so on, where it takes super long. So here you see the methods and how long each method takes. Now, the best ones, as you can see already, the best ones here are these methods right here: they score somewhere around 93.9 on CIFAR-10, whereas the weight-sharing ones don't perform too well, except this one, which seems to perform quite well. And theirs, in this case, performs worse than that, but it still performs better than a lot of the weight-sharing ones.
So what their point basically is, is that they get a pretty good score, a 91.5 on CIFAR-10, which, you know, is at least not degenerate, it's a good accuracy, and they score that by simply evaluating 10 architectures, right? And as N goes up, as they evaluate more and more architectures, they do get better, but not by much. So they have a discussion here; we'll sort of go through it. We report results, yada yada yada. As for the setup: the non-weight-sharing methods are given a time budget of 12,000 seconds; for our method and the non-weight-sharing methods, accuracies are averaged over 500 runs; for weight-sharing methods, accuracies are reported over three runs, with the exception of GDAS. Our method is able to outperform all the weight-sharing methods while requiring a fraction of the search time, and that you can see in the table. This is the real deal here: they only use 1.7 seconds compared to the 12,000 seconds of the other methods, and you reach almost the same accuracy. Now, to be said, 2% in this particular regime on CIFAR-10 is still a sizable difference, and that's on the same benchmark, right, with the same training schedule and so on; so there's not too much room to tune here, you simply have to find a better architecture. So these things are still sizably ahead of this. And what it appears to me is that these methods here that don't perform well are simply crap; it seems they're simply, I don't know, maybe trying out something, doing something researchy or whatnot. But it seems like if you're well able to weed out the bad architectures, you might be getting to a score like this, and then, if you actually perform a search to find the best one, you might be getting to somewhere like this. And you can see this throughout: on CIFAR-100, they achieve a better score than these things, but a worse score than the non-weight-sharing methods, and on ImageNet the difference is even larger. So again, what I can see here is that theirs is a good method to maybe get you, let's say, 90% of the way you want to go. And what's interesting is that here they say: we also show the effect of sample size. We show the accuracy of the networks chosen by our method for each N, so that's the sample size, and we list the optimal accuracy for sample sizes 10 and 100, and random selection over the whole benchmark. So in this case they have the optimal one, where I guess they just draw 10 samples, train all of them, and take the best one; and you can see that already gets you to the 93. Whereas in their case, sometimes when they add more samples, they get worse: so here they get better, but then they get worse again. So they comment on this right here: we observe that the sample size does not have a large effect on the accuracy of our method, but note that as sample size increases, our method suffers from a small amount of noise, increasing the gap between our score and the optimal result. And of course, the key practical benefit is execution time. So again, they are massively faster than the other methods.
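Their actual search procedure is about this simple: sample N architectures, score each one at initialization without any training, and keep the highest-scoring one. In the sketch below, sample_arch() and score_arch() are hypothetical stand-ins for the NAS-Bench-201 sampler and the Jacobian-correlation score from the earlier sketches.

```python
# Sample-and-score search: no training inside the loop.
import random
import torch

def sample_arch():
    # Stand-in: an "architecture" is just a random hidden width.
    width = random.choice([8, 16, 32, 64, 128])
    return torch.nn.Sequential(
        torch.nn.Linear(64, width), torch.nn.ReLU(),
        torch.nn.Linear(width, 1),
    )

def score_arch(net, x):
    x = x.clone().requires_grad_(True)
    net(x).sum().backward()
    corr = torch.corrcoef(x.grad.flatten(1))
    n = corr.shape[0]
    return -corr[~torch.eye(n, dtype=torch.bool)].abs().sum().item()

batch = torch.randn(32, 64)          # one mini batch, reused for all
candidates = [sample_arch() for _ in range(10)]   # N = 10
best = max(candidates, key=lambda a: score_arch(a, batch))
# Only `best` would then actually be trained.
```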
But to me, it seems you could just think of combining these methods, right? You combine this with this, in that what you want to do is actually actively search for the best ones. But by doing so, if you could pretty quickly weed out the bad ones using this method down here, you might already get a big speed-up. Because again, in comparison to the random ones, what appears to happen is that they get good at finding your 90% architectures, but then they fail to differentiate the top performers from each other, where you'd really have to train the network to find out which one's better. So, yeah. Here they say they visualize the trade-off between search time and accuracy for CIFAR-10 for different NAS algorithms on the NAS benchmark. By removing the need for training, our method is able to find accurate networks in seconds instead of hours. And here you can see the accuracy, and here you can see the time, and all the good ones are either way over here or here, and theirs is almost at zero while being quite close to the accuracy of the other ones. All right, that was this paper. Again, I think this is pretty valuable, especially if you're in a new domain where you might not know what kind of network to build: you might just be able to write a little script that generates networks, run it through this algorithm, and at least get an idea of which ones are certainly not worth considering. And then you can simply select one of the other ones; often it doesn't need to be the best one, and you can then tweak the ones you found a little bit manually, maybe you see some regularity. And yeah, that was my two cents on this paper. I hope you liked it. If you did, consider sharing it out, telling your friends about it, subscribing, liking, and leaving a comment if you agree or disagree. That was it. Bye bye.
[ { "end": 6.5600000000000005, "start": 0, "text": " Hi there! Today we're looking at neural architecture search without training by Joseph Meller, Jack" }, { "end": 12.96, "start": 6.5600000000000005, "text": " Turner, Alma Storky and Elliot J. Crowley. On a high level, this paper performs neural" }, { "end": 22.56, "start": 12.96, "text": " architecture search by looking at the correlation matrices of the Jacobian of the data when" }, { "end": 28.240000000000002, "start": 22.56, "text": " you pass it through the network. And it does so at initialization. So you pass the data," }, { "end": 35.04, "start": 28.24, "text": " look at the Jacobian, and if it's very correlated, then the network is bad. And if it's very" }, { "end": 41.28, "start": 35.04, "text": " uncorrelated, then the network is good. And by simply observing that, they can already" }, { "end": 47.28, "start": 41.28, "text": " achieve a very good score on a neural architecture search benchmark. All right, that was a high" }, { "end": 53.12, "start": 47.28, "text": " level and maybe a bit too simplified. But that's sort of what's going on. Okay, let's dive in." }, { "end": 58.72, "start": 53.12, "text": " So what's neural architecture search? Neural architecture search is the discipline of you" }, { "end": 65.28, "start": 58.72, "text": " are given a data set. Let's say here we have a data set, which could be something like CIFAR-10," }, { "end": 74.96, "start": 65.84, "text": " which is an image data set. And you are given a sort of training procedure, let's say, ADM or SGD" }, { "end": 83.83999999999999, "start": 74.96, "text": " for 100,000 steps or something like this with many batches of size 64. Okay, and you're given a loss" }, { "end": 91.44, "start": 83.83999999999999, "text": " function, which the loss function here could be the cross entropy between the outputs of the network," }, { "end": 99.83999999999999, "start": 91.44, "text": " which we'll call L and the label Y. And your task is now to find a neural network architecture" }, { "end": 106.72, "start": 99.84, "text": " that conforms to these specifications, but gives the lowest possible loss or the highest possible" }, { "end": 112.96000000000001, "start": 106.72, "text": " validation accuracy in this case. So this here would be like the train and then you'd have the" }, { "end": 118.88, "start": 112.96000000000001, "text": " test accuracy or the validation accuracy. Okay, so you could decide, well, I'm going to go with," }, { "end": 124.96000000000001, "start": 118.88, "text": " you know, first, like three convolutional layers, each one having like a ReLU non-linearity." }, { "end": 129.35999999999999, "start": 124.96, "text": " But you could also say, well, I'm going to build like a skip connection from here to here." }, { "end": 136, "start": 129.84, "text": " You could also say that I'm going to down sample by two, you could have maybe a bigger stride and" }, { "end": 142.56, "start": 136, "text": " so on. So the kernel size of the convolution, you can vary until now, people have done this by hand," }, { "end": 149.84, "start": 142.56, "text": " right? In effect, we all use like the same 10 to 20 different architectures. So if it's an image" }, { "end": 155.92000000000002, "start": 149.84, "text": " problem, we tend to go for like a ResNet or a wide ResNet, or like a VGG style architecture." }, { "end": 163.2, "start": 157.20000000000002, "text": " Someone has come up with those at some point with each of those, discovered that it works well." 
}, { "end": 169.52, "start": 163.2, "text": " And we don't really do much exploration, we simply kind of use the same things over and over." }, { "end": 177.36, "start": 170.48000000000002, "text": " And the truth is that there might be much better architectures that we're simply not exploring," }, { "end": 182.88000000000002, "start": 177.36, "text": " right? There might be much better building plans for networks that we don't know of that might" }, { "end": 189.04000000000002, "start": 182.88000000000002, "text": " perform a lot better with the same data and the same training. So neural architecture search is" }, { "end": 193.68, "start": 189.04000000000002, "text": " the process of automatically searching for these better architectures. Of course, that's a" }, { "end": 202.72000000000003, "start": 193.68, "text": " combinatorial problem. But the idea is that, you know, you can actually learn to construct good" }, { "end": 208.64, "start": 202.72, "text": " architectures. And by doing so, you can, you can sort of speed up this process that is manual" }, { "end": 214.48, "start": 208.64, "text": " otherwise. And the idea behind it is there's some regularity of when an architecture is good," }, { "end": 219.84, "start": 214.48, "text": " there's some like high level pattern that you as a human maybe cannot really grasp, but like a" }, { "end": 225.68, "start": 219.84, "text": " machine can figure out which architectures are good and which ones aren't. So there have been a few" }, { "end": 233.20000000000002, "start": 225.68, "text": " inventions in this area, but they are mostly costly. That's what they say here. The time and" }, { "end": 238.08, "start": 233.20000000000002, "text": " effort involved in hand designing deep neural networks is immense. This has prompted the" }, { "end": 244.08, "start": 238.08, "text": " development of neural architecture search techniques to automate this design. However," }, { "end": 250.48000000000002, "start": 244.08, "text": " neural architecture search algorithms tend to be extremely slow and expensive. They need to train" }, { "end": 256.96, "start": 250.48, "text": " vast numbers of candidate networks to inform the search process. So what neural architecture" }, { "end": 262, "start": 256.96, "text": " search methods do is what they'll have is they'll have something like a controller," }, { "end": 267.76, "start": 262, "text": " the controller itself, of course, is going to be a neural network. So there'll be this thing that" }, { "end": 274.71999999999997, "start": 267.76, "text": " will be the controller, and the controller will emit like a building plan. So the controller will" }, { "end": 280.96000000000004, "start": 274.72, "text": " emit like a building plan for this network right here. And then you train the entire thing once" }, { "end": 286.64000000000004, "start": 280.96000000000004, "text": " through for the entire 100,000 steps. And then you observe the final validation accuracy, which" }, { "end": 293.92, "start": 286.64000000000004, "text": " might be something like 80%. And then you know, okay, this is 80%. So you feed the 80% into your" }, { "end": 300.08000000000004, "start": 293.92, "text": " controller and the controller outputs the next building plan that it thinks will score higher." 
}, { "end": 306.71999999999997, "start": 300.08, "text": " And then you train the entire thing again, and you maybe observe 70% accuracy, you again feed" }, { "end": 311.68, "start": 306.71999999999997, "text": " that in, right, and the controller realizes, oh, I may have done something wrong, let me try" }, { "end": 317.2, "start": 311.68, "text": " something else. And does again, if this looks like reinforcement learning to you, that's because" }, { "end": 324.15999999999997, "start": 317.2, "text": " this is reinforcement learning. So the real the, the C here, the controller would be the agent," }, { "end": 325.44, "start": 324.15999999999997, "text": " the percentages here, the accuracies would be the real" }, { "end": 333.2, "start": 325.44, "text": " reward. And the environment, the observations would be basically, this thing here, this thing" }, { "end": 337.6, "start": 333.2, "text": " would be the actions, but sometimes it's the observations and you need to score the different" }, { "end": 345.6, "start": 337.6, "text": " things. Okay. So the problem, of course, with this is that the reinforcement learning requires a lot" }, { "end": 351.6, "start": 345.6, "text": " of data, it requires a lot of steps to converge, because the signal from the reward is just so" }, { "end": 358.40000000000003, "start": 351.6, "text": " weak, you simply get one number for your action. And you don't know what you can change to make it" }, { "end": 364.48, "start": 358.40000000000003, "text": " better, you simply have to try. So you need a lot of steps, but this thing here is mighty slow," }, { "end": 371.28000000000003, "start": 364.48, "text": " because each each single step in your reinforcement learning procedure involves training an entire" }, { "end": 378.72, "start": 371.28000000000003, "text": " neural network for like this many steps. Okay, so all of this is ginormously slow, and it's" }, { "end": 386.40000000000003, "start": 378.72, "text": " resource intensive. And that of course, blocks a lot of research, because, you know, we started" }, { "end": 391.84000000000003, "start": 386.40000000000003, "text": " with the plan to automate this part right here, but automating it itself is super expensive." }, { "end": 401.68, "start": 392.8, "text": " So they go for a different solution. They say this could be remedied if we could infer at net," }, { "end": 409.68, "start": 401.68, "text": " sorry, if we could infer a network's trained accuracy from its initial state. Okay, it seems" }, { "end": 416.88, "start": 409.68, "text": " a bit out there, but let's let's give them benefit of the doubt. In this work, we examine how the" }, { "end": 422.8, "start": 416.88, "text": " linear maps induced by data points correlate for untrained network architectures in the NAS bench" }, { "end": 430.32, "start": 422.8, "text": " 201 search space, and motivate how this can be used to give a measure of the accuracy of the" }, { "end": 436.71999999999997, "start": 430.32, "text": " network. So we use this measure to give a measure of modeling flexibility, which is highly indicative" }, { "end": 443.84, "start": 436.71999999999997, "text": " of a network's trained performance. We incorporate this measure into a simple algorithm that allows" }, { "end": 450.64, "start": 443.84, "text": " us to search for powerful networks without any training in a matter of seconds on a single GPU." }, { "end": 456.56, "start": 450.64, "text": " Okay, and they have the code available right here if you want to go and check that out. 
So let's go" }, { "end": 463.36, "start": 456.56, "text": " ahead and check that out. The claims are pretty big. And the reasoning behind the claims is the" }, { "end": 470.08, "start": 463.36, "text": " following observation. You can already sort of see in this graphic right here, we'll go over what it" }, { "end": 476.88, "start": 470.08, "text": " means in one second. But what they do is they take different networks in this search space. And the" }, { "end": 483.6, "start": 476.88, "text": " search space in this case is given by this benchmark. So this benchmark basically has a long" }, { "end": 490, "start": 483.6, "text": " architectures that you could consider. Actually, so it's a constructive list. So they don't actually" }, { "end": 497.12, "start": 490, "text": " give you the list, but they give you like a way to construct architectures. And they took those" }, { "end": 502.8, "start": 497.12, "text": " architectures and they rank them by how well they score on CIFAR-10. So there are very good" }, { "end": 508.48, "start": 502.8, "text": " architectures, which are here, there are good ones, there are mediocre ones, and then the bad ones." }, { "end": 514.8000000000001, "start": 508.48, "text": " Okay, and you can see that the histograms here of whatever they measure, they look quite different." }, { "end": 520.72, "start": 514.8000000000001, "text": " So the histograms with the good ones, they all look kind of spiky around zero. And the histograms" }, { "end": 526.5600000000001, "start": 520.72, "text": " of the bad ones all sort of look spread out. So this is the measure that they're going to propose" }, { "end": 532.64, "start": 526.5600000000001, "text": " is they have some sort of number, some sort of histogram that they produce. And if the histogram" }, { "end": 539.76, "start": 532.64, "text": " is very spiky and close together around zero, then they conclude that this network is good." }, { "end": 546.3199999999999, "start": 539.76, "text": " And if the histogram is very spread out like this, they conclude that the network is bad. Now these" }, { "end": 554.3199999999999, "start": 546.3199999999999, "text": " histograms, as you might expect, they are computed not from the final trained network, but they are" }, { "end": 562, "start": 554.3199999999999, "text": " computed from the initial network. So here they show at least, you know, in this case, it seems" }, { "end": 568.96, "start": 562, "text": " to be that there is a general correlation between the trained accuracy and how this histogram looks." }, { "end": 571.68, "start": 569.68, "text": " And we're going to explore what they do." }, { "end": 581.76, "start": 574.48, "text": " So it's essentially, it's pretty easy. They compute the linear map around each data point." }, { "end": 588.64, "start": 581.76, "text": " So what is that? If you imagine a neural network as a nonlinear function, which I guess you should," }, { "end": 597.52, "start": 588.64, "text": " because it is. And so let's imagine it as like a nonlinear function from X to Y. What they'll do" }, { "end": 603.6, "start": 597.52, "text": " is simply they'll look at a given date training data point, which could be here, right? This could" }, { "end": 611.84, "start": 603.6, "text": " be the X and this could be the Y. And in fact, let's look at it in loss landscape, not even in Y," }, { "end": 617.28, "start": 611.84, "text": " but in L in terms of the loss, because we don't need necessarily a single label. 
This could be" }, { "end": 623.76, "start": 617.28, "text": " for unsupervised, this could be for anything. Okay, so it maps a data point to a loss. Now," }, { "end": 629.68, "start": 624.72, "text": " what we'll do is we'll simply linearize the function around that point, which means we'll" }, { "end": 635.28, "start": 629.68, "text": " just freeze all the nonlinearities in place. And that will give us this linear function right here." }, { "end": 643.4399999999999, "start": 636.0799999999999, "text": " Okay, we just observe that this linear function can exist. It's the tangent to the loss landscape." }, { "end": 649.12, "start": 643.44, "text": " And it's at a particular data point, right? It's in data space, not in weight space. Then we look" }, { "end": 654.32, "start": 649.12, "text": " at a different data point. So we look at this data point right here, another data point. What's the" }, { "end": 662, "start": 654.32, "text": " linear function around this one is sort of like, whoops, D is like that. And then around this one" }, { "end": 669.44, "start": 662, "text": " is like this. Okay, so this is one function. Now let's look at a different function right here. So" }, { "end": 680, "start": 669.44, "text": " L, X, and we'll look at this function, the linear function. Okay, so for some reason," }, { "end": 692.08, "start": 681.12, "text": " this is like this. And if we consider two data points, their linearization is very similar." }, { "end": 699.2800000000001, "start": 692.6400000000001, "text": " Now imagine that these two have been produced by the same sort of neural networks. It's just" }, { "end": 705.36, "start": 699.28, "text": " the architecture is a little different. But they have the same number of parameters in the neural" }, { "end": 712, "start": 705.36, "text": " network. Which neural network would you prefer? Remember, by training the neural network, you can" }, { "end": 718.0799999999999, "start": 712, "text": " actually shape this loss function. You can kind of shape that around. So which one would you prefer?" }, { "end": 725.68, "start": 719.04, "text": " I personally would prefer the top one, because the top one already tells me that, hey, you know," }, { "end": 730.7199999999999, "start": 725.68, "text": " I might have 10 parameters here. And this already sort of looks like each of the 10 parameters is" }, { "end": 736.2399999999999, "start": 730.7199999999999, "text": " doing something. So if I then go into my 10 parameters, and I, you know, turn this knob" }, { "end": 742.3199999999999, "start": 736.2399999999999, "text": " right here, then I might, you know, up this bump, or down this bump, or do something with it. But" }, { "end": 749.92, "start": 742.3199999999999, "text": " the sort of frequency, curvature, the randomness of the function, the way that it fluctuates tells" }, { "end": 756.0799999999999, "start": 749.92, "text": " me that all of the different parameters must have some sort of effect, right? Because it's quite an" }, { "end": 761.5999999999999, "start": 756.0799999999999, "text": " expressive function. Whereas if I have the same number of parameters for a function like this," }, { "end": 768.3199999999999, "start": 761.5999999999999, "text": " this sort of tells me, well, maybe only one of the when the only one of the weights is actually" }, { "end": 774.16, "start": 768.3199999999999, "text": " doing something, maybe only one of the dimensions is doing something. This seems odd, right? 
That" }, { "end": 780.3199999999999, "start": 774.16, "text": " even though I've initialized it randomly, a super regular function like this comes out. So maybe all" }, { "end": 787.28, "start": 780.3199999999999, "text": " of the all of these parameters down here, they don't do anything. Or this, so somehow the signal" }, { "end": 794.24, "start": 787.28, "text": " doesn't get through. So that's, I, they don't explicitly say it in these terms. But this is" }, { "end": 802, "start": 794.24, "text": " how I make sense of this. What they're saying is that if you look at the linearizations of the" }, { "end": 809.68, "start": 802, "text": " functions, and you look at the the angle right here, so the angle in this case is that and in" }, { "end": 816.8, "start": 809.68, "text": " this case is that and in this case is that. So you look at the slope here. And the slope is basically" }, { "end": 823.28, "start": 816.8, "text": " the gradient of these linearized functions. And what you want to do is you want to look at the" }, { "end": 828.64, "start": 823.28, "text": " correlation between those of the different data points. So here we have three angles. One is" }, { "end": 839.28, "start": 828.64, "text": " very short, one is very bit longer, like this, and or no, even like this, and one is even over" }, { "end": 845.92, "start": 839.84, "text": " 90 degrees like that. They are not correlated at all, right? They're all very different. However," }, { "end": 854.48, "start": 845.92, "text": " the angles here, they're all quite the same, as you can see. So what they propose is the following." }, { "end": 860.64, "start": 854.48, "text": " Let's send all the data points, or in that case, all the data points in a particular mini batch," }, { "end": 867.36, "start": 860.64, "text": " let's send them through the function, and let's calculate their linearizations. So the linearization" }, { "end": 873.12, "start": 867.36, "text": " is nothing else than you send them through the network to obtain the f value for the x value," }, { "end": 878.72, "start": 873.12, "text": " and then you calculate the gradient with respect to the input. Now you have to get used to this a" }, { "end": 884.4, "start": 878.72, "text": " bit, because usually we calculate the gradient with respect to the weight. So you calculate the" }, { "end": 889.92, "start": 884.4, "text": " gradient, but now we calculate the gradient with respect to the input, which if this is a linear" }, { "end": 898.9599999999999, "start": 889.92, "text": " function, so if you have a f of x equals wx, like a linear function, then this gradient, del f del" }, { "end": 906.24, "start": 898.9599999999999, "text": " x, would just give you the w, will give you the slope of the linear function, and the same in the" }, { "end": 912.56, "start": 906.24, "text": " neural network when you linearize it. All right, so we're going to obtain all these linearizations," }, { "end": 920.64, "start": 912.56, "text": " and that gives us this matrix J right here. And what we can do is we can then observe the" }, { "end": 929.8399999999999, "start": 920.64, "text": " covariance matrix of J, of all these linearizations. The covariance matrix simply tells you how two data" }, { "end": 935.28, "start": 929.8399999999999, "text": " points vary with each other, and in fact they don't look at the covariance matrix, but they look at the" }, { "end": 941.5999999999999, "start": 935.28, "text": " correlation matrix, which is simply the scaled covariance matrix. 
So one entry in this covariance" }, { "end": 949.28, "start": 941.6, "text": " matrix, so you have n data points, and this gives you a matrix that's n by n, and a particular entry" }, { "end": 957.28, "start": 949.28, "text": " here, like the entry i, j, would simply state how does the angle of data point i correlate with the" }, { "end": 970.1600000000001, "start": 957.28, "text": " angle of data point j. Okay, that's the covariance matrix. And now the hypothesis is, if all of these" }, { "end": 976.88, "start": 970.16, "text": " data points are sort of independent, like in our very expressive function here, then these correlations," }, { "end": 983.1999999999999, "start": 976.88, "text": " they should not be high. In fact most data points should be rather uncorrelated. However, in this" }, { "end": 990.56, "start": 983.1999999999999, "text": " case right here, if the function is sort of kind of degenerative or something, not very expressive," }, { "end": 996.48, "start": 990.56, "text": " then all of these angles, all of these linearizations should be highly correlated." }, { "end": 1005.04, "start": 996.48, "text": " And that's what you see in this graph right here. This right here now is this correlation histogram" }, { "end": 1012.08, "start": 1005.84, "text": " of the correlations between local linear maps across all pairs of items in a mini batch of C410" }, { "end": 1019.36, "start": 1012.08, "text": " training data. Each policy is scrammed for a single untrained NASBench 201 architecture. So remember" }, { "end": 1024.8, "start": 1019.36, "text": " the expressivity is important because we want to train that function, and therefore it's important" }, { "end": 1030.32, "start": 1024.8, "text": " that every parameter does something. And if it's degenerate, we can't train it well. And that's," }, { "end": 1039.68, "start": 1030.32, "text": " I find that's the reasoning. They sort of say this, but I might make the wrong sense out of it here," }, { "end": 1045.04, "start": 1039.68, "text": " but it seems to me like that's what's actually going on. So you can see this is simply these" }, { "end": 1050.3999999999999, "start": 1045.04, "text": " matrix values rolled out and then plotted as a histogram. So what does it mean when an histogram" }, { "end": 1055.92, "start": 1050.4, "text": " is like super spread out like this? It means that there are a lot, and I think down here are axes," }, { "end": 1062.48, "start": 1055.92, "text": " yes, there are a lot of data points that correlate highly or anti-correlate highly with each other." }, { "end": 1071.0400000000002, "start": 1063.1200000000001, "text": " Okay, which means that exactly this degeneracy happens. So either too high or too negative high" }, { "end": 1077.2, "start": 1071.0400000000002, "text": " correlation means that they're very much, they're kind of the same thing. So there is, if you have" }, { "end": 1084.8, "start": 1077.2, "text": " as many parameters as data points, that means that one parameter can potentially serve these two data" }, { "end": 1090.48, "start": 1084.8, "text": " points or these two that are correlated by one or negative one. You don't need both parameters and" }, { "end": 1095.8400000000001, "start": 1090.48, "text": " therefore you have a lot of parameters doing nothing. 
Whereas over here with the good networks," }, { "end": 1102.72, "start": 1095.8400000000001, "text": " you can see that this spikes around zero, meaning that the data points are not correlated" }, { "end": 1110.8, "start": 1102.72, "text": " or the linearizations around the data points are not correlated. And therefore you can sort of shape" }, { "end": 1117.52, "start": 1110.8, "text": " the function around each data point however you want. Which we sort of know that neural networks," }, { "end": 1122.96, "start": 1117.52, "text": " what they do is they're so over expressive that they're actually able to shape the functions" }, { "end": 1129.68, "start": 1122.96, "text": " around the data points without necessarily looking at other data points nearby. And that" }, { "end": 1138.5600000000002, "start": 1129.68, "text": " expressivity is what you want and that expressivity is what this in part measures. Okay, so" }, { "end": 1144.48, "start": 1138.5600000000002, "text": " they have some experiments here where they validate this. So for all these architectures" }, { "end": 1148.8, "start": 1144.48, "text": " in this benchmark, and maybe I should show you what the benchmark looks like." }, { "end": 1154.96, "start": 1148.8, "text": " So the benchmark has this particular form: there's this skeleton," }, { "end": 1159.76, "start": 1154.96, "text": " and in this skeleton there is this block and it's always repeated. And you're basically," }, { "end": 1165.6000000000001, "start": 1159.76, "text": " your task is to determine what this block should be. So this block has an input node A and an output" }, { "end": 1170.72, "start": 1165.6000000000001, "text": " node D and two intermediate nodes. And what you have to do is basically you have to determine" }, { "end": 1177.3600000000001, "start": 1170.72, "text": " these connections right here. So there are six connections and for each one you have the option" }, { "end": 1182.24, "start": 1177.3600000000001, "text": " of putting different things there. Like you can see you can put a convolution, you can put the" }, { "end": 1187.68, "start": 1182.24, "text": " identity function, which is a skip connection, zeroize. I don't, maybe that's the zero function," }, { "end": 1194.56, "start": 1187.68, "text": " so it basically means nothing. I'm not so sure, honestly. But you could technically put a" }, { "end": 1201.76, "start": 1194.56, "text": " convolution here and here, right, or different convolutions or things like this. So there are" }, { "end": 1214.72, "start": 1201.76, "text": " these 15,625 possible cells. So the NAS benchmark contains 15,625 possible architectures that you'll" }, { "end": 1223.12, "start": 1214.72, "text": " have to search. And they take these architectures and they plot, for each architecture," }, { "end": 1227.76, "start": 1223.12, "text": " the validation accuracy after training. And the training protocol is standardized, you don't have" }, { "end": 1234, "start": 1227.76, "text": " to care about that. And the score that they measure at the beginning of training. And what you can see" }, { "end": 1242, "start": 1234, "text": " is that there is a linear relationship, sort of, like sort of. From these experiments what you'll" }, { "end": 1249.04, "start": 1242, "text": " get is like this sort of feeling. What they're going to propose is that you should take that score" }, { "end": 1259.12, "start": 1249.04, "text": " as a measure. 
And here again also, sort of, sort of. There is a clear trend, as you can see," }, { "end": 1266.8799999999999, "start": 1259.12, "text": " right here. Though, yeah, though this, as you can see, this sort of spreads out. And the rightmost" }, { "end": 1275.68, "start": 1266.8799999999999, "text": " one is ImageNet, which is the most difficult one, of course. So, and this is CIFAR 100, which is more" }, { "end": 1283.6000000000001, "start": 1275.68, "text": " difficult than CIFAR 10. So we can see that this sort of relationship at the top, it doesn't really" }, { "end": 1288.8, "start": 1283.6000000000001, "text": " hold anymore if the task gets difficult. And this is, so what I think is happening, this is kind of" }, { "end": 1295.3600000000001, "start": 1288.8, "text": " an interjection of my own opinion. What's happening here is that this score that they discover" }, { "end": 1302.96, "start": 1296.4, "text": " allows them pretty efficiently to see which networks are just degenerate and cannot be trained." }, { "end": 1309.92, "start": 1302.96, "text": " Like if you try to train them, they just perform really poorly, okay? That, it's probably a very" }, { "end": 1316.08, "start": 1309.92, "text": " good score for weeding those out. And that would mean if you put a bar here somewhere, right?" }, { "end": 1321.28, "start": 1316.08, "text": " You could just discard a whole lot of this crap, or even here, right? You could just discard a" }, { "end": 1330.24, "start": 1321.28, "text": " whole lot of this crap. And also now here, just, you know, all of this crap. Yeah, whereas here," }, { "end": 1335.92, "start": 1330.24, "text": " as you can see, this score, sometimes it's higher than these ones, even though they perform" }, { "end": 1342.4, "start": 1335.92, "text": " better. And again, you could probably discard a lot of the crap, but it's not as distinctive for" }, { "end": 1348, "start": 1342.4, "text": " the well performing networks, because these here are all not the degenerate version, right? They're" }, { "end": 1353.76, "start": 1348, "text": " not degenerate in the sense that they have some fundamental flaw where the function lacks" }, { "end": 1360.08, "start": 1353.76, "text": " expressivity from the very start, so you can't train it. And so, you know, it's not" }, { "end": 1366.24, "start": 1360.08, "text": " a big deal. And then probably other factors come into play, other factors than you can simply" }, { "end": 1372.6399999999999, "start": 1366.24, "text": " determine with this particular score. But, you know, there is this relationship that's," }, { "end": 1381.36, "start": 1373.6, "text": " you know, you can see that. And they do some ablations on this here. For example, are your" }, { "end": 1387.6799999999998, "start": 1381.36, "text": " scores a proxy for the number of parameters? And they say, no, the number of parameters works way worse" }, { "end": 1394, "start": 1387.68, "text": " than this particular score, which, you know, is a cool thing. Then how important is the specific" }, { "end": 1400.24, "start": 1394, "text": " mini batch and initialization? And they say, look right here, for some architectures," }, { "end": 1406.64, "start": 1400.24, "text": " we do different mini batch sizes. And you can see each of those groups, they don't vary too much" }, { "end": 1412.16, "start": 1406.64, "text": " in how it influences their score. This is, I believe, the same architecture. 
So" }, { "end": 1418, "start": 1412.16, "text": " it's always an architecture that achieves in this case, for example, wow, that's not a straight line," }, { "end": 1426.24, "start": 1419.28, "text": " 77% or so. And you can see if you go for different mini batches, the score varies only minimally." }, { "end": 1436.24, "start": 1427.2, "text": " Initialization is a bigger variance inducing thing. But also here, the scores don't vary too much." }, { "end": 1441.3600000000001, "start": 1436.24, "text": " But it is interesting that the different initialization do get you to different score," }, { "end": 1446.24, "start": 1441.36, "text": " because it would directly support kind of my hypothesis that what's going on here is that" }, { "end": 1454.32, "start": 1446.8, "text": " you sort of measure initial degeneracies. And you can sort of make up for these initial degeneracies" }, { "end": 1458.8, "start": 1454.32, "text": " in the architecture sometimes with sort of a different initialization. So the different" }, { "end": 1464.9599999999998, "start": 1458.8, "text": " initializations give you differently performing networks. We already know this from things like," }, { "end": 1470.8799999999999, "start": 1464.9599999999998, "text": " you know, lottery ticket hypothesis and so on, that the initialization can matter to some degree" }, { "end": 1477.1200000000001, "start": 1470.88, "text": " in these types of things. Now, that being said, they always train to the same, it seems, but their" }, { "end": 1484.88, "start": 1477.1200000000001, "text": " their score varies. So I might be backwards correct here, or not correct. But in any case," }, { "end": 1492, "start": 1484.88, "text": " the initialization here matters more, but also you can still see this linear relationship." }, { "end": 1499.6000000000001, "start": 1492.96, "text": " And this is particularly interesting. This is even the case when you just input white noise. So" }, { "end": 1505.6799999999998, "start": 1499.6, "text": " instead of the data, you measure that score by just inputting noise that I guess has some sort" }, { "end": 1511.28, "start": 1505.6799999999998, "text": " of the same magnitude as the data would have, but it's just noise. And you can still sort of see this" }, { "end": 1517.76, "start": 1511.28, "text": " linear relationship, which is very interesting. And that I think also shows some that you what" }, { "end": 1525.04, "start": 1517.76, "text": " you're fine, what you find is a property of the network itself. And the fact that it is," }, { "end": 1531.68, "start": 1525.04, "text": " it is initialized and built in such a way that it allows you to train it in a very," }, { "end": 1542.72, "start": 1532.56, "text": " in a sort of a benign manner, it has no degeneracies. Okay. So in the last experiment," }, { "end": 1552.24, "start": 1542.72, "text": " they go here and they say, we evaluated the score on initialized networks in the PyTorch CV library." }, { "end": 1557.52, "start": 1552.24, "text": " So they go to this library that has a lot of these networks, but these networks are not the same as" }, { "end": 1561.68, "start": 1557.52, "text": " this benchmark. This benchmark is specifically designed to do architecture search. Now the" }, { "end": 1567.76, "start": 1561.68, "text": " networks in this library, they are all designed to perform really well. Some are designed to be" }, { "end": 1572.72, "start": 1567.76, "text": " quite small, some are designed to be quite fast and so on. 
But in general, their" }, { "end": 1579.04, "start": 1572.72, "text": " goal is to perform well, and they have been sort of found by humans to perform well. So they take" }, { "end": 1586, "start": 1579.04, "text": " now these networks on CIFAR 10 and they test them. So as you can see here, here is the test" }, { "end": 1594.96, "start": 1586, "text": " accuracy again, and here is their score that they give it. And they say, now I can't move this anymore." }, { "end": 1599.52, "start": 1595.6, "text": " Hello. Well, okay." }, { "end": 1606.48, "start": 1599.52, "text": " They say that this linear relationship still sort of holds. It doesn't hold super, super well," }, { "end": 1614.96, "start": 1606.48, "text": " but you can still sort of, if you squint, if you squint hard, you can see that it sort of goes" }, { "end": 1621.76, "start": 1614.96, "text": " upward, though you really have to squint hard. Like what are these things right here? And what," }, { "end": 1628.16, "start": 1621.76, "text": " again, what's the case is that if the score is low, it's not going to be a good network." }, { "end": 1636.16, "start": 1628.16, "text": " So what you can do is that if the score is low, you will sort of be able to cut off the worst" }, { "end": 1643.92, "start": 1636.16, "text": " performing ones. But really at the top here, it doesn't seem like there is a particular relation" }, { "end": 1652.72, "start": 1643.92, "text": " between these networks and this initial score, which sort of strengthens my hypothesis that" }, { "end": 1659.6000000000001, "start": 1652.72, "text": " it just kind of weeds out the bad ones. But it's pretty cool because you can weed out the bad ones" }, { "end": 1665.3600000000001, "start": 1659.6000000000001, "text": " without any training, right? You'd simply forward prop, backward prop. There you have it. So cool." }, { "end": 1672.64, "start": 1666.4, "text": " Now, here is the experiment where they now really do this NAS benchmark and they" }, { "end": 1679.44, "start": 1672.64, "text": " compare with other methods. So some of these other methods are designed to do so-called weight" }, { "end": 1685.44, "start": 1679.44, "text": " sharing, which basically is a technique where you can sort of speed up the algorithm" }, { "end": 1691.52, "start": 1685.44, "text": " as compared to non weight sharing. And the non weight sharing, that's one of these we have discussed" }, { "end": 1697.68, "start": 1691.52, "text": " initially. That was my initial example with the controller and so on where it takes super long." }, { "end": 1706.4, "start": 1697.68, "text": " So here you see the method and how long each method takes. Now the best ones, as you can see" }, { "end": 1715.1200000000001, "start": 1706.4, "text": " already, the best ones here, these methods right here, are the best ones. They score" }, { "end": 1722.64, "start": 1715.1200000000001, "text": " something like a 93.9 or so on CIFAR-10, whereas these weight sharing ones, they don't perform too" }, { "end": 1730.88, "start": 1722.64, "text": " well, except this one seems to perform quite well. And ours, in this case, performs worse than" }, { "end": 1736.64, "start": 1730.88, "text": " that, but it still performs better than a lot of the weight sharing ones. 
So what their point is" }, { "end": 1744.88, "start": 1736.64, "text": " basically is that they get a pretty good score, which is a 91.5 on CIFAR-10, which is, you know," }, { "end": 1753.5200000000002, "start": 1744.88, "text": " at least not degenerate. It's a good accuracy. They score that with simply evaluating" }, { "end": 1762.16, "start": 1753.52, "text": " 10 architectures, right? And as n goes up, as they evaluate more and more architectures," }, { "end": 1769.36, "start": 1763.6, "text": " they do get better, but not much. So they have a discussion here. I'm having trouble moving this." }, { "end": 1776.4, "start": 1771.68, "text": " All right, so we'll sort of go through the discussion. We report results, yada, yada, yada," }, { "end": 1782.32, "start": 1776.4, "text": " yada, yada. As for the setup, the non weight sharing methods are given a time budget of 12,000 seconds." }, { "end": 1787.84, "start": 1782.32, "text": " For our method and the non weight sharing methods, accuracies are averaged over 500" }, { "end": 1794.8, "start": 1787.84, "text": " runs. For weight sharing methods, accuracies are reported over three runs, with the exception of" }, { "end": 1800.8, "start": 1794.8, "text": " GDAS. Our method is able to outperform all the weight sharing methods while requiring a fraction" }, { "end": 1805.36, "start": 1800.8, "text": " of the search time. And that you can see in the table. I mean, this is the real" }, { "end": 1812.9599999999998, "start": 1805.36, "text": " deal here. They only use 1.7 seconds compared to the 12,000 seconds of the other methods. And" }, { "end": 1821.28, "start": 1812.9599999999998, "text": " they reach almost the same accuracy. Now, it has to be said, 2% in this particular regime on CIFAR-10 is still a" }, { "end": 1827.1999999999998, "start": 1821.28, "text": " sizable difference. And that's the same benchmark, right? With the same training" }, { "end": 1832.32, "start": 1827.1999999999998, "text": " schedule and so on. So there's not too much room to tune here. You simply have to find a better" }, { "end": 1842.3999999999999, "start": 1832.32, "text": " architecture. So these things are still sizably ahead of this. And it appears to me that" }, { "end": 1848.32, "start": 1842.3999999999999, "text": " these methods here that don't perform well, they're simply crap. It seems they're simply," }, { "end": 1854, "start": 1849.04, "text": " I don't know, but they might be trying out something or, you know, doing something" }, { "end": 1862.16, "start": 1854, "text": " researchy or whatnot. But it seems like if you're well able to weed out the bad architectures," }, { "end": 1870, "start": 1862.16, "text": " you might be getting to a score like this. And then if you are actually performing a search to" }, { "end": 1876, "start": 1870, "text": " find the best one, then you might be getting to somewhere like this. And you can see this here" }, { "end": 1884, "start": 1876, "text": " throughout. So on CIFAR-100, they achieve a better score than these things, but a worse score than" }, { "end": 1894.8, "start": 1884, "text": " the non-weight sharing method. And in ImageNet, the difference is even larger. So again, what I" }, { "end": 1902.4, "start": 1894.8, "text": " can see here is that theirs is a good method to maybe get you, let's say, 90% of the way you want" }, { "end": 1910.24, "start": 1902.4, "text": " to go. 
And what's interesting is that here they say, we" }, { "end": 1914.64, "start": 1910.24, "text": " also show the effect of sample size. We" }, { "end": 1920.64, "start": 1914.64, "text": " show the accuracy of the networks chosen by our method for each n. So that's the sample size." }, { "end": 1927.76, "start": 1921.76, "text": " We list the optimal accuracy for sample sizes 10 and 100 and random selection over the whole benchmark." }, { "end": 1931.76, "start": 1927.76, "text": " So in this case, they have the optimal one, which I guess they just draw 10 samples and then take" }, { "end": 1940.8799999999999, "start": 1931.76, "text": " the best one. So they train all of them and then take the best one. And you can see that" }, { "end": 1947.76, "start": 1940.8799999999999, "text": " already gets you to the 93. And whereas in their case, sometimes when they add more, they get worse." }, { "end": 1953.36, "start": 1947.76, "text": " So here they get better, but then they get worse again. So they comment on this right here. We" }, { "end": 1958.56, "start": 1953.36, "text": " observe that the sample size does not have a large effect on the accuracy of our method. But note" }, { "end": 1966.08, "start": 1958.56, "text": " that as sample size increases, our method suffers from a small amount of noise, increasing the gap" }, { "end": 1974.08, "start": 1966.08, "text": " between our score and the optimal result. And of course, the key practical benefit is execution" }, { "end": 1981.12, "start": 1974.08, "text": " time. So again, they are massively faster than the other methods. But to me, it seems you could" }, { "end": 1987.04, "start": 1981.12, "text": " just think of combining these methods, right? You combine this with this, in that what you want to do" }, { "end": 1993.28, "start": 1987.04, "text": " is actually actively search for the best ones. But by doing so, you could, if you could pretty" }, { "end": 2001.28, "start": 1993.28, "text": " quickly weed out the bad ones using this method down here, you might already have like a big speed" }, { "end": 2008.08, "start": 2001.28, "text": " up. Because again, in comparison to these random ones, what appears to happen is that they get good" }, { "end": 2014.8799999999999, "start": 2008.08, "text": " at finding, you know, your 90% architecture, but then they fail to differentiate the top" }, { "end": 2024.16, "start": 2014.88, "text": " performers from each other, where you'd really have to train the network to find out," }, { "end": 2030.48, "start": 2024.16, "text": " you know, which one's better. So yeah, here they say they visualize the trade off between" }, { "end": 2035.2, "start": 2030.48, "text": " search time and accuracy for CIFAR-10 for different NAS algorithms on the NAS benchmark. By removing" }, { "end": 2042, "start": 2035.2, "text": " the need for training, our method is able to find accurate networks in seconds instead of hours." }, { "end": 2051.44, "start": 2042, "text": " And here you can see the accuracy and here you can see the time and all the good ones are either" }, { "end": 2060.16, "start": 2051.44, "text": " way over here or here. And theirs is almost at zero while being quite close to the accuracy of" }, { "end": 2066.48, "start": 2060.16, "text": " the other ones. All right, yeah, that was this paper. 
Again, I think this is pretty" }, { "end": 2066.48, "start": 2060.16, "text": " valuable, especially if you're in a new domain where you might not know what kind of" }, { "end": 2071.92, "start": 2066.48, "text": " network to build, you might just be able to write a little script that generates networks, run it" }, { "end": 2077.04, "start": 2071.92, "text": " through this algorithm, and at least you get an idea of which ones are certainly not worth" }, { "end": 2082.8, "start": 2077.04, "text": " considering. And then you can simply select one of the other ones. You know, often it" }, { "end": 2087.44, "start": 2082.8, "text": " doesn't need to be the best one. And you can then tweak the ones you found a little bit manually;" }, { "end": 2093.12, "start": 2087.44, "text": " maybe you see some regularity. And yeah, that was my two cents on this paper. I hope you liked it." }, { "end": 2100.24, "start": 2093.12, "text": " If you did, consider sharing it out and telling your friends about it and subscribing, liking," }, { "end": 2127.2799999999997, "start": 2100.24, "text": " and leave a comment if you agree or disagree. That was it. Bye bye." } ]
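To make the scoring idea from this transcript concrete: compute the input Jacobian of the untrained network for every example in a mini batch, stack the flattened rows into the matrix J, and look at the correlation matrix across examples. Below is a minimal sketch, assuming PyTorch; the scalar score at the end (penalizing strong pairwise correlations) is an illustrative stand-in, and the exact statistic used in the paper differs in its details.

```python
# Sketch: score an untrained network by how correlated its local linear maps
# are across a mini batch. Less correlated is (heuristically) more trainable.
import torch
import torch.nn as nn

def jacobian_score(net: nn.Module, x: torch.Tensor) -> float:
    x = x.clone().requires_grad_(True)
    y = net(x)                              # forward pass on the whole batch
    y.backward(torch.ones_like(y))          # gradient of sum(y) w.r.t. the input
    J = x.grad.reshape(x.shape[0], -1)      # one flattened Jacobian row per example
    C = torch.corrcoef(J)                   # N x N correlations between examples
    off_diag = C - torch.eye(C.shape[0])    # drop the trivial self-correlations
    return -float(off_diag.abs().sum())     # degenerate nets have |C_ij| near 1

# Toy usage: a random candidate architecture scored on random data, no training.
net = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 1))
print(jacobian_score(net, torch.randn(16, 32)))
```

In a search loop you would generate candidate architectures, keep only the highest-scoring ones, and spend training time on those, which is the combination suggested above.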
eyxmSmjmNS0
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
[Classic] Generative Adversarial Networks (Paper Explained)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "gan", "generator", "discriminator", "convolution", "deconvolution", "goodfellow", "bengio", "convolutional neural network", "mnist", "cifar10", "generative", "generative model", "image generation", "face model", "latent space", "interpolation", "minmax", "nash equilibrium", "game theory" ]
#ai #deeplearning #gan GANs are of the main models in modern deep learning. This is the paper that started it all! While the task of image classification was making progress, the task of image generation was still cumbersome and prone to artifacts. The main idea behind GANs is to pit two competing networks against each other, thereby creating a generative model that only ever has implicit access to the data through a second, discriminative, model. The paper combines architecture, experiments, and theoretical analysis beautifully. OUTLINE: 0:00 - Intro & Overview 3:50 - Motivation 8:40 - Minimax Loss Function 13:20 - Intuition Behind the Loss 19:30 - GAN Algorithm 22:05 - Theoretical Analysis 27:00 - Experiments 33:10 - Advantages & Disadvantages 35:00 - Conclusion Paper: https://arxiv.org/abs/1406.2661 Abstract: We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to 1/2 everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples. Authors: Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, Yoshua Bengio Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://discord.gg/4H8xxDF BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher Parler: https://parler.com/profile/YannicKilcher LinkedIn: https://www.linkedin.com/in/yannic-kilcher-488534136/ If you want to support me, the best thing to do is to share out the content :) If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this): SubscribeStar: https://www.subscribestar.com/yannickilcher Patreon: https://www.patreon.com/yannickilcher Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2 Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n
Hi there! Today we'll look at Generative Adversarial Nets by Ian J. Goodfellow et al. So this one is another installment in our series of historical papers that had great impact. GANs nowadays, or Generative Adversarial Nets back then, were sort of... This was the starting shot in a long line of research that is still continuing today. So I remember when I started my PhD in 2015, GANs were just about spiking. I remember NeurIPS, or back then NIPS, in 2016, and every other paper was about GANs. There was also this famous Schmidhuber Goodfellow moment at the tutorial. It was a wild time. And this is the paper that started it all. The paper is quite well written. It's very focused on convincing you that this is a sound method mathematically. That it doesn't just do wild things. And also it has a lot of modern tricks for GANs already built into it. So astounding how much foresight there was already in this paper. But of course, GANs have come a super long way since then. And today we'll just go through the paper and look at how it looked back then and what this paper was like. So yeah, join me in this. If you like it, please share it out. Let me know in the comments what you think of historic paper reviews. This is not going to be like a beginner's tutorial in GANs. This is really going to be... We'll go through the paper. You'll see right here the paper is from 2014. So it would still be another two years or so until GANs really take off from this point on. But the introduction, of course, was really important. Okay, so abstract. Here we go. We propose a new framework for estimating generative models via an adversarial process in which we simultaneously train two models, a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. Okay, this was sort of a new thing. Now, I know, I know people disagree with this being a new thing, but this was a new thing. And specifically, this was the first paper that made something like this really work for data. So to have a discriminator, the words generator and discriminator were also introduced in this paper. So you train this D model, which is the discriminator, and the D model basically decides whether or not a given data point comes from data or comes from the fake distribution. And then you have a generative model G that is supposed to just create this data X rather than coming from the database. So you want to sample a couple of times from the data, and sometimes you sample from this model G, and then the discriminator is supposed to decide whether or not it comes from the data set or from your counterfeiter, like from this generator G. And it's supposed to say whether it's data or fake. So you train the D model as a simple image classifier. So people already knew how to build image classifiers. This was shortly, as you can see, before ResNet came on the scene. So people already kind of knew how to build CNNs, build really good image classifiers. And the thought here was really generative models weren't really a thing until then. So people were into language models, Word2Vec was kind of coming up, but they would still be doing like RNNs using these Word2Vec vectors for generating language. In images, generative models weren't really much of a thing. So you would do like compositional models or you would do autoencoders, which were just either really blurry or really, really artifactory.
And there were also approaches like deep belief networks and so on, but they had their own problems. So there wasn't really a satisfactory way to do image generation that resulted in really high quality images. Now here, I think the entire thought, and this is not really spelled out, but the entire thought here is that, hey, we know how to train really, really good image classifiers. This has been evident ever since AlexNet. So for two years, this was evident how to build really good image classifiers. And the question here is to say that rather than also building really good generators, can't we like harness the power of building really good classifiers for training a generator? And this is this idea right here. This wasn't the case before. As you know, in like an autoencoder, what you do is you'd input a sample into some kind of autoencoder bottleneck thing, whatever. And then at the end, you train your output sample to match the input sample as close as possible. And then in here, after you've trained this, this part here is your generative model. And then here, in here, you'd input like an MCMC sampler or whatnot. And then, of course, variational autoencoders came up and so on. But still, what you always would do is you would somehow use the data directly. So this is data in order to train your model. So you would somehow say, ah, the output here should probably match the input in some way or in some at least distributional way. This was a new thing. As you can see right here, there is no direct connection between the data and the generator. And I think this was the success of this model. The fact that the generator did not, it wasn't trained from the data like you would do if you were just approaching this problem. The philosophy here is let's use the power of discriminative models, which we know how to build, in order to train this generator. So the generator's task now isn't to match any sort of data point. The generator's task is to produce images that the discriminator would classify as data. And you can do that by simply back propagating through the discriminator to the generator. So I think that's the only thing that's kind of unstated in this paper, the reasoning behind why this is new, why this might work. But everything else is spelled out very well in this paper, I have to say, if you read through it. So the training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two player game. So as I said, the paper is very much focused on convincing you that there's something sound happening here, because at that time, if you were to look at this, you would say something like there is no way. You would be like, yeah, right. So I can understand the motivation here to really convince people that, you know, something good is happening also on the theoretical side. In the space, sorry, in the space of arbitrary functions, G and D, a unique solution exists, with G recovering the training data distribution and D equal to one half everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with back propagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. OK, so the point here is that it's much easier than current methods of producing generative models. And also it does something sound. Now, let's jump into the loss function right here.
So they say G and D play the following two player minimax game with value function V. And this is still understood until today. If this was a pure engineering paper, they could simply build the architecture and say, oh, we let these networks fight. And they are kind of adversarial and they pump each other up and so on. And this here was much more in the direction of kind of a theoretical reasoning into why something like this would work. Of course, there is still a lot of engineering going on to actually make it work. So they have this value function right here. OK, and the value function is the following. So what you have is the log probability that the discriminator assigns to the data, and you have the log of one minus D of the generated samples. So here you can see, and this was introduced, this seems also obvious now. Right. But you have a prior, what is called the noise distribution. OK, so you have a prior on your input noise to the generator because the generator is supposed to come up with very many different data points. And if it is a, you know, non-stochastic function like a neural network, then you need some way to produce different images. So there is this prior distribution over the noise. You feed that noise into the generator. The generator will produce an output. You put that into the discriminator and then this right here, as you can see, the discriminator is trying to maximize this objective. So the discriminator is trying to maximize the probability of real data and it is trying to minimize the probability of fake data. OK, this is simply a two-way classification problem. At the same time, the generator, as you can see, is trying to minimize the objective. In fact, the order here is quite important. So the generator, as you can see, is trying to minimize whatever this here is. So the generator sort of is trying to minimize against the best possible discriminator. And so one observation right here is that the formulation is always with respect to a perfect discriminator. Now, we know that this doesn't work because if you have a perfect discriminator, then the generator cannot catch up because you have insufficient gradients and so on. And this was already recognized in this paper as well. But the formulation is with respect to a min max game and not a max min game. So the other point I want to make here is that you can see the discriminator appears in both terms right here. However, the generator only appears right here. And this basically means that the objective for the generator is only this part here because the other part is constant. So the generator is just trying to make the discriminator think that fake data is real. So it is trying to make the discriminator's probability of the fake class as small as possible for the data that it outputs. Well, the discriminator, it's trying to classify fake data as fake and real data as real. Whereas the generator has only this part on the right. This is, I feel, quite important. Why? Because already in this paper, they recognize that this might not be the best practical objective. And for the generator, they can actually exchange this part here on the right to simply say, instead of log one minus D, we simply want to use minus log D as an objective for the generator.
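Written out, the value function being described here is, in the paper's notation,

$$\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\text{data}}(x)}\big[\log D(x)\big] + \mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D(G(z))\big)\big],$$

and the swapped-in generator objective from the last sentence is to maximize $\mathbb{E}_{z \sim p_z(z)}[\log D(G(z))]$ instead of minimizing $\mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))]$.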
So you can kind of play around with this. And as you know, lots of formulations have played around with this loss right here. And yeah, that's why we have like a billion, billion, billion, billion GAN variations. They introduced the reasoning behind this. So there's an intuition right here. And you can see already: in practice, equation one may not provide sufficient gradient for G to learn well. Early in learning, when G is poor, D can reject samples with high confidence because they're clearly different from the training data. In this case, log one minus D saturates. Rather than training G to minimize that, we can train G to maximize log D. This objective function results in the same fixed point for the dynamics, but provides much stronger gradients early in learning. This is in contrast to like other papers that just say, oh, we do this. Here they at least say it provides the same fixed point. Right. Yeah. So again, they're trying to convince you that this is doing something useful and that this is easier. OK, so this strategy is analogous to other things, where training maintains samples from a Markov chain from one learning step to the next in order to avoid burning in a Markov chain as part of the inner loop of learning. OK, this is from another paper. So their point here is that it's analogous to other papers that use these Markov chains where you always do one step in G and one step in D. We alternate between K steps of optimizing D and one step of optimizing G, because you have this inner maximization over D and then the outer minimization over G. So this has already been around, the fact that you kind of have to run these optimizations in lockstep. But the difference here is you don't need any sort of like Markov chain in the inner loop and so on. You simply need back propagation. So here's an illustration of how that might work. So at the beginning here, you have your Z space and this is always sampled uniformly, as you can see right here. This is from a prior distribution and through the mapping. So this here, from Z to X, is G. So this is the mapping G. You can see that the uniform distribution is now mapped to something non-uniform, which results in the green thing. So G is the green line, whereas the black dots are data. And if you have a discriminator, the discriminator is supposed to tell you where there's data and where there's fake data. Now, so green here is fake. Now, this blue line is sort of a half trained discriminator. Now you train D, right? You maximize D, the discriminator, and that gives you this blue line right here. So this is a perfect discriminator for these two data distributions. It tells you it's basically the ratio of green to black at each point. And now you train the generator according to this. And you can see that the gradient of the discriminator is in this direction. OK, so it's like up this hill. And that's why you want to shift your green curve over here according to the gradient of the discriminator. Note that we first trained the discriminator and now in a second step, we optimize the generator. So now we shift this green curve over along the gradient of the blue curve. So it's important the green curve doesn't see the black curve ever. The generator doesn't see the data. The generator simply sees that blue curve and it goes along the gradient of that blue curve of the discriminator.
OK, and then if you do this many, many steps, actually, there are dots right here. You will end up with a discriminator that has no clue what's where. This is one half probability everywhere because the ratio is the same. And you end up with the probability of data equal to the probability of the generated samples. And this can happen if the generator simply remembers the training data. But there are a number of things that counter that. For example, the generator is continuous while the training data is, of course, discrete. So there are these in-between things right here where there is no training data. In fact, to hit the training data exactly is very, very unlikely. But of course, you can still peek at the training data. But also, I think there are two things why the generator doesn't simply remember the training data. First, because it doesn't ever see the training data directly. So it can only see it through the discriminator. And second of all, because it is built as these multilayer neural networks, it doesn't have the power to just remember this, because there is kind of this notion of a continuous function, and these neural networks are rather smooth functions often. And therefore, I think that is something that helps the generator avoid remembering the training data. Of course, there is still this problem of mode collapse that was really big in GANs. So even if it doesn't remember the training data, it might focus on the easiest part of the training data and forget all other parts. And that was a direct result, actually, of this objective. So where was it? So this objective directly led to mode collapse in some form, because it penalizes different errors differently. So of course, people have come up with ways to solve that. OK, now here is the algorithm. As you can see, this was already quite the algorithm we use nowadays. So for K steps, this is the inner maximization. And here they say that we use K equals one. So this is pretty much what we use today. The early days of GANs were still like, how many discriminator steps do I need per generator step, and so on. Nowadays, everyone is just using one step here, one step there, or even training jointly, which works in some cases. So you want to sample a mini batch of noise samples and you will sample a mini batch of M examples from the data generating distribution. So from this data, you want to update the discriminator by ascending its stochastic gradient. And this is simply the gradient of the objective. And then after those K steps, you want to sample another mini batch of noise samples and update the generator by descending its stochastic gradient. And you can see right here already, there is this reduced objective that doesn't include this because it falls away in the gradient. And they say the gradient-based updates can use any standard gradient-based learning rule. We use momentum in our experiments. Very cool. So I believe they already also say that it is somewhere here. It's pretty fun that they say, oh, in our generator, we only input noise at the lowest layer. This is also something that if you think that G here is a multilayer network, so it's kind of a multilayer network that outputs an image. And if you ask yourself, if I have noise, how would I input that into there? It's so clear nowadays that we just put it here. But this was not clear at all. This was kind of an invention of this paper because you could put it pretty much at all layers. You could distribute it and so on.
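As a concrete picture of the algorithm described above, here is a minimal sketch, assuming PyTorch. The layer sizes, the learning rate, and the `data_loader` are made-up placeholders; the non-saturating generator loss from earlier is used, and momentum SGD mirrors what the paper reports.

```python
# Minimal sketch of the alternating GAN updates: k discriminator steps, then
# one generator step. `data_loader` is assumed to yield (m, 784) real batches.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(100, 256), nn.ReLU(), nn.Linear(256, 784), nn.Tanh())
D = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 1), nn.Sigmoid())
opt_d = torch.optim.SGD(D.parameters(), lr=0.01, momentum=0.9)
opt_g = torch.optim.SGD(G.parameters(), lr=0.01, momentum=0.9)
k, eps = 1, 1e-8  # k = 1 discriminator step per generator step, as in the paper

for real in data_loader:
    for _ in range(k):                       # inner loop: ascend D's objective
        z = torch.randn(real.shape[0], 100)  # sample from the noise prior
        fake = G(z).detach()                 # don't backprop into G here
        loss_d = -(torch.log(D(real) + eps).mean()
                   + torch.log(1 - D(fake) + eps).mean())
        opt_d.zero_grad()
        loss_d.backward()
        opt_d.step()

    z = torch.randn(real.shape[0], 100)
    loss_g = -torch.log(D(G(z)) + eps).mean()  # non-saturating: maximize log D(G(z))
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()
```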
You could add some right here. It was this paper that already established the fact that we input noise kind of as a vector at the very beginning and then just let the neural network produce the image from that. So yeah, pretty cool. It's pretty sneaky how many things are hidden in these initial papers, how many decisions that are made there are then just taken over. And this one, I guess, turned out to be fairly good. OK, so here they go for some theoretical analysis. And first they want to convince you that if this all works well, if both parties, this generator and the discriminator, optimize their objective to the optimum, then the generator will have captured the data distribution, so the global optimality of this. And they go about convincing you of that. So the first thing that they convince you of is that if you fix the generator, the optimal discriminator is this. And we've already seen this in this drawing right here. So the optimal discriminator is simply the ratio of the likelihood of data versus the likelihood of the generated data. OK, so you train, you almost train this discriminator in the inner loop. And that's simply the consequence of a pointwise argument. This is true pointwise, therefore it's true over the entire data distribution. In the next thing, they convince you that the global minimum of the virtual training criterion, this is the value function, this min-max game, is achieved if and only if this holds. At that point, the training criterion achieves the value of negative log 4. And this, again, this was already here, the fact that this has a global minimum, and it is achieved when the generator matches the data distribution, which is pretty cool. So in the proof, it's pretty simple, actually. They first say, look, if this is the case, we just simply plug that in, the discriminator will be confused. So if the generator exactly captures the data, the discriminator will have no clue what's going on, right? Because it can't, because they're equal. So it must basically output the probability of one half everywhere, and then your objective becomes a constant negative log 4. Now, if you then plug that into the other equation, you'll see that the training criterion ends up being negative log 4 plus twice the Jensen-Shannon divergence between the data and the generated distribution. And since this term here is always positive, that means that this thing here can never be less than negative log 4. And therefore, the negative log 4 is the optimum. OK, the proof is pretty cool, I have to say, to show that this has the optimum at that place. And the last thing they convince you of is that this algorithm actually converges. And the convergence is simply predicated on the fact that if you look at each of these problems individually, they are convex. So like here is convex in X for every alpha. So each of these are sort of convex problems, and then it will naturally converge to their minimum. However, in practice, adversarial nets represent a limited family of distributions via the function. And we optimize the parameters rather than the distribution itself. Using a multilayer perceptron to define G introduces multiple critical points in parameter space. However, the excellent performance of the multilayer perceptrons in practice suggests that they are a reasonable model to use, despite their lack of theoretical guarantees.
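For reference, the two results walked through above, written out in the paper's notation: for a fixed G, the optimal discriminator is

$$D_G^*(x) = \frac{p_{\text{data}}(x)}{p_{\text{data}}(x) + p_g(x)},$$

and plugging it back into the value function gives

$$C(G) = \max_D V(G, D) = -\log 4 + 2 \cdot \mathrm{JSD}\big(p_{\text{data}} \,\|\, p_g\big) \ge -\log 4,$$

with equality exactly when $p_g = p_{\text{data}}$, at which point $D_G^*$ is $1/2$ everywhere.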
So they say if we could optimize this probability distribution directly, it is a convex problem and we will always converge. But in practice, of course, we only optimize the parameters of an MLP or a CNN. And that doesn't always converge. But we have reasonable hopes that it will converge. OK, so again, it's very much focused on convincing you that this is doing something sensible, of which I hope now you are convinced. So there is a global optimum point. It's when the generator captures the data distribution perfectly. This can be achieved and will be achieved if you can optimize these probability distributions with a reasonable degree of freedom. And the neural networks provide that reasonable degree of freedom and give us good hope that in practice it will work. So they apply this to data sets, namely MNIST, the Toronto Face Database and CIFAR-10. The generator nets used a mixture of rectifier linear activations and sigmoid activations, while the discriminator net used maxout activations. That was still a thing. Dropout was applied in training the discriminator net. While our theoretical framework permits the use of dropout and other noise at intermediate layers of the generator, we used noise as the input to only the bottommost layer of the generator network. Again, this wasn't kind of clear at the beginning. And also the fact that to leave out dropout and so on in the generator was, I guess they found that empirically. And then there was of course no way to evaluate these things. Like how do we evaluate generative models? Nowadays we have these inception distances and so on. But then: we estimate the probability of the test set under the generated data by fitting a Gaussian Parzen window to the samples generated with G and reporting the log likelihood under this distribution. The sigma parameter, yada yada yada. Results are reported. This method of estimating the likelihood has somewhat high variance and does not perform well in high dimensional spaces, but it is the best method available to our knowledge. Advances in generative models that can sample but not estimate likelihood directly motivate further research into how to evaluate such models. They were absolutely right in this. And there was a lot of research into how to evaluate these models. However, it is my opinion that we still have very, very limited methods of evaluating models like this. Like we have better methods, but it's, yeah, it's not really satisfactory how it is right now. So you see that these models, these adversarial nets, by the way, they're always called adversarial nets right here, whereas I think most people would call them adversarial networks. But it's just interesting to see the nets also in the title. Right. It says, I think it says nets, does it? I think it does. We'll look at it after. So they outperform these other models, especially these belief networks, which were kind of popular at the time. And you can see the samples right here were in no way comparable to samples that you get from modern GANs. But this was already very, very, very good, especially the MNIST. And then here you could actually recognize: the ones with the yellow are always from the training data set. They're like the nearest neighbors of the things on the left. So they want to show that it doesn't simply remember the training data, though I'm not so sure. Like this seems like it has somehow remembered the training data a little bit. Also, this one right here.
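Going back to the Parzen-window evaluation quoted above, it is easy to sketch; assuming NumPy/SciPy, and with the bandwidth `sigma` as a placeholder (the paper cross-validated it on a held-out set):

```python
# Gaussian Parzen-window log-likelihood sketch: fit an isotropic Gaussian
# kernel density to generated samples, report mean test log likelihood.
import numpy as np
from scipy.special import logsumexp

def parzen_log_likelihood(samples: np.ndarray, test: np.ndarray, sigma: float) -> float:
    n, d = samples.shape
    # Squared distances between every test point and every kernel center: (m, n)
    sq = ((test[:, None, :] - samples[None, :, :]) ** 2).sum(-1)
    log_kernel = -sq / (2 * sigma ** 2)
    log_norm = np.log(n) + 0.5 * d * np.log(2 * np.pi * sigma ** 2)
    return float((logsumexp(log_kernel, axis=1) - log_norm).mean())

# Toy usage with random numbers standing in for generated samples / test data.
print(parzen_log_likelihood(np.random.randn(500, 2), np.random.randn(100, 2), sigma=0.5))
```

The log-sum-exp over only a finite set of generated samples is exactly why this estimate has high variance and degrades in high dimensions, as the quote says.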
And there was already a way. So this was also very foresighted. So these A to C were fully connected networks, which might be one of the reasons why it worked moderately well. Right. But the last one was a convolutional discriminator and a deconvolutional generator. So already using kind of deconvolutions that are used everywhere today. So they are used in GANs and whatnot, VAEs, to upsample anything. If you want to do pixel-wise classification, you use deconvolutions. So again, this paper sort of introduced a lot of things that we still use in GANs today. Now, I'm sure deconvolutions weren't invented here, but we still use them. So legit, they were the first GAN paper to use deconvolutions. Haha. Yeah. They also say we make no claim that these samples are better than samples generated by existing methods. We believe that these samples are at least competitive with the better generative models in the literature and highlight the potential of the adversarial framework. Today, this paper would be so rejected. Like, wait, you're not better. Get out of here. You can't claim it anymore. No, it doesn't work anymore. I'm sorry. Yours always has to be better than everything else nowadays. Otherwise, it's a weak reject: their experimental evidence doesn't convince me. You can't simply say something's cool. Also already introduced in this paper: digits obtained by linearly interpolating between coordinates in z space of the full model, like this thing here. Every single GAN paper had interpolations like this in the GAN hype. And it all came from here. So every GAN paper then had like rows of these interpolations. I should know, I've written a paper on it. And it was introduced right here. Who knows if they hadn't done this. Yeah, I guess it's kind of an obvious thing. But still, you know, very, very cool to see that this was already done. And here GANs are compared to other methods like deep directed graphical models, generative autoencoders, and compared in very many ways. So this is actually a good reference if you want to learn about these different kinds of models. And they list here their advantages and disadvantages. So disadvantages mainly come with training these things because you have to train them in lockstep. But then also the disadvantage that you don't have an explicit representation. So there is no explicit representation of this probability distribution. You never build the data distribution. You can only sample from it. However, the advantages are that Markov chains are never needed. Only backprop is used to obtain gradients. No inference is needed during learning. And a wide variety of functions can be incorporated into the model. This, you know, I hadn't read this paper in a while. And I just have to laugh nowadays because now all the people are trying to reintroduce, like there are as many papers reintroducing Markov chains into GANs. Like, oh, GANs would be so much better if they had an MCMC sampler somewhere. You're like, no, the point was to get rid of it. And like no inference is needed during learning, which, you know, for some of these other models, you actually need inference during training. Right. So this is very, very costly. And how many models are there nowadays where it's like, oh, if we just do this inference during training. Yeah. So it's quite funny to see people kind of trying to just combine everything with everything.
And in the process, sort of reverse whatever these methods were originally meant to get rid of. Now, I'm not saying anything against these methods, but it's just kind of funny. Yeah. So they had a lot of conclusions and future work. They already say, you know, conditional GANs are very easy to do, straightforward. Learned approximate inference can be performed by training an auxiliary network to predict Z given X. And this, of course, as you know, has come to fruition very often. Early papers already introduced that: so if you have the G network producing an X and then the D network discriminating, you would also have like an encoder right here to produce back the Z noise to give you the latent encoding, sort of like a variational autoencoder, but not really. It's more like a reverse generator. You know, models nowadays like BiGAN and things like this employ this exact thing that was sort of predicted right here. Of course, there are much earlier models also using this. For as long as I can remember, people have attempted to bring encoders into GANs. They have a bunch of other things like semi supervised learning. You can use this to get more data for a classifier, which is also done. So a lot of things here were already foresight in this paper, which is pretty cool. And the coolest thing, look at that, savages: Goodfellow not even using the full eight pages, just dropping this on the world. Absolutely cool. Mad respect. So, yeah, this was kind of my take on, yeah, Generative Adversarial Nets. And yeah, please tell me if you like historic paper overviews. It's more kind of a rant than it really is a paper explanation. But I do enjoy going through these papers and kind of looking at them in hindsight. All right. That was it from me. I wish you a nice day. Bye bye.
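As a small illustration of the z-space interpolations mentioned above: the trick itself is tiny — pick two noise vectors and decode evenly spaced points on the line between them. This is a sketch; `g` stands in for whatever trained generator you have (for example the `G` from the training-loop sketch earlier).

```python
# Linear interpolation in z space, as in the paper's figure.
import torch

def interpolate(g, z1: torch.Tensor, z2: torch.Tensor, steps: int = 8):
    alphas = torch.linspace(0.0, 1.0, steps).view(-1, 1)          # (steps, 1)
    zs = (1 - alphas) * z1.view(1, -1) + alphas * z2.view(1, -1)  # points on the segment
    with torch.no_grad():
        return g(zs)                                              # (steps, ...) decoded images

# Usage: images = interpolate(G, torch.randn(100), torch.randn(100))
```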
[ { "end": 6, "start": 0, "text": " Hi there! Today we'll look at Generative Adversarial Nets by Ian J. Goodfellow et al." }, { "end": 12, "start": 6, "text": " So this one is another installment in our series of historical papers that had great impact." }, { "end": 20, "start": 12, "text": " GANs nowadays, or Generative Adversarial Nets back then, were sort of..." }, { "end": 26, "start": 20, "text": " This was the starting shot in a long line of research that is still continuing today." }, { "end": 34, "start": 26, "text": " So I remember when I started my PhD in 2015, GANs were just about spiking." }, { "end": 41, "start": 34, "text": " I remember NURiPS, or back then NIPS, in 2016, and every other paper was about GANs." }, { "end": 47, "start": 41, "text": " There was also this famous Schmidhuber Goodfellow moment at the tutorial." }, { "end": 54, "start": 47, "text": " It was a wild time. And this is the paper that started it all." }, { "end": 64, "start": 54, "text": " The paper is quite well written. It's very focused on convincing you that this is a sound method mathematically." }, { "end": 69, "start": 64, "text": " That it doesn't just do wild things." }, { "end": 79, "start": 69, "text": " And also it has a lot of modern tricks for GANs already built into it." }, { "end": 84, "start": 79, "text": " So astounding how much foresight there was already in this paper." }, { "end": 89, "start": 84, "text": " But of course, GANs have come a super long way since then." }, { "end": 96, "start": 89, "text": " And today we'll just go through the paper and look at how it looked back then and what this paper was like." }, { "end": 100, "start": 96, "text": " So yeah, join me in this. If you like it, please share it out." }, { "end": 104, "start": 100, "text": " Let me know in the comments what you think of historic paper reviews." }, { "end": 109, "start": 104, "text": " This is not going to be like a beginner's tutorial in GANs." }, { "end": 112, "start": 109, "text": " This is really going to be... We'll go through the paper." }, { "end": 117, "start": 112, "text": " You'll see right here the paper is from 2014." }, { "end": 124, "start": 117, "text": " So it would still be another two years or so until GANs really take off from this point on." }, { "end": 129, "start": 124, "text": " But the introduction, of course, was really important." }, { "end": 132, "start": 129, "text": " Okay, so abstract. Here we go." }, { "end": 138, "start": 132, "text": " We propose a new framework for estimating generative models via an adversarial process" }, { "end": 145, "start": 138, "text": " in which we simultaneously train two models, a generative model G that captures the data distribution" }, { "end": 154, "start": 145, "text": " and a discriminative model D that estimates the probability that a sample came from the training data rather than G." }, { "end": 156, "start": 154, "text": " Okay, this was sort of a new thing." }, { "end": 161, "start": 156, "text": " Now, I know, I know people disagree with this being a new thing, but this was a new thing." }, { "end": 169, "start": 161, "text": " And specifically, this was the first paper that made something like this really work for data." }, { "end": 177, "start": 169, "text": " So to have a discriminator, the words generator and discriminator were also introduced in this paper." 
}, { "end": 181, "start": 177, "text": " So you train this D model, which is the discriminator," }, { "end": 187, "start": 181, "text": " and the D model basically decides whether or not a given data point comes from data" }, { "end": 192, "start": 187, "text": " or comes from the fake distribution." }, { "end": 202, "start": 192, "text": " And then you have a generative model G that is supposed to just create this data X rather than coming from the database." }, { "end": 209, "start": 202, "text": " So you want to sample a couple of times from the data, and sometimes you sample from this model G," }, { "end": 215, "start": 209, "text": " and then the discriminator is supposed to decide whether or not it comes from the data set" }, { "end": 222, "start": 215, "text": " or from your counterfeiter, like from this generator G." }, { "end": 225, "start": 222, "text": " And it's supposed to say whether it's data or fake." }, { "end": 229, "start": 225, "text": " So you train the D model as a simple image classifier." }, { "end": 232, "start": 229, "text": " So people already knew how to build image classifiers." }, { "end": 238, "start": 232, "text": " This was shortly, as you can see, before ResNet came on the scene." }, { "end": 244, "start": 238, "text": " So people already kind of knew how to build CNNs, build really good image classifiers." }, { "end": 251, "start": 244, "text": " And the thought here was really generative models weren't really a thing until then." }, { "end": 256, "start": 251, "text": " So people were in language models, Word2Vec was kind of coming up," }, { "end": 263, "start": 256, "text": " but they would still be doing like RNNs using these Word2Vec vectors for generating language." }, { "end": 267, "start": 263, "text": " In images, generative models weren't really much of a thing." }, { "end": 272, "start": 267, "text": " So you would do like compositional models or you would do autoencoders," }, { "end": 278, "start": 272, "text": " which were just either really blurry or really, really artifactory." }, { "end": 281, "start": 278, "text": " And there were also approaches like deep belief networks and so on," }, { "end": 283, "start": 281, "text": " but they had their own problems." }, { "end": 292, "start": 283, "text": " So there wasn't really a satisfactory way to do image generation that resulted in really high quality images." }, { "end": 296, "start": 292, "text": " Now here, I think the entire thought, and this is not really spelled out," }, { "end": 304, "start": 296, "text": " but the entire thought here is that, hey, we know how to train really, really good image classifiers." }, { "end": 309, "start": 304, "text": " This has been evident in these since AlexNet." }, { "end": 313, "start": 309, "text": " So for two years, this was evident how to build really good image classifiers." }, { "end": 319, "start": 313, "text": " And the question here is to say that rather than also building really good generators," }, { "end": 326, "start": 319, "text": " can't we like harness the power of building really good classifiers for training a generator?" }, { "end": 329, "start": 326, "text": " And this is this idea right here." }, { "end": 332, "start": 329, "text": " This wasn't the one before, as you know, in like an autoencoder," }, { "end": 338, "start": 332, "text": " what you do is you'd input a sample into some kind of auto bottleneck thing, whatever." 
}, { "end": 345, "start": 338, "text": " And then at the end, you train your output sample to match the input sample as close as possible." }, { "end": 350, "start": 345, "text": " And then in here, after you've trained this, this part here is your generative model." }, { "end": 355, "start": 350, "text": " And then here, in here, you'd input like MCMC sampler or whatnot." }, { "end": 359, "start": 355, "text": " And then, of course, variational autoencoders came up and so on." }, { "end": 365, "start": 359, "text": " But still, what you always would do is you would somehow use the data directly." }, { "end": 368, "start": 365, "text": " So this is data in order to train your model." }, { "end": 373, "start": 368, "text": " So you would somehow say, ah, the output here should probably match the input in some way" }, { "end": 377, "start": 373, "text": " or in some at least distributional way." }, { "end": 385, "start": 377, "text": " This was a new thing. As you can see right here, there is no direct connection between the data and the generator." }, { "end": 388, "start": 385, "text": " And I think this was the success of this model." }, { "end": 396, "start": 388, "text": " The fact that the generator did not, it wasn't trained from the data like you would do if you were just approaching this problem." }, { "end": 407, "start": 396, "text": " The philosophy here is let's use the power of discriminative models, which we know how to build in order to train this generator." }, { "end": 411, "start": 407, "text": " So the generators task now isn't to match any sort of data point." }, { "end": 417, "start": 411, "text": " The generators task is to produce images that the discriminator would classify as data." }, { "end": 424, "start": 417, "text": " And you can do that by simply back propagating through the discriminator to the generator." }, { "end": 434, "start": 424, "text": " So I think that's the only thing that's kind of unstated in this paper, the reasoning behind why this is new, why this might work." }, { "end": 442, "start": 434, "text": " But everything else is spelled out very well in this paper, I have to say, if you read through it." }, { "end": 450, "start": 442, "text": " So the training procedure for G is to maximize the probability of D making a mistake." }, { "end": 453, "start": 450, "text": " This framework corresponds to a minimax two player game." }, { "end": 458, "start": 453, "text": " So as I said, the paper is very much focused on convincing you that there's something sound happening here," }, { "end": 465, "start": 458, "text": " because at that time, if you were to look at this, you would say something like there is no way." }, { "end": 469, "start": 465, "text": " This is you would be like, yeah." }, { "end": 480, "start": 469, "text": " So I can understand the motivation here to really convince people that, you know, something something good is happening also on the on the theoretical side." }, { "end": 491, "start": 480, "text": " In the space, sorry, in the space of arbitrary functions, G and D, a unique solution exists with G recovering the training data distribution D equals to one half everywhere." }, { "end": 497, "start": 491, "text": " In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with back propagation." }, { "end": 505, "start": 497, "text": " There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples." 
}, { "end": 514, "start": 505, "text": " OK, so the point here is that it's much easier than current methods of producing of generative models." }, { "end": 518, "start": 514, "text": " And also it does something sound." }, { "end": 524, "start": 518, "text": " Now, let's jump into the loss function right here." }, { "end": 532, "start": 524, "text": " So they say G and D play the following two player minimax game with value function V." }, { "end": 546, "start": 532, "text": " And this is still understood until today that it was already like if this was a pure engineering paper, they could simply build the architecture and say, oh, we let these networks fight." }, { "end": 552, "start": 546, "text": " And they they are kind of adversarial and they they pump each other up and so on." }, { "end": 560, "start": 552, "text": " And this here was more much more into the direction of kind of a a theoretical reasoning into why something like this would work." }, { "end": 565, "start": 560, "text": " Of course, there is still a lot of engineering going on to actually make it work." }, { "end": 570, "start": 565, "text": " So they they have there is this value function right here." }, { "end": 573, "start": 570, "text": " OK, and the value function is the following." }, { "end": 585, "start": 573, "text": " So what you have is you have the log probability of data and you have one the log one minus the of the generated samples." }, { "end": 590, "start": 585, "text": " So here you can see and this was introduced, this seems also obvious now." }, { "end": 596, "start": 590, "text": " Right. But you have a prior on what this is called the noise distribution." }, { "end": 606, "start": 596, "text": " OK, so you have a prior on your input noise to the generator because the generator is supposed to come up with very many different data points." }, { "end": 616, "start": 606, "text": " And if it is a if it is a, you know, non-stochastic function like a neural network, then you need some way to make to produce different images." }, { "end": 620, "start": 616, "text": " So there is this prior distribution over the noise." }, { "end": 623, "start": 620, "text": " You feed that noise into the generator." }, { "end": 625, "start": 623, "text": " The generator will produce an output." }, { "end": 634, "start": 625, "text": " You put that into the discriminator and then this right here, as you can see, the discriminator is trying to maximize this objective." }, { "end": 643, "start": 634, "text": " So the discriminator is trying to maximize the probability of real data and it is trying to minimize the probability of fake data." }, { "end": 650, "start": 643, "text": " OK, it is this is simply a two way classification problem." }, { "end": 655, "start": 650, "text": " At the same time, the generator, as you can see, is trying to minimize the objective." }, { "end": 658, "start": 655, "text": " In fact, the order here is quite important." }, { "end": 667, "start": 658, "text": " So the generator, as you can see, is trying to minimize whatever this here is." }, { "end": 672, "start": 667, "text": " So the generator sort of is trying to minimize against the best possible discriminator." }, { "end": 681, "start": 672, "text": " And so this is one one observation right here is that the formulation is always with respect to a perfect discriminator." 
}, { "end": 690, "start": 681, "text": " Now, we know that this doesn't work because if you have a perfect discriminator, then generator cannot catch up because you have insufficient gradients and so on." }, { "end": 694, "start": 690, "text": " And this was already recognized in this paper as well." }, { "end": 701, "start": 694, "text": " But the formulation is with respect to a min max game and not a max min game." }, { "end": 711, "start": 701, "text": " So the other point I want to make here is that you can see the discriminator appears in both in both terms right here." }, { "end": 715, "start": 711, "text": " However, the generator only appears right here." }, { "end": 723, "start": 715, "text": " And this this basically means that the objective for the generator is only this part here because the other part is constant." }, { "end": 730, "start": 723, "text": " So the generator is just trying to make the discriminator think that fake data is real." }, { "end": 739, "start": 730, "text": " So it is trying to make the discriminator the class of fake data as small as possible for the data that it outputs." }, { "end": 748, "start": 739, "text": " Well, the discriminator is trying to make the class of fake data more than the class of sorry, real data." }, { "end": 754, "start": 748, "text": " Yeah, it's trying to make it's trying to classify fake data as fake and real data as real." }, { "end": 757, "start": 754, "text": " Whereas the generator has only this part on the right." }, { "end": 761, "start": 757, "text": " This is I feel this is it's quite important." }, { "end": 768, "start": 761, "text": " Why? Because already in this paper, they recognize that this might not be the best practical objective." }, { "end": 775, "start": 768, "text": " And for the generator, they can actually exchange this part here on the right to simply say we want to." }, { "end": 789, "start": 775, "text": " So we want to instead of one minus D, instead of log one minus D, we simply want to use minus log D as an objective for the generator." }, { "end": 791, "start": 789, "text": " So you can kind of play around with this." }, { "end": 795, "start": 791, "text": " And as you know, lots of formulations have played around with this loss right here." }, { "end": 802, "start": 795, "text": " And yeah, that's why we have like a billion, billion, billion, billion GAN variations." }, { "end": 805, "start": 802, "text": " They introduced the reasoning behind this." }, { "end": 809, "start": 805, "text": " So there's an intuition right here." }, { "end": 816, "start": 809, "text": " And you can see already in practice, equation one may not provide sufficient gradient for G to learn well." }, { "end": 822, "start": 816, "text": " Early in learning, when G is poor, D can reject samples with high confidence because they're clearly different from the training data." }, { "end": 830, "start": 822, "text": " In this case, this saturates rather than training G to minimize that we can train G to maximize log D." }, { "end": 839, "start": 830, "text": " This objective function results in the same fixed point for the dynamic, but provides much stronger gradients in early, much stronger gradients early in learning." }, { "end": 843, "start": 839, "text": " This is in contrast to like other papers that seem to say, oh, we do this." }, { "end": 846, "start": 843, "text": " And they at least say it provides the same fixed point." }, { "end": 848, "start": 846, "text": " Right. Yeah." 
}, { "end": 854, "start": 848, "text": " So again, they're trying to convince you that this is doing something useful and that this is easier." }, { "end": 858, "start": 854, "text": " OK, so this strategy is analogous to other things." }, { "end": 869, "start": 858, "text": " Training maintains samples from a Markov chain from one learning step in the next order to avoid burning in a Markov chain in another loop of learning." }, { "end": 871, "start": 869, "text": " Sorry. OK, this is from another paper." }, { "end": 881, "start": 871, "text": " So their point here is that it's analogous to other papers that use these Markov chains where you always do one step in G and one step in D." }, { "end": 893, "start": 881, "text": " We alternate between K steps of optimizing D and one step of optimizing G because you have this inner maximization over D and then the outer maximization, the outer minimization over G." }, { "end": 899, "start": 893, "text": " So this has already been around the fact that you kind of have to have these optimizations in lockstep." }, { "end": 905, "start": 899, "text": " But the difference here is you don't need any sort of like Markov chain in the inner loop and so on." }, { "end": 907, "start": 905, "text": " You simply need back propagation." }, { "end": 911, "start": 907, "text": " So here's an illustration of how that might work." }, { "end": 918, "start": 911, "text": " So at the beginning here, you have your Z space and this is always sampled uniformly, as you can see right here." }, { "end": 922, "start": 918, "text": " This is from a prior distribution and through the mapping." }, { "end": 926, "start": 922, "text": " So this here is from Z to X is G." }, { "end": 928, "start": 926, "text": " So this is the mapping G." }, { "end": 935, "start": 928, "text": " You can see that the uniform distribution is now mapped to something non-uniform, which results in the green thing." }, { "end": 943, "start": 935, "text": " So G is the green line, while as this is data, the black dots are data." }, { "end": 952, "start": 943, "text": " And if you have a discriminator, the discriminator is supposed to tell you where there's data and where there's fake data." }, { "end": 957, "start": 952, "text": " Now, so green here is fake." }, { "end": 960, "start": 957, "text": " Now, this blue line is sort of a half trained discriminator." }, { "end": 962, "start": 960, "text": " Now you train D, right?" }, { "end": 969, "start": 962, "text": " You max maximize D, the discriminator, and that gives you this blue line right here." }, { "end": 974, "start": 969, "text": " So this this is a perfect discriminator for these two data distributions." }, { "end": 980, "start": 974, "text": " It tells you it's basically the ratio of green to black at each point." }, { "end": 985, "start": 980, "text": " And now you train the generator according to this." }, { "end": 991, "start": 985, "text": " And you can see that the gradient of the discriminator is so the gradient of the discriminator." }, { "end": 994, "start": 991, "text": " Discriminator is in this direction." }, { "end": 997, "start": 994, "text": " OK, so it's like up this hill." }, { "end": 1005, "start": 997, "text": " And that's why you want to shift your green curve over here according to the gradient of the discriminator." }, { "end": 1016, "start": 1005, "text": " Note that we first trained the discriminator and now in a second step, we optimize the generator." 
}, { "end": 1024, "start": 1016, "text": " So now we shift this green curve over in order to in along the gradient of the blue curve." }, { "end": 1029, "start": 1024, "text": " So it's important the green curve doesn't see the black curve ever." }, { "end": 1031, "start": 1029, "text": " The generator doesn't see the data." }, { "end": 1038, "start": 1031, "text": " The generator simply sees that blue curve and it goes along the gradient of that blue curve of the discriminator." }, { "end": 1044, "start": 1038, "text": " OK, and then if you do this many, many steps, actually, there are dots right here." }, { "end": 1050, "start": 1044, "text": " You will end up with a discriminator that has no clue what's where." }, { "end": 1053, "start": 1050, "text": " This is one half probability everywhere because the ratio is the same." }, { "end": 1063, "start": 1053, "text": " And you end up with the probability of data equal to the probability of the output generated samples." }, { "end": 1069, "start": 1063, "text": " And this can happen if the generator simply remembers the training data." }, { "end": 1071, "start": 1069, "text": " But there are a number of things that counter that." }, { "end": 1078, "start": 1071, "text": " For example, the generator is continuous while the training data is, of course, discrete." }, { "end": 1085, "start": 1078, "text": " So there is this in between things right here where there is no training data." }, { "end": 1089, "start": 1085, "text": " In fact, to hit exactly training data is very, very unlikely." }, { "end": 1093, "start": 1089, "text": " But of course, you can still you can still peek at the training data." }, { "end": 1101, "start": 1093, "text": " But also, I think there are two things why the generator doesn't simply remember the training data first," }, { "end": 1104, "start": 1101, "text": " because it doesn't ever see the training data directly." }, { "end": 1108, "start": 1104, "text": " So it can only see it through the discriminator." }, { "end": 1113, "start": 1108, "text": " And second of all, because it is built as these multilayer neural networks," }, { "end": 1118, "start": 1113, "text": " it doesn't have the power to just remember this," }, { "end": 1123, "start": 1118, "text": " because as there is kind of this notion of continuous function." }, { "end": 1128, "start": 1123, "text": " So and these neural networks are rather smooth functions often." }, { "end": 1135, "start": 1128, "text": " And therefore, I think that is something that helps the generator avoid remembering the training data." }, { "end": 1139, "start": 1135, "text": " Of course, there is still this problem of mode collapse that was really big in GANs." }, { "end": 1141, "start": 1139, "text": " So even if it doesn't remember the training data," }, { "end": 1146, "start": 1141, "text": " it might focus on the easiest part of the training data and forget all other parts." }, { "end": 1151, "start": 1146, "text": " And that was a direct result, actually, of this objective." }, { "end": 1153, "start": 1151, "text": " So where was it?" }, { "end": 1160, "start": 1153, "text": " So this objective directly led to mode collapse in some in some form," }, { "end": 1163, "start": 1160, "text": " because it penalizes different errors differently." }, { "end": 1169, "start": 1163, "text": " So of course, people have come up with ways to to solve that." }, { "end": 1173, "start": 1169, "text": " OK, now here is the algorithm." 
}, { "end": 1181, "start": 1173, "text": " As you can see, this was already quite this was already quite the algorithm we use nowadays." }, { "end": 1184, "start": 1181, "text": " So for K steps, this is the inner maximization." }, { "end": 1187, "start": 1184, "text": " And here they say that we use K equals one." }, { "end": 1190, "start": 1187, "text": " So this is this is pretty much what we use today." }, { "end": 1196, "start": 1190, "text": " The early days of GAN were still like, how much do I need to discriminator per generator and so on." }, { "end": 1199, "start": 1196, "text": " Nowadays, everyone is just using one step here, one step there," }, { "end": 1204, "start": 1199, "text": " or even training and jointly works in some cases." }, { "end": 1207, "start": 1204, "text": " So you want to sample a mini batch of noise samples" }, { "end": 1214, "start": 1207, "text": " and you will sample a mini batch of M examples from training data generation." }, { "end": 1220, "start": 1214, "text": " So from this data, you want to update the discriminator by sending its stochastic gradient." }, { "end": 1222, "start": 1220, "text": " And this is simply the gradient of the objective." }, { "end": 1227, "start": 1222, "text": " And then after those K steps, you want to sample another mini batch of noise samples" }, { "end": 1231, "start": 1227, "text": " and update the generator by descending its stochastic gradient." }, { "end": 1235, "start": 1231, "text": " And you can see right here already, there is this reduced objective" }, { "end": 1241, "start": 1235, "text": " that doesn't include this because it falls away in the gradient." }, { "end": 1245, "start": 1241, "text": " And they say the gradient based up this can use any standard learning based rule." }, { "end": 1248, "start": 1245, "text": " We use momentum in our experiments." }, { "end": 1250, "start": 1248, "text": " Very cool." }, { "end": 1259, "start": 1250, "text": " So I believe they already also say that it is somewhere here." }, { "end": 1266, "start": 1259, "text": " It's pretty fun that they say, oh, in our generator, we only input noise at the lowest layer." }, { "end": 1272, "start": 1266, "text": " This is also something that if you think that G here is a multilayer network," }, { "end": 1276, "start": 1272, "text": " so it's kind of a multilayer network that outputs an image." }, { "end": 1281, "start": 1276, "text": " And if you ask yourself, if I have noise, how would I input that into there?" }, { "end": 1285, "start": 1281, "text": " It's so clear nowadays that we just put it here." }, { "end": 1287, "start": 1285, "text": " But this was not clear at all." }, { "end": 1293, "start": 1287, "text": " This was kind of an invention of this paper because you could put it pretty much at all layers." }, { "end": 1295, "start": 1293, "text": " You could distribute it and so on." }, { "end": 1298, "start": 1295, "text": " You could add some right here." }, { "end": 1304, "start": 1298, "text": " It was this paper that already established the fact that we input noise kind of as a vector" }, { "end": 1309, "start": 1304, "text": " at the very beginning and then just let the neural network produce the image from that." }, { "end": 1312, "start": 1309, "text": " So yeah, pretty cool." }, { "end": 1316, "start": 1312, "text": " It's pretty sneaky how many things are hidden in these initial papers," }, { "end": 1320, "start": 1316, "text": " how many decisions that are made there then are just taken over." 
}, { "end": 1324, "start": 1320, "text": " And this one, I guess, turned out to be fairly good." }, { "end": 1328, "start": 1324, "text": " OK, so here they go for some theoretical analysis." }, { "end": 1336, "start": 1328, "text": " And the first they want to convince you that if the generator, if this all works well," }, { "end": 1344, "start": 1336, "text": " if both parties, this generator and the discriminator, optimize their objective to the optimum," }, { "end": 1353, "start": 1344, "text": " then the generator will have captured the data distribution, so the global optimality of this." }, { "end": 1356, "start": 1353, "text": " And they go about convincing you of that." }, { "end": 1362, "start": 1356, "text": " So the first thing that they convince you of is that if you fix the generator," }, { "end": 1364, "start": 1362, "text": " the optimal discriminator is this." }, { "end": 1366, "start": 1364, "text": " And we've already seen this in this drawing right here." }, { "end": 1375, "start": 1366, "text": " So the optimal discriminator is simply the ratio of the data, of the likelihood of data" }, { "end": 1379, "start": 1375, "text": " versus the likelihood of the generated data." }, { "end": 1385, "start": 1379, "text": " OK, so you train, you almost train this discriminator in the inner loop." }, { "end": 1390, "start": 1385, "text": " And that's simply the consequence of this, of a pointwise." }, { "end": 1396, "start": 1390, "text": " This is true pointwise, therefore it's true over the entire data distribution." }, { "end": 1405, "start": 1396, "text": " In the next thing, they convince you that the global minimum of the virtual training criterion," }, { "end": 1412, "start": 1405, "text": " this is the value function, this min-max game, is achieved if and only if this holds." }, { "end": 1418, "start": 1412, "text": " At that point, the training criterion achieves the value of negative log 4." }, { "end": 1426, "start": 1418, "text": " And this, again, this was already here, the fact that this has a global minimum," }, { "end": 1432, "start": 1426, "text": " and it is achieved when the generator matches the data distribution, which is pretty cool." }, { "end": 1434, "start": 1432, "text": " So in the proof, it's pretty simple, actually." }, { "end": 1440, "start": 1434, "text": " They first say, look, if this is the case, we just simply plug that in," }, { "end": 1443, "start": 1440, "text": " the discriminator will be confused." }, { "end": 1450, "start": 1443, "text": " So if the generator exactly captures the data, the discriminator will have no clue what's going on, right?" }, { "end": 1453, "start": 1450, "text": " Because it can't, because they're equal." }, { "end": 1457, "start": 1453, "text": " So it must basically output the probability of one half everywhere," }, { "end": 1462, "start": 1457, "text": " and then your objective becomes a constant negative log 4." }, { "end": 1466, "start": 1462, "text": " Now, if you then plug that into the other equation," }, { "end": 1473, "start": 1466, "text": " you'll see that the training criterion ends up being negative log 4 plus twice the Jensen-Shannon divergence" }, { "end": 1476, "start": 1473, "text": " between the data and the generated distribution." }, { "end": 1486, "start": 1476, "text": " And since this term here is always positive, that means that this thing here can never be less than negative log 4." }, { "end": 1489, "start": 1486, "text": " And therefore, the negative log 4 is the optimum." 
}, { "end": 1501, "start": 1489, "text": " OK, that's the proof is pretty cool, I have to say, to show that this has the optimum at that place." }, { "end": 1507, "start": 1501, "text": " And the last thing they convince you of is that this algorithm actually converges." }, { "end": 1514, "start": 1507, "text": " And the convergence is simply predicated on the fact that if you look at each of these problems individually," }, { "end": 1523, "start": 1514, "text": " they are convex. So like here is convex in X for every alpha." }, { "end": 1534, "start": 1523, "text": " So each of these are sort of convex problems, and then it will naturally converge to their minimum." }, { "end": 1540, "start": 1534, "text": " However, in practice, adversarial nets represent a limited family of distributions via the function." }, { "end": 1544, "start": 1540, "text": " And we optimize the parameters rather than the distribution itself." }, { "end": 1550, "start": 1544, "text": " Using a multilayer perceptron to define G introduces multiple critical points in parameter space." }, { "end": 1556, "start": 1550, "text": " However, the excellent performance of the multilayer perceptrons in practice suggest that they are a reasonable model to use," }, { "end": 1558, "start": 1556, "text": " despite their lack of theoretical guarantees." }, { "end": 1566, "start": 1558, "text": " So they say if we could optimize this probability distribution directly, it is a convex problem and we will always converge." }, { "end": 1573, "start": 1566, "text": " But in practice, of course, we only optimize the parameters of an MLP or a CNN." }, { "end": 1580, "start": 1573, "text": " And that doesn't always converge. But we have reasonable hopes that it will converge." }, { "end": 1588, "start": 1580, "text": " OK, so again, it's very much focused on convincing that this is doing something sensible, which I hope now you are convinced." }, { "end": 1597, "start": 1588, "text": " So there is a global optimum point. It's when the generator captures the data distribution perfectly." }, { "end": 1609, "start": 1597, "text": " This is this can be achieved and will be achieved if you can optimize these probability distributions with a reasonable degree of freedom." }, { "end": 1617, "start": 1609, "text": " And the neural networks provide that reasonable degree of freedom and give us good hope that in practice it will work." }, { "end": 1627, "start": 1617, "text": " So they apply this to data sets, namely MNIST, the Toronto Phase Database and C410." }, { "end": 1636, "start": 1627, "text": " The generator nets used a mixture of rectifier linear activations and sigmoid activations, while the discriminator net used max out activations." }, { "end": 1642, "start": 1636, "text": " That was still a thing. Dropout was applied in training at the discriminator net." }, { "end": 1653, "start": 1642, "text": " While our theoretical framework permits the use of dropout and other noise at intermediate layers of the generator," }, { "end": 1660, "start": 1653, "text": " we used noise as the input to only the bottom most layer of the generator network." }, { "end": 1671, "start": 1660, "text": " Again, this wasn't kind of clear at the beginning. And also the fact that to leave out dropout and so on in the generator was, I guess they found that empirically." }, { "end": 1678, "start": 1671, "text": " And then there was of course no way to evaluate these things. Like how do we evaluate generative models?" 
}, { "end": 1681, "start": 1678, "text": " Nowadays we have these inception distances and so on." }, { "end": 1692, "start": 1681, "text": " But then we estimate probability of the test set under P under the generated data by fitting a Gaussian parsing window to the samples generated with G" }, { "end": 1695, "start": 1692, "text": " and reporting the log likelihood under this distribution." }, { "end": 1702, "start": 1695, "text": " The theta parameter, yada yada yada. Results are reported." }, { "end": 1712, "start": 1702, "text": " This method of estimating the likelihood has somewhat high variance and does not perform well in high dimensional spaces, but it is the best method available to our knowledge." }, { "end": 1721, "start": 1712, "text": " Advances in generative models that can sample but not estimate likelihood directly motivate further research into how to evaluate such models." }, { "end": 1728, "start": 1721, "text": " They were absolutely right in this. And there was a lot of research into how to evaluate these models." }, { "end": 1737, "start": 1728, "text": " However, it is my opinion that we still have very, very limited methods of evaluating models like this." }, { "end": 1747, "start": 1737, "text": " Like we have better methods, but it's yeah, it's not really it's not really satisfactory how it is right now." }, { "end": 1760, "start": 1747, "text": " So you see that these models, these adversarial nets, by the way, they're always called adversarial nets right here, where I think we call them like most people would call them adversarial networks." }, { "end": 1772, "start": 1760, "text": " But it's just interesting to see the nets also in the title. Right. It says I think it says nets, does it? I think it does. We'll look at it after." }, { "end": 1782, "start": 1772, "text": " So the out they outperform these other models, especially these these belief networks were kind of popular at the time." }, { "end": 1790, "start": 1782, "text": " And you can see the samples right here were in no way comparable to examples that you get from the modern GANs." }, { "end": 1798, "start": 1790, "text": " But this was already very, very, very good, especially the MNIST. And then here you could actually recognize." }, { "end": 1806, "start": 1798, "text": " So the ones with the yellow are always from the training data set. They're like the nearest neighbors of the things on the left." }, { "end": 1812, "start": 1806, "text": " So they want to show that it doesn't simply remember the training data, though I'm not so sure." }, { "end": 1818, "start": 1812, "text": " Like this seems like it has some sort of somehow remember the training data a little bit." }, { "end": 1824, "start": 1818, "text": " Also, this one right here. And there was already a way." }, { "end": 1834, "start": 1824, "text": " So this was also very foresighted. So these A to C were fully connected networks, which might be one of the reasons why it worked moderately well." }, { "end": 1842, "start": 1834, "text": " Right. But the last one was a convolutional discriminator and a deconvolutional generator." }, { "end": 1848, "start": 1842, "text": " So already using kind of deconvolutions that are used everywhere today." }, { "end": 1858, "start": 1848, "text": " So they are used in GANs and whatnot. VAs to up sample anything. If you want to do pixel wise classification, you use deconvolutions." 
}, { "end": 1869, "start": 1858, "text": " So again, this paper sort of introduced a lot of things that later that we still use in GANs today." }, { "end": 1876, "start": 1869, "text": " Now, I'm sure deconvolutions weren't invented here, but we still use them." }, { "end": 1884, "start": 1876, "text": " So legit, they were the first GAN paper to use deconvolutions. Haha. Yeah." }, { "end": 1891, "start": 1884, "text": " They also say we make no claim that these samples are better than samples generated by existing methods." }, { "end": 1899, "start": 1891, "text": " We believe that these samples are at least competitive with the better generative models in the literature and highlight the potential of the adversarial framework." }, { "end": 1909, "start": 1899, "text": " Today, this paper would be so rejected. Like, wait, you're not better. Get out of here. You can't claim it anymore." }, { "end": 1916, "start": 1909, "text": " No, it doesn't work anymore. I'm sorry. Yours has always has to be better than everything else nowadays." }, { "end": 1924, "start": 1916, "text": " Otherwise, it's a it's a it's a weak reject their experimental evidence doesn't doesn't convince me." }, { "end": 1935, "start": 1924, "text": " You can't simply say something's cool. Also already introduced in this paper digits obtained by linearly interpolating between coordinates in z space of the full model like this thing here." }, { "end": 1941, "start": 1935, "text": " Every single GAN paper had interpolations in the like in this in the GAN spike." }, { "end": 1953, "start": 1941, "text": " And it came all came from here. So already this is just like this is like every GAN paper then had like rows of these like of these interpolations." }, { "end": 1961, "start": 1953, "text": " I should know if I've written a paper on it and introduced right here. Who knows if they hadn't done this." }, { "end": 1969, "start": 1961, "text": " Yeah, I guess it's it's kind of an obvious thing. But still, you know, very, very cool to see that this was already done." }, { "end": 1981, "start": 1969, "text": " And here GANs compared to other different methods like deep directed graphical models, generative auto encoders and compared in very many ways." }, { "end": 1985, "start": 1981, "text": " So this is a actually a good reference if you want to learn about these different kinds of models." }, { "end": 1991, "start": 1985, "text": " And they make the claim here that their advantages and disadvantages." }, { "end": 1999, "start": 1991, "text": " So disadvantages mainly come with training these things because you have to train them in lockstep." }, { "end": 2005, "start": 1999, "text": " But then also the disadvantages that you don't have an explicit representation." }, { "end": 2009, "start": 2005, "text": " So there is no explicit representation of this probability distribution." }, { "end": 2014, "start": 2009, "text": " You never build the data distribution. You can only sample from it." }, { "end": 2018, "start": 2014, "text": " However, the advantages are that Markov chains are never needed." }, { "end": 2023, "start": 2018, "text": " Only backprop is used to obtain gradients. No inference is needed during learning." }, { "end": 2026, "start": 2023, "text": " And a wide variety of functions can be incorporated into the model." }, { "end": 2030, "start": 2026, "text": " This, you know, I hadn't read this paper in a while." 
}, { "end": 2044, "start": 2030, "text": " And I just have to laugh nowadays because now all the people are trying to reintroduce like there are as many papers like reintroducing Markov chains into GANs." }, { "end": 2049, "start": 2044, "text": " Like, oh, GANs would be so much better if they had an MCMC sampler somewhere." }, { "end": 2053, "start": 2049, "text": " You're like, no, the point was to get rid of it." }, { "end": 2064, "start": 2053, "text": " And like no inference is needed during learning, which, you know, for some of these other models, you actually need an inference during training." }, { "end": 2067, "start": 2064, "text": " Right. So this is very, very costly." }, { "end": 2075, "start": 2067, "text": " And how many models are there nowadays where it's like, oh, if we just do this inference during training." }, { "end": 2084, "start": 2075, "text": " Yeah. So it's quite it's quite funny to see people kind of trying to to just combine everything with everything." }, { "end": 2092, "start": 2084, "text": " And in the process, sort of reverse, reverse whatever these methods were originally meant to get rid of." }, { "end": 2099, "start": 2092, "text": " Now, I'm not saying anything against these methods, but it's just kind of funny." }, { "end": 2103, "start": 2099, "text": " Yeah. So they had a lot of conclusions and future work." }, { "end": 2110, "start": 2103, "text": " They already say, you know, conditional GANs are very easy to do straightforward." }, { "end": 2115, "start": 2110, "text": " Learned approximate inference can be performed by training an auxiliary network to predict Z given X." }, { "end": 2120, "start": 2115, "text": " And this, of course, as you know, has come, you know, has come to fruit very often." }, { "end": 2131, "start": 2120, "text": " Early papers already introduced that the so if you have the G network producing some producing an X and then the D network discriminating," }, { "end": 2141, "start": 2131, "text": " that you would also have like a encoder right here to produce back the Z noise to give you the latent encoding," }, { "end": 2144, "start": 2141, "text": " sort of like a variational autoencoder, but not really." }, { "end": 2147, "start": 2144, "text": " It's more like a reverse generator." }, { "end": 2158, "start": 2147, "text": " You know, this model nowadays are big by GAN and things like this that employ this exact thing that was sort of predicted right here." }, { "end": 2161, "start": 2158, "text": " Of course, there are much earlier models also using this." }, { "end": 2169, "start": 2161, "text": " As long as I can remember, people have attempted to bring encoders into GANs." }, { "end": 2173, "start": 2169, "text": " They have a bunch of other things like semi supervised learning." }, { "end": 2179, "start": 2173, "text": " You can use this to do to do get more data for a classifier, which is also done." }, { "end": 2184, "start": 2179, "text": " So a lot of things here already foresight in this paper is pretty cool." }, { "end": 2194, "start": 2184, "text": " And the coolest thing, look at that savages good fellow, not even using the full eight pages, just dropping this on the world." }, { "end": 2198, "start": 2194, "text": " Absolutely cool. Mad respect." }, { "end": 2204, "start": 2198, "text": " So, yeah, this was kind of my take on general." }, { "end": 2207, "start": 2204, "text": " Yeah, it is generative adversarial nets." 
}, { "end": 2212, "start": 2207, "text": " And yeah, please tell me if you like historic paper overviews." }, { "end": 2216, "start": 2212, "text": " It's more kind of a rant than it really is a paper explanation." }, { "end": 2220, "start": 2216, "text": " But I do enjoy going through this papers and kind of looking at them in hindsight." }, { "end": 2248, "start": 2220, "text": " All right. That was it from me. I wish you a nice day. Bye bye." } ]
yexR53My2O4
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
[Classic] Word2Vec: Distributed Representations of Words and Phrases and their Compositionality
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "jeff dean", "mikolov", "word2vec", "word vectors", "word representations", "nlp", "natural language processing", "sentiment classification", "king", "queen", "man", "woman", "arithmetic", "latent space", "distributed", "country", "capital", "semantic", "synonyms", "skip gram", "negative sampling", "nce", "noise contrastive estimation" ]
#ai #research #word2vec Word vectors have been one of the most influential techniques in modern NLP to date. This paper describes Word2Vec, which the most popular technique to obtain word vectors. The paper introduces the negative sampling technique as an approximation to noise contrastive estimation and shows that this allows the training of word vectors from giant corpora on a single machine in a very short time. OUTLINE: 0:00 - Intro & Outline 1:50 - Distributed Word Representations 5:40 - Skip-Gram Model 12:00 - Hierarchical Softmax 14:55 - Negative Sampling 22:30 - Mysterious 3/4 Power 25:50 - Frequent Words Subsampling 28:15 - Empirical Results 29:45 - Conclusion & Comments Paper: https://arxiv.org/abs/1310.4546 Code: https://code.google.com/archive/p/word2vec/ Abstract: The recently introduced continuous Skip-gram model is an efficient method for learning high-quality distributed vector representations that capture a large number of precise syntactic and semantic word relationships. In this paper we present several extensions that improve both the quality of the vectors and the training speed. By subsampling of the frequent words we obtain significant speedup and also learn more regular word representations. We also describe a simple alternative to the hierarchical softmax called negative sampling. An inherent limitation of word representations is their indifference to word order and their inability to represent idiomatic phrases. For example, the meanings of "Canada" and "Air" cannot be easily combined to obtain "Air Canada". Motivated by this example, we present a simple method for finding phrases in text, and show that learning good vector representations for millions of phrases is possible. Authors: Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado, Jeffrey Dean Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://discord.gg/4H8xxDF BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher Parler: https://parler.com/profile/YannicKilcher LinkedIn: https://www.linkedin.com/in/yannic-kilcher-488534136/ If you want to support me, the best thing to do is to share out the content :) If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this): SubscribeStar: https://www.subscribestar.com/yannickilcher Patreon: https://www.patreon.com/yannickilcher Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2 Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n
Hi there, today we'll look at Distributed Representations of Words and Phrases and their Compositionality by Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado and Jeffrey Dean. This is another historical paper. It's one of three papers, the middle one, that introduce the original word2vec algorithm. And as you might know, word2vec was extremely influential in NLP from this paper basically until recently, where it's sort of gone out of fashion a bit in research with the rise of things like ELMo and BERT, but it's still very, very relevant. So we'll look at this historical paper today with kind of the hindsight of being a couple of years into the future. In fact, as you see right here, this was released in 2013, so it's seven years later now. And we'll look back and we'll see what they said back then about the system. This is not going to be like a very well-enhanced PowerPoint presentation of how word2vec works. We're going to look at the paper and read it together. If you like content like this, if you like historical paper readings, let me know in the comments, share it out if you do like it, and of course subscribe, because these kinds of historical papers, I enjoy them, but many people might already know what these things are. So, yeah. Okay. Let's go through the paper and pick up their ideas and kind of put them in context. They say the recently introduced continuous skip-gram model is an efficient method for learning high-quality distributed vector representations that capture a large number of precise syntactic and semantic word relationships. So the skip-gram model was already introduced by Mikolov in an earlier paper that came out, I believe, only one or two months prior to this one. As I said, word2vec is a series of papers; I don't think there is a paper called word2vec. Rather, they released the code along with the paper, and the code was called word2vec. So the skip-gram model was introduced previously, but it is replicated right here. So in the skip-gram model, what you're trying to do is you're trying to get a distributed word representation. So what does that mean? That means that for each word in your language, let's take these words right here, for each word in the language, you want to come up with a vector that somehow describes that word in a continuous fashion. So the word 'to' might be mapped to, I don't know, 0.1, 0.9, and 0.3. 'Learn' might be mapped to negative 0.5 and so on. So each word gets assigned a vector in the same dimensional space. And what the previous paper kind of discovered is that if you do this correctly, then these vectors have some kind of properties. So we can already jump ahead a bit, because this was already researched in the last paper. The semantics of these vectors will be something like this. So here they have a two-dimensional PCA; these are the first two dimensions of the 1000-dimensional skip-gram vectors. With the vectors they obtain, they can do things like this, where they can show that in these spaces, for example, there appears to be a vector direction that characterizes the capital of a country. So if you take a few countries and their capitals and you average that vector, you get kind of a direction for capital-ness of a city given a country. You can see that there is a pretty clear relation here. Now, some of these things have later been revised such that they ultimately ended up being not that impressive. For example, there was always this kind of math with vectors.
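To make that "direction for capital-ness" concrete, here is a tiny sketch with made-up 2-D vectors; the real vectors were 1000-dimensional and these numbers are purely illustrative:

```python
import numpy as np

# Made-up 2-D "word vectors" -- purely illustrative numbers.
vec = {
    "france": np.array([1.0, 0.2]),
    "paris":  np.array([1.0, 1.2]),
    "italy":  np.array([2.0, 0.3]),
    "rome":   np.array([2.0, 1.3]),
}

# The "capital-ness" direction: capital minus country.
capital_dir = vec["paris"] - vec["france"]

# Applying that same offset to another country should land near its capital.
query = vec["italy"] + capital_dir
nearest = min(vec, key=lambda w: np.linalg.norm(vec[w] - query))
print(nearest)  # -> "rome" with these made-up numbers
```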
And I believe this might not be in this paper but in the last one, where they discovered that if you take the vector for king and you subtract the vector for man and you add the vector for woman, then that would result in the vector for queen. So the way they did it was basically they did this calculation right here, and then, at the point they ended up, they searched for the nearest neighbor in their vocabulary. And that turned out to be queen. But in order to make it queen, actually, you have to exclude the original word king. People quickly discovered that if you don't exclude the original word, the result of this kind of arithmetic will almost always lead back to the original word. And then a lot of these analogy tasks are simply the result of you then discarding that word during the nearest neighbor search, and queen just happens to be one of the closest words; it's much less dependent on which exact calculation you do here. So there's been a lot of follow-up work analyzing and criticizing this vector math. But definitely we know that these word vectors turned out to be extremely helpful and syntactically and semantically relevant in downstream tasks, because they have performed very, very well. So how does the skip-gram model work? How does it assign vectors to each word? First of all, it has a dictionary. So there is a word, an input word, and for each word, you have a big dictionary. And the big dictionary basically says that the word 'to' is going to be mapped to this vector 0.1, da da da, and so on. The word 'learn' is going to be mapped to that vector. And then you also have these output vectors right here. And what you're trying to do is take a phrase from the data set, like this one right here, take out one word, like this word 'vector' right here, and frame this as a prediction task. So you're trying to frame this as, in this case, four different prediction tasks. You're telling your machine: I give you the word 'vector'; which other words are around the word 'vector'? You don't tell it anything else. You just ask: which other words are around the word 'vector'? And the correct answers in this case would be 'to', 'learn', 'word', and 'representations'. So you construct four different training examples, where you have an X and a Y. So the X is always 'vector', and the Y is 'to'. And then in the next training sample, the X is 'vector', and the Y is 'learn', and so on. So each training sample here is a classification task. And the classification task is, as you can see, well, you can't actually see it right here, but the classification task is: you have the input word, and you classify it into one of many, many classes. Namely, there are as many classes as you have words in the dictionary, so each word in the dictionary will have a class associated with it. In ImageNet, you have like 1,000 classes, and that's already a lot. But in these tasks, you're going to have 100,000 classes, because there are 100,000 words in the English language that you want to treat. There are many more, but in this case, they leave out all the words that appear fewer than five times in their corpus. That's still a lot of words. So it's like a super duper large classification task.
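Here is a minimal sketch of how those (input word, context word) training pairs could be constructed; the toy sentence and window size are illustrative choices, not the paper's setup:

```python
# Build skip-gram training pairs from a toy sentence (illustrative example).
sentence = "to learn word vector representations".split()
window = 3  # context size c on each side

pairs = []  # (input word X, context word Y) classification examples
for i, center in enumerate(sentence):
    for j in range(max(0, i - window), min(len(sentence), i + window + 1)):
        if j != i:
            pairs.append((center, sentence[j]))

# For the center word "vector" this yields the four examples from above:
# ("vector", "to"), ("vector", "learn"), ("vector", "word"),
# ("vector", "representations")
print([p for p in pairs if p[0] == "vector"])
```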
But ultimately, if you do something like this, then the representation that you end up with is going to be very, very good at doing these kinds of downstream tasks. And that's what they discovered. So their skip-gram model is nothing else than taking a word and predicting the surrounding words from that word. And this is what it means; this is the formal statement of the skip-gram objective. The objective of the skip-gram model is to maximize the average log probability, this one here. So for the word we're considering, the word at position t, we want to maximize the log probability of each word w around it within a context window of size c. That's exactly what we did before: take a word like 'model' right here, and from it, predict all of the words around it in a given window. That's all. That's the entire objective. And that will give you very good representations. And this is how you would implement that. So what you'll have is these vector representations v that come from your original dictionary. Those are the things you learn. And then, because you have like a 30,000-way classifier, you know that a classification layer is nothing else than a linear layer followed by a softmax operation, and that linear layer also has parameters. These are the v primes. So first you have the lookup in the dictionary for the word vector right here, and this is the vector of the classification layer. Now there are modifications where you can use the same vectors, or you can also make use of these vectors, but ultimately, you care about these vectors right here, and the vectors here are simply the classification layer's weights. So what you're trying to maximize is the inner product between the word that you're considering and the words around that word, and since you're trying to do a classification task, you need to normalize. Now this is the normalization constant, and it goes over all of your vocabulary. So that's what they tackle here. They say W is the number of words in the vocabulary. This formulation is impractical because the cost of computing the gradient is proportional to W, which is often large: 10 to the 5 to 10 to the 7 terms, so up to tens of millions of terms in your vocabulary. That's just not feasible. So people have been trying different ways to get around such very large numbers of classes, and here that is really the bottleneck. In the previous paper, they've already shown that this objective can give you very good word representations, but now we need to get around the fact that we have such large vocabularies. So the first idea here is the hierarchical softmax. And this is kind of a tangent. I find this paper, by the way, sort of hard to read, because it's like a half engineering paper. But yeah, so first they introduce this hierarchical softmax, which is kind of a distraction. It's kind of a "here is what we considered first, but then didn't really end up using". They do compare with it, but the flow of the text is such that you expect this to be part of the final model, which it isn't. So in the hierarchical softmax, what you do, instead of having this giant multi-class classification task right here, is you take all of these classes right here and you put them in a sort of a tree. Okay, so you take this and you put them into a tree.
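Before getting to the tree idea, here is a small sketch of why that full softmax is the bottleneck: every single probability requires a sum over the entire vocabulary. Shapes and initialization here are illustrative, not the paper's actual training setup.

```python
import numpy as np

rng = np.random.default_rng(0)
W, D = 30_000, 100                   # vocabulary size and dimension, illustrative
V_in = rng.normal(0, 0.01, (W, D))   # input vectors v (the embeddings you keep)
V_out = rng.normal(0, 0.01, (W, D))  # output vectors v' (classifier weights)

def log_p(context_id: int, center_id: int) -> float:
    # log p(w_O | w_I) under the full softmax: the normalization term
    # sums over ALL W words, which is exactly the costly part.
    scores = V_out @ V_in[center_id]  # one inner product per vocabulary word
    return scores[context_id] - np.log(np.exp(scores).sum())
```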
So instead of classifying, you know, let's say we have 1000 classes, instead of classifying 1000 ways, we first classify in two ways, and then we classify in two ways again from each one, and then we classify in two ways again. As you know, 1000 is about two to the 10, so we need approximately 10 layers of this before we actually arrive at 1000 classes. But it also means that we only have two-way classifications each time. So in the hierarchical softmax, we build trees like this, and then we have a word, we look up its vector, and we classify it at each of these nodes. So your output isn't going to be 1000 log probabilities; your output is going to be a binary log probability for each of the nodes along the path. So you want to know: okay, here, is it in the upper half or the lower half of my classes? Okay, cool, it's in the upper half. Okay, here, is it in the upper half or the lower half? And so on. And you learn to predict all of these junctions right here, and you end up having to predict much less. Now, of course, you are constrained: you impose a very big prior on the class distribution, and classes aren't independent anymore. Namely, if two classes here are in the same subtree, their predictions are going to be correlated, because the path to them is partially the same. So how you arrange the classes here is very important, and there has been a lot of work on this. But as I said, this is rather a distraction right here. Hierarchical softmax is a way to solve this; however, they went with a different way right here. They went with this approach called negative sampling. Negative sampling has been very influential, not only in word2vec: negative sampling is one of the cornerstones of the current trend in self-supervised learning and contrastive estimation and so on. All of this pops up in unlikely ways in other fields. I'm not going to say it originated here, but definitely it was introduced into the popular deep learning world right here. So they say an alternative to hierarchical softmax is noise contrastive estimation. Noise contrastive estimation posits that a good model should be able to differentiate data from noise by means of logistic regression. You know, that seems very reasonable. This is similar to the hinge loss and so on, yada yada yada. While NCE can be shown to approximately maximize the log probability of the softmax, the skip-gram model is only concerned with learning high-quality vector representations, so we are free to simplify noise contrastive estimation as long as the vector representations retain their quality. We define negative sampling by the following objective. So this is very interesting. They say: okay, noise contrastive estimation approximately maximizes the log probability, so noise contrastive estimation would actually be the correct way to approximate their problem. However, they say: well, as long as something reasonable comes out, we're free to change that up a bit. So they go with this negative sampling approach right here. And you can see that this is almost the same. It's written a bit differently from the original softmax thing, because the original softmax thing was written as a fraction and here it's a sum. But what you're trying to do in the negative sampling framework is you're trying to maximize the following.
However, they went with a different way right here: they went with this approach called negative sampling. Negative sampling has been very influential, not only in word2vec; negative sampling is one of the cornerstones of the current trend in self-supervised learning, contrastive estimation, and so on. It pops up in unlikely ways in other fields, and I'm not going to say it originated here, but it was definitely introduced into the popular deep learning world right here. So they say an alternative to hierarchical softmax is noise contrastive estimation. Noise contrastive estimation posits that a good model should be able to differentiate data from noise by means of logistic regression. That seems very reasonable; this is similar to the hinge loss and so on. While NCE can be shown to approximately maximize the log probability of the softmax, the skip gram model is only concerned with learning high quality vector representations, so we are free to simplify noise contrastive estimation as long as the vector representations retain their quality. We define negative sampling by the following objective. So this is very interesting. They say: okay, noise contrastive estimation approximately maximizes the log probability, so noise contrastive estimation would actually be the correct way to approximate their problem. However, they say, well, as long as something reasonable comes out, we're free to change that up a bit. So they go with this negative sampling approach right here. And you can see that this is almost the same; it's written a bit differently from the original softmax thing, because the original softmax thing was written as a fraction and here it's a sum. What you're trying to do in the negative sampling framework is maximize the inner product of the word you're considering and the words around it. So you're still trying to predict the words around you, but now, instead of having this prediction softmax over all of the classes, you only have the softmax over a subset of classes. What you do is you sample k words from your vocabulary at random, and you're now simply trying to minimize the inner product between those words and your word. So what does that ultimately lead to? It ultimately leads to the following. You have a word like this word here, negative. And what you're trying to do is not so much to predict the word sampling; what you're trying to do is to say that in my space right here, I simply want sampling to be closer than any other word that's not in the context window. So here is my word negative and here is my word sampling, and I want these two to be close. And if I sample another word, like here the word cake, I simply want that to be far away, farther than the word sampling. So this is now comparative: it's not "I classify sampling as the highest class"; it's simply "I want to classify the word sampling higher than the other classes". And this is now much, much easier: instead of a thousand or 10,000 or a million-way classification, I now have a k plus one-way classification. Pretty easy, right? I simply sample k other words, and I assume, because I have so many words, that the chance I actually sample one that's in my context window is very small. So I simply sample other words and I say, well, these other words are random; they have nothing to do with the current context that I'm looking at, so they can be whatever they want, but at least they should be farther away than the words that are actually in my context. And that is negative sampling: the process of sampling negatives, and then making sure that the positives, in this case the words in the context, are classified with a higher probability than the negatives for a given input word. That's it. That's negative sampling.
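To pin this down, the negative sampling loss for one (input, context) pair is log σ(u_o·v_c) + Σ_{i=1..k} log σ(−u_i·v_c), where v_c is the input word's vector, u_o the context word's output vector, and the u_i are the k sampled negatives. Here is a minimal sketch of that loss in Python; the toy dimensions and variable names are my own, and a real implementation would of course also compute gradients and update the vectors.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def neg_sampling_loss(v_center, u_context, u_negatives):
    """Negative-sampling loss for a single (center, context) training pair.

    v_center:    (D,)   input vector of the center word
    u_context:   (D,)   output vector of the true context word
    u_negatives: (K, D) output vectors of K randomly sampled 'noise' words
    """
    pos = np.log(sigmoid(u_context @ v_center))           # pull the positive closer
    neg = np.log(sigmoid(-u_negatives @ v_center)).sum()  # push the negatives away
    return -(pos + neg)                                   # we minimize this

D, K = 100, 5
rng = np.random.default_rng(0)
print(neg_sampling_loss(rng.normal(0, 0.1, D),
                        rng.normal(0, 0.1, D),
                        rng.normal(0, 0.1, (K, D))))
```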
And of course, as I said, you recognize this from current things like self-supervised learning, where you want to have the same image augmented twice go through the pipeline: you augment with a little bit of different noise so you get two different images, and at the end you say these two should be close together, while this other one should be far apart. It's the exact same thing here, except that you have a different way of obtaining the positive and the negative samples. In this case, positive samples are everything that's in the context; negative samples are just randomly sampled from the dataset. And that, of course, works much, much faster, and it turns out to give you vectors that are pretty good; you can train with higher dimensional vectors and with bigger vocabularies. This has turned out to be very, very influential. As I said, now with the rise of BERT and so on, word2vec is kind of getting forgotten, but this was a revolution in distributed vectors, which weren't really a thing before; they kind of existed, but they weren't really something people used. What people would do before that is still N-gram models: they would chunk up their sentences into overlapping N-grams and then have a big giant table where they index their N-grams. So the word hello is ID one, the phrase hello there is ID two, and so on; you have a big table for all the N-grams. And then what you would try to do is this kind of bag-of-words estimation, where you would take whatever N-grams appeared in your sentence and have this big classification where you'd associate the N-grams with each other, and so on. So distributed word representations were kind of a revolution at that point, especially distributed representations that actually outperformed these old N-gram methods. Now, there are a number of tricks right here that are, I think, not understood until this day. For example, the question is: how do you sample these negative samples? The objective basically says: get k words from your vocabulary at random, according to this distribution right here. Now how are you going to do that? Basically, you have a spectrum of options. One side of the spectrum is completely uniform: we sample each word with the same probability. And the other side of the spectrum is to sample according to the unigram distribution; these are opposites in this fashion. So here you say, hey, some words appear way, way more often than other words, shouldn't we prefer them when we sample? If in the corpus one word appears 50 times more often than another word, shouldn't we sample it 50 times more often as a negative, because it's so abundant and it should give a higher classification accuracy? Whereas on the other hand, you could say, no, no, no, we should simply sample every word in our dictionary uniformly. They came up with something in between, which they describe as follows: both NCE and negative sampling have the noise distribution as a free parameter; we investigated a number of choices and found that the unigram distribution raised to the three quarters power significantly outperformed the unigram and uniform distributions, for both NCE and negative sampling, on every task we tried, including language modeling. This, I think, is a mystery until today. It actually turned out that this exponent is magically much better than the exponent of one, or even the exponent of one half. You might reasonably assume that the square root might be something natural, but the three quarters turned out to be very good and very mystical. So what does it mean? It means that you have kind of a balance between words that appear often and words that don't appear often. Usually in these kinds of things you have a power law, where you have very few words that appear very often and then a very long tail of words. What you want to do is sample these tail words more, but the head words appear so much more often that if you simply sample according to their unigram distribution, you'll basically not regard the tail words; you'll forget about them, and your performance will suffer, because they do appear every now and then. So what you want to do is push the frequent words down a little bit, and the optimal amount for that little bit turns out to be raising the distribution to the three quarters power. Strange, but it turned out to work well.
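As a sketch of what this smoothed noise distribution looks like in practice (the toy counts below are my own illustration): raise the raw unigram counts to the 3/4 power, renormalize, and sample the k negatives from the result. The head word loses probability mass to the tail.

```python
import numpy as np

# Toy corpus statistics: raw occurrence counts with a power-law-ish shape.
counts = np.array([1_000_000, 50_000, 5_000, 500, 50], dtype=float)

unigram = counts / counts.sum()                     # one end of the spectrum
uniform = np.full(len(counts), 1.0 / len(counts))   # the other end
smoothed = counts ** 0.75                           # what word2vec actually uses
smoothed /= smoothed.sum()

print(unigram.round(4))   # the head word takes almost all of the mass
print(smoothed.round(4))  # the tail words get noticeably more mass

rng = np.random.default_rng(0)
negatives = rng.choice(len(counts), size=5, p=smoothed)  # draw K=5 negatives
print(negatives)
```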
The other thing they do is a subsampling of frequent words. Again, this is a way to push down the often-appearing words, where they say: the most frequent words can easily occur hundreds of millions of times, like 'in', 'the', or 'a'; such words usually provide less information value than the rare words. For example, while the skip gram model benefits from observing the co-occurrences of France and Paris, it benefits much less from observing the frequent co-occurrences of France and 'the', as nearly every word co-occurs frequently within a sentence with 'the'. So they do another trick here: to counter this imbalance between rare and frequent words, they use a simple subsampling approach, where each word in the training set is discarded with a probability computed by a formula. And you might be asking again: why this formula? The discard probability of a word is one minus the square root of t over f, where t is a chosen threshold and f is the frequency with which the word appears in the corpus. So as you can see, as the word appears more often in the corpus, its frequency f goes up, the square root of t over f goes down, and the discard probability goes up; a word is discarded with a higher probability if it appears more often. We chose this subsampling formula because it aggressively subsamples words whose frequency is greater than t while preserving the ranking of the frequencies. Although this subsampling formula was chosen heuristically, we found it to work well in practice: it accelerates learning and even significantly improves the accuracy of the learned vectors of the rare words, as will be shown in the following sections. So again, something sort of arbitrary; it's more understandable than the three quarters, but still it's sort of arbitrary. They experimented around, found that this works well, and then everybody ended up using it. So that's how this kind of stuff happens.
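Here is a minimal sketch of that subsampling rule, assuming the paper's formula P(discard) = 1 - sqrt(t / f(w)); the threshold value and the example frequencies are just illustrative. Words with frequency below t are essentially always kept, while very frequent words like 'the' are dropped aggressively.

```python
import numpy as np

def keep_probability(freq, t=1e-5):
    """Probability of keeping one occurrence; discard prob is 1 - sqrt(t / freq)."""
    return min(1.0, np.sqrt(t / freq))

# freq = fraction of all corpus tokens this word accounts for
for word, freq in [("the", 0.05), ("france", 1e-4), ("aardvark", 1e-7)]:
    print(f"{word:>8}: kept with p = {keep_probability(freq):.4f}")

rng = np.random.default_rng(0)

def subsample(tokens, freqs, t=1e-5):
    """Drop frequent tokens on the fly while streaming over the corpus."""
    return [w for w in tokens if rng.random() < keep_probability(freqs[w], t)]
```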
Okay, so now we get into the empirical results. The empirical results in this case were already sort of given in the previous paper, but here they have the analogical reasoning task, where you can see that negative sampling did outperform the others by quite a bit: the negative sampling approaches outperformed the hierarchical softmax and the noise contrastive estimation. And in the previous paper, they also compared with other baselines and saw that it outperforms those as well, while being quite time efficient. You can see that, especially with the subsampling approaches, the training time here is 36 minutes, and I think they train on a huge corpus; the word2vec code turned out to be really, really efficient code, and that's why it got so popular as well. They did the same thing for phrases, so for phrases like New York Times and so on, but this was more of a side thing; the phrase vectors turned out to be rather a side product of the actual work right here. So yeah, as I said, this paper is very different from other research papers in that it's sort of half an engineering paper, and all of these papers are kind of hard to read, because they just state some things, the order is kind of weird sometimes, and why they do things is kind of weird sometimes. But you can't deny that it had quite the effect on the community. It is a very cool paper, a very cool series of papers. And it's very cool that they actually released the code, and that they made the code so super duper efficient, even on a single machine. That was very cool, because, being Google, they could have just released code that is only efficient on a distributed data center, and they didn't do that. That's not really like today anymore: today, when they release code, it's always that you need like 50 cloud TPUs to do it. It's still cool that they release code, but this was really a step toward kind of democratizing AI. And yeah, so that was my rant about Word2vec. I hope you enjoyed this. I hope this was still useful to you, even though most of you probably already knew Word2vec. And yeah, so I'll see you next time. Bye bye.
[ { "end": 5.5200000000000005, "start": 0, "text": " Hi there, today we'll look at distributed representations of words and phrases and their" }, { "end": 12.56, "start": 5.5200000000000005, "text": " compositionality by Thomas Mikolov, Ilya Sotskyver, Kai Chen, Greg Corrado and Jeffrey Dean." }, { "end": 17.580000000000002, "start": 12.56, "text": " This is another historical paper, it's one of three papers, it's the middle one that" }, { "end": 21.16, "start": 17.580000000000002, "text": " introduces the original Word2vec algorithm." }, { "end": 29.240000000000002, "start": 21.16, "text": " And as you might know, Word2vec was extremely influential in NLP since this paper basically" }, { "end": 34.6, "start": 29.24, "text": " until recently, where it's sort of gone out of fashion a bit in research with the rise" }, { "end": 40.12, "start": 34.6, "text": " of things like ELMo and BERT, but it's still very, very relevant." }, { "end": 45.12, "start": 40.12, "text": " So we'll look at this historical paper today with kind of the hindsight of being a couple" }, { "end": 46.12, "start": 45.12, "text": " years into the future." }, { "end": 53.92, "start": 46.12, "text": " In fact, as you see right here, this was released in 2013, so it's seven years later now." }, { "end": 58.86, "start": 53.92, "text": " And we'll look back and we'll see what they said back then about the system." }, { "end": 66.6, "start": 58.86, "text": " This is not going to be like a very well-enhanced PowerPoint presentation of how Word2vec works." }, { "end": 70.96, "start": 66.6, "text": " We're going to look at the paper and read it together." }, { "end": 75.56, "start": 70.96, "text": " If you like content like this, if you like historical paper readings, let me know in" }, { "end": 81.34, "start": 75.56, "text": " the comments, share it out if you do like it and of course subscribe." }, { "end": 86.74, "start": 81.34, "text": " Because this kind of historical papers, I enjoy them, but many people might already" }, { "end": 89, "start": 86.74, "text": " know what these things are." }, { "end": 90.83999999999999, "start": 89, "text": " So, yeah." }, { "end": 91.83999999999999, "start": 90.83999999999999, "text": " Okay." }, { "end": 97.56, "start": 91.83999999999999, "text": " Let's, you know, go through the paper and pick up their ideas and kind of put them in" }, { "end": 98.94, "start": 97.56, "text": " context." }, { "end": 103.06, "start": 98.94, "text": " They say the recently introduced continuous skip gram model is an efficient method for" }, { "end": 109.28, "start": 103.06, "text": " learning high quality distributed vector representations that capture a large number of precise syntactic" }, { "end": 111.44, "start": 109.28, "text": " and semantic word relationships." }, { "end": 116.24, "start": 111.44, "text": " So the skip gram model was already introduced by Mikhailov in an earlier paper that came" }, { "end": 121.08, "start": 116.24, "text": " out, I believe not like one or two months prior to this one." }, { "end": 123.75999999999999, "start": 121.08, "text": " As I said, Word2vec is a series of papers." }, { "end": 126.67999999999999, "start": 123.75999999999999, "text": " I don't think there is a paper called Word2vec." }, { "end": 132, "start": 126.67999999999999, "text": " Rather, they here have released the code along with the paper." }, { "end": 135.28, "start": 132, "text": " The code was called Word2vec." 
}, { "end": 140.42, "start": 135.28, "text": " So the skip gram model was introduced previously, but it is replicated right here." }, { "end": 146.23999999999998, "start": 140.42, "text": " So in the skip gram model, what you're trying to do is you're trying to get a distributed" }, { "end": 148.28, "start": 146.23999999999998, "text": " word representation." }, { "end": 149.27999999999997, "start": 148.28, "text": " So what does that mean?" }, { "end": 154.07999999999998, "start": 149.27999999999997, "text": " That means that for each word in your language, let's take these words right here." }, { "end": 158.51999999999998, "start": 154.07999999999998, "text": " For each word in the language, you want to come up with a vector that somehow describes" }, { "end": 161.07999999999998, "start": 158.51999999999998, "text": " that word in a continuous fashion." }, { "end": 170.60000000000002, "start": 161.08, "text": " So with the two might be mapped to, I don't know, 0.1, 0.9, and 0.3." }, { "end": 174.48000000000002, "start": 170.60000000000002, "text": " Learn might be mapped to negative 0.5 and so on." }, { "end": 179.72000000000003, "start": 174.48000000000002, "text": " So each word gets assigned a vector in the same dimensional space." }, { "end": 184.98000000000002, "start": 179.72000000000003, "text": " And what the previous paper kind of discovered is that if you do this correctly, then these" }, { "end": 187.64000000000001, "start": 184.98000000000002, "text": " vectors, they have some kind of properties." }, { "end": 194.04, "start": 187.64, "text": " So we can already kind of jump ahead because this was already a bit, a bit researched in" }, { "end": 195.92, "start": 194.04, "text": " the last paper." }, { "end": 199.55999999999997, "start": 195.92, "text": " The semantics of these vectors will be something like this." }, { "end": 202.76, "start": 199.55999999999997, "text": " So here they have a two dimensional PCA." }, { "end": 208.14, "start": 202.76, "text": " So these are the first two dimensions of the 1000 dimensional skip gram vector." }, { "end": 213.79999999999998, "start": 208.14, "text": " So the vectors they obtain, they can do things like this, where they can show that in these" }, { "end": 221, "start": 213.8, "text": " spaces, for example, there appears to be a vector direction that characterizes the capital" }, { "end": 222.72, "start": 221, "text": " of a country." }, { "end": 229.12, "start": 222.72, "text": " So if you take a few countries and their capitals and you average that vector, you get a kind" }, { "end": 233.52, "start": 229.12, "text": " of a direction for capitalness of a city." }, { "end": 237.76000000000002, "start": 233.52, "text": " Given a country, you can see that there is a pretty clear relation here." }, { "end": 245.92, "start": 237.76, "text": " Now, some of these things have later been revised to such that they are ultimately ended" }, { "end": 247.56, "start": 245.92, "text": " up being not that impressive." }, { "end": 252.78, "start": 247.56, "text": " For example, there was always this kind of math with vectors." }, { "end": 256.2, "start": 252.78, "text": " And I don't, I believe this is, this might not be in this." 
}, { "end": 262.52, "start": 256.2, "text": " This is in the last paper where they discovered that if you take the vector for king and you" }, { "end": 272.32, "start": 262.52, "text": " subtract the vector for man and you add the vector for woman, then that would result in" }, { "end": 274.96, "start": 272.32, "text": " the vector for queen." }, { "end": 281.56, "start": 274.96, "text": " So the way they did it was basically they did this calculation right here and then they" }, { "end": 285.35999999999996, "start": 281.56, "text": " searched in the point they ended up, they searched for the nearest neighbor in their" }, { "end": 287.38, "start": 285.35999999999996, "text": " vocabulary." }, { "end": 288.79999999999995, "start": 287.38, "text": " And that turned out to be queen." }, { "end": 295.36, "start": 288.8, "text": " But in order to make it queen, actually, you have to exclude the original word king." }, { "end": 301.6, "start": 295.36, "text": " People quickly discovered that if you don't exclude the original word, the result of this" }, { "end": 305.82, "start": 301.6, "text": " kind of arithmetic will almost always lead back to the original word." }, { "end": 311.94, "start": 305.82, "text": " And then a lot of these analogy tasks are simply the result of you then discarding that" }, { "end": 313.72, "start": 311.94, "text": " word during the nearest neighbor search." }, { "end": 317.90000000000003, "start": 313.72, "text": " And then queen just happens to be one of the closest words." }, { "end": 323.78, "start": 317.9, "text": " And it's sort of much less dependent on which exact calculation you do here." }, { "end": 329.2, "start": 323.78, "text": " So there's been a lot of follow up work kind of analyzing, criticizing these vector maths." }, { "end": 334.85999999999996, "start": 329.2, "text": " But definitely we know that these word vectors turned out to be extremely, extremely helpful" }, { "end": 341.08, "start": 334.85999999999996, "text": " and syntactically and semantically relevant in downstream tasks because they have performed" }, { "end": 343.12, "start": 341.08, "text": " very, very well." }, { "end": 346.17999999999995, "start": 343.12, "text": " So how does the skip gram model work?" }, { "end": 352.84000000000003, "start": 346.18, "text": " How does it assign vectors to each word?" }, { "end": 357.04, "start": 352.84000000000003, "text": " So first of all, it has a dictionary." }, { "end": 360.38, "start": 357.04, "text": " So there is a word, an input word." }, { "end": 363.44, "start": 360.38, "text": " And for each word, you have a big dictionary." }, { "end": 369.68, "start": 363.44, "text": " And the big dictionary basically says that the word two is going to be mapped to this" }, { "end": 372.54, "start": 369.68, "text": " vector point one, da, da, da, da, da, da, and so on." }, { "end": 377.8, "start": 372.54, "text": " The word learn is going to be mapped to that vector." }, { "end": 383.62, "start": 377.8, "text": " And then you also have these output vectors right here." }, { "end": 390.56, "start": 383.62, "text": " And what you're trying to do is you're trying to take a phrase from the data set like this" }, { "end": 392.48, "start": 390.56, "text": " one right here." }, { "end": 398.46000000000004, "start": 392.48, "text": " And you take out one word like this word vector right here." }, { "end": 405.08, "start": 398.46, "text": " And you're trying to frame this as a prediction task." 
}, { "end": 410.68, "start": 405.08, "text": " So you're trying to frame this as, in this case, four different prediction tasks." }, { "end": 417.4, "start": 410.68, "text": " So you're telling your machine, I give you the word vector, and which other words are" }, { "end": 419.84, "start": 417.4, "text": " around the word vector?" }, { "end": 421.96, "start": 419.84, "text": " You just tell it that you don't tell it anything else." }, { "end": 426.03999999999996, "start": 421.96, "text": " You just say, which other words are around the word vector?" }, { "end": 433.04, "start": 426.04, "text": " And the correct answers in this case would be to, learn, word, and representations." }, { "end": 439.92, "start": 433.04, "text": " So you construct four different training examples where you have an X and a Y." }, { "end": 445.36, "start": 439.92, "text": " So the X is always vector, and the Y is two." }, { "end": 453.92, "start": 445.36, "text": " And then the next training sample, the X is vector, and the Y is learn, and so on." }, { "end": 459.24, "start": 453.92, "text": " So this here, each training sample is a classification task." }, { "end": 466.8, "start": 459.24, "text": " And the classification task is, as you can see, no, you can't see right here, but the" }, { "end": 474.44, "start": 466.8, "text": " classification task is you have the input word, and you classify it into one of many," }, { "end": 477.48, "start": 474.44, "text": " many, many, many, many, many classes." }, { "end": 482.64, "start": 477.48, "text": " Namely, there are as many classes as you have words in the dictionary." }, { "end": 488.76, "start": 482.64, "text": " So each word in the dictionary will have a class associated with it." }, { "end": 493.36, "start": 488.76, "text": " So in ImageNet, you have like 1,000 classes, but in these, that's already a lot." }, { "end": 499.59999999999997, "start": 493.36, "text": " But in these tasks, you're going to have 100,000 classes, because there are 100,000 words in" }, { "end": 502.47999999999996, "start": 499.59999999999997, "text": " the English language that you want to treat." }, { "end": 507.28, "start": 502.47999999999996, "text": " There are many more, but in this case, they leave away all the words that appear less" }, { "end": 509.08, "start": 507.28, "text": " than five times in their corpus." }, { "end": 510.84, "start": 509.08, "text": " That's still a lot of words." }, { "end": 515.6999999999999, "start": 510.84, "text": " So it's like a super duper duper lot of classification task." }, { "end": 522.22, "start": 515.6999999999999, "text": " But ultimately, if you do something like this, then the origin, so the representation that" }, { "end": 527.56, "start": 522.22, "text": " you end up with is going to be very, very good at doing these kind of downstream tasks." }, { "end": 529.88, "start": 527.56, "text": " And that's what they discovered." }, { "end": 536.68, "start": 529.88, "text": " So their skip gram model is nothing else than taking a word and predicting the surrounding" }, { "end": 540.4, "start": 536.68, "text": " words from that word." }, { "end": 542.48, "start": 540.4, "text": " And this is what it means." }, { "end": 545.76, "start": 542.48, "text": " This is the formal statement of the skip gram objective." }, { "end": 552.28, "start": 545.76, "text": " What you want to do is the objective of the skip gram model is to maximize the average" }, { "end": 554.68, "start": 552.28, "text": " log probability this one." 
}, { "end": 561.6, "start": 554.68, "text": " So for the word we're considering, the word T, we want to maximize the log probability" }, { "end": 571.4, "start": 561.6, "text": " of each word w that is in around the word c, sorry, around the word w in a context window" }, { "end": 572.4, "start": 571.4, "text": " of c." }, { "end": 573.72, "start": 572.4, "text": " That's exactly what we did before." }, { "end": 576.6, "start": 573.72, "text": " Take a word like this model right here." }, { "end": 585, "start": 576.6, "text": " And from it, we predict all of the words around it in a given window." }, { "end": 586, "start": 585, "text": " That's all." }, { "end": 587.5, "start": 586, "text": " That's the entire objective." }, { "end": 592.76, "start": 587.5, "text": " And that will give you very good representations." }, { "end": 594.96, "start": 592.76, "text": " And this is how you would implement that." }, { "end": 602.68, "start": 594.96, "text": " So what you'll have is you'll have these vector representation v that comes from your original" }, { "end": 603.68, "start": 602.68, "text": " dictionary." }, { "end": 605.32, "start": 603.68, "text": " Those are the things you learn." }, { "end": 610.7, "start": 605.32, "text": " And then because you have like a 30,000 way classifier, you know that a classification" }, { "end": 616.08, "start": 610.7, "text": " layer is nothing else than a linear layer followed by a softmax operation." }, { "end": 618.58, "start": 616.08, "text": " And that linear layer also has parameters." }, { "end": 620.4000000000001, "start": 618.58, "text": " These are the v primes." }, { "end": 627.48, "start": 620.4000000000001, "text": " So first you have the look up in the dictionary for the word vector right here." }, { "end": 630.74, "start": 627.48, "text": " And this is the vector of the classification layer." }, { "end": 634.36, "start": 630.74, "text": " Now there are modifications where you can use like the same vectors and so on." }, { "end": 636.94, "start": 634.36, "text": " Or you can also make use of these vectors." }, { "end": 641.88, "start": 636.94, "text": " But ultimately, you care about these vectors right here." }, { "end": 646.52, "start": 641.88, "text": " And the vectors here are simply the classification layers weights." }, { "end": 654.46, "start": 646.52, "text": " So here you can see that there is what you're trying to maximize is the inner product between" }, { "end": 662.28, "start": 654.46, "text": " the word that you're considering and the words around that word." }, { "end": 665.94, "start": 662.28, "text": " And you're trying to do a classification task." }, { "end": 667.58, "start": 665.94, "text": " So you need to normalize." }, { "end": 674.8000000000001, "start": 667.58, "text": " Now this is the normalization constant, and it goes over all of your vocabulary." }, { "end": 677.48, "start": 674.8000000000001, "text": " So that's what they tackle here." }, { "end": 682.6, "start": 677.48, "text": " They say w is the number of words in the vocabulary." }, { "end": 688.0600000000001, "start": 682.6, "text": " This formulation is impractical because the cost of computing the gradient is proportional" }, { "end": 691.2, "start": 688.0600000000001, "text": " to w, which is often large." }, { "end": 694, "start": 691.2, "text": " And that's 10 to the five to 10 to the seven terms." }, { "end": 699.6, "start": 694, "text": " So many like tens of millions of terms in your vocabulary." 
}, { "end": 701.52, "start": 699.6, "text": " That's just not feasible." }, { "end": 707.44, "start": 701.52, "text": " So people have been sort of trying different ways to get around very, very large number" }, { "end": 708.92, "start": 707.44, "text": " of classes." }, { "end": 711.88, "start": 708.92, "text": " And here it seems that that is really our bottleneck." }, { "end": 716.08, "start": 711.88, "text": " In the previous paper, they've already shown that this objective can give you very good" }, { "end": 717.9, "start": 716.08, "text": " word representation." }, { "end": 722.32, "start": 717.9, "text": " But now we need to get around the fact that we have such large vocabularies." }, { "end": 724.88, "start": 722.32, "text": " So the first idea here is hierarchical softmax." }, { "end": 726.36, "start": 724.88, "text": " And this is kind of a tangent." }, { "end": 732.2, "start": 726.36, "text": " I find this paper, by the way, it's sort of hard to read because it's like a half engineering" }, { "end": 733.88, "start": 732.2, "text": " paper." }, { "end": 740.4000000000001, "start": 733.88, "text": " But yeah, so first they introduce this hierarchical softmax, which is kind of a distraction." }, { "end": 743.0400000000001, "start": 740.4000000000001, "text": " It's kind of a here is what we do." }, { "end": 746.9200000000001, "start": 743.0400000000001, "text": " Here is what we considered first, but then didn't end up using really." }, { "end": 753, "start": 746.92, "text": " They do compare with it, but the flow of text is sort of that you expect this to be part" }, { "end": 755.12, "start": 753, "text": " of the final model, which it isn't." }, { "end": 760.4799999999999, "start": 755.12, "text": " So in the hierarchical softmax, what you do instead of having this giant multi class classification" }, { "end": 767.4799999999999, "start": 760.4799999999999, "text": " task right here, you take all of these classes right here, and you put them in a sort of" }, { "end": 768.4799999999999, "start": 767.4799999999999, "text": " a tree." }, { "end": 773.24, "start": 768.4799999999999, "text": " Okay, so you take this and you put them into a tree." }, { "end": 777.8, "start": 773.24, "text": " So instead of classifying, you know, let's say we have 1000 classes, instead of classifying" }, { "end": 782, "start": 777.8, "text": " 1000 ways, we first classify in two ways." }, { "end": 787.64, "start": 782, "text": " And then we classify in two ways again, from each one, and then we classify in two ways" }, { "end": 791.1, "start": 787.64, "text": " again, as you know, 1000 is like two to the 10." }, { "end": 800.16, "start": 791.1, "text": " So we need approximately 10 layers of this before we are actually arriving at 1000 classes." }, { "end": 805.88, "start": 800.16, "text": " But it also means that we only have two way classifications each time." }, { "end": 812.5, "start": 805.88, "text": " So in the hierarchical softmax, we build trees like this, and then we so we have a word," }, { "end": 818.4399999999999, "start": 812.5, "text": " we look up its vector, sorry, its vector, and then we classify it for each of these" }, { "end": 819.4399999999999, "start": 818.4399999999999, "text": " nodes." }, { "end": 826.64, "start": 819.4399999999999, "text": " So your output isn't going to be 1000, 1000 log probabilities, your output is going to" }, { "end": 832.96, "start": 826.64, "text": " be a log probability, a binary log probability for each of the nodes right here." 
}, { "end": 838.96, "start": 832.96, "text": " So you want to know, okay, here, is it in the upper half or the lower half of my classes?" }, { "end": 839.96, "start": 838.96, "text": " Okay, cool." }, { "end": 840.96, "start": 839.96, "text": " It's in the upper half." }, { "end": 844, "start": 840.96, "text": " Okay, here is in the upper half or the lower half and so on." }, { "end": 848.16, "start": 844, "text": " And you learn all to predict all of these junctions right here." }, { "end": 851.4, "start": 848.16, "text": " And that's going to end up you with you having to predict less." }, { "end": 859.0799999999999, "start": 851.4, "text": " Now, of course, you are constrained, you impose a very big prior on the class distribution," }, { "end": 860.68, "start": 859.0799999999999, "text": " classes aren't independently anymore." }, { "end": 866, "start": 860.68, "text": " Namely, if two classes here are in the same subtree, that means that they are going to" }, { "end": 873.88, "start": 866, "text": " be predicted, their predictions are going to be correlated because the path to them" }, { "end": 875.8, "start": 873.88, "text": " is the same partially." }, { "end": 880.68, "start": 875.8, "text": " So how you arrange the classes here is very important." }, { "end": 882.9599999999999, "start": 880.68, "text": " And there has been a lot of work in this." }, { "end": 888.52, "start": 882.9599999999999, "text": " But as I said, this is rather a distraction right here." }, { "end": 891.16, "start": 888.52, "text": " Hierarchical softmax is a way to solve this." }, { "end": 895.92, "start": 891.16, "text": " However, they went with a different way right here." }, { "end": 899.12, "start": 895.92, "text": " They went with this approach called negative sampling." }, { "end": 904.5999999999999, "start": 899.12, "text": " Negative sampling has been, it's been very influential." }, { "end": 910.56, "start": 904.5999999999999, "text": " Not only in word2vec, but negative sampling is one of the cornerstones of the current" }, { "end": 915.8, "start": 910.56, "text": " trend in self supervised learning and contrastive estimation and so on." }, { "end": 922.4399999999999, "start": 915.8, "text": " So this all of this, you know, it pops up in unlikely ways in other fields." }, { "end": 929.9, "start": 922.4399999999999, "text": " And it sort of, I'm not going to say it originated here, but definitely it was introduced into" }, { "end": 933.28, "start": 929.9, "text": " the popular deep learning world right here." }, { "end": 939.1199999999999, "start": 933.28, "text": " So they say an alternative to hierarchical softmax is noise contrastive estimation." }, { "end": 946.04, "start": 939.12, "text": " Okay, so in noise contrastive estimation posits that a good model should be able to differentiate" }, { "end": 949.2, "start": 946.04, "text": " data from noise by means of logistic regression." }, { "end": 951.72, "start": 949.2, "text": " You know, that seems very reasonable." }, { "end": 956.16, "start": 951.72, "text": " This is similar to the hinge loss and so on, yada yada yada." }, { "end": 961.32, "start": 956.16, "text": " While NCE can be shown to approximately maximize the log probability of the softmax, the skip" }, { "end": 965.66, "start": 961.32, "text": " grab model is only concerned with learning high quality vector representations." 
}, { "end": 971.1999999999999, "start": 965.66, "text": " So we are free to simplify noise contrastive estimation as long as the vector representations" }, { "end": 973.24, "start": 971.1999999999999, "text": " retain their quality." }, { "end": 976.4, "start": 973.24, "text": " We define negative sampling by this following objective." }, { "end": 977.88, "start": 976.4, "text": " So this is very interesting." }, { "end": 983.64, "start": 977.88, "text": " They see, okay, noise contrastive estimation, you know, it approximately maximizes the log" }, { "end": 984.64, "start": 983.64, "text": " probability." }, { "end": 989.92, "start": 984.64, "text": " So the noise contrastive estimation would actually be the correct way to approximate" }, { "end": 990.92, "start": 989.92, "text": " their problem." }, { "end": 997, "start": 990.92, "text": " However, they say, well, as long as, you know, as long as something reasonable comes out," }, { "end": 998.92, "start": 997, "text": " we're free to change that up a bit." }, { "end": 1004.24, "start": 998.92, "text": " So they go with this negative sampling approach right here." }, { "end": 1009.26, "start": 1004.24, "text": " And you can see that this is almost the same." }, { "end": 1015.16, "start": 1009.26, "text": " So it's written a bit differently from the original softmax thing because the original" }, { "end": 1018.64, "start": 1015.16, "text": " softmax thing was written as a fraction and here it's as a sum." }, { "end": 1025.52, "start": 1018.64, "text": " But what you're trying to do in the negative sampling framework is you're trying to maximize" }, { "end": 1026.66, "start": 1025.52, "text": " the following." }, { "end": 1032.04, "start": 1026.66, "text": " You're trying to maximize the inner product of the word you're considering and the words" }, { "end": 1033.2, "start": 1032.04, "text": " around them." }, { "end": 1034.2, "start": 1033.2, "text": " Okay." }, { "end": 1038.16, "start": 1034.2, "text": " So you're trying to still predict the words around you." }, { "end": 1045.12, "start": 1038.16, "text": " But now instead of having this prediction softmax over all of the classes, you only" }, { "end": 1049.56, "start": 1045.12, "text": " have the softmax over a subset of classes." }, { "end": 1057.28, "start": 1049.56, "text": " So what you'll do is you sample words from your vocabulary at random and you sample k" }, { "end": 1064.76, "start": 1057.28, "text": " of them and you're simply trying to now minimize the inner product between those words and" }, { "end": 1066.08, "start": 1064.76, "text": " your word." }, { "end": 1067.08, "start": 1066.08, "text": " Okay." }, { "end": 1070.2199999999998, "start": 1067.08, "text": " So what does that ultimately lead to?" }, { "end": 1073.3, "start": 1070.2199999999998, "text": " It ultimately leads to the following." }, { "end": 1077.8799999999999, "start": 1073.3, "text": " You have a word like this word here, negative." }, { "end": 1082.9199999999998, "start": 1077.8799999999999, "text": " And what you're trying to do is you're not trying that much to predict the word sampling." }, { "end": 1088.6599999999999, "start": 1082.9199999999998, "text": " What you're trying to do is you're trying to say that in my space right here, I simply" }, { "end": 1095.68, "start": 1088.6599999999999, "text": " want sampling to be closer than any other words that's not in the context window." }, { "end": 1096.68, "start": 1095.68, "text": " Okay." 
}, { "end": 1101.9199999999998, "start": 1096.68, "text": " So here is my word negative and here is my word sampling." }, { "end": 1104.6000000000001, "start": 1101.92, "text": " And I want these two to be close." }, { "end": 1109.76, "start": 1104.6000000000001, "text": " And if I sample another word, like here, this is the word cake." }, { "end": 1116.96, "start": 1109.76, "text": " If I, sorry, if I sample that, I simply want that to be far away, farther than the word" }, { "end": 1117.96, "start": 1116.96, "text": " sampling." }, { "end": 1118.96, "start": 1117.96, "text": " Okay." }, { "end": 1120.6200000000001, "start": 1118.96, "text": " So this is now a comparative." }, { "end": 1124.2, "start": 1120.6200000000001, "text": " It's not I classify sampling as the highest class." }, { "end": 1132.56, "start": 1124.2, "text": " It's simply I want to classify the word sampling against the other classes higher." }, { "end": 1133.56, "start": 1132.56, "text": " All right." }, { "end": 1136.68, "start": 1133.56, "text": " So, and this is now much, much easier." }, { "end": 1142.0800000000002, "start": 1136.68, "text": " So instead of a thousand or 10,000 or a million way classification, I now maybe have, I have" }, { "end": 1146, "start": 1142.0800000000002, "text": " a K plus one way classification, right?" }, { "end": 1147, "start": 1146, "text": " Pretty easy, right?" }, { "end": 1149.52, "start": 1147, "text": " I simply sample K other words." }, { "end": 1155.72, "start": 1149.52, "text": " And I assume because I have so many words, chances that I actually sample one that's" }, { "end": 1158.8, "start": 1155.72, "text": " in my context window is very small, right?" }, { "end": 1162.68, "start": 1158.8, "text": " So I simply sample other words and I say, well, these other words are random." }, { "end": 1166.8, "start": 1162.68, "text": " They have nothing to do with the current frame that I'm looking at." }, { "end": 1173.2, "start": 1166.8, "text": " So they should be, you know, they can be whatever they want, but at least they should be farther" }, { "end": 1180.0800000000002, "start": 1173.2, "text": " away than the words that are actually in my con in my context." }, { "end": 1185.8, "start": 1180.0800000000002, "text": " And that is negative sampling, the process of sampling negatives, this right here, and" }, { "end": 1191.96, "start": 1185.8, "text": " then making sure that the positives, which are these here, um, in this case, the words" }, { "end": 1198.8, "start": 1191.96, "text": " in the context are classified with a higher probability than the negatives for a given" }, { "end": 1200.04, "start": 1198.8, "text": " input word, right?" }, { "end": 1205.3999999999999, "start": 1200.04, "text": " This here is the input word." }, { "end": 1206.3999999999999, "start": 1205.3999999999999, "text": " That's it." }, { "end": 1207.3999999999999, "start": 1206.3999999999999, "text": " That's negative sampling." 
}, { "end": 1214.12, "start": 1207.3999999999999, "text": " And of course, yeah, as I said, you recognize this from current things like, um, self supervised" }, { "end": 1220.8799999999999, "start": 1214.12, "text": " learning where you want to have the same image augmented twice, go through the pipeline," }, { "end": 1224.72, "start": 1220.8799999999999, "text": " you know, you augment, you put a little bit of different noise and then you have a different" }, { "end": 1231, "start": 1224.72, "text": " image and at the end you say these two should be close together while this other one should" }, { "end": 1232.94, "start": 1231, "text": " be far apart." }, { "end": 1238.52, "start": 1232.94, "text": " It's the exact same thing here, except that you have a different way of obtaining the" }, { "end": 1241.26, "start": 1238.52, "text": " positive and the negative samples." }, { "end": 1245.66, "start": 1241.26, "text": " In this case, positive samples are everything that's in the context." }, { "end": 1252.14, "start": 1245.66, "text": " Negative samples are just randomly sampled from the dataset." }, { "end": 1256.5200000000002, "start": 1252.14, "text": " And that, you know, works, of course that works much, much, much faster." }, { "end": 1263.8400000000001, "start": 1256.5200000000002, "text": " And you can see that this, um, this, uh, turns out to give you vectors that are pretty good" }, { "end": 1268.44, "start": 1263.8400000000001, "text": " and you can train with higher vectors, sorry, with higher dimensional vectors, you can train" }, { "end": 1270.6000000000001, "start": 1268.44, "text": " with bigger vocabularies with this." }, { "end": 1274.0600000000002, "start": 1270.6000000000001, "text": " This has turned out to be very, very influential." }, { "end": 1280.2800000000002, "start": 1274.0600000000002, "text": " As I said, uh, now with the rise of BERT and so on, work to back is kind of getting forgotten," }, { "end": 1285.84, "start": 1280.28, "text": " but, um, this was a revolution and distributed vectors." }, { "end": 1288.12, "start": 1285.84, "text": " So it wasn't a thing really." }, { "end": 1292.92, "start": 1288.12, "text": " It kind of was a thing before that, but it wasn't really a thing that people used." }, { "end": 1297.18, "start": 1292.92, "text": " What people would do is still, they would do N-gram models before that." }, { "end": 1302.6, "start": 1297.18, "text": " So they would kind of dist, dist, they would sort of chunk up their sentences into N-grams" }, { "end": 1308.68, "start": 1302.6, "text": " into overlapping N-grams and then have a big giant, uh, table for their, where they index" }, { "end": 1309.68, "start": 1308.68, "text": " their N-grams." }, { "end": 1316.2, "start": 1309.68, "text": " So the word, I don't know, so the word, um, hello is ID one." }, { "end": 1321.4, "start": 1316.2, "text": " The word hello there is ID two and so on." }, { "end": 1324.14, "start": 1321.4, "text": " So you have a big table for all the N-grams." 
}, { "end": 1328.5600000000002, "start": 1324.14, "text": " And then what we would try to do is you would try to do this kind of bag of words estimation" }, { "end": 1333.88, "start": 1328.5600000000002, "text": " where you would take a, you know, whatever N-grams appeared in your sentence and you" }, { "end": 1340.5200000000002, "start": 1333.88, "text": " would have this big classification where you'd associate the N-grams with each other and" }, { "end": 1341.5200000000002, "start": 1340.5200000000002, "text": " so on." }, { "end": 1346.2, "start": 1341.5200000000002, "text": " So distributed word representations were kind of a revolution at that point, especially" }, { "end": 1351.48, "start": 1346.2, "text": " distributed representation that actually outperformed these old N-gram methods." }, { "end": 1353.18, "start": 1351.48, "text": " Um, yeah." }, { "end": 1358.3200000000002, "start": 1353.18, "text": " So there are a number of tricks right here that are, I think, not understood until this" }, { "end": 1359.3200000000002, "start": 1358.3200000000002, "text": " day." }, { "end": 1364, "start": 1359.32, "text": " For example, the question is how do you sample these negative samples?" }, { "end": 1372.8799999999999, "start": 1364, "text": " Right here, this basically says get K words from your vocabulary at random according to" }, { "end": 1374.84, "start": 1372.8799999999999, "text": " this distribution right here." }, { "end": 1376.84, "start": 1374.84, "text": " Now how are you going to do that?" }, { "end": 1379.6, "start": 1376.84, "text": " Basically you have a spectrum of options." }, { "end": 1384.4399999999998, "start": 1379.6, "text": " The one side of the spectrum is going to be completely uniform." }, { "end": 1385.4399999999998, "start": 1384.4399999999998, "text": " Okay." }, { "end": 1388.76, "start": 1385.4399999999998, "text": " We sample each word with the same probability." }, { "end": 1396.64, "start": 1388.76, "text": " And the other side of the spectrum is something like sample this according to their uni-gram." }, { "end": 1398.96, "start": 1396.64, "text": " These are two different things." }, { "end": 1401.16, "start": 1398.96, "text": " They're opposites in this, in this fashion." }, { "end": 1409, "start": 1401.16, "text": " So here you say, Hey, um, some words appear way, way, way more often than other words." }, { "end": 1411.36, "start": 1409, "text": " Shouldn't we prefer them when we sample?" }, { "end": 1412.36, "start": 1411.36, "text": " Right?" }, { "end": 1419.1599999999999, "start": 1412.36, "text": " So if we have a corpus, um, and shouldn't we sample from the corpus?" }, { "end": 1423.76, "start": 1419.1599999999999, "text": " And if in the corpus, one word appears 50 times more than the other word, then shouldn't" }, { "end": 1428.6, "start": 1423.76, "text": " we sample that 50 times more as a negative because it's, you know, so abundant and it" }, { "end": 1431.6399999999999, "start": 1428.6, "text": " should give a higher classification accuracy." }, { "end": 1434.3799999999999, "start": 1431.6399999999999, "text": " Whereas on the other hand, you could say, no, no, no, we should simply sample every" }, { "end": 1437.08, "start": 1434.3799999999999, "text": " word in our dictionary uniformly." 
}, { "end": 1445.8799999999999, "start": 1437.08, "text": " They came up with something in between, which they say, um, both NCE and negative sampling" }, { "end": 1448.3799999999999, "start": 1445.8799999999999, "text": " have noise distribution as a free parameter." }, { "end": 1454.12, "start": 1448.3799999999999, "text": " We investigated a number of choices and found that the uni-gram distribution raised to the" }, { "end": 1461.1399999999999, "start": 1454.12, "text": " three quarter power, i.e. uni-gram to the three quarter, outperformed significantly" }, { "end": 1463.96, "start": 1461.1399999999999, "text": " the uni-gram and uniform distributions." }, { "end": 1469.32, "start": 1463.96, "text": " For both NCE and negative on every task we tried including language modeling." }, { "end": 1471.92, "start": 1469.32, "text": " This I think is a mystery until today." }, { "end": 1478.72, "start": 1471.92, "text": " And it actually turned out that this exponent right here is magically much better than like" }, { "end": 1481.4, "start": 1478.72, "text": " the exponent of one or even the exponent of one half." }, { "end": 1487.48, "start": 1481.4, "text": " Like you might be reasonably assumed that the square root, you know, might be something," }, { "end": 1492.64, "start": 1487.48, "text": " but the three quarters I think turned out to be very good and very mystical." }, { "end": 1494.68, "start": 1492.64, "text": " So what does it mean?" }, { "end": 1499.16, "start": 1494.68, "text": " It means that you have kind of a balance between words that appear often and words that don't" }, { "end": 1500.44, "start": 1499.16, "text": " appear often." }, { "end": 1504.8200000000002, "start": 1500.44, "text": " Usually in these kind of things, you have a power law where you have very few words" }, { "end": 1506.46, "start": 1504.8200000000002, "text": " that appear very often." }, { "end": 1512.16, "start": 1506.46, "text": " And then you have, okay, that's the tail shouldn't go up, but you have a very long tail of words." }, { "end": 1517.3600000000001, "start": 1512.16, "text": " And what you want to do is in this case, you want to sample these words here more, but" }, { "end": 1522.0800000000002, "start": 1517.3600000000001, "text": " they appear so much more often than if you simply sample according to their uni-gram" }, { "end": 1526.96, "start": 1522.08, "text": " distribution, you'll basically not regard these words right here, you'll forget about" }, { "end": 1532.08, "start": 1526.96, "text": " them and your performance will suffer because they do appear every now and then." }, { "end": 1537.96, "start": 1532.08, "text": " So what you want to do is you want to push that those down a little bit and the optimal" }, { "end": 1544.48, "start": 1537.96, "text": " amount for the little bit turns out to be to raise it the you raise it to the three" }, { "end": 1547.48, "start": 1544.48, "text": " quarters." }, { "end": 1551.9199999999998, "start": 1547.48, "text": " Strange but you know, turned out to work well." }, { "end": 1557.88, "start": 1551.92, "text": " The other thing they do is they do the they do a sub sampling of frequent words." 
}, { "end": 1564.16, "start": 1557.88, "text": " So again, this is a way to kind of push down the often appearing words where they say the" }, { "end": 1569.44, "start": 1564.16, "text": " most frequent words can easily occur hundreds of millions of times like in the or a such" }, { "end": 1573.74, "start": 1569.44, "text": " words usually provide less information value than the rare words." }, { "end": 1577.96, "start": 1573.74, "text": " For example, while the skipgram model benefits from observing the co-occurrences of France" }, { "end": 1582.72, "start": 1577.96, "text": " and Paris, it benefits much less from observing the frequent co-occurrences of France and" }, { "end": 1589.24, "start": 1582.72, "text": " the as nearly every word co-occurring frequently with in a sentence with the." }, { "end": 1595.68, "start": 1589.24, "text": " So they do another trick here to counter this imbalance between rare and frequent words" }, { "end": 1601.04, "start": 1595.68, "text": " use a simple sub sampling approach, each word in the training set is discarded with probability" }, { "end": 1603.44, "start": 1601.04, "text": " computed by that formula." }, { "end": 1610.92, "start": 1603.44, "text": " Right, so therefore formula right here and you might be asking again why why this formula?" }, { "end": 1618.96, "start": 1610.92, "text": " So this is the sampling probability of a word and it goes with one over T. T is a temperature" }, { "end": 1625.2, "start": 1618.96, "text": " parameter and F is the frequency with which the word appears in the corpus." }, { "end": 1632.76, "start": 1625.2, "text": " So as you can see, as the word appears more in the in the corpus, then so this is the" }, { "end": 1638.72, "start": 1632.76, "text": " frequency as the word appears more than this thing goes down than this thing goes up." }, { "end": 1642.48, "start": 1638.72, "text": " So it's discarded with this probability." }, { "end": 1648.92, "start": 1642.48, "text": " So it's discarded with a higher probability if it appears more often." }, { "end": 1653.46, "start": 1648.92, "text": " Where F is frequency of a word, T is a chosen threshold." }, { "end": 1658, "start": 1653.46, "text": " We chose this sub sampling formula because it aggressively sub samples words whose frequency" }, { "end": 1663.24, "start": 1658, "text": " is greater than T while preserving the ranking of the frequencies." }, { "end": 1667, "start": 1663.24, "text": " Although this sub sampling formula was chosen heuristically, we found it to work well in" }, { "end": 1668.08, "start": 1667, "text": " practice." }, { "end": 1673.12, "start": 1668.08, "text": " It accelerates learning and even significantly improves the accuracy of the learned vectors" }, { "end": 1676.8, "start": 1673.12, "text": " of the rare words as will be shown in the following sections." }, { "end": 1682.18, "start": 1676.8, "text": " So again, something sort of arbitrary, it's more understandable than the three quarters," }, { "end": 1684.08, "start": 1682.18, "text": " but still it's sort of arbitrary." }, { "end": 1689.28, "start": 1684.08, "text": " They experimented around, they found this works well and then everybody ended up using" }, { "end": 1690.28, "start": 1689.28, "text": " that." }, { "end": 1693.6799999999998, "start": 1690.28, "text": " So that's how this kind of stuff happens." }, { "end": 1698.36, "start": 1693.6799999999998, "text": " Okay, so now we get into the empirical results." 
}, { "end": 1704.04, "start": 1698.36, "text": " And the empirical results in this case were already sort of given in the previous paper," }, { "end": 1713.9199999999998, "start": 1704.04, "text": " but here they have these the analogical reasoning task where you can see that the negative sampling" }, { "end": 1718.68, "start": 1713.92, "text": " did outperform the others by quite a bit right here." }, { "end": 1724.48, "start": 1718.68, "text": " So the negative sampling approaches outperformed the hierarchical softmax and the noise contrastive" }, { "end": 1726.02, "start": 1724.48, "text": " estimation." }, { "end": 1730.44, "start": 1726.02, "text": " And in the previous paper, they also compared with other baselines and saw that it also" }, { "end": 1738.92, "start": 1730.44, "text": " outperforms those while being quite time efficient." }, { "end": 1747.3600000000001, "start": 1738.92, "text": " So you can see that especially with the sub sampling approaches, the time here is 36 minutes" }, { "end": 1754.0800000000002, "start": 1747.3600000000001, "text": " for and I think they have like a huge corpus that they train on these were to back code" }, { "end": 1757.3600000000001, "start": 1754.0800000000002, "text": " turned out to be really, really efficient code." }, { "end": 1760.3600000000001, "start": 1757.3600000000001, "text": " And that's why it got so popular as well." }, { "end": 1765.1200000000001, "start": 1760.3600000000001, "text": " They did the same thing for phrases right here." }, { "end": 1772.1799999999998, "start": 1765.12, "text": " So for phrases like New York Times and so on, but this was kind of more of a this was" }, { "end": 1775.52, "start": 1772.1799999999998, "text": " more of a side thing." }, { "end": 1782.6, "start": 1775.52, "text": " The phrase vectors turned out to be, you know, rather a side thing from the actual code right" }, { "end": 1785.36, "start": 1782.6, "text": " here." }, { "end": 1792.08, "start": 1785.36, "text": " So yeah, as I said, this paper is very different from other research papers in that it's it's" }, { "end": 1797.1999999999998, "start": 1792.08, "text": " sort of half an engineering paper and all of these papers are they're kind of hard to" }, { "end": 1805.1799999999998, "start": 1797.1999999999998, "text": " read because they just kind of state some things in the order is kind of weird sometimes." }, { "end": 1808.58, "start": 1805.1799999999998, "text": " Why they do things is kind of weird sometimes." }, { "end": 1815.04, "start": 1808.58, "text": " But you can't you know, you can't deny that it had the quite the effect on the community." }, { "end": 1820.8999999999999, "start": 1815.04, "text": " And this it is a very cool paper, very cool series of papers." }, { "end": 1826.48, "start": 1820.9, "text": " And it's very cool that actually, they released the code, and they made the code such that" }, { "end": 1831.3200000000002, "start": 1826.48, "text": " it is super duper efficient, even like on a single machine." }, { "end": 1836.3600000000001, "start": 1831.3200000000002, "text": " And that was very cool, because you know, being Google, they could have just released" }, { "end": 1842.88, "start": 1836.3600000000001, "text": " code that is very efficient on a distributed data center." }, { "end": 1844.8000000000002, "start": 1842.88, "text": " And they didn't do that." }, { "end": 1849.5600000000002, "start": 1844.8000000000002, "text": " So that this is, it's sort of not really like today anymore." 
}, { "end": 1856.44, "start": 1849.56, "text": " Like today, when they release code, it's always you need you need like 50 cloud TPUs to do" }, { "end": 1857.44, "start": 1856.44, "text": " it." }, { "end": 1858.84, "start": 1857.44, "text": " And it's still cool that they release code." }, { "end": 1866.44, "start": 1858.84, "text": " But this was this was really a step into kind of democratizing AI." }, { "end": 1870.6799999999998, "start": 1866.44, "text": " And yeah, so that was my rant about Word2vec." }, { "end": 1871.96, "start": 1870.6799999999998, "text": " I hope you enjoyed this." }, { "end": 1878.1599999999999, "start": 1871.96, "text": " I hope this still was useful to you, even though most of you probably already knew Word2vec." }, { "end": 1880.8000000000002, "start": 1878.16, "text": " And yeah, so I'll see you next time." }, { "end": 1908.8, "start": 1880.8, "text": " Bye bye." } ]
GWt6Fu05voI
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
[Classic] Deep Residual Learning for Image Recognition (Paper Explained)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "vision", "computer vision", "kaiming he", "google", "resnet", "resnet50", "resnet151", "deep neural network", "imagenet", "residual", "identity function", "very deep", "convolutional neural network", "bottleneck", "overfitting" ]
#ai #research #resnet ResNets are one of the cornerstones of modern Computer Vision. Before their invention, people were not able to scale deep neural networks beyond 20 or so layers, but with this paper's invention of residual connections, all of a sudden networks could be arbitrarily deep. This led to a big spike in the performance of convolutional neural networks and rapid adoption in the community. To this day, ResNets are the backbone of most vision models and residual connections appear all throughout deep learning. OUTLINE: 0:00 - Intro & Overview 1:45 - The Problem with Depth 3:15 - VGG-Style Networks 6:00 - Overfitting is Not the Problem 7:25 - Motivation for Residual Connections 10:25 - Residual Blocks 12:10 - From VGG to ResNet 18:50 - Experimental Results 23:30 - Bottleneck Blocks 24:40 - Deeper ResNets 28:15 - More Results 29:50 - Conclusion & Comments Paper: https://arxiv.org/abs/1512.03385 Abstract: Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers---8x deeper than VGG nets but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation. Authors: Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://discord.gg/4H8xxDF BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher Parler: https://parler.com/profile/YannicKilcher LinkedIn: https://www.linkedin.com/in/yannic-kilcher-488534136/ If you want to support me, the best thing to do is to share out the content :) If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this): SubscribeStar: https://www.subscribestar.com/yannickilcher Patreon: https://www.patreon.com/yannickilcher Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2 Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n
Hi there! Today we'll look at Deep Residual Learning for Image Recognition by Kaiming He, Xiangyu Zhang, Shaoqing Ren and Jian Sun. So, you know it, this is an old paper. It is from 2015, but I thought we'd still look at it, because not only is it one of the most influential papers in modern deep learning, it is also a very well written paper, and I remember it like it was yesterday when it came out. It was like a bomb. Around that time a meme was going around: "I was winning ImageNet, but then someone made a deeper net." This was the time when, after AlexNet, people were trying to build bigger and bigger networks, and every time someone managed to build a bigger network, the accuracy on the ImageNet dataset would increase pretty much in lockstep with how much bigger the network was. But people got to the limit of building big networks, and then this paper dropped and changed everything. Now residual connections are everywhere, not only in image recognition: they are in transformers, they are in whatever; wherever you go, you'll probably find some residual connections somewhere in there. So let's look at this paper and revisit what kind of problems people had back then and how they solved them. The authors go directly into the problem of deep neural networks. The problem people had was this: they knew that if you can increase the depth of a neural network, you can make it perform better, make it generalize better, and reach a lower training loss, but optimizing it was hard. Specifically, this was the phenomenon people observed: if you have a 20-layer neural network, you could train it (with the usual learning rate drops; people had already figured out that you need to drop the learning rate), and it would reach a certain level, and here this would be the test error over here.
However, after a certain point, if they increased the depth even more, the training error would actually go up again, and so would the test error. And this is not a problem of overfitting, because overfitting would be when the training error stays as low or lower and only the test error goes up. So this is the first thing: this is not a phenomenon of overfitting, of too many parameters. So why can't we train bigger networks? Networks until that time had very much followed the original network design envisioned by people like Yann LeCun, and also AlexNet, and the most popular ones were these VGG nets. They were very much of the philosophy that you have the image here and you input it into convolutional layers, which at first keep a big resolution but increase the channel size by some amount, and then you sort of downscale the image as you increase the number of filters; so you would stack more and more filters while downscaling the resolution of the image. The reasoning was that if you do image classification, say you want to classify this into a Lego tower or whatever that is, it's not that important where it is. On the lower levels you want to parse out very low-level features like edges and so on, and for those it is still important where they are: the fact that here's an edge, here's an edge, here's an edge. But as you go higher up, to more and more abstract features (and we already knew that these neural networks tend to learn more and more abstract features as you go up the layers), the hypothesis was that the exact localization of these abstract features would be less and less important. If you recognize that there is a rectangle, it's not that important where it is, just that it's somewhere there, and maybe where it is in relation to the others. So if you want to recognize a car, the lower layers would recognize the fact that there are edges, the intermediate layers would recognize the geometric shapes of maybe the wheels and the body (it's not that important where exactly they are), and the higher layers would learn to combine the individual parts with each other. Again, it becomes less and less important where these things are, and more and more important that you build more expressive features. So people would downscale the resolution and upscale the number of filters. That's a good heuristic, and this was basically the architecture of these networks. And the question was: if we increase the number of layers, so if instead of one of these layers we simply have two of them, why does it get worse? This paper makes an interesting observation: the degradation is not caused by overfitting, and adding more layers leads to a higher training error. The degradation indicates that not all systems are similarly easy to optimize. Let us consider a shallower architecture and its deeper counterpart that adds more layers onto it. There exists a solution by construction to the deeper model: the added layers are identity mappings, and the other layers are copied from the learned shallower model. So, pretty easy: if you have a shallow model, say five layers, that learns a particular function, I can pretty easily prove that there is a deeper model that learns the same function, simply by copying over those five layers and having the added ones learn the identity function. If we are able to learn this, we should be able to train the deeper network to at least the same accuracy; that's what this paper argues, because the added layers can simply learn the identity function.
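To see how straightforward that construction is, here is a minimal PyTorch-style sketch (my own illustration, with made-up layer sizes; the paper itself contains no such code): we take a shallow model, append a convolution initialized to the identity, and check that the deeper model computes exactly the same function.

import torch
import torch.nn as nn

# Stand-in for the "learned shallower model": any conv stack that ends in a ReLU.
shallow = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
)

# Deeper counterpart by construction: reuse the learned layers, then append a
# convolution initialized to the identity mapping. Because the copied stack ends
# in a ReLU, its output is non-negative, so identity-conv followed by another
# ReLU leaves the signal unchanged.
extra = nn.Conv2d(16, 16, kernel_size=3, padding=1)
nn.init.dirac_(extra.weight)  # 3x3 kernel that passes channel i through unchanged
nn.init.zeros_(extra.bias)
deeper = nn.Sequential(shallow, extra, nn.ReLU())

x = torch.randn(2, 3, 8, 8)
print(torch.allclose(shallow(x), deeper(x)))  # True: same function, more layers

The point of the observation is that although this solution exists by construction, the optimizer does not find it: the deeper plain network trains to a worse optimum.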
So it must have something to do with the ease of optimizing these deep architectures, not with overfitting. I think if you read the entire text, this is very, very clear; they lead you through this reasoning, saying: look, all these layers have to do is learn the identity function, and then we would at least get the same accuracy. So why don't they learn the identity function? Well, because we initialize most weights towards zero. We initialize them randomly, but mostly around zero: our initialization procedures usually sample from some Gaussian with some standard deviation, but with a mean of zero. And also, if we use things like weight decay, L2 regularization, all of these things bias the weights towards zero. So if there is one natural thing these networks are good at, it is learning the zero function really well. Learning the identity function is as difficult as learning any other function; the identity as a convolutional filter is actually pretty difficult to learn, because if I have my three-by-three filter, the identity function is a one in the center and zeros everywhere else. That's not that easy: you need to learn nine weights in the correct way. So this paper asks: can we do something to make the default function of the network not the zero function, or whatever the randomly initialized function is; can we make the default function the identity function? And that brings you to residual connections. Instead of learning to transform x via a neural network into x, which is the identity function, why don't we have x stay x and then learn whatever we need to change? Call the output x tilde: if the assumption is that it's a good default to not change much, so x tilde is almost the same as x, we might build this directly into the architecture, the fact that the two are equal, plus some deviation that is learned right here. The hypothesis is that, especially the deeper you go, each function will actually learn not that much: it will change the signal a little bit, but mostly it will learn the identity function if it behaves well. Therefore it might be reasonable to build this directly into the architecture, and of course this has turned out to be very accurate; it has actually been reasonable to build this in. So that's what they propose: instead of just having weight layers one after another, they have these skip connections. Instead of learning the entire function, which they call H of x and which might be very complicated, they learn a function F, where F is whatever you need to change about x; you see, at the end you add x back to it. So these weight layers simply learn whatever makes the output different from the input, and learning differences gives you the desired property. Because what do we know about weight layers from before? They tend towards the zero function, whether through weight decay or through how we initialize them. Well, if F tends towards the zero function, then H becomes the identity function. So the default function of this network is the identity function.
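To make the H(x) = F(x) + x idea concrete, a basic residual block might look like the following. This is a hedged, minimal PyTorch-style sketch (class name and layout are my own, not the authors' released implementation):

import torch.nn as nn

class BasicBlock(nn.Module):
    # Computes y = relu(F(x) + x), where F is two 3x3 convolutions.
    # If the weights of F are pushed towards zero (initialization, weight decay),
    # the whole block tends towards the identity function, which is the point.
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)  # BN after each convolution, before activation
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)  # skip connection: add the input back, then ReLU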
And whenever we learn something, we learn how to deviate from the identity function; that is a much better default function. Now, it's not entirely true that the default function is the identity function: you see that after the skip connection there is actually a ReLU, so in total the network is still a nonlinear function, but the default for the individual blocks here is the identity. Now, if you chain these blocks, you get a residual network, and that's what they propose right here. On the left you see the original VGG architecture like we described it: you have an image, which has three channels, and you first bring it up to 64 channels while keeping the resolution; then you max-pool, which halves the resolution, but you go up with the filters to 128; you max-pool again, go up with the filters, and so on. Even though it doesn't look like it, this has a lot of parameters and it needs a lot of computation: it takes 19.6 billion floating point operations for a forward pass. In contrast, the networks we're going to build here, the residual networks, take 3.6 billion FLOPs, so they are much, much cheaper in terms of complexity than the old VGG networks, while still being much deeper. The hypothesis is: the deeper, the better; and as a trade-off, per layer you don't actually need that many parameters, because you don't learn that much per layer, but the succession of layers gains you much more than single massive layers would. You can see that at the same resolution, the ResNets can get away with far fewer filters, and that's why they are of smaller size. So this is the comparison: the VGG-19, and then they build this 34-layer network, which they call "plain", and you can see it is simply a 34-layer network with no pooling right here; instead of pooling, they do a stride-2 convolution, which has also become kind of more standard than max or average pooling for downscaling. So this paper has actually set the standards for a lot of things in modern deep learning. Our goal is going to be to compare, first of all, the VGG-19 to the 34-layer plain network, to show that you lose performance when you simply up the number of layers; but then we introduce the residual connections, as you can see right here: there is always this jumping connection, and along these jumping connections the signal can travel as the identity function. What we're going to see is that going from plain to residual, introducing no extra parameters, just these skip connections, will change everything: it will make this network all of a sudden trainable and make the deeper networks the better networks. The only little caveat is, of course, that in order to build a residual connection, the output has to be of the same size as the input, because you need to add the input to the output, and this here, for example, is not given: the signal after this layer is going to be half as big, because it's a stride-2 convolution, so the output right here is only half the size, but it has twice the number of filters. You can see right here, this has 64 filters, and here we go to 128 filters; that's why this connection right here has parameters, in order to simply expand the number of filters. These are the 1x1 convolutions that simply project the 64 filters to 128 filters. This doesn't introduce too many parameters, because it's only one by one.
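Where the shapes change, the shortcut needs help; a sketch of such a downsampling block with a projection shortcut (what the paper calls option B below) might look like this, again my own illustrative code rather than the authors':

import torch.nn as nn

class DownsampleBlock(nn.Module):
    # Stride-2 block: the resolution halves and the channel count doubles
    # (e.g. 64 -> 128), so the plain identity shortcut no longer matches shapes.
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv1 = nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=2, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(out_ch)
        self.conv2 = nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_ch)
        self.relu = nn.ReLU(inplace=True)
        # 1x1 stride-2 projection: only there to fix the shapes, and cheap
        # compared to a 3x3 convolution. (Option A instead fills the extra
        # channels with zeros and adds no parameters at all.)
        self.proj = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=1, stride=2, bias=False),
            nn.BatchNorm2d(out_ch),
        )

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + self.proj(x))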
In fact, you have different options here. The world has ended up at the option of doing 1x1 convolutions, but in this paper they still explore three different options. In this particular experiment, option A is simply to zero-pad: you keep the first 64 channels and fill the additional channels with zeros. Option B is the 1x1 convolution where the dimensions change, and option C is that all of these connections get 1x1 convolutions, which introduces extra parameters. They realize that option C isn't improving over option B substantially, in fact only marginally, and they say, okay, that's probably just because we have more parameters; so ultimately they went with option B, and I think that's what the world does right now. When I first read this, I particularly enjoyed this paragraph right here; let's read it together: "Our implementation for ImageNet follows the practice in..." The image is resized with its shorter side randomly sampled in some range for scale augmentation; a crop is randomly sampled from the image or its horizontal flip, with the per-pixel mean subtracted; the standard color augmentation is used. They adopt batch normalization right after each convolution and before activation. Here an age-old discussion was born: when to use batch normalization, before or after the activation, and I think people are still fighting over this today. "We initialize the weights as in [13] and train all plain/residual nets from scratch", they use SGD, the learning rate starts from some value and is divided by ten, and so on. So in this one paragraph they detail basically the whole training procedure and all the tricks that they use. I remember specifically that I had read all of the paper up to here, which was the idea, and I could follow it, like: oh, this is super well explained, this is so cool; and then I expected basically an implementation of that, and then there's this one single paragraph with like 20 lines saying: oh, and by the way, we use these 50 tricks from these other papers. I guess it was already happening back then: you needed to do all the modern tricks in order to really reach the top accuracies. But in hindsight we know it wasn't the tricks that helped them, it was actually their idea; I just thought it was rather funny. So here you can see the results. If you look at the left, these are the plain networks (we've already sort of seen this; this is on ImageNet right here), and you can see the 18-layer network simply has lower training and validation error. The thin curves denote the training error and the bold curves denote the validation error of the center crops; so they evaluate on center crops, and the training error is going to be higher because they apply these different augmentations. But you can see that both the training and the validation error are higher in the deeper network if you don't use residual connections. Again, this is not due to overfitting; this is because we can't train these deep networks, even though we should be able to: the solution space of the 18-layer network is a subspace of the solution space of the 34-layer network. Everything tells us we should be able to learn the 34 layers to at least the accuracy of the 18 layers, but we can't. However, introduce residual connections, and you can see that the trend is exactly reversed.
Now the 34-layer network with residual connections has a much, much lower training and validation error than the 18-layer one. In fact, look at this table right here: if you introduce the residual connections to the 18 layers, it's marginally better; however, if you introduce the residual connections to the 34 layers, it is a lot better. This is another testament to the fact that these residual connections really help more and more the deeper you go. You can see the effect: these 18 layers, this is sort of a VGG-19-depth network, and there we already know we can train these without residual connections, because we were able to train VGG-19. However, if we go higher to more layers, these residual connections all of a sudden make it a lot better. It's not that we can't train the 34 layers at all, but the residual connections just help a lot more, and, most importantly, they don't degrade the performance compared to the shallower network. They then compare the different options, as I said, being A, B and C, where A is zero-padding instead of a projection, B is having projections only where the channel counts don't fit, and C is having projections in every single residual connection. And you can see right here that option B gives you quite a bit of a boost, while option C doesn't give you that much of a boost on top, introduces many more parameters, and overall, I guess, they decided against it; since then, the world has also decided against it. They also build deeper networks: the 50-layer ResNet, the 101-layer ResNet and the 152-layer ResNet, and the 152-layer ResNet ended up being the best one, as you can see here. And you see a pretty gain, like an almost lockstep gain: more depth means a better network. At the time, these numbers were unheard of: even a 50-layer deep neural network was bombastic, but 152 layers, that was crazy, and the fact that it still has fewer parameters than VGG-19 and performs better, that was absolutely mind-blowing. At the end they built an ensemble of these models and ended up winning the 2015 ImageNet competition. That was still very important back then, who wins ImageNet that year; I haven't even followed up on the last few years, it's some kind of wide or fixed ResNet, whatnot, pre-trained with 50 billion extra data points. So, for the deeper networks, they decide that the blocks become computationally rather expensive, so they introduce these bottleneck blocks, here on the right. On the left, if you have a 64-dimensional input, you do 64 feature channels in your convolution and have a 64-dimensional output. On the right, you have a 256-dimensional input, and they say we can save computational power by pretty much projecting down to 64 first, because then the complexity of this 3x3 layer, which is the expensive layer, will be the same as the complexity of one of the blocks on the left, and then we project up again. The 1x1 convolutions are significantly less computationally intensive than the 3x3 convolutions; it's nine times fewer operations, if you think about it. So that's what they use to build the deeper residual networks.
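To spell out that arithmetic, here is a hedged sketch of such a bottleneck block, with rough per-position multiply counts in the comments (my own naming and counting, ignoring BN and ReLU):

import torch.nn as nn

class Bottleneck(nn.Module):
    # 256 -> 64 (1x1) -> 64 (3x3) -> 256 (1x1), as in the deeper ResNets.
    # Multiplies per spatial position:
    #   1x1 down: 256 * 64     =  16,384
    #   3x3:      64 * 64 * 9  =  36,864
    #   1x1 up:   64 * 256     =  16,384   (roughly 70k in total)
    # versus a single 3x3 at 256 channels: 256 * 256 * 9 = 589,824.
    def __init__(self, channels=256, bottleneck=64):
        super().__init__()
        self.reduce = nn.Conv2d(channels, bottleneck, kernel_size=1, bias=False)
        self.bn1 = nn.BatchNorm2d(bottleneck)
        self.conv3x3 = nn.Conv2d(bottleneck, bottleneck, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(bottleneck)
        self.expand = nn.Conv2d(bottleneck, channels, kernel_size=1, bias=False)
        self.bn3 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.reduce(x)))
        out = self.relu(self.bn2(self.conv3x3(out)))
        out = self.bn3(self.expand(out))
        return self.relu(out + x)  # identity shortcut around the whole bottleneck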
These residual networks, the ResNet-50, 101 and 152, are still staples today; you can get pre-trained versions of them, and people still use them: ResNet-50 is used in every segmentation application, whatnot. So these decisions have gone a long way. Here you can see the number of parameters in these residual networks, and this was the absolute craziest thing right here: 1202 layers. You can see, still until here, ResNet-110 (now this is on CIFAR-10, not on ImageNet anymore), but even 110 layers still had fewer parameters, or at least the same order of parameters, as those previous networks that were only 19 layers deep. This was unheard of, and much more unheard of: a 1202-layer network trained on CIFAR-10. It's a bit of an overkill, but they say their goal was explicitly to study depth. And you can see here that with the deeper and deeper networks, they outperformed all of the previous networks, all of the baselines, and themselves, as they went deeper and deeper. However, once you go to 1202 layers, you go up again. So here's the question: was this all just kind of a trick, a hack, and do we run into the same problem again? That's the question they ask themselves, and the answer is no. If you look right here, you see again the plain networks; in the plain networks you can pretty easily see that the more layers you have, the higher your error goes, whereas in the residual networks it's exactly the opposite: the more layers you have, the lower your error. And if you compare this 110-layer network with the 1202-layer network, you see the validation error going up again; however, the training error (I can't zoom in more, but it's the same) is at zero. So here they conclude: now we are overfitting. They don't use the biggest data augmentations like we use today, so overfitting was still a thing back then. So they conclude, okay, now we have actually built a large enough network that it is overfitting, and the fact that the test error goes up again is probably due to the fact that we are overfitting. So not only have they enabled us to build deeper networks, they have effectively shown that this can get you to the point where you don't need deeper networks anymore, at least on CIFAR-10, because you start overfitting. This is a lot of evidence for the fact that biasing the networks towards the identity function is a very valid thing to do, and that it is the solution to the problem that we can't train deep networks. Lastly, they investigate the size of the responses. Their hypothesis is: if it is really beneficial to bias the network towards the identity function, and if it is really true that each of these layers only learns a little bit (because the identity function is already very good, each layer only needs to learn kind of a small function), then let's look at the responses, the response magnitude of these layers, of the signal through the layers, and compare those with the response magnitudes in the other networks where you don't have the skip connections. The hypothesis is that the responses of those layers should be much larger, because they have to learn much more, and the responses here will be much smaller, because the identity function is already doing most of the work. And that's exactly what you find.
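One plausible way to reproduce that measurement is with forward hooks; this is a hedged sketch that assumes the "response" is read off at each BatchNorm output (the paper measures the standard deviation of each 3x3 layer's output after BN, before the nonlinearity and before any addition):

import torch
import torch.nn as nn

def layer_response_stds(model, loader):
    # Record the standard deviation of every BatchNorm2d output as a proxy
    # for the response magnitude of the layer it follows.
    stats, hooks = {}, []

    def make_hook(name):
        def hook(module, inputs, output):
            stats.setdefault(name, []).append(output.detach().std().item())
        return hook

    for name, module in model.named_modules():
        if isinstance(module, nn.BatchNorm2d):
            hooks.append(module.register_forward_hook(make_hook(name)))
    model.eval()
    with torch.no_grad():
        for x, _ in loader:
            model(x)
    for h in hooks:
        h.remove()
    return {name: sum(v) / len(v) for name, v in stats.items()}

Run over a plain and a residual network of the same depth, the expectation is that the residual one shows markedly smaller per-layer standard deviations.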
Here the layers are ordered by response, and you can see the plain networks, the dashed lines, are significantly above the residual networks. And that's not a function of the depth, because if the depths were actually equal here, you would expect the dashed lines to simply stretch out; however, exactly the opposite is happening: the residual networks' responses are much smaller, even at the beginning. And this is what I like about this paper: it's one narrative; there is a hypothesis, and then predictions are made from the hypothesis. They say: if our hypothesis is right, then not only should our idea get us better accuracy (that's what most papers today settle for), but also, for example, we should be able to push our network to the brink of where we actually are overfitting, like here, and the responses of the signal through our layers should be smaller. Research like this is just pretty cool, and I think it's a lesson for us that, sadly, the world has taken the ResNets, but the world hasn't also taken the research methodology of this paper. Again, if you want a good read, it's very well written; I'm sure you can follow it even if you have read very few papers. And with that, I hope you enjoyed this. Please tell me what you think of going through kind of old papers, looking at whether or not they have stood the test of time. Any other comments, leave them in the comments, I do read them, and I'll see you next time. Bye bye.
[ { "end": 4.74, "start": 0, "text": " Hi there! Today we'll look at deep residual learning for image recognition" }, { "end": 13.44, "start": 4.74, "text": " by Kai Ming He, Xiang Yu Cheng, Shao Qing Ren and Jian Sun. So this, you know it, this is an" }, { "end": 21.400000000000002, "start": 13.44, "text": " old paper. It is from 2015 but I thought we'd still look at it because this not" }, { "end": 26.68, "start": 21.400000000000002, "text": " only is it one of the most influential papers in modern deep learning, it is" }, { "end": 33.12, "start": 26.68, "text": " also a very well written paper and I remember it like it was yesterday when" }, { "end": 40.480000000000004, "start": 33.12, "text": " this came out. This was like a bomb. So around that time this this meme was" }, { "end": 48.84, "start": 40.480000000000004, "text": " going around. I was winning ImageNet but then someone made a deeper net. This" }, { "end": 54.879999999999995, "start": 48.84, "text": " was a this was a the time when after AlexNet people were trying to build" }, { "end": 60.440000000000005, "start": 54.88, "text": " bigger and bigger networks and every time someone managed to build a bigger" }, { "end": 67.48, "start": 60.440000000000005, "text": " network the accuracy on ImageNet data set would increase pretty much in lockstep" }, { "end": 73, "start": 67.48, "text": " with how much bigger the network was but people got to the limit of building big" }, { "end": 78.84, "start": 73, "text": " networks and then this paper drops and changed everything and now residual" }, { "end": 83.32000000000001, "start": 78.84, "text": " connections are everywhere not only in image recognition they are in" }, { "end": 88.44, "start": 83.32, "text": " transformers they are in whatever wherever you go you'll probably find" }, { "end": 96.08, "start": 88.44, "text": " some residual connections somewhere in there. So yeah let's let's look at this" }, { "end": 102.96, "start": 96.08, "text": " paper and let's revisit what kind of problems people had then and how they" }, { "end": 110.72, "start": 102.96, "text": " solved it. So here they go directly into into this problem of deep neural" }, { "end": 118.56, "start": 110.72, "text": " networks and the problem that people had was they knew that if you can increase" }, { "end": 124.28, "start": 118.56, "text": " the if you can increase the depth of a neural network you can make it perform" }, { "end": 129.48, "start": 124.28, "text": " better you can make it generalize better you can reach lower training loss but" }, { "end": 134.68, "start": 129.48, "text": " optimizing it was hard. Specifically this was a phenomenon that people observed. So" }, { "end": 138.56, "start": 134.68, "text": " if you have a 20 layer neural network you could train it and you know there is" }, { "end": 142.72, "start": 138.56, "text": " this learning rate drop people have all had already figured out that you need to" }, { "end": 148.76, "start": 142.72, "text": " drop drop the learning rate and it would reach a certain level and here this would" }, { "end": 154.76, "start": 148.76, "text": " be the test error over here. 
However if after a certain point if they increase" }, { "end": 161.96, "start": 154.76, "text": " the depth even more the training error would actually go up again and so would" }, { "end": 167.56, "start": 161.96, "text": " the test error and this is not a problem of overfitting because overfitting would" }, { "end": 173.52, "start": 167.56, "text": " be when the training error is lower or as low and then the test error went up. So" }, { "end": 176.84, "start": 173.52, "text": " this is the first thing this is not a phenomenon of overfitting of too many" }, { "end": 183.56, "start": 176.84, "text": " parameters so why can't we train bigger layers networks until that time have" }, { "end": 189.28, "start": 183.56, "text": " very much followed kind of the original network design that was envisioned by" }, { "end": 196.48000000000002, "start": 189.28, "text": " sort of people like Jan LeCun and also Alex Net and the most popular ones were" }, { "end": 201.79999999999998, "start": 196.48, "text": " these VGG nets and they were very much of the philosophy that you'd have like" }, { "end": 209.23999999999998, "start": 201.79999999999998, "text": " some you have the image here and you input that into convolutional layers" }, { "end": 216.44, "start": 209.23999999999998, "text": " which first would kind of keep a big resolution but would increase the" }, { "end": 221.56, "start": 216.44, "text": " channel size by you know some amount and then you would sort of downscale the" }, { "end": 227.28, "start": 221.56, "text": " image as you increase the number of filters so you would stack more and more" }, { "end": 233.12, "start": 227.28, "text": " filters and draw more filters you would stack more and more filters while" }, { "end": 238.88, "start": 233.12, "text": " downscaling the resolution of the image the reasoning was that if you do image" }, { "end": 244.88, "start": 238.88, "text": " classification right then you know where on this where on this maybe you want to" }, { "end": 252.24, "start": 244.88, "text": " classify this into a Lego tower or whatever that is it's not that important" }, { "end": 257.48, "start": 252.24, "text": " where it is so on the lower levels you would want to parse out like very low" }, { "end": 262.52, "start": 257.48, "text": " layer features like edges and so on and these are still important where they are" }, { "end": 266.24, "start": 262.52, "text": " right the fact that here's an edge here's an edge here's an edge but then" }, { "end": 270.8, "start": 266.24, "text": " as you go higher up and go to more and more abstracted features and we already" }, { "end": 276, "start": 270.8, "text": " knew that these neural network they tend to learn more and more abstract features" }, { "end": 281, "start": 276, "text": " as you go up the layers the hypothesis was that the exact localization of" }, { "end": 285.8, "start": 281, "text": " these abstract features would be less and less important so if there is if you" }, { "end": 291.16, "start": 285.8, "text": " recognize that there is a rectangle it's not that important where it is just that" }, { "end": 295.6, "start": 291.16, "text": " it's somewhere there and maybe where it is in relation to the other so if you" }, { "end": 301.24, "start": 295.6, "text": " have if you recognize want to recognize a car the lower layers would recognize" }, { "end": 305.6, "start": 301.24, "text": " the fact that there are edges and then the intermediate layers would recognize" }, { "end": 310.8, "start": 305.6, "text": " the geometric 
shapes of maybe here the wheels and these bodies but it's not that" }, { "end": 314.52000000000004, "start": 310.8, "text": " important where exactly they are and then the higher layers would learn to" }, { "end": 321.64000000000004, "start": 314.52000000000004, "text": " combine the individual parts to each other and again it becomes less and less" }, { "end": 325.96, "start": 321.64, "text": " important where these things are and more and more important that you build" }, { "end": 330.96, "start": 325.96, "text": " more expressive features so people would downscale the resolution upscale the" }, { "end": 335.47999999999996, "start": 330.96, "text": " number of filters now that's a good heuristic but this is based this was" }, { "end": 342.76, "start": 335.47999999999996, "text": " basically the architecture of these networks and we would question why would" }, { "end": 348.96, "start": 342.76, "text": " if we increase the number of layers so if we instead of one here we have two of" }, { "end": 354.35999999999996, "start": 348.96, "text": " these layers right we simply have two of these layers and here we have two of" }, { "end": 361.88, "start": 354.35999999999996, "text": " these layers why does it get worse especially this paper here makes an" }, { "end": 370.35999999999996, "start": 361.88, "text": " interesting observation so it is not caused caused by overfitting and adding" }, { "end": 377.15999999999997, "start": 370.35999999999996, "text": " more layers leads to a higher training error the degradation indicates that is" }, { "end": 382.44, "start": 377.16, "text": " not all systems are similarly easy to optimize let us consider a shallower" }, { "end": 387.08000000000004, "start": 382.44, "text": " architecture and its deeper counterparts that adds more layers onto it there" }, { "end": 391.40000000000003, "start": 387.08000000000004, "text": " exists a solution by construction to the deeper model the added layers are" }, { "end": 395.36, "start": 391.40000000000003, "text": " identity mapping and the other layers are copied from the learned shallower" }, { "end": 401.84000000000003, "start": 395.36, "text": " model so pretty easy if you have a shallow model like five layers that" }, { "end": 406.52000000000004, "start": 401.84000000000003, "text": " learns a particular function I can pretty easily prove that there is a deep" }, { "end": 412.15999999999997, "start": 406.52, "text": " model that learns the same function by simply copying over these five layers and" }, { "end": 418.91999999999996, "start": 412.15999999999997, "text": " having these here learn the identity function okay so if we are able to learn" }, { "end": 423.79999999999995, "start": 418.91999999999996, "text": " this we should be able to train this network to at least the same accuracy" }, { "end": 428.24, "start": 423.79999999999995, "text": " that's what this paper argues because it can you know these layers can simply" }, { "end": 432.76, "start": 428.24, "text": " learn the identity function so it must have something to do with the easiness" }, { "end": 440.15999999999997, "start": 432.76, "text": " of optimizing these deep architectures not with overfitting this is I think if" }, { "end": 445.68, "start": 440.15999999999997, "text": " you read the entire text here it's very very clear if you read it they lead you" }, { "end": 451.3, "start": 445.68, "text": " through this reasoning saying that look all these layers have to do is learn the" }, { "end": 457.68, "start": 451.3, "text": " identity function 
and then we could at least get the same accuracy so so what" }, { "end": 462.48, "start": 457.68, "text": " why don't they learn the identity function well because we initialize most" }, { "end": 467.6, "start": 462.48, "text": " weights you know towards zero we initialize them randomly but mostly we" }, { "end": 472.04, "start": 467.6, "text": " initialize them around zero our initialization procedure usually sample" }, { "end": 477.36, "start": 472.04, "text": " from some Gaussian with some kind of a standard deviation but around the mean" }, { "end": 483.84000000000003, "start": 477.36, "text": " of zero and also if we use things like weight decay L2 regularization all of" }, { "end": 490.72, "start": 483.84000000000003, "text": " these things they do they bias the weights towards zero so if there is any" }, { "end": 495.46000000000004, "start": 490.72, "text": " natural thing that these networks are good at is they learn the zero function" }, { "end": 501.12, "start": 495.46000000000004, "text": " really well learning the identity function is as difficult as learning any" }, { "end": 505.36, "start": 501.12, "text": " other function the identity function the convolutional filter is actually pretty" }, { "end": 511.72, "start": 505.36, "text": " difficult to learn because you know if I have a my if I have my three by three" }, { "end": 519.52, "start": 511.72, "text": " filter where is my no no this is my three by three filter the identity" }, { "end": 524.76, "start": 519.52, "text": " function is it like a one here and zeros everywhere else that's the that that" }, { "end": 529.4399999999999, "start": 524.76, "text": " would be one of the things it's not that easy you need to learn nine weights in" }, { "end": 537.48, "start": 529.4399999999999, "text": " the correct way so this paper says can we do something to make the default" }, { "end": 541.4, "start": 537.48, "text": " function of the network not be the zero function or whatever the randomly" }, { "end": 546.1999999999999, "start": 541.4, "text": " initialized function can we make the default function the one function can we" }, { "end": 551.2, "start": 546.2, "text": " make the default function the identity function and that brings you to" }, { "end": 556.88, "start": 551.2, "text": " residual connection so instead of learning to transform X via a neural" }, { "end": 566, "start": 556.88, "text": " network into X which is the identity function why don't we have X stay X and" }, { "end": 574.12, "start": 566, "text": " then learn whatever we need to change okay so if let's call that tilde if the" }, { "end": 579.68, "start": 574.12, "text": " assumption is that it's a good default to not change much so this is almost the" }, { "end": 585.88, "start": 579.68, "text": " same as this we might make this build this directly into the architecture the" }, { "end": 592.96, "start": 585.88, "text": " fact that these two are equal plus plus some deviation that is learned right" }, { "end": 599.64, "start": 592.96, "text": " here and the hypothesis is that especially the deeper you go if you go" }, { "end": 604.8, "start": 599.64, "text": " very deep each function here will actually learn not that much it will" }, { "end": 609.28, "start": 604.8, "text": " learn to basically change the signal a little bit but mostly it will learn the" }, { "end": 613.52, "start": 609.28, "text": " identity function if it behaves well and therefore it might be you know" }, { "end": 617.52, "start": 613.52, "text": " reasonable to build this into the 
architecture and of course this has" }, { "end": 622.8, "start": 617.52, "text": " turned out to be very accurate it has actually been reasonable to build this" }, { "end": 628.26, "start": 622.8, "text": " into the architecture so that's what they propose right here so instead of" }, { "end": 632.88, "start": 628.26, "text": " just having weight layers one after another what they propose is to have" }, { "end": 638.84, "start": 632.88, "text": " these skip connections in here so these skip connections they will instead of" }, { "end": 642.66, "start": 638.84, "text": " learning the function they call this entire function H of X which might be" }, { "end": 651.36, "start": 642.66, "text": " very complicated they learn the function whatever F and F is whatever you need to" }, { "end": 657.28, "start": 651.36, "text": " change about X you see at the end you add X to it so these weight layers here" }, { "end": 664.12, "start": 657.28, "text": " they simply learn whatever makes this next this output different from this" }, { "end": 669.9599999999999, "start": 664.12, "text": " input and learning differences now you have the desired property because what" }, { "end": 674.12, "start": 669.9599999999999, "text": " do we know about weight layers from before well they tend towards the zero" }, { "end": 679.28, "start": 674.12, "text": " function right if we use weight decay or generally how we initialize them they" }, { "end": 684.92, "start": 679.28, "text": " tend towards the zero function well if F tends towards the zero function then H" }, { "end": 691.0799999999999, "start": 684.92, "text": " becomes the identity function so the default function of this network is the" }, { "end": 695.36, "start": 691.0799999999999, "text": " identity function and whenever we learn something we learn how to deviate from" }, { "end": 702.5999999999999, "start": 695.36, "text": " the identity function and that is that is a much better default function now" }, { "end": 706, "start": 702.5999999999999, "text": " it's not entirely true that the default function is the identity function you" }, { "end": 711.0799999999999, "start": 706, "text": " see that here for example there's after the skip connection there is actually a" }, { "end": 717.08, "start": 711.08, "text": " relu so there's still it's still a nonlinear function in total the network" }, { "end": 722.8000000000001, "start": 717.08, "text": " in total but the default for the individual blocks here is the identity" }, { "end": 728.5600000000001, "start": 722.8000000000001, "text": " okay now if you chain these blocks you get a residual network and that's what" }, { "end": 734.88, "start": 728.5600000000001, "text": " they propose right here so on the left you see this original VGG architecture" }, { "end": 738.84, "start": 734.88, "text": " like we described it so you can see you have an image which has four channels" }, { "end": 744.96, "start": 738.84, "text": " and you first up it to 64 channels you keep the resolution and then you max" }, { "end": 750.52, "start": 744.96, "text": " pool which halves the resolution but you go up with the filters to 128 you max" }, { "end": 758.1600000000001, "start": 750.52, "text": " pool again go up with the filters and so on now this has even though it doesn't" }, { "end": 762.32, "start": 758.1600000000001, "text": " look like it this has a lot of parameters and it needs a lot of" }, { "end": 767.08, "start": 762.32, "text": " computation so it has 19.6 billion floating point operation for a forward" }, { "end": 
772.32, "start": 767.08, "text": " pass in contrast the networks we're going to build here the residual" }, { "end": 780.4000000000001, "start": 772.32, "text": " networks have 3.6 billion flops so they are much much less in terms of complexity" }, { "end": 787.6800000000001, "start": 780.4000000000001, "text": " than the old VGG networks while still being much deeper okay the hypothesis is" }, { "end": 794.24, "start": 787.6800000000001, "text": " the deeper the better and as a trade-off per layer you don't actually need to" }, { "end": 798.72, "start": 794.24, "text": " have that many parameters because you don't learn that much per layer but the" }, { "end": 803.84, "start": 798.72, "text": " succession of layers it gains you much more than simply having single massive" }, { "end": 809.76, "start": 803.84, "text": " layers you can see at the same size of resolution here you the the Resnets can" }, { "end": 815.8, "start": 809.76, "text": " get away with much less amounts of filters and that's why they are less they" }, { "end": 822.64, "start": 815.8, "text": " are of less size so this is the comparison the VGG 19 now they do build" }, { "end": 829.28, "start": 822.64, "text": " this 34 layer network which they call plane and you can see it is simply a 34" }, { "end": 835.04, "start": 829.28, "text": " layer network with no pooling right here and here instead of pooling they do a" }, { "end": 839.92, "start": 835.04, "text": " stride to convolution which has also become this has become kind of more" }, { "end": 845.68, "start": 839.92, "text": " standard than doing max or average pooling to downscale to do simply stride" }, { "end": 850.8199999999999, "start": 845.68, "text": " to convolution so this paper has actually set the standards for a lot of" }, { "end": 856.48, "start": 850.82, "text": " things in modern deep learning so our goal is to going to be to compare first" }, { "end": 863.7600000000001, "start": 856.48, "text": " of all the VGG 19 to the 34 layer plane to show that you will lose performance" }, { "end": 868.96, "start": 863.7600000000001, "text": " when you simply up the number of layers but then when you introduce the residual" }, { "end": 873.6400000000001, "start": 868.96, "text": " connections as you can see right here so there is always this jumping connection" }, { "end": 878.36, "start": 873.6400000000001, "text": " right here so along these jumping connections the signal can travel as the" }, { "end": 882.88, "start": 878.36, "text": " identity function what we're going to see is that if we go from plane to" }, { "end": 889.28, "start": 882.88, "text": " residual introducing no extra parameters just these skip connections will change" }, { "end": 895.84, "start": 889.28, "text": " everything will make this network all of a sudden trainable and make the deeper" }, { "end": 902.28, "start": 895.84, "text": " networks the better networks okay the only little caveat here is of course in" }, { "end": 906.52, "start": 902.28, "text": " order to build a residual connection the output has to be of the same size as" }, { "end": 911.16, "start": 906.52, "text": " the input because you need to add the input to the output and this here for" }, { "end": 917.3199999999999, "start": 911.16, "text": " example is not given so here you can see this signal after this layer is going to" }, { "end": 921.92, "start": 917.3199999999999, "text": " be half as big because it's a stride to convolution so the output right here is" }, { "end": 929.88, "start": 921.92, "text": " only half 
the size but it is it is twice the number of filters you can see right" }, { "end": 935.76, "start": 929.88, "text": " here this has 64 filters and here we go to 128 filters that's why this" }, { "end": 941.4, "start": 935.76, "text": " connection right here has parameters in order to simply expand the number of" }, { "end": 947.08, "start": 941.4, "text": " filters these are these one by one convolutions that simply up that simply" }, { "end": 954.08, "start": 947.08, "text": " project the 64 filters to 128 filters however this doesn't introduce too many" }, { "end": 961.92, "start": 954.08, "text": " parameters because it's only one by one in fact here the 34 parameters residual" }, { "end": 970.4799999999999, "start": 961.92, "text": " network no I'm wrong you have different options so the world has ended up at the" }, { "end": 975.1999999999999, "start": 970.4799999999999, "text": " option of doing one by one convolutions but in this paper they still they still" }, { "end": 979.16, "start": 975.1999999999999, "text": " explore three different options and I guess here in this particular experiment" }, { "end": 987.9599999999999, "start": 979.16, "text": " the option a is simply to zero pad so to leave the first 64 channels but to" }, { "end": 997.2, "start": 987.96, "text": " simply append 128 zero padded filters there or channels option B is the one by" }, { "end": 1002.6800000000001, "start": 997.2, "text": " one convolution and option C is actually that all of these connections right here" }, { "end": 1008.36, "start": 1002.6800000000001, "text": " also have the one by one convolutions which introduces extra parameters and" }, { "end": 1013.9200000000001, "start": 1008.36, "text": " they they realize that option C isn't improving over option B" }, { "end": 1019.16, "start": 1013.92, "text": " substantially and in fact is only improving marginally and they say okay" }, { "end": 1023.36, "start": 1019.16, "text": " that's probably just because we have more parameters so ultimately they went" }, { "end": 1030.3999999999999, "start": 1023.36, "text": " with option B and I think that's what the world does right now I also I when I" }, { "end": 1034.96, "start": 1030.3999999999999, "text": " read this first I particularly enjoyed this paragraph right here let's read it" }, { "end": 1038.72, "start": 1034.96, "text": " together our implementation for image net follows the practice in the data" }, { "end": 1043.04, "start": 1038.72, "text": " the image is resized with shorter randomly sampled in this for scale" }, { "end": 1047.12, "start": 1043.04, "text": " augmentation a this crop is randomly sampled from the image or its horizontal" }, { "end": 1050.3999999999999, "start": 1047.12, "text": " flip with the per pixel has been subtracted the standard color" }, { "end": 1053.76, "start": 1050.3999999999999, "text": " augmentation is used we adopt the batch normalization right after each" }, { "end": 1060.1599999999999, "start": 1053.76, "text": " convolution before activation this an age-old discussion was born when to use" }, { "end": 1065.56, "start": 1060.1599999999999, "text": " batch normalization before come before the activation or after the activation I" }, { "end": 1071.68, "start": 1065.56, "text": " still I think people are still fighting over this today we initialize the weights" }, { "end": 1077.5600000000002, "start": 1071.68, "text": " as in 13 and train all plain residual nets from scratch use SGD data data" }, { "end": 1081.88, "start": 1077.5600000000002, "text": " data the 
learning rate starts from this is divided by then so here in this" }, { "end": 1086.8400000000001, "start": 1081.88, "text": " perhaps in this paragraph they detail basically all the training procedure and" }, { "end": 1090.8400000000001, "start": 1086.8400000000001, "text": " all the tricks that they use that I remember specifically that you know I've" }, { "end": 1095.3200000000002, "start": 1090.8400000000001, "text": " read all of this which was the idea and I could follow like oh this is super" }, { "end": 1100.1200000000001, "start": 1095.3200000000002, "text": " well explained this is so cool and so on and then I expect basically an" }, { "end": 1104.6799999999998, "start": 1100.12, "text": " implementation of that and then there's one single paragraph with like 20 lines" }, { "end": 1110.8799999999999, "start": 1104.6799999999998, "text": " saying oh and by the way we use these 50 tricks from these other papers and yeah" }, { "end": 1115.84, "start": 1110.8799999999999, "text": " that's when it I guess it was already happening you needed to do all the" }, { "end": 1122.8, "start": 1115.84, "text": " modern tricks in order to really reach the top accuracies but you know in" }, { "end": 1126.7199999999998, "start": 1122.8, "text": " hindsight we know it wasn't the tricks that helped them it was actually their" }, { "end": 1134.96, "start": 1126.72, "text": " idea I just I just thought it was rather funny so you can see right here the" }, { "end": 1140.32, "start": 1134.96, "text": " results of this if you look at the left these are the plane networks and we've" }, { "end": 1145.44, "start": 1140.32, "text": " already sort of seen this now this is on image net right here you can see the 18" }, { "end": 1151.08, "start": 1145.44, "text": " layer network simply has lower train and validation accuracy so the solid line" }, { "end": 1160.48, "start": 1151.08, "text": " here is the validation on image net bold curves denote the validation error of" }, { "end": 1165, "start": 1160.48, "text": " the center crops so I guess they do yeah they do center crops so the training" }, { "end": 1170.6399999999999, "start": 1165, "text": " error is going to be higher because they do these different augmentations but you" }, { "end": 1176.84, "start": 1170.6399999999999, "text": " can see the training and the validation error are higher in the deeper network" }, { "end": 1181.28, "start": 1176.84, "text": " if you don't use residual connections again this is not due to overfitting" }, { "end": 1187.6399999999999, "start": 1181.28, "text": " and it this is because we can't train these deep networks because we should be" }, { "end": 1192.36, "start": 1187.6399999999999, "text": " able to the solution space of the 18 layer network is a subspace of the" }, { "end": 1197.28, "start": 1192.36, "text": " solution space of the 34 layer network everything tells us we should be able to" }, { "end": 1202.32, "start": 1197.28, "text": " learn the 34 layers to at least the accuracy of the 18 layers but we can't" }, { "end": 1208.6399999999999, "start": 1202.32, "text": " however introduce residual connections bum bum bum bum and you can see that the" }, { "end": 1213.56, "start": 1208.6399999999999, "text": " trend is exactly reversed now the 34 layer with residual connections has a" }, { "end": 1220.84, "start": 1213.56, "text": " much much lower training and validation error than the 18 layer in fact look at" }, { "end": 1225.56, "start": 1220.84, "text": " this table right here if you introduce the residual 
connections to the 18" }, { "end": 1231, "start": 1225.56, "text": " layers it's marginally better however if you introduce the residual connections" }, { "end": 1236.4, "start": 1231, "text": " to the 34 layers it is a lot better and this is another testament to the fact" }, { "end": 1241.36, "start": 1236.4, "text": " that these residual connections really help more and more the deeper you" }, { "end": 1248.96, "start": 1241.36, "text": " go you can see the effect here so the 18 layers this is sort of a VGG 19 depth" }, { "end": 1254.36, "start": 1248.96, "text": " network and there we already know we can train these without residual" }, { "end": 1260.24, "start": 1254.36, "text": " connections right because we were able to train VGG 19 however if we go higher" }, { "end": 1266, "start": 1260.24, "text": " to more layers these residual connections all of a sudden make it a" }, { "end": 1272.48, "start": 1266, "text": " lot better you can see that it's not that we can't train the 34" }, { "end": 1278.1200000000001, "start": 1272.48, "text": " layers but the residual connections just help a lot more and" }, { "end": 1284.08, "start": 1278.1200000000001, "text": " most importantly they don't degrade the performance from the" }, { "end": 1291.48, "start": 1284.08, "text": " shallower network so they explore the different options right here and" }, { "end": 1298.56, "start": 1291.48, "text": " compare the different options as I said being A B and C where A is the" }, { "end": 1303.6, "start": 1298.56, "text": " zero padding for the projection B is having projections simply between where" }, { "end": 1308.6799999999998, "start": 1303.6, "text": " the channels don't fit and C having projections in every single" }, { "end": 1313.1999999999998, "start": 1308.6799999999998, "text": " residual connection and you can see right here that the option B gives you" }, { "end": 1317.68, "start": 1313.2, "text": " quite a bit of a boost while option C doesn't give you that much of a boost" }, { "end": 1324.68, "start": 1317.68, "text": " introduces many more parameters and you know overall I guess they decided" }, { "end": 1330.52, "start": 1324.68, "text": " against it and since then the world has also decided against it they also" }, { "end": 1341.24, "start": 1330.52, "text": " do deeper networks so they built deeper networks like the 50 layer ResNet 101 layer" }, { "end": 1348.88, "start": 1341.24, "text": " ResNet and 152 layer ResNet and the 152 layer ResNet ended up being the best one" }, { "end": 1354.32, "start": 1348.88, "text": " as you can see here and you can see a pretty steady gain like an almost lock" }, { "end": 1360.92, "start": 1354.32, "text": " step gain with depth more depth means a better network at the time these" }, { "end": 1367.44, "start": 1360.92, "text": " numbers were unheard of like even a 50 layer deep neural network was" }, { "end": 1375.1200000000001, "start": 1367.44, "text": " bombastic but a hundred and fifty two layers was crazy and the fact" }, { "end": 1381.64, "start": 1375.1200000000001, "text": " that it still has fewer parameters than the VGG 19 and performs better that was" }, { "end": 1386.2, "start": 1381.64, "text": " mind-blowing absolutely mind-blowing and then at the end they" }, { "end": 1392.44, "start": 1386.2, "text": " built an ensemble of these models and ended up winning the 2015 ImageNet" }, { "end": 1396.8, "start": 1392.44, "text": " 
competition that was still like very important back then it was still" }, { "end": 1402.68, "start": 1396.8, "text": " very important who wins ImageNet that year I think I" }, { "end": 1407.68, "start": 1402.68, "text": " haven't even followed up on the last few years it's some kind of wide fixed ResNet" }, { "end": 1414.9199999999998, "start": 1407.68, "text": " whatnot with pre-training and 50 billion extra data points yeah so for the deeper" }, { "end": 1420.68, "start": 1414.9199999999998, "text": " networks they decide that they become computationally rather" }, { "end": 1425.1599999999999, "start": 1420.68, "text": " expensive so they introduce these bottleneck blocks here on the right" }, { "end": 1434.6000000000001, "start": 1425.16, "text": " where as you can see if you have a 64 dimensional input you do 64 feature" }, { "end": 1439.3400000000001, "start": 1434.6000000000001, "text": " channels in your convolution and have a 64 dimensional output you can save" }, { "end": 1444.96, "start": 1439.3400000000001, "text": " computation if you first project down the higher dimensions so here you have a 256" }, { "end": 1450.92, "start": 1444.96, "text": " dimensional input and they say we can save computational power by pretty much" }, { "end": 1456.0800000000002, "start": 1450.92, "text": " projecting down to 64 first because then the complexity of this layer which is" }, { "end": 1460.44, "start": 1456.0800000000002, "text": " the expensive layer will be the same as the complexity of one of these layers" }, { "end": 1466.16, "start": 1460.44, "text": " and then we can project up again the one by one convolutions are" }, { "end": 1470.28, "start": 1466.16, "text": " significantly less computationally intensive than the three by three" }, { "end": 1477.2, "start": 1470.28, "text": " convolutions it's nine times fewer operations if you think about it so" }, { "end": 1482.0800000000002, "start": 1477.2, "text": " that's what they use to build the deeper residual networks and these residual" }, { "end": 1488.44, "start": 1482.0800000000002, "text": " networks the ResNet 50, 101, 152 are still staples today you can" }, { "end": 1492.76, "start": 1488.44, "text": " have pre-trained versions of those and people still use them like" }, { "end": 1501.4, "start": 1492.76, "text": " ResNet 50 is used in every segmentation whatnot application so yeah this has" }, { "end": 1506.82, "start": 1501.4, "text": " turned out well these decisions here have you know come a long way here you can see" }, { "end": 1513.08, "start": 1506.82, "text": " the number of parameters in these residual networks and this was the" }, { "end": 1523.48, "start": 1513.08, "text": " absolute craziest thing right here 1202 layers okay so you can see still until" }, { "end": 1529.48, "start": 1523.48, "text": " here ResNet 110 now this is on CIFAR 10 right here not on ImageNet anymore but" }, { "end": 1536.24, "start": 1529.48, "text": " you can see that even 110 layers still had fewer parameters or actually" }, { "end": 1541.76, "start": 1536.24, "text": " the same order of parameters as these previous networks that were only" }, { "end": 1552.44, "start": 1541.76, "text": " 19 layers deep this was unheard of and even more unheard of a 1202 layer network" }, { "end": 1557.48, "start": 1552.44, "text": " to train on CIFAR 10 it's a bit of an overkill but they say their goal was" }, { "end": 1563.36, "start": 1557.48, "text": " explicitly to study depth and you can see here 
that with the deeper and deeper" }, { "end": 1569.1999999999998, "start": 1563.36, "text": " networks they outperformed all of the previous networks so all of the" }, { "end": 1575.6799999999998, "start": 1569.1999999999998, "text": " baselines and themselves as they went deeper and deeper and deeper however once" }, { "end": 1584.1599999999999, "start": 1575.6799999999998, "text": " you go to 1202 layers you go up again so here's the question was this all just" }, { "end": 1588.56, "start": 1584.1599999999999, "text": " kind of a trick a hack and do we run into the same problem again and that's" }, { "end": 1596.36, "start": 1588.56, "text": " the question they ask themselves and the answer is no so if you look right here" }, { "end": 1602.08, "start": 1596.36, "text": " so here you see again the plain networks and in the plain networks you can pretty" }, { "end": 1609.84, "start": 1602.08, "text": " easily see that the more layers you have the higher your error goes whereas in" }, { "end": 1614.56, "start": 1609.84, "text": " the residual network it's exactly the opposite way the more layers you have" }, { "end": 1622.12, "start": 1614.56, "text": " the lower your error and if you compare this 110 layer network with the 1202" }, { "end": 1627.1599999999999, "start": 1622.12, "text": " layer network you see your validation error going up again however your" }, { "end": 1632.32, "start": 1627.1599999999999, "text": " training error and I can't zoom in more but it's the same and it's" }, { "end": 1639.34, "start": 1632.32, "text": " at zero so here they conclude that now we are" }, { "end": 1643.72, "start": 1639.34, "text": " overfitting they don't use like the biggest data augmentation like we use" }, { "end": 1649.3600000000001, "start": 1643.72, "text": " today so overfitting was still a thing back then so they conclude okay now" }, { "end": 1653.8, "start": 1649.3600000000001, "text": " we have actually built a large enough network that is overfitting and" }, { "end": 1659.44, "start": 1653.8, "text": " the fact that we go up again in the validation error is due to the fact that" }, { "end": 1665.6200000000001, "start": 1659.44, "text": " we are probably overfitting so not only have they enabled us to build deeper" }, { "end": 1673.52, "start": 1665.62, "text": " networks they have effectively shown that this can get you to the point" }, { "end": 1678.52, "start": 1673.52, "text": " where you don't need deeper networks anymore at least on CIFAR 10 because you are" }, { "end": 1683.6399999999999, "start": 1678.52, "text": " overfitting and it can effectively get you there this is a lot of evidence for" }, { "end": 1688.9599999999998, "start": 1683.6399999999999, "text": " the fact that biasing the networks towards the identity function is a very" }, { "end": 1694.9199999999998, "start": 1688.9599999999998, "text": " valid thing to do and is the solution to the we-can't-train-deep-networks" }, { "end": 1700.8400000000001, "start": 1694.92, "text": " problem lastly they investigate the size of the responses so their hypothesis" }, { "end": 1706.4, "start": 1700.8400000000001, "text": " is that if it is really beneficial to bias the network towards the identity" }, { "end": 1714.28, "start": 1706.4, "text": " function and if it is really true that each of these layers only learns a" }, { "end": 1718.72, "start": 1714.28, "text": " little bit right because the identity function is already very good each of" }, { "end": 1723.72, 
"start": 1718.72, "text": " these layers only needs to learn kind of a small function so they look at the" }, { "end": 1730.28, "start": 1723.72, "text": " responses of these things so the response magnitude of these layers right" }, { "end": 1734.8, "start": 1730.28, "text": " here of the signal through the layers and they compare those with the response" }, { "end": 1739.16, "start": 1734.8, "text": " magnitude of the other neural networks where you don't have the skip" }, { "end": 1745.8, "start": 1739.16, "text": " connection the hypothesis is if we look at these then the responses of these" }, { "end": 1753.1000000000001, "start": 1745.8, "text": " layers should be much larger because they have to learn much more and the" }, { "end": 1756.8799999999999, "start": 1753.1, "text": " responses here will be much smaller because the identity function is already" }, { "end": 1762.4399999999998, "start": 1756.8799999999999, "text": " doing most of the work and that's exactly what you find so here the layers" }, { "end": 1766.1999999999998, "start": 1762.4399999999998, "text": " are ordered by response and you can see the plain networks in the dashed lines" }, { "end": 1771.6399999999999, "start": 1766.1999999999998, "text": " are significantly above the residual networks and that's not a function" }, { "end": 1777.8, "start": 1771.6399999999999, "text": " of the depth because if the depth were the actual reason you would expect" }, { "end": 1782.56, "start": 1777.8, "text": " that the dashed lines would stretch like this right they would kind" }, { "end": 1786.6399999999999, "start": 1782.56, "text": " of stretch out however exactly the opposite is happening you can see that" }, { "end": 1790.56, "start": 1786.6399999999999, "text": " the residual networks even at the beginning their responses are very much" }, { "end": 1795.84, "start": 1790.56, "text": " smaller and this is kind of what I like about this paper it's one narrative" }, { "end": 1802.8, "start": 1795.84, "text": " it is a hypothesis and then the hypothesis is taken and" }, { "end": 1806.52, "start": 1802.8, "text": " they make predictions from the hypothesis they say okay if we are right" }, { "end": 1812.1799999999998, "start": 1806.52, "text": " with our hypothesis not only should our idea get us better accuracy that's what" }, { "end": 1818.18, "start": 1812.18, "text": " most papers do today but also it should be that" }, { "end": 1823.64, "start": 1818.18, "text": " we can for example push our network to the brink of where we actually are" }, { "end": 1829.3600000000001, "start": 1823.64, "text": " overfitting like here and it should also be that the responses of our signal" }, { "end": 1836.5600000000002, "start": 1829.3600000000001, "text": " through our layers are smaller and yeah research like this is just" }, { "end": 1843.52, "start": 1836.56, "text": " pretty cool and it's I think a lesson for us that sadly the world has" }, { "end": 1848.76, "start": 1843.52, "text": " taken the ResNets but the world hasn't all taken the research methodology of" }, { "end": 1855.6399999999999, "start": 1848.76, "text": " this paper yeah again if you want a good read it's very well written and" }, { "end": 1862.52, "start": 1855.6399999999999, "text": " I'm very sure you can follow it even if you have read very few papers and with" }, { "end": 1867.44, "start": 1862.52, "text": " that yeah I hope you enjoyed this please tell me what you think 
of going through" }, { "end": 1874.04, "start": 1867.44, "text": " kind of old papers looking at whether or not they have stood the test of time and" }, { "end": 1879.08, "start": 1874.04, "text": " yeah any other comments leave them in the comments I do read them and I'll" }, { "end": 1893.6799999999998, "start": 1879.08, "text": " see you next time bye bye" } ]
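As an aside, the two design choices discussed in the segments above, the option-B projection shortcut and the bottleneck block, are easy to write down. Below is a minimal PyTorch sketch, not the authors' original implementation; module and variable names are my own. A 1x1 convolution projects the channels down (for example 256 to 64), the 3x3 convolution does the expensive spatial work at the reduced width, and a second 1x1 convolution projects back up; an option-B-style 1x1 projection is applied on the shortcut only where the shapes change. The claimed factor-of-nine saving comes from the kernel area: at equal channel counts a 3x3 convolution performs 3*3 = 9 times the multiply-adds of a 1x1 convolution.

```python
import torch
import torch.nn as nn

class Bottleneck(nn.Module):
    """Bottleneck residual block: 1x1 down-project, 3x3, 1x1 up-project."""

    def __init__(self, in_ch, mid_ch, out_ch, stride=1):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, mid_ch, 1, bias=False),   # e.g. 256 -> 64
            nn.BatchNorm2d(mid_ch),                    # BN right after conv, before activation
            nn.ReLU(inplace=True),
            nn.Conv2d(mid_ch, mid_ch, 3, stride=stride, padding=1, bias=False),
            nn.BatchNorm2d(mid_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(mid_ch, out_ch, 1, bias=False),  # e.g. 64 -> 256
            nn.BatchNorm2d(out_ch),
        )
        if stride != 1 or in_ch != out_ch:
            # Option B: 1x1 projection on the shortcut, used only where shapes change.
            self.shortcut = nn.Sequential(
                nn.Conv2d(in_ch, out_ch, 1, stride=stride, bias=False),
                nn.BatchNorm2d(out_ch),
            )
        else:
            # Plain identity skip everywhere else.
            self.shortcut = nn.Identity()

    def forward(self, x):
        # The residual branch only has to learn a small correction to the identity.
        return torch.relu(self.body(x) + self.shortcut(x))

block = Bottleneck(256, 64, 256)
print(block(torch.randn(1, 256, 56, 56)).shape)  # -> torch.Size([1, 256, 56, 56])
```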
GwItCHOifG8
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
I'M TAKING A BREAK... (Channel Update July 2020)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper" ]
Past, Present & Future of this Channel. OUTLINE: 0:00 - I'm going on a break 0:20 - Channel Stats 1:20 - Other Platforms 4:20 - Drama Videos 5:30 - Flatland 8:40 - SpineNet Thumbnail 9:55 - Future Content 12:55 - How do I select papers? 15:50 - Financial Support, Ads & Merch 18:50 - Conclusion Our Flatland Repo: https://github.com/yk/youtube-flatland Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://discord.gg/4H8xxDF BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher Parler: https://parler.com/profile/YannicKilcher LinkedIn: https://www.linkedin.com/in/yannic-kilcher-488534136/ If you want to support me, the best thing to do is to share out the content :) If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this): SubscribeStar: https://www.subscribestar.com/yannickilcher Patreon: https://www.patreon.com/yannickilcher Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2 Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n
Yes, you read that right. I am going on a break. Don't worry though, there will still be videos, just not as many. I've decided to basically reduce the upload frequency a little bit, mostly because I am going on a break, but also because I kind of want to have time to do other things, but we'll get to that later. So how's our little channel doing? We've just passed 1 million views. 1 million times someone thought, well that's kind of worth watching, and only about 900,000 times were people severely disappointed after clicking on a video. I still think that's a net gain, honestly. The channel just surpassed 30,000 subscribers, so technically in log space we're already halfway to 100,000. It's only a matter of time. And I think I've said this in the last update, but this is just absolutely overwhelming how many people are interested in machine learning research and topics related to it. So that's pretty cool and encouraging. Thank you everyone who has already subscribed, and especially the people that leave comments, the people that share the videos. This means a lot and I think it's awesome. And it's quite motivating to continue doing this, honestly. I'm having lots of fun. Along with that, I've gained almost 5,000 Twitter followers. I think more than 5,000 Twitter followers. Which is strange, because Twitter is weird. But, you know. So that's pretty cool, I guess. I wonder if all of those are subscribed to the channel. In any case, I just want to highlight again that the community around machine learning research is in the absolute largest part a very, very positive community. You people are absolutely great. The comment sections are just so much better than anything else on the entire internet. Including paper reviews at major conferences. Really, this is a half joke that the comment section is better than the reviews on papers, but it is actually very often true. People are discussing ideas in the comments that are valuable and creative and asking interesting questions and helping each other out. And that also counts for our Discord server. So if you're not on our Discord server, we do have one. There is a channel for beginner questions. There's a channel for discussing the videos that are on the YouTube channel. And people are generally very, very helpful there. It's a vibrant community and I can only recommend that if you're looking to contribute to the community and be part of it, it's a great place. That being said, I'm also on a number of other platforms such as LinkedIn. I finally made a LinkedIn account. I was always kind of sceptical. I don't know how LinkedIn works. What is the difference between follow and connect? And then people write little messages while connecting and it says, I'd love to connect. But then you accept them and then that message pops up and then it's saying, I'd love to connect, but you've already connected at that point. This is weird. How does LinkedIn work? Someone tell me. What is it for? I get it. It's like professional social networking. Ah, it just seems weird to me. OK, but there is an entire community there and I do post my videos there. I'm not like super active on LinkedIn. I have to say that. I'm also on BitChute, Minds, Parlor. So the reason why I'm mentioning these things is that with recent developments, especially around this Yann LeCun video, there were some developments that potentially threatened the existence of this channel and I don't want to make it a single point of failure.
So I would appreciate it if you'd follow me on at least one other thing, at least one other point of contact, so that in the case that something might happen, which is unlikely, I still have a way of distributing this content. All the links are in the video description. I'd love to see you there, wherever. So with respect to the Yann LeCun situation, he has left Twitter now, de facto. And people wanted me to kind of make a follow up video and asked me about it. But I feel, you know, I have nothing more substantial to say, and just to make a video for video's sake is not really a thing I want to do in general. It's kind of sad, but these kinds of news and drama videos, they do get a lot of attention, not like outrage levels, but they do get attention. I want to keep this channel mostly about the machine learning research, and I only want to make videos when I really do have some information to add. You know, Yann LeCun is an adult and he's able to make his own decision of whether he wants to leave Twitter or not. It's probably for the better for his mental health. So with respect to the drama videos, I always kind of say that I'll pull you in with the drama and then before you know it, I educate you. Ha, checkmate. So that's how this works and how we ultimately end up with many more machine learners than originally wanted to be. We gotcha. So in other news, Flatland, round one of Flatland has officially started. And I've previously made a video about Flatland. It is a NeurIPS challenge. It is a challenge where you have to route trains around a 2D map. And I wanted to make this kind of a community project where we do research together, hopefully do something with machine learning, with reinforcement learning, to tackle this problem and to crush the challenge. And this is looking extremely good. So on our Discord server, there is a core group of people that is really engaged. And this is one of the reasons why I kind of want to reduce my upload frequency, because I want to have time to participate more in these community efforts. I really want to have time to do more myself in the Flatland research group that we have. We do open research. Anyone is welcome. You're still very welcome. Join our Discord server and join us and contribute to the code. There is a core group right now that is really pushing forward. So just to highlight a few of them, there is Novik, Edward Durek, Dolash, Frostbite, China CEO, trademarked, Ai Adrian and I'm Peter. And I forgot NuberPonage. I'm so sorry, NuberPonage. You're beautiful. These people are helping each other reach more and better performance, profiling code, implementing parts from other papers. And it's just great to see that people can collaborate on this, even though it's technically a competition. But, you know, I guess we're competing against nature. For anyone I haven't mentioned here, I'm very, very sorry. Any list of names is always prone to leave people out. I do not want to diminish your impact. So right now, this DQN algorithm is able to reach about a 95 percent success rate with three to five agents on the map. So three to five trains. But round one has just started and we see that some of these environments have many, many more agents in them. So there's still a lot of work to do. So we need you to come and contribute and join the fun. It is fun. And as I said, I will be working on this myself more as well, because it's fun. So again, big shout out to anyone on the Discord that has contributed in any way. This is just awesome.
We've just recently had our first Flatland Town Hall with entirely community generated content. So these people came together and basically joined in making one PowerPoint presentation and then presented to each other their knowledge of the environment. And amazingly, we also had the Flatland organizers come in and tell us their perspective about the challenge, the environment, and what's challenging and what's changed and so on. This was just unprecedented for me, the amount of contributions there. The first town hall is available if you join the Discord. It's linked there. It's recorded. You can still see it. And as I said, there's still plenty of time to join the competition together with us. OK, next thing, you've probably seen the SpineNet video, and the thumbnail there was excessively beautiful. So the story behind it is, on the Discord server I asked people to help me with the thumbnail, which was originally rather boring. And I just kind of wanted to know which subtitle I should put there. And then one of the Discord members, Lucas Ferreira, just went ahead and drew this very beautiful image of a SpineNet robot that has the SpineNet as a spine. And this kind of stuff is just awesome. So again, big shout out to Lucas Ferreira and the absolutely amazing thumbnail that this has generated. And also thanks for the contributions of anyone that comes on the Discord server and into the beginners question channel, asks some question and usually gets some form of help. Now, that being said, please don't just come and expect us to solve your problem; try to search for a solution before going into that group of very well meaning people. Because, you know, if too many people just expect them to solve their problems, it won't be as well meaning anymore in the future. OK, so how does this channel go forward? I want to make the content a bit more diverse and kind of branch out. And as I already said, the upload frequency will be sort of lower after my break and also during my break. But I have some ideas of how to generate kind of more interesting content or different content. So here are my ideas. And this is a list. And please tell me what you think of it. And you can do this on this video and you can do this at any point. You can give feedback about what kind of videos you like, what kind of style of videos you like, anything. Really, I'm happy to listen to people and incorporate all the feedback that I can. So I want to do some more channel updates, maybe more frequently, maybe once every two or three weeks, just to let you know what's going on, kind of what's going on with the channel, what's going on with the community. This should be fun. So another idea I had is to look at kind of historical papers. And I think I got the idea from a comment in my comment section. So shout out to whoever led me to that. It's a great idea to basically go back to historical papers and just kind of see what people back then knew and didn't know and predicted and were right about. And especially, I wonder what kind of choices they made that survived until this day, kind of arbitrary choices that someone made in some paper that just stuck around. It's interesting to see. And there will be a series of kind of classical papers that I will extend from time to time. And I hope you enjoy that. I also want to do a bit more live coding videos. Lots of people have requested that. I'm not the best coder in the world, but I have done my fair share of machine learning research hands on.
I have lots more stupid ideas that might or might not work out and I'm very happy to implement them live. Then, next thing, I want to branch out in topics, maybe more exotic things. Causality is a big thing. Quantum machine learning, what not, more practical applications, robotics control, also fairness. A lot of people, especially after the Yann LeCun video, asked me to look more into the fairness literature. I am naturally very interested in that and approach it from kind of a technical but also a societal standpoint. Throughout all of that I would like to include the community more. So you. I don't really know how to do that properly yet. So here's the first way in which I'm going to include you: if you have a good idea of how to do more community inclusion in the channel's content, please tell me, because I think the community has lots of stuff to give. And it would be a shame if it were only me always doing all the things where other people would be much better at it. OK, the last question on this is a question that I get very often and that is how do I select papers? And there seems to be a misconception about this. So let me tell you, I select papers by what interests me and that alone. If I make a video about a paper, it means that I found it interesting enough to read it and I found it interesting enough to make a video about it. Now, this can be accelerated by sending me appropriate amounts of protein and carbohydrates. But generally, me making a video is not an endorsement of a paper. It doesn't mean that this paper is the most important paper or influential. It doesn't mean anything beyond that I find it interesting. If I ever get sponsored deals, I'll let you know that a video is sponsored. And that's that. I will not change how I select papers. I will not go by some kind of impact. I would like to branch out my content. I see the danger in only kind of covering what the big companies do, but honestly, they do a lot of interesting stuff. They do a lot of stuff and I'm constantly looking at research that's kind of outside the box. Whatever is interesting, you know, that's what it's going to be. I will not start going by some impact factor and I will not start politicizing my paper review selection any time soon. You know, I've already had multiple people, high profile people, come to me and say, well, with your platform, couldn't you once a week take a paper where the first author is an underrepresented minority and review that? And, you know, I appreciate the sentiment behind it and I see where it comes from. But consider the practical implications of something like this: I'd have to, you know, go through papers, Google the first author, kind of try to find a talk or a picture of them and estimate whether the melanin content in their skin is high enough for this to qualify. Just the thought of this, how someone could do something like this and not start to vomit, is beyond me. I don't know what to say other than that. So let me say this. If your thinking leads you to a place where it's necessary to treat people differently based on the color of their skin, you're wrong. Like, that's my opinion. But you're wrong. The answer to bias cannot consist of more bias. That's that. I do not care what the person that wrote a paper looks like. If your paper is on this channel, it means your work was interesting to me. And I hope that can be my contribution to making the community more fair and just. OK, last thing.
Lots of people have asked me if I had a Patreon or something like this. And I've sort of resisted that kind of stuff until now, mainly because I knew that the day would come when I would reduce my upload frequency. I didn't want to kind of trick people into thinking that I was going to continue this forever. Again, financial support is not my main goal here. And it is completely, absolutely and utterly voluntary. And so I just want to have that out there. So I have made a Patreon page. I do have some reservations with respect to Patreon because of free speech issues and so on. So I've also made a Subscribestar page. Both are equal. Both have equal tiers. All the tiers are equal. There's no option where you could just put in an arbitrary amount, which I would like. So I just tried to make a bunch of tiers. All of them are equal. So I have to ask myself, what do I give as a benefit? Because I don't want someone to have to pay for like extra content, because the entire goal of this channel is to educate people, including people that don't have money to go to good universities, that might live in other parts of the world where education is not as available, where resources are not as available. To give extra content to people that pay seems to be... So I thought, okay, what I could give to the people that do support me on these pages is you will get a PDF of my scribbled OneNote document of the papers that I review. I mean, it's not very helpful because I mostly scribble and it's going to be like subdividing the pages weirdly. Maybe it has more of a symbolic value and if you're really into that, you know, at least there's something. I've also made a bunch of crypto wallets. So if you'd rather want to use that to support me, you are welcome to do so. All the links are in the description of the video. Again, financial support, very, very, very optional and very voluntary. Though, of course, I do thank anyone that does. I am also going to experiment with ads on the videos. And as creators, we have kind of different options of which ads are displayed and how often and so on. I find mid-video ads annoying. I find non-skippable ads annoying and so on. I'm really counting on you here to give me feedback after various videos of how much the ads annoy you, which ones annoy you, which ones don't. I'm really counting on you. Okay, last thing: I am planning on a line of merch, mainly because I think it's funny. But I don't know if that's going to work out. But, you know, maybe if you have fun t-shirt ideas or so, just let me know. All right. That was the update. As I said, I probably won't be reading comments too much, but I will catch up after the break. And I hope you continue enjoying this channel even with kind of the lower upload frequency and the new types of content that come in. If you do have suggestions for new exotic content that vaguely has to do with machine learning or not, let me know. Let me know what you think of anything I said. And I wish you an awesome summer. And I hope to see you here anytime. Ciao.
[ { "end": 7.5, "start": 0, "text": " Yes, you read that right. I am going on a break. Don't worry though, there will still be videos, just not as many." }, { "end": 14.5, "start": 7.5, "text": " I've decided to basically reduce the upload frequency a little bit, mostly because I am going on a break," }, { "end": 20, "start": 14.5, "text": " but also because I kind of want to have time to do other things, but we'll get to that later." }, { "end": 24, "start": 20, "text": " So how's our little channel doing? We've just passed 1 million views." }, { "end": 34.5, "start": 24, "text": " 1 million times someone thought, well that's kind of worth watching, and only about 900,000 times where they were severely disappointed after clicking on a video." }, { "end": 37.5, "start": 34.5, "text": " I think, I still think that's a net gain, honestly." }, { "end": 46, "start": 37.5, "text": " The channel just surpassed 30,000 subscribers, so technically in log space we're already halfway to 100,000." }, { "end": 50, "start": 46, "text": " It's only a matter of time. And I think I've said this in the last update," }, { "end": 59, "start": 50, "text": " but this is just absolutely overwhelming how many people are interested in machine learning research and topics related to it." }, { "end": 62.5, "start": 59, "text": " So that's pretty cool and encouraging." }, { "end": 71, "start": 62.5, "text": " Thank you everyone who has already subscribed, and especially the people that leave comments, the people that share the videos." }, { "end": 74.5, "start": 71, "text": " This means a lot and I think it's awesome." }, { "end": 78, "start": 74.5, "text": " And it's quite motivating to continue doing this, honestly." }, { "end": 80, "start": 78, "text": " I'm having lots of fun." }, { "end": 86.5, "start": 80, "text": " Along with that, I've gained almost 5,000 Twitter followers. I think more than 5,000 Twitter followers." }, { "end": 91.5, "start": 86.5, "text": " Which is strange, because Twitter is weird." }, { "end": 96.5, "start": 91.5, "text": " But, you know. So that's pretty cool, I guess." }, { "end": 100.5, "start": 96.5, "text": " I wonder if all of those are subscribed to the channel." }, { "end": 111, "start": 100.5, "text": " In any case, I just want to highlight again that the community around machine learning research is in the absolute largest part a very, very positive community." }, { "end": 119.5, "start": 111, "text": " You people are absolutely great. The comment sections are just so much better than anything else on the entire internet." }, { "end": 123, "start": 119.5, "text": " Including paper reviews at major conferences." }, { "end": 131, "start": 123, "text": " Really, this is a half joke that the comment section is better than the reviews on papers, but it is actually very often true." }, { "end": 141, "start": 131, "text": " People are discussing ideas in the comments that are valuable and creative and asking interesting questions and helping each other out." }, { "end": 143.5, "start": 141, "text": " And that also counts for our Discord server." }, { "end": 148, "start": 143.5, "text": " So if you're not on our Discord server, we do have one." }, { "end": 150.5, "start": 148, "text": " There is a channel for beginners question." }, { "end": 154, "start": 150.5, "text": " There's a channel for discussing the videos that are on the YouTube channel." }, { "end": 157.5, "start": 154, "text": " And people are generally very, very helpful there." 
}, { "end": 167, "start": 157.5, "text": " It's a vibrant community and I can only recommend that if you're looking to contribute to the community and be part of it, it's a great place." }, { "end": 172, "start": 167, "text": " That being said, I'm also on a number of other platforms such as LinkedIn." }, { "end": 177, "start": 172, "text": " I finally made a LinkedIn account. I was always kind of sceptic." }, { "end": 182, "start": 177, "text": " I don't know how LinkedIn works. What is the difference between follow and connect?" }, { "end": 188, "start": 182, "text": " And then people write little messages while connecting and it says, I'd love to connect." }, { "end": 196, "start": 188, "text": " But then you accept them and then that message pops up and then it's saying, I'd love to connect, but you've already connected at that point." }, { "end": 201, "start": 196, "text": " This is weird. How does LinkedIn work? Someone tell me. What is it for?" }, { "end": 204, "start": 201, "text": " I get it. It's like professional social networking." }, { "end": 212, "start": 204, "text": " Ah, it's just it seems weird to me. OK, but there is an entire community there and I do post my videos there." }, { "end": 216, "start": 212, "text": " I'm not like super active on LinkedIn. I have to say that." }, { "end": 221, "start": 216, "text": " I'm also on BitChute, Minds, Parlor." }, { "end": 228, "start": 221, "text": " So the reason why I'm mentioning these things is that with recent developments, especially around this Yann LeCun video," }, { "end": 238, "start": 228, "text": " there were some developments that potentially threatened the existence of this channel and I don't want to make it a single point of failure." }, { "end": 246, "start": 238, "text": " So I would appreciate it if you'd follow me on at least one other thing, at least one other point of contact" }, { "end": 257, "start": 246, "text": " so that in the case that something might happen, which is unlikely, but you know, can I still have a way of distributing this content?" }, { "end": 261, "start": 257, "text": " All the links are in the video description. I'd love to see you there wherever." }, { "end": 269, "start": 261, "text": " So with respect to the Yann LeCun situation, he has left Twitter now de facto." }, { "end": 274, "start": 269, "text": " And people wanted me to kind of make a follow up video, asked me about it." }, { "end": 284, "start": 274, "text": " But I feel, you know, I have I have nothing more substantial to say and just to make a video for video's sake is not really a thing I want to go into general." }, { "end": 292, "start": 284, "text": " It's kind of sad, but these kind of news and drama videos, they do get a lot of attention, not like outrages, but they do get." }, { "end": 301, "start": 292, "text": " I want to keep this channel mostly about the machine learning research, and I only want to make videos when I really do have some information to add." }, { "end": 308, "start": 301, "text": " You know, Yann LeCun is an adult and he's able to make his decisions of whether he wants to leave Twitter or not." }, { "end": 312, "start": 308, "text": " It's probably for the better for his mental health." }, { "end": 320, "start": 312, "text": " So with respect to the drama videos, I always kind of say that I'll pull you in with the drama and then before you know it, I educate you." }, { "end": 329, "start": 320, "text": " Ha, checkmate. 
So that's how this works and how we ultimately end up with many more machine learners than originally wanted to be." }, { "end": 336, "start": 329, "text": " We gotcha. So in other news, Flatland, round one of Flatland has officially started." }, { "end": 340, "start": 336, "text": " And I've previously made a video about Flatland. It is a NeurIPS challenge." }, { "end": 346, "start": 340, "text": " It is a challenge where you have to route trains around a 2D map." }, { "end": 350, "start": 346, "text": " And I wanted to make this kind of a community project where we do research together," }, { "end": 358, "start": 350, "text": " hopefully do something with machine learning, with reinforcement learning to tackle this problem and to crush the challenge." }, { "end": 366, "start": 358, "text": " And this is looking extremely good. So on our Discord server, there is a core group of people that is really engaged." }, { "end": 375, "start": 366, "text": " And this is one of the reasons why I kind of want to reduce my upload frequency, because I want to have time to participate more in these community efforts." }, { "end": 381, "start": 375, "text": " I really want to have time to do more myself in the Flatland research group that we have." }, { "end": 389, "start": 381, "text": " We do open research. Anyone is welcome. You're still very welcome. Join our Discord server and join us and contribute to the code." }, { "end": 393, "start": 389, "text": " There is a core group right now that is really pushing forward." }, { "end": 404, "start": 393, "text": " So just to highlight a few of them, there is Novik, Edward Durek, Dolash, Frostbite, China CEO, trademarked, Ai Adrian and I'm Peter." }, { "end": 409, "start": 404, "text": " And I forgot NuberPonage. I'm so sorry, NuberPonage. You're beautiful." }, { "end": 418, "start": 409, "text": " These people are helping each other reach more and better performance, profiling code, implementing parts from other papers." }, { "end": 423, "start": 418, "text": " And it's just great to see that people can collaborate on this, even though it's technically a competition." }, { "end": 427, "start": 423, "text": " But, you know, I guess we're competing against nature." }, { "end": 433, "start": 427, "text": " For anyone I haven't mentioned here, I'm very, very sorry. Any list of names is always prone to leave people out." }, { "end": 436, "start": 433, "text": " I do not want to diminish your impact." }, { "end": 444, "start": 436, "text": " So right now, this DQN algorithm is able to reach about a 95 percent success rate with three to five agents on the map." }, { "end": 452, "start": 444, "text": " So three to five trains. But round one has just started and we see that some of these environments have many, many more agents in them." }, { "end": 459, "start": 452, "text": " So there's still a lot of work to do. So we need you to come and contribute and join the fun." }, { "end": 466, "start": 459, "text": " It is fun. And as I said, I will be working on this myself more as well because it's fun." }, { "end": 473, "start": 466, "text": " So again, big shout out to anyone on this court that has contributed in any way." }, { "end": 482, "start": 473, "text": " This is just awesome. We've just had recently our first Flatland Town Hall with entirely community generated content." 
}, { "end": 492, "start": 482, "text": " So these people came together and basically joined in making one PowerPoint presentation and then presented to each other their knowledge of the environment." }, { "end": 502, "start": 492, "text": " And amazingly, we also had the Flatland organizers come in and tell us their perspective about the challenge, the environment and what's challenging and what's changed and so on." }, { "end": 508, "start": 502, "text": " This was just unprecedented for me, the amount of contributions there." }, { "end": 515, "start": 508, "text": " The first town hall is available if you join this court. It's linked there. It's recorded. You can still see it." }, { "end": 522, "start": 515, "text": " And as I said, there's still plenty of time to join the competition together with us." }, { "end": 531, "start": 522, "text": " OK, next thing, you've probably seen the SpineNet video and the thumbnail there was excessively beautiful." }, { "end": 540, "start": 531, "text": " So the story behind it is on the Discord server, I've asked people to help me with the thumbnail, which was originally rather boring." }, { "end": 544, "start": 540, "text": " And I just kind of wanted to know which subtitle I should put there." }, { "end": 557, "start": 544, "text": " And then one of the Discord members, Lucas Ferreira, just gone ahead and drawn up this very beautiful image of a SpineNet robot that has the SpineNet as a spine." }, { "end": 567, "start": 557, "text": " And this is this kind of stuff is just awesome. So again, big shout out to Lucas Ferreira and the absolutely amazing thumbnail that this has generated." }, { "end": 577, "start": 567, "text": " And also the contributions to anyone that comes on the Discord server and into the beginners question channel, ask some question and usually get some form of help." }, { "end": 589, "start": 577, "text": " Now, that being said, please don't just come and we'll solve your problem like try to search for a solution before going into that group of very well meaning people." }, { "end": 597, "start": 589, "text": " Because, you know, if if too many people just expect them to solve their problems, it won't be as well meaning anymore in the future." }, { "end": 604, "start": 597, "text": " OK, so how does this channel go forward? I want to make the content a bit more diverse and kind of branch out." }, { "end": 613, "start": 604, "text": " And as I already said, the upload frequency will be sort of lower after my break and also during my break." }, { "end": 618, "start": 613, "text": " But I have some ideas of how to generate kind of more interesting content or different content." }, { "end": 624, "start": 618, "text": " So here are my ideas. And this is a list. And please tell me what you think of it." }, { "end": 628, "start": 624, "text": " And you can do this at this video and you can do this at any point." }, { "end": 635, "start": 628, "text": " You can give feedback about what kind of videos you like, what kind of style of videos you like, anything." }, { "end": 642, "start": 635, "text": " Really, I'm happy to listen to people and incorporate all the feedback that I can." }, { "end": 653, "start": 642, "text": " So I want to do some more channel updates, maybe more frequently, maybe once every two or three weeks just to let you know what's going on, kind of what's going on with the channel, what's going on with the community." }, { "end": 658, "start": 653, "text": " This should be fun. 
So another idea I had is to look at kind of historical papers." }, { "end": 662, "start": 658, "text": " And I think I got the idea from a comment in my comment section." }, { "end": 676, "start": 662, "text": " So shout out to whoever lifted me to that. It's a great idea to basically go back to historical papers and just kind of see what people back then knew and didn't know and predicted and were right about." }, { "end": 688, "start": 676, "text": " And especially I wonder what kind of choices did they make that survive until this day, kind of arbitrary choices that someone made in some paper that just stuck around." }, { "end": 697, "start": 688, "text": " It's interesting to see. And there will be a series of kind of classical papers that I will extend from time to time. And I hope you enjoy that." }, { "end": 708, "start": 697, "text": " Also want to do more a bit of live coding videos. Lots of people have requested that. I'm not the best coder in the world, but I have done my fair share of machine learning research hands on." }, { "end": 715, "start": 708, "text": " I have lots of more stupid ideas that might or might not work out and I'm very happy to implement them live." }, { "end": 728, "start": 715, "text": " Then next thing I want to branch out in topics, maybe more exotic things. Causality is a big thing. Quantum machine learning, what not, more practical applications, robotics control, also fairness." }, { "end": 742, "start": 728, "text": " A lot of people, especially after the Yann LeCun video, asked me to look more into the fairness literature. I am naturally very interested in that and approach it from kind of a technical but also a societal standpoint." }, { "end": 754, "start": 742, "text": " Throughout all of that I would like to include the community more. So you. I don't really know how to do that properly yet. So here's the first thing that I'm going to include you." }, { "end": 764, "start": 754, "text": " If you have a good idea of how to do more community inclusion in the channel's content, please tell me because I think the community has lots of stuff to give." }, { "end": 772, "start": 764, "text": " And it would be a shame if it were only me always doing all the things where other people would be much better at it." }, { "end": 779, "start": 772, "text": " OK, the last question on this is a question that I get very often and that is how do I select papers?" }, { "end": 789, "start": 779, "text": " And there seems to be a misconception in this. So let me tell you, I select papers by what interests me and that alone." }, { "end": 798, "start": 789, "text": " If I make a video about a paper, it means that I found it interesting enough to read it and I found it interesting enough to make a video about it." }, { "end": 804, "start": 798, "text": " Now, this can be accelerated by sending me appropriate amounts of protein and carbohydrates." }, { "end": 813, "start": 804, "text": " But generally, me making a video is not an endorsement of a paper. It doesn't mean that this paper is the most important paper or influential." }, { "end": 821, "start": 813, "text": " It doesn't mean anything beyond I find it interesting. If I ever get sponsored deals, I'll let you know that a video is sponsored." }, { "end": 830, "start": 821, "text": " And that's that. I will not change how I select papers. I will not go by some kind of impact. I would like to branch out my content." 
}, { "end": 837, "start": 830, "text": " I see the danger in only kind of covering what the big companies do, but honestly, they do a lot of interesting stuff." }, { "end": 841, "start": 837, "text": " They do a lot of stuff and I'm constantly looking at research that's kind of outside the box." }, { "end": 845, "start": 841, "text": " Whatever is interesting, you know, that's what it's going to be." }, { "end": 854, "start": 845, "text": " I will not start going by some impact factor and I will not start politicizing my paper review selection any time soon." }, { "end": 862, "start": 854, "text": " You know, I've already had multiple people, high profile people come to me and say, well, with your platform," }, { "end": 872, "start": 862, "text": " couldn't you once a week take a paper where the first author is an underrepresented minority and review that?" }, { "end": 878, "start": 872, "text": " And, you know, I appreciate the sentiment behind it and I see where it comes from." }, { "end": 887, "start": 878, "text": " But if you consider the practical implications of something like this, like I'd have to, you know, go through papers, Google the first author," }, { "end": 899, "start": 887, "text": " kind of try to find a talk or a picture of them and estimate whether the melanin content in their skin is high enough for this to qualify now." }, { "end": 910, "start": 899, "text": " And something like this and just the thought of this, how someone could do something like this and not start to vomit is just beyond me." }, { "end": 914, "start": 910, "text": " I don't know what to say other than that. So let me say this." }, { "end": 927, "start": 914, "text": " If your thinking leads you to a place where it's necessary to treat people differently based on the color of their skin, you're wrong." }, { "end": 937, "start": 927, "text": " Like, that's my opinion. But you're wrong. The answer to bias cannot consist of more bias." }, { "end": 942, "start": 937, "text": " That's that. I do not care how the person that wrote a paper looks like." }, { "end": 946, "start": 942, "text": " If your paper is on this channel, it means your work was interesting to me." }, { "end": 951, "start": 946, "text": " And I hope that can be my contribution to making the community more fair and just." }, { "end": 957, "start": 951, "text": " OK, last thing. Lots of people have asked me if I had a Patreon or something like this." }, { "end": 967, "start": 957, "text": " And I've sort of resisted that kind of stuff until now, mainly because I knew that the day would come when I reduce my upload frequency." }, { "end": 974, "start": 967, "text": " I didn't want to kind of trick people into thinking that I was going to continue this forever." }, { "end": 979, "start": 974, "text": " Again, financial support is not my main goal here." }, { "end": 987, "start": 979, "text": " And it is completely, absolutely and utterly voluntary. And so I just want to have that out there." }, { "end": 996, "start": 987, "text": " So I have made a Patreon page. I do have some reservation with respect to Patreon because of free speech issues and so on." }, { "end": 1003, "start": 996, "text": " So I've also made a Subscribestar page. Both are equal. Both have equal tiers. All the tiers are equal." }, { "end": 1008, "start": 1003, "text": " There's no option where it just where you could just put an amount which I would like." }, { "end": 1011, "start": 1008, "text": " So I just try to make a bunch of tiers. All of them are equal." 
}, { "end": 1014, "start": 1011, "text": " So I have to ask myself, what do I give as a benefit?" }, { "end": 1022, "start": 1014, "text": " Because I don't want someone to have to pay for like extra content because the entire goal of this channel is to educate people," }, { "end": 1030, "start": 1022, "text": " including people that don't have money to go to good universities that might live in other parts of the world" }, { "end": 1034, "start": 1030, "text": " where education is not as available, where resources are not as available." }, { "end": 1037, "start": 1034, "text": " To give extra content to people that pay seems to be..." }, { "end": 1044, "start": 1037, "text": " So I thought, okay, what I could give to the people that do support me on these pages is" }, { "end": 1052, "start": 1044, "text": " you will get a PDF of my scribbled OneNote document of the papers that I review." }, { "end": 1058, "start": 1052, "text": " I mean, it's not very helpful because I mostly scribble and it's going to be like subdividing the pages weirdly." }, { "end": 1065, "start": 1058, "text": " Maybe it has more of a symbolic value and if you're really into that, you know, at least there's something." }, { "end": 1076, "start": 1065, "text": " I've also made a bunch of crypto wallets. So if you'd rather want to use that to support me, you are welcome to do so." }, { "end": 1080, "start": 1076, "text": " All the links are in the description of the video." }, { "end": 1085, "start": 1080, "text": " Again, financial support, very, very, very optional and very voluntary." }, { "end": 1088, "start": 1085, "text": " Though, of course, I do thank anyone that does." }, { "end": 1092, "start": 1088, "text": " I am also going to experiment with ads on the videos." }, { "end": 1098, "start": 1092, "text": " And as creators, we have kind of different options of which ads are displayed and how often and so on." }, { "end": 1103, "start": 1098, "text": " I find mid video ads annoying. I find non-skippable ads annoying and so on." }, { "end": 1110, "start": 1103, "text": " I'm really counting on you here to give me feedback after various videos of how much the ads annoy you," }, { "end": 1114, "start": 1110, "text": " which ones annoy you, which ones don't. I'm really counting on you. Okay." }, { "end": 1122, "start": 1114, "text": " Okay, last thing I am planning, planning on a line of merch, mainly because I think it's funny." }, { "end": 1129, "start": 1122, "text": " But I don't know if that's going to work out. But, you know, maybe if you have fun t-shirt ideas or so, just let me know." }, { "end": 1131, "start": 1129, "text": " All right. That was the update." }, { "end": 1137, "start": 1131, "text": " As I said, I probably won't be reading comments too much, but I will catch up after the break." }, { "end": 1146, "start": 1137, "text": " And I hope you continue enjoying this channel even with kind of the lower upload frequency and the new types of content that come in." }, { "end": 1154, "start": 1146, "text": " If you do have suggestions for new exotic content that vaguely has to do with machine learning or not, let me know." }, { "end": 1156, "start": 1154, "text": " Let me know what you think of anything I said." }, { "end": 1160, "start": 1156, "text": " And I wish you an awesome summer." }, { "end": 1176, "start": 1160, "text": " And I hope to see you here anytime. Ciao." } ]
5IRlUVrEVL8
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Deep Ensembles: A Loss Landscape Perspective (Paper Explained)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "ensembles", "bayesian", "modes", "loss function", "nonconvex", "google", "deepmind", "stan fort", "foundational", "weight space", "labels", "agreement", "minima", "loss landscape", "trajectory", "local minima", "optimization" ]
#ai #research #optimization Deep Ensembles work surprisingly well for improving the generalization capabilities of deep neural networks. Surprisingly, they outperform Bayesian Networks, which are - in theory - doing the same thing. This paper investigates how Deep Ensembles are especially suited to capturing the non-convex loss landscape of neural networks. OUTLINE: 0:00 - Intro & Overview 2:05 - Deep Ensembles 4:15 - The Solution Space of Deep Networks 7:30 - Bayesian Models 9:00 - The Ensemble Effect 10:25 - Experiment Setup 11:30 - Solution Equality While Training 19:40 - Tracking Multiple Trajectories 21:20 - Similarity of Independent Solutions 24:10 - Comparison to Baselines 30:10 - Weight Space Cross-Sections 35:55 - Diversity vs Accuracy 41:00 - Comparing Ensembling Methods 44:55 - Conclusion & Comments Paper: https://arxiv.org/abs/1912.02757 Abstract: Deep ensembles have been empirically shown to be a promising approach for improving accuracy, uncertainty and out-of-distribution robustness of deep learning models. While deep ensembles were theoretically motivated by the bootstrap, non-bootstrap ensembles trained with just random initialization also perform well in practice, which suggests that there could be other explanations for why deep ensembles work well. Bayesian neural networks, which learn distributions over the parameters of the network, are theoretically well-motivated by Bayesian principles, but do not perform as well as deep ensembles in practice, particularly under dataset shift. One possible explanation for this gap between theory and practice is that popular scalable variational Bayesian methods tend to focus on a single mode, whereas deep ensembles tend to explore diverse modes in function space. We investigate this hypothesis by building on recent work on understanding the loss landscape of neural networks and adding our own exploration to measure the similarity of functions in the space of predictions. Our results show that random initializations explore entirely different modes, while functions along an optimization trajectory or sampled from the subspace thereof cluster within a single mode predictions-wise, while often deviating significantly in the weight space. Developing the concept of the diversity--accuracy plane, we show that the decorrelation power of random initializations is unmatched by popular subspace sampling methods. Finally, we evaluate the relative effects of ensembling, subspace based methods and ensembles of subspace based methods, and the experimental results validate our hypothesis. Authors: Stanislav Fort, Huiyi Hu, Balaji Lakshminarayanan Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://discord.gg/4H8xxDF BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Hi there! Today we'll look at Deep Ensembles: A Loss Landscape Perspective by Stanislav Fort, Huiyi Hu and Balaji Lakshminarayanan. On a high level, this paper explains the loss landscape of deep ensemble models, so ensembles of deep neural networks. And it hypothesizes, and shows through experiments, that each member of the ensemble, by means of being initialized at a random point, will, through optimization, end up at a different place in weight space. And therefore the deep ensemble is able to capture different modes of the functional space, of the space of solutions. They compare this to Bayesian networks, which sort of promise to do the same thing, but often only characterize a single mode, and therefore don't generalize as well. So join me exploring this paper, I think it's a pretty cool paper. The experiments are cleverly designed to show what they're supposed to show, and I generally enjoy this type of research because it's explanatory research that shows you what's going on inside of these networks, rather than, you know, chasing the next state-of-the-art number. It's also an example of research that you can still do while you don't have, you know, giant resources of compute, even though this is by DeepMind. But I do believe that this kind of research is still wide open and available to academia, whereas the other kind, the state-of-the-art kind, slowly goes more and more into the money game. All right, in any case, join me in reading this paper. If you like it, share it out, leave a comment to tell me what you think, and leave a like if you enjoyed it. All right, so we'll start off. The abstract says: deep ensembles have been empirically shown to be a promising approach for improving accuracy, uncertainty, and out-of-distribution robustness of deep learning models. So what are deep ensembles? Really quick: an ensemble model, and we're in the classification setting. So in the classification setting, we have data points, and each data point has features x, where x is some kind of d-dimensional feature vector, and then you have y, which is the label. Let's say that's some natural number or something like this, or rather an element of a class set, no, not the complex numbers, an element of some bounded set of class labels, so it's either a cat or a dog or, you know, whatever you want. So you have a data set of these things, and your plan is to use x to predict y. If you build a model, a deep neural network, for example, for this task, you would simply characterize this function here, you would parameterize it as a deep neural network of many, many layers. If you build an ensemble now, what you would do is take the data set and simply train multiple different ones of these deep neural networks on it. And if you now want to classify a data point, you input that data point into all of these, say, three networks. And at the end, you would somehow aggregate, and there are different methods of doing this, but the most obvious one is simply to aggregate by the mean, or the mode, or median, whatever you want; you could also kind of learn something here. But you can just average the predictions. And that will usually give you a better prediction than if you only have one model. So this is called an ensemble model. And if the ensemble members, these things here, are deep neural networks, this is called a deep ensemble.
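By the way, if you want that averaging step in code, a minimal NumPy sketch could look like the following. The members here are random toy stand-ins for independently trained networks, so the names and shapes are my own assumptions, not anything from the paper:

import numpy as np

# Toy stand-ins for independently trained networks: each "member"
# maps 4 input features to probabilities over 3 classes.
rng = np.random.default_rng(0)

def make_member():
    W = rng.normal(size=(4, 3))  # hypothetical toy weights
    def predict_proba(x):
        logits = x @ W
        e = np.exp(logits - logits.max(axis=1, keepdims=True))
        return e / e.sum(axis=1, keepdims=True)  # row-wise softmax
    return predict_proba

members = [make_member() for _ in range(3)]

def ensemble_predict(members, x):
    # Average the members' predicted distributions, then take the argmax.
    probs = np.mean([m(x) for m in members], axis=0)
    return probs.argmax(axis=1)

x = rng.normal(size=(5, 4))          # 5 data points with 4 features each
print(ensemble_predict(members, x))  # one class index per data point

In practice you would of course replace make_member with actually training a network; the one-line averaging in ensemble_predict is the entire aggregation trick.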
So why do we hope to get better with an ensemble? That's the point of this paper: to show what happens in the loss landscape of these deep neural networks, and why they perform better than other methods that are supposed to achieve the same thing. So usually, when you build an ensemble model, what are you hoping for? You're hoping to sort of learn a generalizable function. And they have this drawing right here, where you have to think a bit differently than you usually do. So on the x axis, you have the space of solutions. Imagine that your neural network only has a single weight; this axis here is that single weight, or some projection, whatnot, it is the space of different solutions. So after you optimize, you land somewhere on this axis. And you can see that there is a solid line which represents the accuracy on the training set, and a dashed line which represents the accuracy on the validation set for a given parameter. So what you usually do is you optimize one neural network to its very best training accuracy. So let's say you start off here. What you would do is you would see, my training accuracy, I need a different color, is this high right here. And you calculate the gradient, and you do gradient descent. And that means you go down the loss, up the accuracy. So you go over and over and over until you reach this point right here, where you have maximum training accuracy, and then you'll suffer some generalization loss, like you can see right here, because the validation accuracy at that point isn't as high. But generally they're correlated, as you can see by the general overlap of these two lines, these two shapes, right here. Okay, so this is called a maximum a posteriori estimate: you simply optimize one neural network until the best training error. There are different approaches right here; there are approaches that say, okay, we could do better. So first of all, what you see right here is rather peculiar, and you might not be used to this: there are different peaks right here, as you can see, in the training and the validation error. So they're correlated. And the idea is that neural networks are very nonlinear, and we've known from other papers that they have many, many local minima. And in fact, and this is one of the astounding things about neural networks, most of these minima perform equally well. So even though the neural network has different local minima, they all perform about equally well. And other papers even say they're all sort of connected on a low-loss landscape. So there are many, many things that are still mysterious about neural networks. But we know that there are multiple minima, and we know that we basically need to find one of them, and it doesn't really matter which one; they all perform sort of equally well. Now, as you might imagine, there are people who aren't really satisfied with this, and there are approaches that say: why don't we just capture this entire curve right here? If we could build a model that could not only tell us, at this point right here you're this good, but could tell us at any point how good we are, that captured the entire distribution of solutions. And these are usually in the category of Bayesian neural networks: they try to capture the entire distribution. Of course, that's not really feasible, because you always have to calculate the posterior. So what they end up doing is some approximation.
And usually they do some sort of multivariate Gaussian approximation, because you can calculate posteriors in closed form and so on. And this paper's hypothesis is that these can usually only capture one of these peaks. So they are very able to capture the surroundings right here; they can capture very accurately what happens around this particular peak. They are very aware of the shape of the curvature here, and can tell you a lot of things about it. So they can tell you, for example, regarding the validation accuracy, that you might want to be a bit over here rather than over here. But they generally don't know about these other modes, because they are only approximations; they generally don't produce multimodal solutions. Another approach is a deep ensemble. Now, this paper shows that in general, if you train a deep ensemble, because you randomly initialize the members, it will happen that if you do gradient descent on all of them, they will end up sort of covering all these different modes. They still don't have an idea of, you know, the curvature, sorry, this one shouldn't go here, this one should go here, they don't really know about the curvature, but they will give you these different minima right here. And therefore, they can capture the landscape much, much more easily. If you know that these three are minima, it might look something like this. And that's a hell of a lot better than simply the Bayesian approximation that captures just one of the peaks, but really accurately. So their hypothesis here is that deep ensembles do this job of capturing the different modes of the functional space much better than the Bayesian methods, and that this is why deep ensembles work so well: because they end up in different minima. And that is a really interesting proposition. And what I find really interesting as well are the experiments that they do to show this. So they have a lot of these experiments right here. First of all, to the setup: they use CIFAR-10, CIFAR-100, and so on. And on CIFAR-10, you can see right here, they use a small CNN, a medium CNN, and a ResNet. Now the small and medium CNNs, their accuracy is really, really subpar. So take the results here with some grain of salt, because there are effects in these neural networks that are qualitatively different if you are seriously underperforming, like this one, if you have a seriously too-small network rather than a large network. Now they do verify all of their things also with this large network, and 90% accuracy is acceptable for CIFAR-10. I don't think there's a big qualitative difference between 90 and 95 and so on. But the 64%, if it were only this, I would be rather critical of this work. But it's fine: if you see the effect at 64%, and then check that some of the effects carry over to the 90% one, I'm going to generally believe you. Okay, so first of all, what they do here is they look at a training trajectory of just a single run. So this paper is half about ensembles, but also half generally about: what does training of neural networks do? And they reach some very, very cool conclusions that are even independent of deep ensembles.
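To make the setup concrete, here is a toy version of "several independent runs, saving a checkpoint per epoch": plain gradient descent on a tiny softmax classifier with synthetic data. Everything here, the data, the model size, the hyperparameters, is a stand-in I made up; the paper uses CNNs and ResNets on CIFAR:

import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 4))             # made-up features
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # made-up binary labels

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def train_run(seed, epochs=30, lr=0.5):
    r = np.random.default_rng(seed)
    W = r.normal(scale=0.1, size=(4, 2))          # fresh random initialization
    checkpoints = []
    for _ in range(epochs):
        p = softmax(X @ W)
        grad = X.T @ (p - np.eye(2)[y]) / len(X)  # softmax cross-entropy gradient
        W -= lr * grad
        checkpoints.append(W.ravel().copy())      # save flattened weights
    return np.array(checkpoints)                  # shape (epochs, n_params)

runs = [train_run(s) for s in (10, 20, 30)]       # three independent runs
print(runs[0].shape)

Each run returns an array of flattened weight vectors, one per epoch, and that is exactly the kind of object the similarity analyses below operate on.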
So here, the first thing they do is: we have some initial random initialization in weight space, and then you do gradient descent, and you run and you run, and you get to some minimum right here. And then you do a second one. So you initialize somewhere else, and because you initialize somewhere else, you run, you run, you run, and you end up at a different minimum. Okay, this is a property of these non-convex functions; we know this about neural networks: you'll end up at different minima, but the minima will perform about equally well. So the question is: do those different minima that perform equally well describe the same function? Or are they fundamentally different functions that just happen to reach the same accuracy? That question is very interesting, and this paper attempts to answer it. So here you can see in the description: on the left, cosine similarity between checkpoints to measure weight space alignment along the optimization trajectory. So we only consider one of these runs, only the left one, for example, and you plot it here, and this later one comes later, sorry. So you plot only a single run, and you ask yourself: the checkpoint that I have after epoch 20, how similar is it to the checkpoint that I have after epoch 5? That would be right here. Now, we have to read up on how they compare the checkpoints, and this is weight space alignment. Okay, so weight space alignment basically means how much the weights align in the cosine fashion. As you can see right here, this is simply the cosine between the weight vectors; this is one way of comparing two functions. If two functions align in weight space, there's a decent chance that they describe the same thing. So as you can see here, as we go down the optimization trajectory, of course, each checkpoint is similar to itself. But you can see that there is kind of a shift right here. So at the beginning, the zeroth checkpoint is very dissimilar to the checkpoint at the end. But after a very short while, you kind of cross over, and then all these checkpoints right here are sort of similar. So just look at two rows: the bottom row and the top row. The bottom row tells you how similar the checkpoints during training are to the initial checkpoint, and you can see that pretty quickly they become very dissimilar. So at this point right here, there is kind of a dissimilarity happening, where the checkpoint moves away from its initialization to something else. And the top row tells you how similar they are to where the network ends up. And you can see that there appears to be a period, let's say here, where this shift away starts, up until here, where it's kind of not similar to anything. But then after that, after here, everything is similar to the final checkpoint. Okay, so this sort of suggests a hypothesis: you initialize randomly somewhere in this loss landscape, right? You initialize randomly somewhere here, and then you go, go, go, and at some point you fall into one of those valleys, and then you simply go down into that valley. If you initialize somewhere differently, you can see that at the beginning you might be here somewhere, and then you fall into that other valley over here. And after that, you're pretty much set.
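The left plot is then nothing more than a cosine-similarity matrix over those flattened checkpoints. Here is a self-contained sketch, with a fake random-walk trajectory standing in for the real saved weights:

import numpy as np

rng = np.random.default_rng(2)
# Fake trajectory standing in for flattened weights saved once per epoch:
# a random initialization followed by drifting optimization steps.
steps = rng.normal(scale=0.3, size=(30, 100))
steps[0] = rng.normal(size=100)
checkpoints = np.cumsum(steps, axis=0)     # shape (n_checkpoints, n_params)

def cosine_matrix(ckpts):
    unit = ckpts / np.linalg.norm(ckpts, axis=1, keepdims=True)
    return unit @ unit.T                   # entry (i, j) = cosine(w_i, w_j)

C = cosine_matrix(checkpoints)
print(C[0, -1])    # init vs. final checkpoint: drifts away from 1
print(C[-2, -1])   # two late checkpoints: stays close to 1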
So this is going to be our hypothesis from now on: in these neural networks, the initialization puts you somewhere, and you kind of meander around a bit until you happen to go into one of these directions, which happens pretty quickly, and then you fall into a hole, basically. And within that hole, things are rather convex. Okay, a really interesting thing that they do is they check the disagreement of predictions. So you might think that if a neural network achieves 65% or, let's call it 90% accuracy on CIFAR-10, right, that there is this data set, that's 100%, and there are just these 10% over here that are just the hardest, right? And the more you train, the more you're able to push this boundary to the right. So if you train more, if you have a better network, you're just able to explain more and more of the samples. However, this experiment here is going to show that this is not the case. What they measure is the disagreement in predictions, which basically means: there is this data set, the validation data set, and if I have one random initialization and I train it to 90% accuracy, it will not be able to classify these examples here. But if I have the same network with a different initialization, it might not be able to classify these over here, but will be perfectly able to classify those over there. Right? This is also a very interesting property. And you can see right here the disagreement of predictions as you go through the training. So again, we're going to look at the bottom and the top row; red means very disagreeing, blue means very agreeing. You can see that I already drew in the different runs; I'm taking that away from later, we are just looking at one single run for now. The two-runs picture is a result that's going to come up later, when we look at two different runs of the same neural network, and that's the astounding part. Okay, here we're just going to look at one run during training. So we can see right here, at the beginning, of course, every checkpoint agrees with itself on the predictions. However, you can see that pretty quickly the checkpoints start disagreeing; very quickly, everything is red right here. However, on the top, you can see how much these checkpoints agree with the end, with the 30th-epoch checkpoint, and you see that there is a period that is red, right from here to, let's say, here. And then after that, they all start agreeing. So from here on out, it's all pretty blue, which basically means that all of these agree with the last checkpoint, with the end of the training. Again, this is our hypothesis here: once you're in this valley, the function kind of stays the same, and you only sort of micro-optimize the function. However, at the beginning, you decide into which of those valleys you want to go. And the different initializations will lead you to different valleys. And that's what they show right here. So they do a t-SNE plot of the predictions. t-SNE is a method to down-project high-dimensional vectors, and this is the prediction space projected down to two dimensions. So t-SNE axis one and axis two, these are rather arbitrary; if you think of a PCA, it's like the directions of maximum variance.
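If you want to reproduce such a plot, the recipe is roughly: stack the flattened softmax outputs of every checkpoint into a matrix and hand it to t-SNE. A sketch, assuming scikit-learn is available; the prediction matrix here is random filler just to show the shapes:

import numpy as np
from sklearn.manifold import TSNE  # assumes scikit-learn is installed

rng = np.random.default_rng(3)
# Random filler standing in for real data: for 3 runs x 20 checkpoints,
# the softmax outputs on 50 fixed validation points with 10 classes,
# flattened into one vector per checkpoint.
preds = rng.random(size=(3 * 20, 50 * 10))

emb = TSNE(n_components=2, perplexity=10, random_state=0).fit_transform(preds)
print(emb.shape)  # (60, 2): one 2-D point per checkpoint, ready for a scatter plot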
And you can see the three different runs. Immediately at the beginning, right here, you can see they make large distances between the steps of optimization, and they go in very different directions, just by means of being initialized at different points and having maybe a bit of noise in the training process. But once they are at a particular location, they sort of just kind of bounce around right there and try to find the best minimum in that region. So this is our first indication that if we train the same network multiple times with random initializations, it's going to end up at different places. And what we're wondering is: we already know that a single network is very different at the end of training than at the beginning. What we want to know is, are two networks also very different from each other, even though they're trained on the same objective? Just because they are at different places in weight space doesn't mean they are functionally that different; there are symmetries. And it's going to turn out: yes, they actually are very, very different. So this is right here; here you can see two different things, and we're going to read the plot along with the caption, just so I remember what I'm seeing here. So: using two different architectures, okay, for each of these architectures, the left subplot shows the cosine similarity between the different solutions in weight space, and the right subplot shows the fraction of labels on which the predictions from different solutions disagree. Okay, so it's the same as before; the left is the alignment. But now it's not during training; now we restart independently, we train the same network 10 different times, and after that, we're going to compare the 10 different solutions. Remember, these all achieve roughly the same accuracy on the data sets. And this is the same whether you go to a big architecture like this ResNet-20, or to a small architecture like this small CNN right here. You can see that every single solution, of course, agrees a lot with itself; that's the diagonal right here. But apart from the diagonal, every solution is completely orthogonal to all the other solutions. So all the solutions in weight space are orthogonal to each other. Now there's still the chance that there's, you know, some symmetry in weight space, because if I have a neural network, I can just exchange the connections, and if I also exchange the neurons accordingly, then it will be the same function. However, you can see right here that they also completely disagree in predictions. For this small CNN, remember, it had like 65% accuracy, the solutions, the red here, disagree on about 25% of the labels. So this is exactly the effect we saw before: we train one solution, and it will not be able to classify some parts of the validation data set. And if we train the same network with the same data set, with the same loss, with everything the same, again, just from a different random initialization, it will end up performing equally well, but it will make its mistakes on an entirely different set of the validation data points. This is rather astounding, I feel, because I think most people are of the idea that all the data points have some intrinsic hardness, and that if we get to 70% accuracy, it will always be the same 70% of data points that we correctly classify. This is not the case.
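The disagreement score itself is trivial to compute: the fraction of validation points on which two solutions' argmax predictions differ. A sketch, with toy linear models standing in for the trained networks:

import numpy as np

rng = np.random.default_rng(4)

def make_model():
    W = rng.normal(size=(4, 10))   # toy stand-in for one trained network
    return lambda X: X @ W         # raw logits are enough for an argmax

def disagreement(model_a, model_b, X):
    # Fraction of inputs on which the two predicted labels differ;
    # the paper reports roughly 0.25 for two independent small CNNs.
    pa = model_a(X).argmax(axis=1)
    pb = model_b(X).argmax(axis=1)
    return float(np.mean(pa != pb))

X_val = rng.normal(size=(1000, 4))
print(disagreement(make_model(), make_model(), X_val))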
And this is one thing I think this paper and this line of research does pretty cool: to look at these networks in terms of their prediction agreement. So they go further, and they compare this to four different methods. So they say: okay, ensembles are one method of getting different solutions, which means we start from random initializations, but there are other ones. So for example, there is, let me place this correctly here, random subspace sampling. What does that mean? They say: we start at an optimized solution. So you train one single network, and then we choose a random direction v in weight space. We step in that direction by choosing different values of t, looking at the predictions at configurations theta_0 + t*v. We repeat this for many different v, but always the same theta_0. So in our original kind of drawing of this thing: we optimize one single network, let's say that's here, and then we sort of wiggle around it into different random directions. Now, of course, there's only one random direction drawn right here. Maybe we can look at it like this: so if here we have the loss landscape, and maybe over here there is a bit of a valley, and over here there is another valley you thought I was going to draw. Okay, so we start here, and maybe that converges to somewhere here. So what we're going to do is select random directions in that space, and we're just going to go a few steps into each direction and take the weights there. Now here, you can already see, by the way I'm drawing it, that this will probably make you stay in the same region. And our hope with ensembles, of course, is that they are able to capture all of the three different modes right here. But it is a way to obtain different solutions that also all perform quite well: if you only perturb your solution by a little bit, it still works quite well, and you can build an ensemble out of these perturbed solutions. In fact, these Bayesian methods, if you do these approximations with Gaussians, that's pretty much what they will end up doing: they will end up characterizing the local landscape around one of these minima. But here, we simply do it by randomly stepping into a direction. That's the first method we're going to investigate for obtaining an ensemble of different solutions. So: deep ensembles means we randomly initialize many times and then train each member from scratch. Random subspace sampling means we start from one solution and simply perturb it into random directions. The next thing we can do is the dropout subspace: we again start at an optimized solution and apply dropout with a randomly chosen probability; again, our hypothesis is going to be that this keeps the network in sort of the same functional mode and doesn't switch over. Then, the diagonal Gaussian subspace: again, we start from an optimized solution, you can see the pattern, and here we actually do some sort of Gaussian approximation, we calculate a mean and a standard deviation over the parameters and draw samples of the parameters from that distribution. And then the same in a low-rank regime. A small sketch of two of these variants in code follows below.
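Here is roughly what the random-direction sampling and the diagonal Gaussian could look like in code; the optimum and the trajectory are random stand-ins, and the function names are mine:

import numpy as np

rng = np.random.default_rng(5)
theta0 = rng.normal(size=1000)   # stand-in for one optimized weight vector

def random_subspace_samples(theta0, n=10, t=0.1):
    # theta0 + t * v for fresh random unit directions v, same theta0 each time.
    samples = []
    for _ in range(n):
        v = rng.normal(size=theta0.shape)
        samples.append(theta0 + t * v / np.linalg.norm(v))
    return np.array(samples)

def diag_gaussian_samples(trajectory, n=10):
    # Fit an independent Gaussian per parameter to late-training iterates,
    # then draw new weight vectors from it.
    mu, sigma = trajectory.mean(axis=0), trajectory.std(axis=0)
    return rng.normal(loc=mu, scale=sigma, size=(n, mu.size))

fake_iterates = theta0 + rng.normal(scale=0.01, size=(20, theta0.size))
print(random_subspace_samples(theta0).shape)   # (10, 1000)
print(diag_gaussian_samples(fake_iterates).shape)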
And here you see what happens. So here are these methods, for example the random subspaces, overlapped with the plot we saw before. So here we have the three different trajectories of runs, and at the end of each trajectory, we take the best solution and do this random exploration around it. And this here is the t-SNE projection. Remember, this isn't a projection of the weights themselves; it is a projection of the predictions, I believe on a subset of data points. And you can see that if we perturb the solutions like this, all of these ensembles stay in their basin of attraction, as you can see right here. So with a deep ensemble, we would build an ensemble that combines this point, and this point, and this point. Whereas here, we would simply build an ensemble that either combines points in here, or combines points in here, and so on. And you'll see this for all the different methods that we consider here, especially the Gaussian methods. And that's a hint as to why, even though Bayesian networks explicitly try to capture the entire distribution right here, what they'll end up doing is simply capturing a single mode. And that's important, because within a single mode the functions are all sort of equal. We saw that on the training trajectory: at the end of training, after like this step right here, all the functions are pretty much the same, right? They pretty much agree with the end optimum. Whereas between the runs, the functions completely disagree with each other. So it is important, if we want to build an ensemble, to capture as many of these modes as possible, and only the deep ensembles can do that so far. So this is another experiment where they show this loss landscape, and I really like these kinds of plots. What you see here is a plane, a 2D plane in weight space, and the plane is described by three points. One point is the origin; you see right here the origin, that's the zero in weight space. Okay, then what you have are the two optima. So you run an optimization two different times: once it's initialized here and it runs to here, and once it's initialized here and it runs to here. Okay, so that defines the plane that we're going to look at. Now, for each single pixel in this plane, or actually for each single pixel in this half circle right here, they evaluated the network. So what you do is simply take a linear combination of the weight vector at this optimum and the weight vector at that optimum; each point here defines a neural network with those weights, and you can evaluate it. And that's what you get: this is the accuracy of the neural network at that point. So here you can see very, very clearly that there are these two different modes right here. Even though the two runs are initialized super close to each other, right, you can see this right here, because this is the flat area right here that we saw before, the red one is a little bit more in this basin of attraction, and the blue one is a little bit more in that basin of attraction. So they move over, and as soon as they're in, it's like boom, they go to the minimum of that basin. And this area is rather convex, and this area is rather convex. If you'd like to try this plane evaluation yourself, here is a rough sketch.
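The evaluate function below is a made-up toy score with two bumps; the real experiment would load the weights into the network and measure validation accuracy instead:

import numpy as np

rng = np.random.default_rng(6)
w1 = rng.normal(size=50)   # stand-ins for the two independently trained optima
w2 = rng.normal(size=50)

def evaluate(w):
    # Made-up score that peaks at w1 and at w2, mimicking two basins;
    # swap in "load w into the model, run the validation set" in practice.
    return -np.sum((w - w1) ** 2) * np.sum((w - w2) ** 2)

alphas = np.linspace(-0.5, 1.5, 41)   # plane coefficients, includes the origin
betas = np.linspace(-0.5, 1.5, 41)
grid = np.empty((len(alphas), len(betas)))
for i, a in enumerate(alphas):
    for j, b in enumerate(betas):
        grid[i, j] = evaluate(a * w1 + b * w2)   # one point in span{w1, w2}
# grid now holds one score per pixel and can be rendered as a heat map
print(grid.shape)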
And in the middle of the plane, you can see, there is more loss, so no solution will go there. That's how you get these different minima; that's how you get these different modes. And you can see from the color that the accuracy is going to be the same in each of the valleys, consistent with what we know so far. Now here, the pink stripe is a Gaussian exploration. So if you now do a Gaussian perturbation, a Gaussian exploration, around this minimum, you can see again: you don't get out of this valley, you're not going to go to different modes. The weight space is just too large, and you're simply going to be stuck in there. So almost the only chance you have is to initialize again and hope that you end up in a different place. And I guess my hypothesis is that there are many, many more of these valleys, of these basins, than you could ever capture. So basically, every single initialization that is different will lead you to a different one of these basins; I guess it's only a matter of the size. So here, again, they do a function similarity. In this case, it's the function similarity to optimum one, and this is again how many of the labels agree with the optimum right here. And you can see that within this basin of attraction, you have fairly high overlap in functional similarity. But over here, almost none, right? So 15% or so. It's not going to be zero, because they're going to agree on some of the examples; I guess there's still something like intrinsic hardness, but they agree on almost none of the labels, at least, I guess, if you normalize by their base accuracy. So even though optimum two is performing just as well, it is functionally extremely dissimilar to optimum one. These describe really different functions. And I really don't know what to make of this, other than, you know, each one of these is maybe sort of deciding to look at different features in the data set, right? Or to maybe build different high-level features from the same low-level features. And maybe we're still under-parameterizing these models, because not a single model can look at both sets of features at the same time, as evidenced by the fact that each of these always goes to one of them. Or it could be that, in fact, the task is way too simple, and it can be solved in like 500 different ways, and each of these optima is simply one way of solving the task, one way of combining features, and it's actually a completely over-specified problem. That's another hypothesis. It would be, I guess, interesting to look at these things, and I'm sure there's work on this. So you can see the same thing for optimum two right here, where you can see that optimum one agrees almost nothing with optimum two, right? There's not even a hint of a valley right here in terms of functional similarity. That is very, very interesting. So it really means that these two things describe two different functions. Then they do these other plots right here that they call diversity versus accuracy plots. So what they'll do is they take different models and look at them in terms of their diversity and their accuracy. So here, the y axis is going to be how different these functions are, and that's going to be, again, the fraction of labels changed, normalized by the base accuracy.
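In code, one plausible version of that diversity measure looks like this; note that the exact normalization is my reading of the axis label, so treat it as an assumption:

import numpy as np

def diversity(pred_base, pred_other, labels):
    changed = np.mean(pred_base != pred_other)   # raw fraction of labels changed
    base_err = np.mean(pred_base != labels)      # baseline's error rate
    return changed / base_err                    # normalization: my assumption

rng = np.random.default_rng(7)
labels = rng.integers(0, 10, size=500)
pred_base = np.where(rng.random(500) < 0.9, labels, 0)      # ~90%-accurate baseline
pred_pert = np.where(rng.random(500) < 0.05, 0, pred_base)  # a lightly perturbed copy
print(diversity(pred_base, pred_pert, labels))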
So here you can see, we always start from this baseline optimum. This baseline optimum has diversity zero, because it agrees with itself on all the labels, of course. And then we're going to disturb it using our four methods that we defined before: we're going to do the random subspace, we're going to do dropout, we're going to do the Gaussian perturbations, and so on. And the more we perturb it, the more diverse our function is going to be, naturally, right? Because as we perturb the function, it starts disagreeing more and more with our original optimum. However, what we also expect is: if we are in the local optimum, and maybe the validation optimum is sort of beside it, or a little bit larger, and so on, then if we perturb it a little bit, that might not do too much to our accuracy. But if we perturb it a lot, we actually go up the loss landscape, and then we also get less accuracy on the validation set. That's what you see right here in this curve: as you make the function more diverse, so as you perturb it a little bit, your accuracy doesn't suffer too much, you kind of stay at the same accuracy. But as you make it more and more and more diverse, you can see that the accuracy suffers, until a diversity of one basically means that you disagree on the maximum amount of labels that you can, so you're sort of out of this valley right here, and you can see that your accuracy goes to zero. So the more these functions disagree, the less their accuracy is. That seems natural. However, you can see that these red stars right here, they also seem to be very diverse, they do not agree with the original baseline optimum, but they seem to be doing perfectly fine in terms of accuracy; they're at the same accuracy. And those are the independent optima, the optima from runs where we initialized at a different point and then also trained. And that, again, is evidence for the fact that there are probably other optima far away that these different initializations find. They are very different in terms of function space, they're quite far apart, they predict different things; however, in terms of loss, they're almost the same, or actually the same. So these are very, very cool experiments right here. And they do this for the different architectures, and you can see that, especially for the larger architectures, this happens in a more pronounced fashion. And they also make the point that if you go to harder problems, like CIFAR-100 or ImageNet, this effect is more pronounced: you can see these are closer together as curves, and these up here are the independent optima. So I hope you're still on board and still know why we're doing these things. We're doing them because we want to build some sort of ensemble that captures the distribution of solutions in order to generalize better. Now we have two options: either we start from an optimum and characterize the space around that optimum, which is what these perturbation methods do right here, and it's also what the Bayesian methods end up doing, even though they don't want to, because they do these approximations since the posterior is intractable. Or our other option is to restart training a bunch of times, and then we end up at different optima.
And the point of the paper is: it's better to do that than to build the ensemble out of these Gaussian methods or these perturbation methods. And the main claim of the paper, I guess, is why that happens. It happens because the ensemble members obtain different minima that are functionally different. Okay, so exactly that's what they do here. So they now build ensembles out of these different things. And you can see here on the x axis you have the ensemble size, so how many ensemble members you have. The dashed lines here are the baseline accuracies if you just have a single model, and the test accuracy is plotted on the y axis. Now, actually, I was not quite correct before: you always build the ensemble out of random initializations, but on top of that, you do these perturbation things. So what you can see right here is: if I have this classifier, which is my original classifier, and I add on top of that this PCA-Gaussian perturbation stuff, I increase in accuracy a little. However, if I build an ensemble out of 10 members, I increase in accuracy this much. And then, because I can do both things, if I do an ensemble and then on top of that the PCA-Gaussian, I gain another small amount right here. So that's sort of evidence for the fact that you'd rather build an ensemble than do these other methods of approximating the Bayesian posterior over weights. So yes, I'm sort of convinced; I hope you are too. And they do some more experiments right here. This here is, I guess, accuracy, oh yes, this is the out-of-distribution test. So you can take a data set and corrupt it; there are predefined corrupted data sets, but you can also do it yourself: you can crop images, change the luminosity, destroy parts of the image, whatever. You can see what having more ensemble members does: these are your original models, and here is how they sink with increasing corruption. It almost doesn't matter which one of these perturbation methods you do; the bottom one is the original model, and you gain a little bit by doing these things, but not nearly as much as by building an ensemble and going up here, an ensemble of two members or five members, in which case you jump this much in accuracy. So these ensembles from different initializations are also very, very good at countering corruption, which you see here as well. Yeah, there is also the JS divergence, okay, I've read that, but let's not go into it, the video is already too long. And the last thing is on the ImageNet test set and the corrupted ImageNet set, where they show pretty much the same thing. It's not as pronounced here, but you can see that, pretty much, if you go from a single model to an ensemble with two members to an ensemble with four members, there is a general upward trend, and the general upward trend is much less pronounced within each group, if you just go from method to method, than it is between the different groups of ensembles, meaning that ensembling is a much more pronounced effect than these other effects.
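As a last sketch, here is the shape of that ensemble-size experiment, with stand-in members that are individually about 80% accurate but wrong on their own random subsets, which is exactly the property the paper observes in independently trained networks:

import numpy as np

rng = np.random.default_rng(8)
n_val, n_classes = 2000, 10
labels = rng.integers(0, n_classes, size=n_val)

def member_probs():
    # Stand-in for one independently trained network: individually ~80%
    # accurate, but wrong on its own random subset of the validation set.
    p = rng.random(size=(n_val, n_classes))
    right = rng.random(n_val) < 0.8
    p[np.arange(n_val), labels] += np.where(right, 2.0, 0.0)
    return p / p.sum(axis=1, keepdims=True)

members = [member_probs() for _ in range(10)]
for k in (1, 2, 4, 10):
    avg = np.mean(members[:k], axis=0)        # average the k members' distributions
    acc = float(np.mean(avg.argmax(axis=1) == labels))
    print(k, round(acc, 3))                   # accuracy trends upward with k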
So I hope I have convinced you a little bit of what these subspaces look like, what the loss landscape of neural networks looks like, especially the fact that there are these different minima, and that different random initializations will almost always hit different ones of these minima. And the interesting part is that even though these different minima perform equally well, they are functionally very different. And an ensemble of differently initialized and independently optimized models can actually capture these different modes of the functional space. And therefore, if you build an ensemble out of that, it will generalize better, because it can kind of draw information from all of those different modes, rather than if you do some sort of Bayesian network, which, because you usually have to approximate with Gaussians, will end up only covering one of these modes. That is sort of a good summary of what this paper says. Again, I enjoy research like this, because it's accessible and it kind of makes you think, right? So I'll be thinking about these things for a while now, and thinking of new kinds of experiments that one could do. And yeah, as I said, this research is still wide open; we don't know so many things about neural networks. And, you know, tell me what you think is actually going on; that would be very interesting. And yeah, I'll see you next time. Bye bye.
[ { "end": 5.8, "start": 0, "text": " Hi there! Today we'll look at Deep Ensembles, a lost landscape perspective by Stanislav Fort," }, { "end": 12.8, "start": 5.8, "text": " Hui Yi Hu and Balaji Lakshminarayanan. This paper on a high level explains the lost landscape" }, { "end": 19.080000000000002, "start": 12.8, "text": " of deep ensemble models, so ensembles of deep neural network. And it hypothesizes, and it" }, { "end": 24.6, "start": 19.080000000000002, "text": " shows through experiments, that each member of the ensemble, by means of being initialized" }, { "end": 31.880000000000003, "start": 24.6, "text": " at a random point, will go and, through optimization, go and end up at a different place in weight" }, { "end": 36.800000000000004, "start": 31.880000000000003, "text": " space. And therefore the deep ensemble is able to capture different modes of the functional space," }, { "end": 43.08, "start": 36.800000000000004, "text": " of the space of solutions. They compare this to Bayesian networks, which are sort of promised to" }, { "end": 48.32, "start": 43.08, "text": " do the same thing, but they often only characterize a single mode, and therefore they don't generalize" }, { "end": 54.68, "start": 48.32, "text": " as well. So join me exploring this paper, I think it's a pretty cool paper. The experiments are" }, { "end": 60.92, "start": 54.68, "text": " cleverly designed to show what they're supposed to show, and I generally enjoy this type of research" }, { "end": 66.76, "start": 60.92, "text": " because it's kind of an explanatory research that shows you what's going on inside of these networks," }, { "end": 72.24000000000001, "start": 66.76, "text": " rather than, you know, chasing the next state-of-the-art number. It's also an example of research that you" }, { "end": 80.08, "start": 72.24, "text": " can still do while you don't have, you know, giant resources of compute, even though this is by" }, { "end": 86.16, "start": 80.08, "text": " DeepMind. But I do believe that this kind of research is still, you know, wide open and" }, { "end": 95.75999999999999, "start": 86.16, "text": " available to academia, and whereas the other kind, the state-of-the-art kind, slowly goes into more" }, { "end": 101.72, "start": 95.75999999999999, "text": " and more of the money game. All right, in any case, join me in reading this paper. If you like it," }, { "end": 109.08, "start": 101.72, "text": " share it out, leave a comment to tell me what you think, and leave a like if you enjoyed it." }, { "end": 117.12, "start": 109.08, "text": " All right, so we'll start off. The abstract says, deep ensembles have been empirically shown to be" }, { "end": 122.2, "start": 117.12, "text": " a promising approach for improving accuracy, uncertainty, and out-of-distribution robustness" }, { "end": 129.9, "start": 122.2, "text": " of deep learning models. So what are deep ensembles? Really quick, an ensemble model, and we're in the" }, { "end": 134.48000000000002, "start": 129.9, "text": " classification setting. So in the classification setting, we have data points, and each data point" }, { "end": 141.52, "start": 134.48000000000002, "text": " has features, so which are the x, x is some kind of d-dimensional feature, and then you have y," }, { "end": 149.72, "start": 141.52, "text": " which is the label. So that's in some, let's say that's some natural number or something like this," }, { "end": 158.48000000000002, "start": 149.72, "text": " or is element of a class set. Now that's the complex numbers. 
It's element of some bounded" }, { "end": 165.44, "start": 158.48, "text": " set of class labels, so it's either a cat or a dog or you know, whatever you want. So you have a" }, { "end": 173.95999999999998, "start": 165.44, "text": " data set of these things, and your plan is to use x to predict y. If you build a model, a deep neural" }, { "end": 178.35999999999999, "start": 173.95999999999998, "text": " network, for example, for this task, you would simply characterize this function here, you would" }, { "end": 185.92, "start": 178.35999999999999, "text": " parameterize it as a deep neural network of many, many layers. If you build an ensemble now, what you" }, { "end": 191.72, "start": 185.92, "text": " would do is you would take the data set and simply train multiple different ones of these deep neural" }, { "end": 198.44, "start": 191.72, "text": " networks. So you'll train multiple different ones. And if you now want to classify data point, you'll" }, { "end": 204.77999999999997, "start": 198.44, "text": " input that data point into all of these three. And at the end, you would somehow aggregate, and there" }, { "end": 210.04, "start": 204.77999999999997, "text": " are different methods of doing this, but the most obvious one is simply either to aggregate by the" }, { "end": 218.64, "start": 210.04, "text": " mean or the mode, median, whatever you want, you could also kind of also learn something here. But" }, { "end": 224.44, "start": 218.64, "text": " you can just average the predictions. And that will usually give you a better prediction than if" }, { "end": 230.28, "start": 224.44, "text": " you only have one model. So this is called an ensemble model. And if the ensemble members," }, { "end": 237.32, "start": 230.28, "text": " these things here are neural networks or deep networks, this is called a deep ensemble. So why" }, { "end": 245.44, "start": 237.32, "text": " do we hope to become better? That's the point of this paper is to show what happens in the lost" }, { "end": 252.95999999999998, "start": 245.44, "text": " landscape of these deep neural networks. And why do they perform better than other methods that are" }, { "end": 259, "start": 252.95999999999998, "text": " supposed to achieve the same thing. So usually, when you build an ensemble model, what are you" }, { "end": 266.88, "start": 259, "text": " hoping for? You're hoping to sort of learn a generalizable function. And they have this drawing" }, { "end": 274.6, "start": 266.88, "text": " right here, where it's a bit of a you have to sort of think differently than you usually do. So on" }, { "end": 282.08, "start": 274.6, "text": " the x axis, you have the space of solutions. So imagine that your, your neural network only has a" }, { "end": 289.68, "start": 282.08, "text": " single weight. So this axis here is that single weight, or you can project or or whatnot, this is" }, { "end": 297, "start": 289.68, "text": " the space of different solutions. So after you optimize, you land somewhere on this axis. And you" }, { "end": 303.92, "start": 297, "text": " can see that there is a solid line which represents the accuracy on the training set. And then there" }, { "end": 309.96000000000004, "start": 303.92, "text": " is a dashed line which represents the accuracy of the validation set for a given parameter. So what" }, { "end": 317.84000000000003, "start": 309.96000000000004, "text": " you usually do is you optimize one neural network to its very best training accuracy. 
So let's say" }, { "end": 323.91999999999996, "start": 317.84, "text": " you start off here, what you would do is you would see my training accuracy is this high, I need a" }, { "end": 330.08, "start": 323.91999999999996, "text": " different color right here is this high. And you calculate the gradient, and you could do gradient" }, { "end": 336.08, "start": 330.08, "text": " descent. And that means you go down the loss up the accuracy. So you go over and over and over" }, { "end": 341.84, "start": 336.08, "text": " until you reach this point right here, where you have maximum training accuracy, and then you'll" }, { "end": 347.32, "start": 341.84, "text": " suffer some generalization loss like you're gonna see right here, it's all for some generalization" }, { "end": 351.88, "start": 347.32, "text": " loss, because the validation accuracy at that point isn't as high. But generally, it's correlated," }, { "end": 358.92, "start": 351.88, "text": " as you can see, by the general overlap of these two lines of these two shapes right here. Okay," }, { "end": 366.08, "start": 358.92, "text": " so this is called a maximum a posteriori estimate, you simply optimize one neural network until the" }, { "end": 374.76, "start": 366.08, "text": " best training error. There are different approaches right here, there are approaches that say, okay," }, { "end": 380.03999999999996, "start": 374.76, "text": " we can do we could do better. So first of all, what you see right here is rather peculiar. And" }, { "end": 385.4, "start": 380.03999999999996, "text": " you might not be used to this, that there are different peaks right here, there are different" }, { "end": 392.92, "start": 385.4, "text": " peaks, as you can see in the training and the validation error. So they're correlated. And the" }, { "end": 399.68, "start": 392.92, "text": " idea is that neural networks are very nonlinear. And we've known from other papers that they have" }, { "end": 405.6, "start": 399.68, "text": " many, many local minima. And in fact, so this is one of the astounding things about neural network," }, { "end": 412.96000000000004, "start": 405.6, "text": " most of these minima are performing equally well. So even though the neural network has different" }, { "end": 420.16, "start": 412.96000000000004, "text": " local minima, they all perform about equally well. And other papers even say they're all sort of" }, { "end": 427.6, "start": 420.16, "text": " connected on a low loss landscape. So there are many, many things that are still mysterious about" }, { "end": 434, "start": 427.6, "text": " neural network. But we know that there are multiple minima. And we know that we basically need to find" }, { "end": 442.32000000000005, "start": 434, "text": " one of them. And it doesn't really matter which one they all perform sort of equally well. Now," }, { "end": 450.16, "start": 443.44, "text": " as you can, as you might imagine, there are people who aren't really satisfied with this. And there" }, { "end": 456.88, "start": 450.16, "text": " are approaches to say, why don't we just capture this entire curve right here. So if we could build" }, { "end": 463.84, "start": 456.88, "text": " a model that could not only tell us at this point right here, you're this good, but could tell us" }, { "end": 470.64, "start": 463.84, "text": " that at any point, how good we are captured the entire distribution of solutions. 
And these are" }, { "end": 477.92, "start": 470.64, "text": " usually in the category of the Bayesian neural networks, they try to capture the entire distribution." }, { "end": 482.8, "start": 477.92, "text": " Of course, that's not really feasible, because you always have to calculate that posterior." }, { "end": 487.36, "start": 482.8, "text": " So what they end up doing is they do some approximation. And usually they do some sort" }, { "end": 493.12, "start": 487.36, "text": " of a multivariate Gaussian approximation, because you can calculate posteriors in closed form and so" }, { "end": 500.32, "start": 493.12, "text": " on. And this paper, this paper's hypothesis is that these can only usually capture one of these" }, { "end": 506.72, "start": 500.32, "text": " peaks. So they are very able to capture the surrounding right here, they're, they can capture" }, { "end": 513.2, "start": 506.72, "text": " very accurately what happens around this particular peak. They are very aware of the shape of the" }, { "end": 518.96, "start": 513.2, "text": " curvature here, and can tell you a lot of things about it. So they can tell you, for example, that" }, { "end": 528.5600000000001, "start": 518.96, "text": " the validation so that you might want to be a bit over here, rather than over here. But they cannot" }, { "end": 533.76, "start": 528.5600000000001, "text": " they don't generally know about these other modes, because they are only approximations." }, { "end": 540.64, "start": 533.76, "text": " They generally don't produce multimodal solutions. Another approach is a deep ensemble." }, { "end": 547.68, "start": 541.52, "text": " Now, this paper shows that in general, if you train a deep ensemble, what will happen is because" }, { "end": 555.36, "start": 547.68, "text": " you randomly initialize the deep ensemble, at some points, it will happen that if you do gradient" }, { "end": 560.56, "start": 555.36, "text": " descent on all of them, they will end up sort of covering all these different modes, they still" }, { "end": 565.1999999999999, "start": 560.56, "text": " they don't have an idea of you know, the curvature, sorry, this one shouldn't go here," }, { "end": 569.28, "start": 565.1999999999999, "text": " this one should go here, the curve, they don't really know about the curvature, but they will" }, { "end": 576.16, "start": 569.28, "text": " give you these different minima right here. And therefore, they can capture the landscape much," }, { "end": 582.4799999999999, "start": 576.16, "text": " much more easily. If you know that these three are minima, you sort of, it might look something like" }, { "end": 588.3199999999999, "start": 582.4799999999999, "text": " this. And that's a hell of a lot better than simply the Bayesian approximation that you have" }, { "end": 595.6800000000001, "start": 588.32, "text": " to capture one of the peaks, but really accurately. So, their hypothesis here is that deep ensembles" }, { "end": 604.08, "start": 595.6800000000001, "text": " do this job of capturing the different modes of the functional space much better than the Bayesian" }, { "end": 611.7600000000001, "start": 604.8000000000001, "text": " methods. And it is why the deep methods, sorry, why the deep ensembles work so well, because they" }, { "end": 618.88, "start": 611.76, "text": " end up in different minima. And that is, it's really interesting proposition. 
And what I find" }, { "end": 624.88, "start": 618.88, "text": " really interesting as well are the experiments that they do to show this. So they have a lot of these" }, { "end": 632.4, "start": 624.88, "text": " experiments right here. First of all, to the setup, they use C410, C4100, and so on. And on C410," }, { "end": 639.28, "start": 633.36, "text": " you can see right here, they use a small CNN, medium CNN, and a ResNet. Now the small CNN" }, { "end": 645.92, "start": 639.28, "text": " and a ResNet. Now the small and medium CNNs, their accuracy is really, really subpar. So," }, { "end": 653.4399999999999, "start": 646.72, "text": " take the results here with some grain of salt, because there are effects in these neural network" }, { "end": 659.92, "start": 653.4399999999999, "text": " that are really qualitatively different if you are seriously underperforming, like this one," }, { "end": 666.4, "start": 659.92, "text": " like if you have a seriously too small network rather than a large network. Now they do verify" }, { "end": 674, "start": 666.4, "text": " all of their things also with this large network and 90% accuracy is acceptable for C410. I don't" }, { "end": 679.68, "start": 674, "text": " think there's the big qualitative difference between 90 and 95 and so on. But the 64," }, { "end": 687.76, "start": 680.56, "text": " if it were only this, I would be rather critical of this work. But it's fine to, if you see the" }, { "end": 694.72, "start": 687.76, "text": " effect at 64, and then some of the effects you check to carry over to the 90% one, I'm going to" }, { "end": 704, "start": 694.72, "text": " generally believe you. Okay, so first of all, what they do here is they look at a training trajectory" }, { "end": 712.32, "start": 704, "text": " of just a single run. So this paper is half about ensembles, but also half generally about" }, { "end": 718.5600000000001, "start": 713.9200000000001, "text": " what does training of neural networks do? And they reach some very, very cool conclusions that even" }, { "end": 725.1199999999999, "start": 718.56, "text": " are independent of deep ensembles. So here, the first thing we do is we have some initial random" }, { "end": 730.8, "start": 725.1199999999999, "text": " initialization in weight space of your weight, and then you do gradient descent and you run and you" }, { "end": 739.04, "start": 730.8, "text": " run, right, and you get to some minima right here, some minimum right here. And then you do a second" }, { "end": 746.64, "start": 739.04, "text": " one. So you initialize somewhere else. And because you initialize somewhere else, you run, you run," }, { "end": 752.72, "start": 746.64, "text": " you run, you end up at a different minimum. Okay, this is a property. So these are not convex" }, { "end": 758, "start": 752.72, "text": " functions, right? We know about neural networks, you'll end up a different minima, but the minima," }, { "end": 765.4399999999999, "start": 758, "text": " they will, they will perform about equally well. So the question is, do those different minima that" }, { "end": 771.76, "start": 765.4399999999999, "text": " perform equally well, describe the same function? Or are they fundamentally different functions" }, { "end": 780.3199999999999, "start": 771.76, "text": " that just happen to reach the same accuracy? And the question is very interesting. And this paper" }, { "end": 788.08, "start": 781.36, "text": " attempts to answer that. 
So here you can see in the description, on the left, cosine similarity" }, { "end": 795.04, "start": 788.08, "text": " between checkpoints to measure weight space alignment along the optimization trajectory. So we" }, { "end": 802.16, "start": 795.04, "text": " only consider one of these runs, only consider the left one, for example, and you plot it here," }, { "end": 809.36, "start": 802.16, "text": " and here, this later one comes later, sorry. So you plot, on the left, only a single run, and you ask" }, { "end": 818.3199999999999, "start": 809.36, "text": " yourself, the checkpoint that I have after epoch 20, how similar is it to the checkpoint that I have" }, { "end": 827.0400000000001, "start": 818.32, "text": " after epoch five? That would be right here. Now, we have to read up on how they compare the checkpoints." }, { "end": 833.6, "start": 827.0400000000001, "text": " And this is weight space alignment. Okay, so weight space alignment basically means how much do" }, { "end": 839.36, "start": 833.6, "text": " the weights align in the cosine fashion. As you can see right here, this is simply the cosine between" }, { "end": 845.9200000000001, "start": 839.36, "text": " the weights, this is one way of comparing two functions. If two functions align in weight space," }, { "end": 850.16, "start": 845.92, "text": " there's a decent chance that they describe the same thing. So as you can see here," }, { "end": 857.92, "start": 851.4399999999999, "text": " as we go down the optimization trajectory, of course, each one is similar to itself. But" }, { "end": 865.36, "start": 859.12, "text": " you can see that there is kind of a shift right here. So at the beginning, the zeroth checkpoint" }, { "end": 872.0799999999999, "start": 865.36, "text": " is very dissimilar to the checkpoint at the end. But after a very short while, you kind of cross over," }, { "end": 876.72, "start": 872.08, "text": " and then all these checkpoints right here are sort of similar. So" }, { "end": 884.48, "start": 879.5200000000001, "text": " if you just look at two rows, you look at the bottom row, and you look at the top row," }, { "end": 889.6, "start": 884.48, "text": " the bottom row tells you how similar the checkpoints during training are to the initial" }, { "end": 896.8000000000001, "start": 889.6, "text": " checkpoint. And you can see pretty quickly, they are very dissimilar. So at this point right here," }, { "end": 902.9599999999999, "start": 896.8, "text": " there is kind of a dissimilarity happening where the checkpoint goes away from its initialization" }, { "end": 908.88, "start": 902.9599999999999, "text": " to something else. And the top row tells you how similar they are to where the network ends up." }, { "end": 918.24, "start": 909.8399999999999, "text": " And you can see that there appears to be a period in, let's say here, where this shift away starts," }, { "end": 926.16, "start": 918.24, "text": " up until here, where it's kind of not similar to anything. But then after that, after here," }, { "end": 933.76, "start": 926.16, "text": " everything is similar to the final checkpoint. Okay, so this sort of tells us a hypothesis:" }, { "end": 940, "start": 933.76, "text": " you have this loss landscape, right? You initialize randomly" }, { "end": 945.76, "start": 940, "text": " somewhere here. 
And then you go, go, go, and at some point you fall into one of those valleys, and then you simply go to that valley. If you initialize somewhere differently, you can see" }, { "end": 957.4399999999999, "start": 951.92, "text": " that at the beginning, you might be here somewhere, and then you fall into that valley over here." }, { "end": 964.4799999999999, "start": 957.4399999999999, "text": " And after that, you're pretty much set. So this is going to be our hypothesis from now on: that" }, { "end": 971.04, "start": 964.4799999999999, "text": " in these neural networks, the initialization basically means you're somewhere and you kind of" }, { "end": 976.3199999999999, "start": 971.04, "text": " meander around a bit until you happen to go into one of these directions, which happens pretty" }, { "end": 983.9200000000001, "start": 976.32, "text": " quickly. And then you fall into a hole basically. And that's rather a convex setting within that" }, { "end": 993.6800000000001, "start": 983.9200000000001, "text": " hole. Okay, a really interesting thing that they do is that they check" }, { "end": 1003.2, "start": 993.6800000000001, "text": " the disagreement of predictions. So you might think that if a neural network achieves 65 or 90," }, { "end": 1008.96, "start": 1003.2, "text": " let's call it 90% accuracy on CIFAR-10, right, that there is just, you know, this data set," }, { "end": 1016.32, "start": 1009.84, "text": " that's 100%. And there are just these 10% over here that are just the hardest, right. And the" }, { "end": 1022.5600000000001, "start": 1016.32, "text": " more you train, the more you're able to push this boundary to the right. So if you train" }, { "end": 1027.68, "start": 1022.5600000000001, "text": " more, if you have a better network, you're just able to explain more and more of the samples." }, { "end": 1033.52, "start": 1027.68, "text": " However, this experiment here is going to show that this is not the case. What they measure" }, { "end": 1039.1200000000001, "start": 1033.52, "text": " is the disagreement in predictions, which basically means that if there is this data set," }, { "end": 1046, "start": 1039.1200000000001, "text": " the validation data set, and if I have one random initialization and I train it to 90% accuracy," }, { "end": 1053.2, "start": 1046, "text": " it will not be able to classify these here. But if I have the" }, { "end": 1059.92, "start": 1053.2, "text": " same network, but a different initialization, it might not be able to classify these over here," }, { "end": 1066.0800000000002, "start": 1059.92, "text": " but will be perfectly able to classify these over here. Right. This is also a very interesting" }, { "end": 1073.68, "start": 1066.0800000000002, "text": " property. And you can see right here the disagreement of predictions as you go through" }, { "end": 1079.2, "start": 1073.68, "text": " the training. So again, we're going to look at the bottom and the top row. So the bottom row" }, { "end": 1087.04, "start": 1079.2, "text": " and the top row, red is very disagreeing, blue is very agreeing. You can see, again," }, { "end": 1096.16, "start": 1088.4, "text": " that I introduced the different runs, I'm already taking this away from later," }, { "end": 1102, "start": 1096.8, "text": " we are just looking at one single run for now. 
This is a result that's going to come up later, when we look at two different runs of the same neural network. And that's the astounding" }, { "end": 1111.92, "start": 1106.24, "text": " part. Okay, here, we're just going to look at one run again during training. So we can see right" }, { "end": 1117.76, "start": 1111.92, "text": " here at the beginning, of course, every checkpoint agrees with itself on the predictions. However," }, { "end": 1124.64, "start": 1119.04, "text": " you can see that the checkpoints start disagreeing very quickly, everything is red" }, { "end": 1132.24, "start": 1124.64, "text": " right here. However, on the top, you can see how much these checkpoints agree" }, { "end": 1139.1200000000001, "start": 1132.24, "text": " with the end, with the 30th epoch checkpoint, and you see that there is a period that is red," }, { "end": 1146.88, "start": 1139.1200000000001, "text": " right from here to let's say here. And then after that, they all start agreeing. So from here on out," }, { "end": 1156.56, "start": 1146.88, "text": " it's all pretty blue, which basically means that they agree with the last checkpoint, so" }, { "end": 1166.1599999999999, "start": 1156.56, "text": " all of these agree with the end of the training. Again, this is our hypothesis here that" }, { "end": 1172.3999999999999, "start": 1166.1599999999999, "text": " once you're in this valley, the function kind of stays the same, and you only sort of" }, { "end": 1176.96, "start": 1172.3999999999999, "text": " micro-optimize the function. However, at the beginning, you decide into which of those" }, { "end": 1181.6799999999998, "start": 1176.96, "text": " valleys you want to go. And the different initializations will lead you to different" }, { "end": 1187.3600000000001, "start": 1181.68, "text": " valleys. And that's what they show right here. So they do a t-SNE plot of predictions. t-SNE is" }, { "end": 1196.24, "start": 1187.3600000000001, "text": " a method to down-project high-dimensional vectors. And this is the weight space projected to" }, { "end": 1203.44, "start": 1196.24, "text": " two dimensions. So t-SNE axis one and two, these are rather arbitrary. If you think" }, { "end": 1209.04, "start": 1203.44, "text": " of a PCA, it's the directions of maximum variance. And you can see the three different runs," }, { "end": 1214.32, "start": 1209.04, "text": " at the beginning right here, they immediately go, you can see, they make" }, { "end": 1220.8, "start": 1214.32, "text": " large distances at the beginning between the steps of optimization. And they go in very" }, { "end": 1226.08, "start": 1220.8, "text": " different directions, just by means of being initialized at different points and having maybe" }, { "end": 1232.96, "start": 1226.08, "text": " a bit of noise in the training process. But once they are at the particular location, they sort of" }, { "end": 1240.88, "start": 1232.96, "text": " just kind of bounce around right here and try to find the best minima in that region. So" }, { "end": 1248.56, "start": 1242.8, "text": " this is our first indication that if we train the same network multiple times with random" }, { "end": 1256.24, "start": 1248.56, "text": " initializations, it's going to end up at different places. 
And what we're wondering is: we already know that a single network is very different at the end than at the beginning of training." }, { "end": 1269.36, "start": 1263.68, "text": " What we want to know is: are two networks also very different, even though they're trained on the same" }, { "end": 1274.4, "start": 1269.36, "text": " objective? Just because they are at different places in the weight space doesn't mean they are" }, { "end": 1279.44, "start": 1274.4, "text": " functionally that different, there are symmetries. And it's going to turn out, yes, they actually are" }, { "end": 1289.1200000000001, "start": 1279.44, "text": " very, very different. So right here, you can see two different things. And we're going" }, { "end": 1296.96, "start": 1289.1200000000001, "text": " to read the plot along with it, just so I remember what I'm seeing here. So using two different" }, { "end": 1303.28, "start": 1296.96, "text": " architectures, okay, for each of these architectures, the left subplot shows the cosine similarity" }, { "end": 1307.76, "start": 1303.28, "text": " between the different solutions in weight space, and the right subplot shows the fraction of labels on" }, { "end": 1312.8799999999999, "start": 1307.76, "text": " which the predictions from different solutions disagree. Okay, so it's the same as before," }, { "end": 1319.04, "start": 1312.8799999999999, "text": " the left is the alignment. And now it's not during training. Now we restart independently," }, { "end": 1326.08, "start": 1319.04, "text": " we train the same network 10 different times. And after that, we're going to compare the 10" }, { "end": 1332.8799999999999, "start": 1326.08, "text": " different solutions. Remember, these all achieve roughly the same accuracy on the data sets. And" }, { "end": 1339.1200000000001, "start": 1332.88, "text": " this is the same whether you go to a big architecture like this ResNet 20, or to a small" }, { "end": 1345.92, "start": 1339.1200000000001, "text": " architecture like this small CNN right here. You can see that every single solution, of course," }, { "end": 1351.6000000000001, "start": 1345.92, "text": " agrees a lot with itself. That's the diagonal right here. But it's not aligned," }, { "end": 1357.1200000000001, "start": 1351.6000000000001, "text": " it's completely orthogonal to all the other solutions. So all the solutions in weight space" }, { "end": 1362, "start": 1357.1200000000001, "text": " are orthogonal. Now there's still the chance that there's, you know, some symmetry in weight space," }, { "end": 1371.04, "start": 1362, "text": " because, you know, if I have a neural network, I can just exchange the connections. And if I also" }, { "end": 1376.64, "start": 1371.04, "text": " exchange the neurons, then it will be the same function. However, you can see right here that" }, { "end": 1385.36, "start": 1376.64, "text": " they completely disagree. So this small CNN, remember, it had like a 65% accuracy; the solutions," }, { "end": 1393.28, "start": 1385.36, "text": " the red here, they disagree on 25% of the labels. So this is exactly the effect we saw before:" }, { "end": 1400, "start": 1393.28, "text": " we train one solution, and it will not be able to classify these parts of the validation data set."
}, { "end": 1405.36, "start": 1400, "text": " And we train the same network with the same data set with the same loss with everything the same," }, { "end": 1410.9599999999998, "start": 1405.36, "text": " again, just from a random initialization that's different, it will end up equally performing" }, { "end": 1416.64, "start": 1410.96, "text": " equally well, but it will make the mistakes and an entirely different set of the validation data" }, { "end": 1423.8400000000001, "start": 1416.64, "text": " points. Like this is rather astounding, I feel, because I think most people are of the of the idea" }, { "end": 1432.48, "start": 1423.8400000000001, "text": " that all the kind of data points have an intrinsic hardness. And if if we get to 70% accuracy," }, { "end": 1438.24, "start": 1432.48, "text": " it will always be the same 70% of data points that we miss classify or sorry, that we correctly" }, { "end": 1444.96, "start": 1438.24, "text": " classify. This is not the case. And this is one thing I think this paper and this line of research" }, { "end": 1453.52, "start": 1444.96, "text": " does pretty cool is to look at these networks in terms of their prediction agreement. So they go" }, { "end": 1461.28, "start": 1453.52, "text": " further and they compare this to four different methods. So they say, okay, ensembles, ensembles" }, { "end": 1467.36, "start": 1461.28, "text": " are one method of kind of doing these getting different solutions, which means we start from" }, { "end": 1474, "start": 1467.36, "text": " random initializations, but there are other ones. So for example, there is just place this correctly" }, { "end": 1480.7199999999998, "start": 1474, "text": " here, random subspace sampling. So what does it mean? They say we start at an optimized solution." }, { "end": 1487.12, "start": 1480.7199999999998, "text": " So you train a network one single network, and then we choose a random direction at V in weight" }, { "end": 1491.4399999999998, "start": 1487.12, "text": " space, we step in that direction by choosing different values of t looking at the predictions" }, { "end": 1498.0800000000002, "start": 1491.44, "text": " at configurations, theta zero plus TV, we repeat this from many different V, but always the same" }, { "end": 1505.52, "start": 1498.0800000000002, "text": " theta zero. So in our original kind of drawing of this thing, we optimize one single network," }, { "end": 1512.56, "start": 1506.48, "text": " let's say that's here. And then we sort of wiggle around in here, into different random" }, { "end": 1517.3600000000001, "start": 1512.56, "text": " directions. Now, of course, there's only one random direction right here. If maybe we can look at this" }, { "end": 1524.32, "start": 1517.36, "text": " at, at the so if here, we have the loss landscape, and maybe over here, there is a bit of a" }, { "end": 1529.12, "start": 1524.9599999999998, "text": " of a thing. And over here, there is a bit of a thing you thought I was going to draw." }, { "end": 1538.3999999999999, "start": 1531.9199999999998, "text": " Okay, so we start here, and maybe that converges to somewhere here. So what we're going to do is" }, { "end": 1543.4399999999998, "start": 1538.3999999999999, "text": " we're going to select random directions in that space. And we're just going to go a few steps into" }, { "end": 1549.76, "start": 1543.44, "text": " this direction and compare the weights. 
Now here, you can already see by the way I'm drawing it that" }, { "end": 1557.44, "start": 1549.76, "text": " this will probably make you stay in the same region. And our hope with ensembles, of course," }, { "end": 1564.4, "start": 1557.44, "text": " is that they are able to capture all of the three different modes right here. But it is a way to" }, { "end": 1569.8400000000001, "start": 1564.4, "text": " obtain different solutions that all also perform quite well: if you only perturb your solution by" }, { "end": 1576.72, "start": 1569.84, "text": " a little bit, it also works quite well. And you can build an ensemble out of these methods right" }, { "end": 1583.12, "start": 1576.72, "text": " here, you can build an ensemble out of these. In fact, these Bayesian methods, if you do these" }, { "end": 1589.04, "start": 1583.12, "text": " approximations with Gaussians, that's pretty much what they will end up doing: they will" }, { "end": 1594.9599999999998, "start": 1589.04, "text": " end up characterizing the local landscape around one of these minima. But here," }, { "end": 1600.32, "start": 1594.96, "text": " we simply do it by randomly stepping into a direction. That's the first method that we're" }, { "end": 1608.4, "start": 1600.32, "text": " going to investigate to obtain an ensemble of different solutions. So deep ensembles means we" }, { "end": 1615.6000000000001, "start": 1608.4, "text": " randomly initialize many times and then train each member from scratch." }, { "end": 1622.8, "start": 1616.48, "text": " This method here means we start from a solution and we simply perturb it into random" }, { "end": 1631.12, "start": 1622.8, "text": " directions. The next thing we can do is the dropout subspace: dropout. We again start at an" }, { "end": 1638.32, "start": 1631.12, "text": " optimized solution and apply dropout with randomly chosen probability. Again, our hypothesis is going" }, { "end": 1644.72, "start": 1638.32, "text": " to be that this is going to keep the network rather in the sort of same kind of functional mode" }, { "end": 1651.44, "start": 1644.72, "text": " and not switch over. Diagonal Gaussian subspace: again, we start from an optimized solution," }, { "end": 1657.1200000000001, "start": 1651.44, "text": " you can see the pattern. And here we actually do some sort of a Gaussian approximation to the" }, { "end": 1663.76, "start": 1657.1200000000001, "text": " functional space: we calculate a mean and a standard deviation, and we draw samples of" }, { "end": 1671.3600000000001, "start": 1663.76, "text": " the parameters from that distribution. And then the same in a low-rank regime. And here you see" }, { "end": 1677.04, "start": 1671.3600000000001, "text": " what happens. So here are these things, for example, the random subspaces." }, { "end": 1683.04, "start": 1677.04, "text": " Here is this overlapped with the plot we saw before. So here we have three different trajectories" }, { "end": 1689.2, "start": 1683.04, "text": " of runs. And then at the end of each trajectory, we take the best solution, and we do this random" }, { "end": 1696.8799999999999, "start": 1689.2, "text": " exploration around it. And this here is the t-SNE projection. Now this isn't, I have to" }, { "end": 1702.3999999999999, "start": 1696.8799999999999, "text": " say, this isn't the projection of the weights itself. Sorry, I did not say that before. 
This is a" }, { "end": 1711.3600000000001, "start": 1702.4, "text": " projection of the predictions, I believe of a subset of data points. So this is the prediction" }, { "end": 1719.44, "start": 1711.3600000000001, "text": " projection of that. And you can see that if we perturb the solutions like this, all of these" }, { "end": 1727.52, "start": 1719.44, "text": " solutions, all of these ensembles, rather stay in their basin of attraction, as you can see" }, { "end": 1734.48, "start": 1727.52, "text": " right here. So with a deep ensemble," }, { "end": 1741.44, "start": 1735.36, "text": " we would build an ensemble that combines this point, and this point, and this point. Whereas" }, { "end": 1748.4, "start": 1741.44, "text": " here, we will simply either build an ensemble that combines points in here, or we'll build an" }, { "end": 1753.52, "start": 1748.4, "text": " ensemble that combines points in here, and so on. And you'll see this for all the different methods" }, { "end": 1759.84, "start": 1753.52, "text": " that we consider here, especially the Gaussian methods. And that's a hint to why, even though" }, { "end": 1767.04, "start": 1759.84, "text": " Bayesian networks explicitly try to capture the entire distribution right here, what they'll end" }, { "end": 1774, "start": 1767.04, "text": " up doing is they'll simply end up capturing a single mode. And that's important, because the" }, { "end": 1781.44, "start": 1774, "text": " single mode is always functionally sort of equal. We saw that this is a training trajectory," }, { "end": 1786.8, "start": 1781.44, "text": " and at the end of training, after like this step right here, all the functions are pretty much the" }, { "end": 1794.4, "start": 1786.8, "text": " same, right? They pretty much agree with the end optimum. Whereas between the runs, these functions" }, { "end": 1799.44, "start": 1794.4, "text": " completely disagree with each other. So it is important, if we want to build an ensemble, to" }, { "end": 1808.3200000000002, "start": 1799.44, "text": " capture as many of these modes as possible, and only the deep ensembles can do that so far. So this" }, { "end": 1815.12, "start": 1808.32, "text": " is another experiment where they show this loss landscape. And I really like these kinds of plots." }, { "end": 1822.24, "start": 1815.12, "text": " So what you see here is a plane, it's a 2D plane. And the 2D plane is described by three points." }, { "end": 1828.32, "start": 1822.24, "text": " So one point is the origin, you see right here the origin, that's the zero in" }, { "end": 1836.96, "start": 1828.32, "text": " weight space. Okay, then what you have are the two optima. So you run an optimization, and then you" }, { "end": 1844.64, "start": 1836.96, "text": " run an optimization two different times. Once it's initialized here, and it runs to here. And once" }, { "end": 1852, "start": 1844.64, "text": " it's initialized here, and it runs to here. Okay, so that defines the plane that we're going to look" }, { "end": 1858.4, "start": 1852, "text": " at. Now, for each single pixel in this plane, or actually for each single pixel in this half" }, { "end": 1867.0400000000002, "start": 1858.4, "text": " circle right here, they evaluated the networks. 
So what you'll do is simply do a linear combination" }, { "end": 1872.72, "start": 1867.0400000000002, "text": " of these weights: of the weight vector here at this optimum and the weight vector here at this optimum." }, { "end": 1880.64, "start": 1873.2800000000002, "text": " And each point here defines a neural network with those weights. And you can evaluate" }, { "end": 1886.88, "start": 1880.64, "text": " it. And that's what you get. This is the accuracy of the neural network at that point. So here you" }, { "end": 1895.6000000000001, "start": 1886.88, "text": " can see very, very clearly that there are these two different modes right here. So even though" }, { "end": 1899.92, "start": 1895.6000000000001, "text": " they're initialized super close to each other, right, you can see this right here, they're" }, { "end": 1905.2800000000002, "start": 1899.92, "text": " initialized super close to each other, because this is the flat area right here that" }, { "end": 1913.1200000000001, "start": 1905.2800000000002, "text": " we saw before, because they are in the flat area. Even though they're initialized pretty" }, { "end": 1918.7199999999998, "start": 1913.12, "text": " close, the red one is a little bit more towards this basin of attraction, the blue one is a little bit" }, { "end": 1923.4399999999998, "start": 1918.7199999999998, "text": " more towards this basin of attraction. So they move over, and as soon as they're in, it's like boom," }, { "end": 1928.7199999999998, "start": 1923.4399999999998, "text": " they go to the minimum of that basin. And this area is rather convex. And this area is rather" }, { "end": 1936.9599999999998, "start": 1928.7199999999998, "text": " convex. And in the middle, you can see, there is more loss. So no solution will go there. That's" }, { "end": 1941.1999999999998, "start": 1936.9599999999998, "text": " how you get these different minima. That's how you get these different modes. And you can see the" }, { "end": 1948.88, "start": 1941.2, "text": " accuracy, from the color, is going to be the same in each of the valleys," }, { "end": 1957.68, "start": 1949.44, "text": " consistent with what we know so far. Now here, the pink stripe is a Gaussian exploration." }, { "end": 1963.2, "start": 1957.68, "text": " So if you now do a Gaussian perturbation, a Gaussian exploration, around this minimum," }, { "end": 1968.32, "start": 1963.2, "text": " you can basically see again, you don't get out of this valley, you're not going to go" }, { "end": 1975.12, "start": 1968.32, "text": " to different modes, the weight space is just too large, and you're going to simply be stuck in there." }, { "end": 1981.04, "start": 1975.84, "text": " So almost the only chance you have is to initialize again, and hope that you end up in a different" }, { "end": 1988.48, "start": 1981.04, "text": " place. And I guess my hypothesis is that there are many, many more of these valleys, of these basins," }, { "end": 1994.32, "start": 1988.48, "text": " than you could ever capture. So basically, every single initialization that is" }, { "end": 2002.3999999999999, "start": 1994.32, "text": " different will lead you to a different one of these basins. I guess it's only a matter of their size." }, { "end": 2010.1599999999999, "start": 2003.76, "text": " So here again, they do a function similarity. So in this case, it's the function similarity" }, { "end": 2018.6399999999999, "start": 2010.1599999999999, "text": " to optimum one. 
And this is again how many of the labels agree with the optimum right here." }, { "end": 2024.48, "start": 2018.64, "text": " And you can see that within this basin of attraction, you have fairly high overlap of the functional" }, { "end": 2032.4, "start": 2024.48, "text": " similarity. But here, none, right? So 15% or so; it's not going to be zero, because they're going" }, { "end": 2038.3200000000002, "start": 2032.4, "text": " to agree on like some of the examples, I guess there's still something like intrinsic hardness," }, { "end": 2046.96, "start": 2038.3200000000002, "text": " but they agree on almost none of the labels, at least I guess, if you normalize by their base" }, { "end": 2057.84, "start": 2046.96, "text": " accuracy. So even though optimum two is performing as well, it is functionally extremely dissimilar" }, { "end": 2064.32, "start": 2057.84, "text": " to optimum one. So these describe really different functions. And I really don't know what to make of" }, { "end": 2072.16, "start": 2064.32, "text": " this other than, you know, each one of these is maybe sort of deciding to look at different" }, { "end": 2078.72, "start": 2072.16, "text": " features in the data set, right? Or to maybe build different high-level features from" }, { "end": 2086.3999999999996, "start": 2078.72, "text": " the same low-level features. And maybe we're still under-parameterizing these models, because not a" }, { "end": 2092.08, "start": 2086.3999999999996, "text": " single model can sort of look at both features at the same time, as evidenced by the fact that" }, { "end": 2098.96, "start": 2092.7999999999997, "text": " each of these is always going to one of these things. Or it could be that, in fact, the task" }, { "end": 2106.8, "start": 2098.96, "text": " is way too simple, and it can be solved in like 500 different ways. And each of these" }, { "end": 2112.2400000000002, "start": 2106.8, "text": " optima is simply one way of solving the task, one way of combining features. And it's actually" }, { "end": 2117.6, "start": 2112.2400000000002, "text": " a completely over-specified problem. That's another hypothesis; it would be, I guess," }, { "end": 2122.48, "start": 2117.6, "text": " interesting to look at these things. And I'm sure there's work on this. So you can see the same thing" }, { "end": 2134.08, "start": 2122.48, "text": " for optimum two right here, where you can see that optimum one agrees" }, { "end": 2140.4, "start": 2134.64, "text": " almost nothing with optimum two, right? It doesn't agree. There's not even a" }, { "end": 2146.56, "start": 2140.4, "text": " hint of a valley right here in terms of functional similarity. That is very, very" }, { "end": 2153.2, "start": 2146.56, "text": " interesting. So it really means that these two things describe two different functions." }, { "end": 2160.4, "start": 2153.92, "text": " They do these other plots right here that they call diversity versus accuracy plots. So" }, { "end": 2168.48, "start": 2161.68, "text": " what they'll do is they are going to have different models, and they're going to look at them" }, { "end": 2176.96, "start": 2168.48, "text": " in terms of their diversity and their accuracy. So here, the y axis is going to be how different" }, { "end": 2183.6, "start": 2176.96, "text": " these functions are, and that's going to be, again, in fraction of labels changed, normalized by the" }, { "end": 2190.72, "start": 2183.6, "text": " base accuracy. 
So here you can see, we always start from this baseline optimum; this baseline" }, { "end": 2200.3199999999997, "start": 2190.72, "text": " optimum has zero diversity, because it agrees with itself on all the different labels," }, { "end": 2207.04, "start": 2200.3199999999997, "text": " of course. And then we're going to disturb that using our four methods that we defined before." }, { "end": 2213.68, "start": 2207.04, "text": " So we're going to do the random subspace, we're going to drop out, we're going to Gaussian perturb it," }, { "end": 2218.9599999999996, "start": 2213.68, "text": " and so on. And the more we perturb it, the more diverse our function is going to be, naturally," }, { "end": 2224.4, "start": 2218.96, "text": " right? Because we perturb the function, it starts disagreeing more and more and more with our" }, { "end": 2231.84, "start": 2224.4, "text": " original optimum. However, what we also expect is, you know, if we are in the local optimum," }, { "end": 2239.52, "start": 2231.84, "text": " and maybe here, and you know, maybe the validation accuracy is sort of beside it, or a little bit" }, { "end": 2245.2, "start": 2239.52, "text": " larger, and so on. So if we perturb it a little bit, you know, that might not do too much to our" }, { "end": 2250.7999999999997, "start": 2245.2, "text": " accuracy. But if we perturb it a lot, you know, we actually go up the loss landscape, and then we get" }, { "end": 2258, "start": 2250.7999999999997, "text": " less accuracy also on the validation set. So that's what you see right here in this" }, { "end": 2264.96, "start": 2258, "text": " curve right here: as you make the function more diverse, so you perturb it a little bit, you see" }, { "end": 2270.96, "start": 2264.96, "text": " that your accuracy doesn't suffer too much, you kind of stay at the same accuracy. But if you" }, { "end": 2277.52, "start": 2270.96, "text": " make it more and more and more diverse, you can see that the accuracy suffers, until a diversity" }, { "end": 2284.2400000000002, "start": 2277.52, "text": " of one basically means that you disagree on the maximum number of labels that you can. And so" }, { "end": 2291.68, "start": 2284.2400000000002, "text": " you're sort of out of this valley right here. And also, you can see that your accuracy" }, { "end": 2300.56, "start": 2291.68, "text": " goes to zero. So the more these functions disagree, the less their accuracy is; that seems natural." }, { "end": 2307.6, "start": 2300.56, "text": " However, you can see that these red stars right here, they seem to be also very different, they" }, { "end": 2313.6, "start": 2307.6, "text": " seem to not agree with the original baseline optimum, but they seem to be doing perfectly fine" }, { "end": 2320.4, "start": 2313.6, "text": " in terms of accuracy; they're at the same accuracy. And those are the independent optima, those are" }, { "end": 2326.08, "start": 2320.4, "text": " the optima from runs where we initialized at a different point and then also trained." }, { "end": 2332.72, "start": 2326.08, "text": " And that, again, is evidence for the fact that there are probably other optima far away," }, { "end": 2340.24, "start": 2333.68, "text": " which these different initializations find. So they are very different in terms of" }, { "end": 2345.7599999999998, "start": 2340.24, "text": " functional space, they're quite far apart, they predict different things. 
However, in terms of" }, { "end": 2354.7999999999997, "start": 2345.7599999999998, "text": " loss, they're almost the same, or actually the same. So these are very, very cool experiments" }, { "end": 2361.92, "start": 2354.8, "text": " right here. And they do this for different architectures. And you can see that," }, { "end": 2367.2000000000003, "start": 2361.92, "text": " especially for the larger architectures, this actually happens in a more pronounced way. And they also make the" }, { "end": 2374.32, "start": 2367.2000000000003, "text": " point of saying, if you go to harder problems like CIFAR-100 or ImageNet, this effect is" }, { "end": 2383.1200000000003, "start": 2374.32, "text": " more pronounced, as you can see: these here are closer together as a curve, and these" }, { "end": 2391.04, "start": 2383.12, "text": " are the independent optima. So I hope you're already on board" }, { "end": 2398.08, "start": 2392.16, "text": " and still know why we're doing these things. We're doing these things because we want to build some" }, { "end": 2404.16, "start": 2398.08, "text": " sort of ensemble that captures the distribution of solutions in order to generalize better." }, { "end": 2412.72, "start": 2404.16, "text": " Now we have two options: either we start kind of from an optimum and characterize the space" }, { "end": 2419.52, "start": 2412.72, "text": " around that optimum, which is what these methods do right here. And the Bayesian methods" }, { "end": 2424, "start": 2419.52, "text": " also do this, even though they don't want to, because they do these approximations, because they're" }, { "end": 2433.52, "start": 2424, "text": " intractable, they're going to end up doing this. Or our other option is to restart" }, { "end": 2440.88, "start": 2433.52, "text": " training a bunch of times. And then we end up at different optima. And the point of the paper is" }, { "end": 2449.6, "start": 2440.88, "text": " that it's better to do that than to build the ensemble out of these Gaussian methods or of these" }, { "end": 2455.44, "start": 2449.6, "text": " perturbation methods. And I guess the main claim of the paper is why that happens. And" }, { "end": 2460.96, "start": 2455.44, "text": " it happens because the ensemble members obtain different minima that are functionally different." }, { "end": 2471.2, "start": 2460.96, "text": " Okay, so exactly, that's what they do here. So they now build ensembles out of these different things." }, { "end": 2480, "start": 2471.2, "text": " And you can see that here on the x axis, you have the ensemble size. So how many ensemble members" }, { "end": 2485.92, "start": 2480, "text": " do you have? And the dashed lines here are the baseline accuracies if you just have a single model." }, { "end": 2496.88, "start": 2485.92, "text": " And the test accuracy is plotted on the y axis. Now, actually, that's not correct:" }, { "end": 2502.88, "start": 2496.88, "text": " you always build the ensemble out of random initializations, but on top of that, you do these" }, { "end": 2510.96, "start": 2502.88, "text": " things. So what you can see right here is, if I have this classifier, which is my original classifier," }, { "end": 2521.2, "start": 2510.96, "text": " and I add on top of that this PCA Gaussian, you know, perturbation stuff, I increase in accuracy."
}, { "end": 2529.92, "start": 2522.08, "text": " However, if I build an ensemble, I increase in accuracy, if I build an ensemble out of 10 members," }, { "end": 2538.08, "start": 2529.92, "text": " I increase in accuracy this much. And then if, because I can do both things. So if I increase" }, { "end": 2545.12, "start": 2538.08, "text": " if I do an ensemble, and then on top of that to the PCA Gaussian, I gain another this much right here." }, { "end": 2551.84, "start": 2545.12, "text": " So that's sort of evidence for the fact that you'd rather build an ensemble than do this," }, { "end": 2562.08, "start": 2551.84, "text": " these other methods of, of approximating the Bayesian posterior of weights. So yes, I'm," }, { "end": 2569.68, "start": 2562.08, "text": " I'm sort of convinced. I hope you are too. And they do a lot of they do some more experiments" }, { "end": 2575.68, "start": 2569.68, "text": " right here where you can see that the difference between so this is single model, sorry, this is," }, { "end": 2585.2, "start": 2577.12, "text": " I guess, here, accuracy, oh, yeah, if you this is the out of distribution test, so you can take a" }, { "end": 2591.04, "start": 2585.2, "text": " data set, and you can corrupt it by corruption. So there are predefined data sets, but you can also" }, { "end": 2599.12, "start": 2591.04, "text": " do it yourself, you crop it, you can do luminosity, whatever, you can destroy parts of the image," }, { "end": 2607.8399999999997, "start": 2599.12, "text": " you can see that having more ensemble members, so this is your original models, here is how they" }, { "end": 2614.6400000000003, "start": 2607.84, "text": " sync with increasing corruption. It almost doesn't matter which ones of these methods you do, you see" }, { "end": 2619.28, "start": 2614.6400000000003, "text": " the bottom one is the original model, and you gain a little bit by doing these things, but not" }, { "end": 2625.36, "start": 2619.28, "text": " nearly as much by building an ensemble and going here, or actually an ensemble of two members or" }, { "end": 2632.7200000000003, "start": 2625.36, "text": " five members, in which case you jump this much in accuracy. So these ensembles from different" }, { "end": 2642.8799999999997, "start": 2632.72, "text": " initializations are also very, very, are also very good at countering corruption, which you see also" }, { "end": 2653.3599999999997, "start": 2642.8799999999997, "text": " here. Yeah, so this is the JS divergence, okay, I've read that, but let's not go here, videos" }, { "end": 2659.2, "start": 2653.3599999999997, "text": " already too long. And this is the last thing is on ImageNet test set and the ImageNet" }, { "end": 2665.04, "start": 2659.2, "text": " corrupted set, where they pretty much show the same thing. 
It's not as pronounced here, but you" }, { "end": 2673.68, "start": 2665.04, "text": " can see pretty much how the different, if you go from single model to ensemble with two members to" }, { "end": 2679.4399999999996, "start": 2673.68, "text": " ensemble with four members, there is a general upwards trend, and the general upwards trend is" }, { "end": 2685.4399999999996, "start": 2679.4399999999996, "text": " much less pronounced within each ensemble, so if you go just go from method to method, then it" }, { "end": 2692.4, "start": 2685.44, "text": " is between the different groups of ensembles, meaning that the ensemble is a much more pronounced" }, { "end": 2702.16, "start": 2692.4, "text": " effect that these other effects. So I hope I have convinced you a little bit of how these subspaces" }, { "end": 2707.36, "start": 2702.16, "text": " look like, how the loss landscape of neural networks look like, especially the fact that there" }, { "end": 2713.36, "start": 2707.36, "text": " are these different minima, and the random initializations of these different groups of" }, { "end": 2718.96, "start": 2713.36, "text": " models, and the random initializations will almost always hit these different minima. And the" }, { "end": 2723.76, "start": 2718.96, "text": " interesting part is that even though these different minima perform equally well, they are" }, { "end": 2730.48, "start": 2723.76, "text": " functionally very different. And an ensemble of differently initialized and independently" }, { "end": 2736.96, "start": 2730.48, "text": " optimized models can actually capture these different modes of the functional space. And" }, { "end": 2742.1600000000003, "start": 2736.96, "text": " therefore, if you build an ensemble out of that, it will generalize better, because it kind of can" }, { "end": 2747.68, "start": 2742.16, "text": " draw information from all of those different modes, rather than if you do some sort of Bayesian" }, { "end": 2753.7599999999998, "start": 2747.68, "text": " network, which will, because you have to approximate usually with Gaussians, will end up only covering" }, { "end": 2763.52, "start": 2753.7599999999998, "text": " one of these modes. That is sort of a good summary of what this paper says. Again, I enjoy" }, { "end": 2771.44, "start": 2764.3199999999997, "text": " research like this, because it's easy and it gives, it kind of makes you think, right? So I'll be" }, { "end": 2776.2400000000002, "start": 2771.44, "text": " thinking about these things for a while now and thinking of new kind of experiments that one could" }, { "end": 2782, "start": 2776.2400000000002, "text": " do. And yeah, as I said, this research is still wide open. We don't know so many things about" }, { "end": 2787.76, "start": 2782, "text": " neural network. And you know, tell me, tell me what you think is going on, actually, that that would" }, { "end": 2801.84, "start": 2787.76, "text": " be very interesting. And yeah, I'll see you next time. Bye bye." } ]
v-ZxzTSpmk4
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Gradient Origin Networks (Paper Explained w/ Live Coding)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "gon", "gradient", "negative gradient", "implicit", "implicit representation", "siren", "sirens", "deep neural networks", "convolutional neural network", "dnns", "mnist", "cifar10", "fashion mnist", "gradient descent", "sgd", "inner loop", "backpropagation", "live code", "code", "machine learning code", "research", "research paper" ]
Neural networks for implicit representations, such as SIRENs, have been very successful at modeling natural signals. However, in the classical approach, each data point requires its own neural network to be fit. This paper extends implicit representations to an entire dataset by introducing latent vectors of data points to SIRENs. Interestingly, the paper shows that such latent vectors can be obtained without the need for an explicit encoder, by simply looking at the negative gradient of the zero-vector through the representation function. OUTLINE: 0:00 - Intro & Overview 2:10 - Implicit Generative Models 5:30 - Implicitly Represent a Dataset 11:00 - Gradient Origin Networks 23:55 - Relation to Gradient Descent 28:05 - Messing with their Code 37:40 - Implicit Encoders 38:50 - Using GONs as classifiers 40:55 - Experiments & Conclusion Paper: https://arxiv.org/abs/2007.02798 Code: https://github.com/cwkx/GON Project Page: https://cwkx.github.io/data/GON/ My Video on SIREN: https://youtu.be/Q5g3p9Zwjrk Abstract: This paper proposes a new type of implicit generative model that is able to quickly learn a latent representation without an explicit encoder. This is achieved with an implicit neural network that takes as inputs points in the coordinate space alongside a latent vector initialised with zeros. The gradients of the data fitting loss with respect to this zero vector are jointly optimised to act as latent points that capture the data manifold. The results show similar characteristics to autoencoders, but with fewer parameters and the advantages of implicit representation networks. Authors: Sam Bond-Taylor, Chris G. Willcocks Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://discord.gg/4H8xxDF BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher Parler: https://parler.com/profile/YannicKilcher
Hi there, today we'll look at Gradient Origin Networks by Sam Bond-Taylor and Chris G. Willcocks of Durham University. So on a high level, this paper trains implicit representation networks, not on single data points, but on an entire data set. It does so by using a latent encoding of each data point. And it doesn't obtain that encoding through an explicit encoder, but by simply looking at the gradient of the latent variable when initialized at the origin. So it's a bit of a weird formulation, and I've seen this paper upvoted on Reddit, and the top comments would always say like, I don't really get it, I don't really get it. And I thought, you know, maybe I'm completely wrong, but I can just give my opinion of kind of what's going on in this paper. Now, a lot of people on Reddit did say, I don't really get it, but here is what I think is going on, and then they'd list something, and that's where I stopped reading, so as to form my own opinion. I like to kind of understand papers by myself. So again, maybe I'm completely wrong, but here is my opinion. If you like opinions, hit the like button and subscribe if you aren't yet. And yeah, share this video out, maybe that helps someone else understand. So this paper is a very short paper, it is four pages. And it's a dense paper; it definitely can warrant making a longer paper out of it. That being said, it's an arXiv paper for now. So you know, there's nothing wrong with putting kind of unfinished work on arXiv. But we're just going to look at it and try to understand it. Okay. So the abstract says: this paper proposes a new type of implicit generative model that is able to quickly learn a latent representation without an explicit encoder. So for that, you need to know what an implicit generative model is. And I've covered one type of implicit generative models, specifically the type that they're using here, which are called SIRENs. So SIRENs are implicit representation networks, and I've made a video about SIRENs, so if you don't know what that is, go look it up. But very quickly, a SIREN is a neural network that represents a single data point. So each data point in a data set is represented by its own neural network. And the neural network, so this might be a bit foreign to you, but usually you have some kind of image, right? And it's simply represented as an array of RGB values, right? It's simply an array of, this is like one, zero, point five, and so on. So all the pixels are in this array. This is the explicit representation of that data point. Now, this here is a long list, and it has some regularities to it. So that's why you can also think of an implicit representation of the data point. The implicit representation works as follows. You imagine again your image; your image is made up of pixels, and these pixels are at x and y coordinates. So this pixel right here would be zero, zero. This pixel right here would be zero, one, and so on. A SIREN, or generally an implicit representation network, is a network that takes in any x and y coordinate as the input. So the input itself is the numerical x and y coordinate of that picture, and it passes it through a neural network, and out comes the RGB value. OK, so an entire picture is represented by this neural network. The neural network maps each coordinate to its RGB value. And here you can see that a single picture can become an entire data set for this neural network.
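As a rough sketch of such an implicit representation, here is a tiny sine-activated coordinate network in the spirit of a SIREN; this is illustration code of my own, and it omits the specific weight initialization scheme that the SIREN paper prescribes for the sine activations:

```python
import torch
import torch.nn as nn

class TinySiren(nn.Module):
    # Maps an (x, y) coordinate to an (r, g, b) value; one network per image.
    def __init__(self, hidden=128, omega=30.0):
        super().__init__()
        self.omega = omega  # frequency scale of the sine nonlinearities
        self.l1 = nn.Linear(2, hidden)
        self.l2 = nn.Linear(hidden, hidden)
        self.l3 = nn.Linear(hidden, 3)

    def forward(self, coords):              # coords: (N, 2), scaled to [-1, 1]
        h = torch.sin(self.omega * self.l1(coords))
        h = torch.sin(self.omega * self.l2(h))
        return self.l3(h)                   # (N, 3) RGB values

def fit_single_image(model, coords, rgb, steps=500, lr=1e-4):
    # coords: (H*W, 2) pixel positions, rgb: (H*W, 3) target colors of one image.
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(steps):
        loss = ((model(coords) - rgb) ** 2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return loss.item()
```

Fitting this network to one picture treats that picture as a whole data set of coordinate-color pairs, one pair per pixel.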
In fact, it has to, because for a different picture, of course, there is a different mapping from x and y coordinates to RGB values. But this allows you to do multiple things. So first of all, this neural network can be smaller than the explicit representation. Second of all, it can capture some regularity in the data. Usually SIRENs have sine waves as nonlinearities in the neural network here, which is also a bit special, but lends itself very well to capturing natural signals, because natural signals often repeat at different scales and in derivatives of themselves and so on. I've covered this all in my video. And also this allows you to have a continuous representation rather than a discrete representation. Like here, you just have each pixel; now you have a continuous representation. All right, so these are implicit representation models, or implicit generative models, these neural networks right here that map from coordinates to colors. Now the problem with this is, as we said, that you need one neural network per data point. Now, the idea that these people here go with is: can't we do kind of the same thing, except instead of having one neural network per data point, we have the same neural network for the entire data set? So again, they want to have a neural network that somehow outputs RGB values. But now it's not for a single image. Now we have a data set. Okay, and the data set has many images, like this is image i, this is image j, this is image k. So what we could do is we could simply tell the neural network the x and y coordinate where we would like the RGB values, you know. And we could also tell it which image it is, right, k or i or j. And this will give us a neural network right here that can represent the entire data set, because it can always see, ah, of image j I want these and these x y coordinates. It doesn't help you very much though, because it still has to learn for each image individually how to encode it, how to produce it. What's much more interesting is if you kind of mix this with the old-style generative models. So in old-style generative models, let's consider for example an autoencoder. So in an autoencoder, what you would do is you would take your image and you would put it through an encoder, and this encoder will give you a latent variable z. And then you would put it through a decoder again, and that would give you an image. So your generative model now is this part right here, and this z variable is your latent encoding of this data point. Now, if you train these models correctly, be this an autoencoder or a variational autoencoder, or the green part can actually just be a GAN, right? If you train this correctly, then this z right here will be sort of a latent encoding of the information in the image itself. Okay. And that can generalize. So now I can input a picture that the model has never seen during training, and the encoder will map it to a latent representation that sort of makes sense, that is able to reconstruct the image that I've put in. Okay, so your hope with these latent representations is that there is some kind of data manifold hidden somewhere in the entire space of parameters. And as long as you're on that data manifold, you will produce a sensible data point. And this is kind of continuous and so on.
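To make the earlier "tell it which image it is" idea concrete, a hypothetical way to condition the implicit network above on a per-image latent code z is to concatenate z to every coordinate, so that one set of weights serves the whole data set; the first layer then takes 2 + latent_dim inputs instead of 2:

```python
import torch

def conditioned_input(coords, z):
    # coords: (N, 2) pixel positions of one image; z: (latent_dim,) code of that image.
    z_tiled = z.unsqueeze(0).expand(coords.size(0), -1)  # repeat z for every pixel
    return torch.cat([coords, z_tiled], dim=1)           # (N, 2 + latent_dim)
```

This z plays exactly the role of the autoencoder's latent variable just described: a per-image code that, ideally, lives on the data manifold.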
So even though you've only seen a few during training, if you have a new one during testing, then it will sort of be mapped to a correct place on the data manifold and it will produce a data point again. And you've seen this, right, you've seen these interpolations in GANs where you can interpolate in latent space, and so on. The problem here is that, you know, in GANs we sample these things right here, so that's a different story. But in VAEs, or in autoencoders, we need this encoder to obtain a latent representation for a given data point. In GANs, if we have an image, there is no way to obtain the corresponding z variable if we don't have an encoder, right? And that's the problem we're tackling right here. So here, what we want to do is we want to give the x and y, and we want to give the z; we say we have some way of obtaining a latent representation of the image right here. And from that, we want to generate the RGB values. Now the question is: how do we obtain the z variable without having access to an encoder? And that's the problem of this paper, and this paper proposes a solution. So they say: this is achieved with an implicit neural network that takes as inputs points in the coordinate space, alongside a latent vector initialized with zero. So that's the model that we saw. That's this right here, sorry about that. It takes in the coordinates, this is the coordinates, and it takes in the latent vector z. Now, this whole point with it being initialized at zeros, we'll get to that in one second. The fact right now is just that the implicit neural network also takes in the identity of the image. So each image is always going to have the same z. And then we sort of say which x and y coordinate of that image we want. So the z is per image, and then each image has all the x and y coordinates of, you know, itself. So yeah, I think you can follow. They go on, they say: the gradients of the data fitting loss with respect to this zero vector are jointly optimized to act as latent points that capture the data manifold. So this is where I already got lost reading the first time through. The results show similar characteristics to autoencoders, but with fewer parameters and the advantages of implicit representation networks. Okay, so we'll actually jump to this right here. So this is the comparison between a variational autoencoder and the gradient origin network. So in a variational autoencoder, what you would do is you would have this explicit encoder right here, as we said. And in the variational autoencoder, you don't obtain the latent representation directly; you actually obtain the distribution, in terms of the mean and standard deviation, of the latent representation. And then you sample from that distribution to obtain that latent representation. I think the point here is simply to show that, first of all, you do need an encoder, which you do need to train, and second of all, it's kind of a complicated process to get that latent representation for the data point x. And then you need the decoder that generates an image. And then you have the loss right here that compares the two, and that is used to train the encoder and the decoder.
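As a minimal sketch of what that complicated process looks like (hypothetical layer sizes, not the paper's baseline), a VAE-style encoder is itself a full network whose only job is to produce z:

```python
import torch
import torch.nn as nn

class VAEEncoder(nn.Module):
    # Explicit encoder: predicts a distribution over z, then samples from it.
    def __init__(self, data_dim=784, latent_dim=32):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(data_dim, 256), nn.ReLU())
        self.mean = nn.Linear(256, latent_dim)
        self.log_std = nn.Linear(256, latent_dim)

    def forward(self, x):
        h = self.body(x)
        mu, log_std = self.mean(h), self.log_std(h)
        eps = torch.randn_like(mu)
        return mu + eps * log_std.exp()  # reparameterized sample of z
```

Gradient origin networks get rid of this whole module, which is where the parameter savings mentioned in the abstract come from.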
Whereas in the gradient origin networks, what you do is: you basically have a function f, and the function f, it's a bit weird right here, uses two things. So this here is that z, which is termed z zero here, but in fact it's the latent representation of the image, which is derived from the image itself. And, I guess you can see here, you input this x; the z is derived from the image itself in some way that doesn't require parameters, that is not learned. And it also takes in these coordinates, and it produces that image. Now let's disentangle two things right here; what we're going to see is equally applicable to non-implicit neural networks. So for the rest of this paper, I'm not saying it's going to work as well, maybe it's going to work specifically well with implicit neural networks, but we need to differentiate these two things. So the first thing is explicit versus implicit. Okay, we're simply going to view these as functions that take a z and give you an x. This is most notably the explicit version; the implicit version is simply that we're going to take a z along with all the x and y of the image, and we're going to obtain the R, G and B values of all the pixels, right, which is equal to the x. So this entire set of RGB values is equal to the x, and we input the entire set right here. So essentially, it's simply a function that takes in a latent representation of an image and gives you back an image. The second thing, which is an entirely different thing in my opinion, is: how do we obtain a z from an x? So if we have an image, how do we obtain the corresponding latent representation, such that this function right here, the function that gives you the x from the z, will reproduce the x? Okay. So how do we obtain the correct latent representation for any input data point? These are two different things, and I think they're not dependent on each other, except, as I said, they might work especially well together or something like this. All right. So this becomes a lot easier right now in this formula. So this is the thing ultimately that they optimize. They optimize this thing, and it's introduced like, I don't know why they limited themselves to four pages here. And again, this is work in progress, as I understand it. But it's like cold water: it's like, you know, an expressive neural network can be trained in this space to mimic this by minimizing the gradient origin network loss function. That's it, that's what you get. And then you get the loss thrown in your face. But let's deconstruct it. So this G thing right here, what is it? This is the loss that you minimize. Okay, you can see that this is simply an integral of this loss function over your entire coordinate space. So c here is the entire coordinate space. So this is for a given image, right, for a given image x; you would minimize this actually across your entire data set. So you would minimize the parameters of f; f here is going to be your generator neural network, your SIREN, whatever. You minimize over the parameters of f across your entire data set. Okay, so this is your standard loss function, and this is a sum across your entire data set. Cool. So what are you going to minimize? Each data point consists of an integral over the coordinate space c of this loss function right here.
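Reconstructing the objective from this description in my own notation (c ranges over coordinates, x is the signal, F the implicit network, L the squared error; this is my reading, not a verbatim copy of the paper's equation):

```latex
G = \int_{c} \mathcal{L}\big( x(c),\, F(c, z) \big)\, dc,
\qquad
z = -\nabla_{z_0} \int_{c} \mathcal{L}\big( x(c),\, F(c, z_0) \big)\, dc,
\qquad
z_0 = \mathbf{0}.
```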
Now, the integral is simply due to the fact that this is an implicit representation. If this were an explicit representation, it would simply be the loss function of that data point, so don't be scared by the integral. I'm usually scared by integrals, I never get them, and then people ask me things like, do you mean a Riemann integral or a Lebesgue integral, and I'm like, okay. But in this case it simply means you take the loss at each of the coordinates and sum them up, which is the same as the normal loss function with respect to a data point. This x right here is the data point itself — as you can see, this is your natural signal, the function you don't know, the true image function that maps the coordinates to RGB space. In the case of an explicit representation, this is simply x, and you can forget about the integral. Cool. So we have a loss between x and whatever this thing on the right is. What loss function? The one they use in this particular paper is the L2 loss, so this is simply the reconstruction loss between a data point and its reconstruction. The part on the right is what produces the reconstruction. Our f here is going to be our siren, our neural network — one of these functions, explicit or implicit, that takes in a z and gives you x-hat, the reconstruction. Now the question is, what does f take in? Two things. First, the coordinates, concatenated with the thing on the right. And remember, we said that instead of giving just x and y to the implicit representation, we now give x, y and z, where z is the latent vector of the image we're trying to reconstruct. If we view this as a non-implicit method, we can simply leave the coordinates away — just as we leave away the x and y coordinates in a GAN or a VAE — and give it only the thing on the right. Again, we're trying to disentangle the implicit generator from how we obtain the z, so the coordinates are not the important part. What remains is this quantity right here, which must be our z for the image. So what is this thing? I'm running slowly out of colors. This thing is going to be the negative gradient of something. Again you have the integral of the loss function, again this is x, and again we can leave the integral away — you start to see a repetitive structure. It is the gradient of your loss function, where again there is x and there is f of z-zero. So inside the gradient there is also an x-hat, but a special one — let's call it x-hat-zero — because the input is not z but z-zero. Okay, this is kind of a complicated thing, so let me explain what's going on, maybe in a drawing. What you want to do is start with z-zero, which is an initial guess of your latent representation. You make it without even looking at the image, at the data point — you simply start with one. There are multiple ways to do this, and this paper simply says z-zero is going to be the constant value zero.
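Written out, the loss being deconstructed here looks roughly like this. This is my own reconstruction from the description — I'm writing the true signal at coordinate c as x(c) and the coordinate space as C:

```latex
G \;=\; \int_{\mathcal{C}} L\big(x(c),\, F(c,\, z)\big)\, dc,
\qquad\text{where}\qquad
z \;=\; -\,\nabla_{z_0} \int_{\mathcal{C}} L\big(x(c),\, F(c,\, z_0)\big)\, dc,
\qquad z_0 = \mathbf{0}.
```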
That's why it's called gradient origin networks: your initial guess z-zero of the latent representation is always the origin. Okay. Then you use f, your neural network, to obtain a first estimate of what your image could look like. Again, you have not looked at the image — you simply take z-zero and produce an image. Then you somehow obtain a better representation z, use f again to obtain x-hat, and then compare that x-hat to your x, which gives you the loss that you backpropagate. Two things here. First, you can see you use f twice, which means that when you backpropagate the loss, it must somehow flow into both of these applications. Second, what is this thing right here — how are we going to obtain a better z? By looking at the gradient. You've seen that we take the gradient with respect to z-zero of the loss between x and f of z-zero, and the negative of that gradient becomes your z: z equals minus that. What does it mean? You've tried to produce an image, this is the real image you want to get, and the loss measures how far apart you are from it. How would you need to change your initial guess in order to make that loss go down? The negative sign is there to make the loss go down, because otherwise you would move in the direction that makes it go up. So it simply says: how do you need to change your z-zero in order to decrease the loss, in order to get a better z for representing this particular image. And here is where I kind of disagree with the paper, because the paper says that in a single step this gives you the correct z, or something like it. They write that with respect to the origin, a latent vector that minimizes the reconstruction loss is obtained in a single step, thereby playing a similar role to an explicit encoder. This is sort of true — it is kind of like an encoder, right? You simply ask which z you would need to put in to make the latent representation a better one for this particular image x. However, what is this, really? It is essentially gradient descent in the latent space. The fact that we see a bare gradient is only because they started at the zero point. If we wrote out gradient descent, we would say my z is going to be z-zero minus this gradient, and now it looks exactly like gradient descent in latent space: you have some initial guess and you update it using the gradient. There is no learning rate here, so the learning rate is one, and because z-zero is zero, you can just leave it away. So this is simply one single step of gradient descent in the latent space in order to get a better z. However, nothing guarantees that in a single step you actually find the correct z, or even an appropriate z — it simply means you find a better z than z-zero for that particular image. And this can work, right?
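Spelled out as an update rule — my own rewriting of what's said here, suppressing the coordinate integral and writing the step size as eta:

```latex
z_{k+1} \;=\; z_k \;-\; \eta\,\nabla_{z_k} L\big(x,\, F(z_k)\big),
\qquad\text{so with } k = 0,\; z_0 = \mathbf{0},\; \eta = 1:\qquad
z_1 \;=\; -\,\nabla_{z_0} L\big(x,\, F(z_0)\big).
```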
And again, because you backpropagate through both applications of f, you're basically saying two things: I want my neural network to reconstruct the data point better from a given latent representation, and I also want my neural network to help this procedure. You backpropagate through the gradient descent step itself, so you're asking the network to make one step of gradient descent yield a good latent representation. Therefore it's not just pure gradient descent in that space; the backpropagation makes it such that your neural network actively supports obtaining a good representation in one step. Okay, now that we've disentangled this, you can see two things. First, you could probably get an even better representation by doing multiple steps of gradient descent, maybe adjusting the learning rate a bit — it depends, because you have to backpropagate through all the gradient descent steps, but I'm pretty sure you could improve this by doing multiple steps. Second, it doesn't really matter that this is a constant zero. It gives you a cool name, gradient origin networks, but you could probably start with any constant, or even with non-constant initial points — you could sample them from a distribution and so on. And if you imagine changing z-zero to be sampled from some normal distribution, it looks much more like a GAN, right?

Alright, so here we go. I've cloned the repo and ran the code once just to make sure the data is downloaded and everything works. The code is pretty easy: there is one file. I didn't do it in the Colab because the Colab was, I think, a bit slow for me — I don't know if I caught a wrong runtime. Essentially there is a bunch of setup code, the siren layers and so on, and then you have the real deal right here: the step. We do 500 steps, and in each step, as you can see, we start with zeros as z, then we put this into f, concatenated with the coordinates — the coordinates are a kind of mesh-grid thing. We obtain the inner loss, take the gradient of the inner loss with respect to z, and the negative gradient becomes our outer z. So the z up here is z-zero, and the z down here is going to be our true z from the paper. We concatenate that again with the coordinates to obtain g, the reconstruction of x, and then our outer loss is simply this reconstruction loss, which we backpropagate to all of the parameters. Okay. So the first hypothesis is that this is simply gradient descent. Let's run this. I've shipped it to a GPU server, and as you'll see, the loss is printed and decreases over the course of the 500 steps; we can also look at the samples. While that's happening, we can prepare what we want to do next. If this is really gradient descent, we should be able to just write z minus this gradient, and because z is zeros at the start, we would expect this to yield the same loss. So we do that and ship it off to the server again. Sorry.
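Condensed, the step being described looks something like this. This is a sketch, not the repo's exact code — the network F, the coordinate tensor and the optimizer are stand-ins, and I'm assuming batched RGB targets:

```python
import torch

def gon_step(F, x, coords, latent_dim, opt):
    # x: (B, N, 3) target RGB values at the N coordinates in coords: (B, N, 2)
    B, N, _ = coords.shape
    z0 = torch.zeros(B, 1, latent_dim, requires_grad=True)
    z0_rep = z0.expand(-1, N, -1)              # the same z for every coordinate
    inner = ((F(torch.cat([coords, z0_rep], -1)) - x) ** 2).mean()
    # z is the negative gradient of the inner loss w.r.t. the origin;
    # create_graph=True lets the outer loss backprop through this step.
    (grad,) = torch.autograd.grad(inner, z0, create_graph=True)
    z = -grad
    g = F(torch.cat([coords, z.expand(-1, N, -1)], -1))   # reconstruction
    outer = ((g - x) ** 2).mean()
    opt.zero_grad()
    outer.backward()      # updates flow into both uses of F
    opt.step()
    return outer.item()
```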
So we were here — and okay, the logs failed; I have this set up such that it writes to a folder called logs, but it's called images. Anyway, you can basically see that the loss went from about 24 down to about 13 over the course of training. Subtracting the gradient from z really shouldn't change anything, because z is zero at the beginning, so again we run this, and while it's running we prepare the other variations. My hypothesis is that we could make this z-zero pretty much anything, so let's set it to ones. Again, you see the loss — we get an idea of the noisiness of this thing, 21, 19, and so on. In fact, if we ship it to a different GPU, we might be able to run two things in parallel. So this now is starting with ones instead of zeros; let's see how that goes. You can see right here that we also ended up at about 14, 13 — pretty much the same. We can look at the images it produced: the reconstructions of Fashion-MNIST look like this, the samples look like this, and you can look at the interpolations as well, but we're mainly interested in the loss, and with the ones, pretty much the same thing happens. So let's change it to a normal distribution and see what that does. While that's running, we revert to the original zeros and investigate what happens if we just do more than one step of gradient descent. That's actually pretty easy: this here is the gradient descent step, so we can simply double it. Now, if this is correct — I'm pretty sure this is correct... wow, okay, the normal initialization isn't really a hit, as you can see. Maybe it's because the variance is too large, I'm not sure. The other run is deterministic, so that's a lot easier. Let's quickly go back, change the ones to a normal, and multiply it by a tiny 0.01 or so — I just want to see whether this works; I have no big hopes. Okay. So we're here again, and we turn this into two steps of gradient descent. And let's see whether that helps. Ah, okay — so the normal distribution now works, or at least is not worse; we had simply initialized it with too big a variance. The 0.01 seems to be some kind of magic number for normal distributions and neural networks. On the right side over here you can see we're a bit off, but I guess with a bit of tuning you could fix that, and it gets down to about the same loss as before. If we look at the images it produced, they seem a bit worse, I'd say, but it kind of works. On the right side, however, if you do more than one step of gradient descent — wah, wah, wee wah — you see, we already start at lower losses. And since this is gradient descent, there's no reason the learning rate should be one, so let's try dividing it by a generous three for the first step and maybe six for the second — a decreasing learning rate seems like a rather good idea. And let's just take the two steps with the decreasing learning rate. Oops.
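The variations being tried correspond to small tweaks of the step sketched above. Again this is my own sketch; the 0.01 scale and the 1/3, 1/6 step sizes are just the values mentioned here, nothing tuned:

```python
import torch

def gon_latent(F, x, coords, latent_dim, lrs=(1 / 3, 1 / 6), init="zeros"):
    # Same setting as the earlier sketch; init picks the starting latent guess.
    B, N, _ = coords.shape
    if init == "zeros":
        z = torch.zeros(B, 1, latent_dim, requires_grad=True)
    elif init == "ones":
        z = torch.ones(B, 1, latent_dim, requires_grad=True)
    else:  # small-variance normal; 0.01 was the scale that worked in the video
        z = (0.01 * torch.randn(B, 1, latent_dim)).requires_grad_(True)
    for lr in lrs:  # several inner gradient steps with a decreasing learning rate
        inner = ((F(torch.cat([coords, z.expand(-1, N, -1)], -1)) - x) ** 2).mean()
        (grad,) = torch.autograd.grad(inner, z, create_graph=True)
        z = z - lr * grad
    return z  # feed into the outer reconstruction loss as before
```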
So you can see the loss now is way down, just because we did two steps of gradient descent, and the reconstructions are, I'm going to guess, almost perfect. We're probably overfitting a bit now — this is trading off the power of the encoder and decoder sides. But for the last part, let's just run this gradient descent with the decreasing step size and see whether that gets us to an even lower reconstruction loss, and that will conclude our investigation into the code. Okay — we start at 19, so maybe we're as good as before; that's fine, you know. But I hope that gives a bit of evidence for my point that this is basically inverting a generator by using gradient descent, which has been around for a while — I happen to know someone who once attempted to write a paper about it. But here it's with implicit networks, which are pretty cool, so maybe this works especially well with them, given that the gradient of a siren is again a siren, and so on. And yep, as you can see, this works as well with the decreasing learning rate — and now you can go nuts. Oh, nine! Wow, that's the lowest loss we've gotten so far. Pretty cool. The interpolations look like things; these are, I think, the best samples we've seen today — maybe, I'm not sure. The interpolations look like interpolations, I mean, if you squint. Okay, that was it for the coding part. See ya.

Now, GANs have come with encoders before, or rather this much more looks like a variational autoencoder as well. The difference here is that we replace the encoder: our implicit encoder is simply gradient descent. This has also been done before for GANs — people train GANs and then try to find the latent representation by backpropagating, and some people even do this while training: they do gradient descent and then either do or do not backprop through the gradient descent procedure. So in one way or another, this is kind of like those ideas — I'm not saying it's equal. And again, there could be some special interaction because you actually backprop through both applications, and some special interaction because these are implicit neural networks. However, I very much view these as two different things. There is also a rather cool derivation where you can use this as a classifier, and I hope you can now understand it much better. The classification loss for a sample x is going to be a cross-entropy loss between two things: your label y on one side, and usually the logits on the other. You can see right here there's an f that gives you the logits from your features — and here the features aren't the data point itself, the features are the z variable that comes with the data point. So you basically use this procedure as a feature producer, and the features are made by, again, minimizing this reconstruction loss. Now, I'm not sure this is going to work really well for classifiers, because classifiers generally don't require you to reconstruct things.
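As a sketch of that classification idea — the classifier head and the sizes here are my own hypothetical additions, reusing the gradient-origin step from before as the feature producer:

```python
import torch
import torch.nn as nn

latent_dim, num_classes = 32, 10          # assumed sizes, e.g. Fashion-MNIST
clf = nn.Linear(latent_dim, num_classes)  # hypothetical classifier head

def classification_loss(F, x, y, coords):
    # The features are the gradient-origin latent z, obtained exactly as before.
    B, N, _ = coords.shape
    z0 = torch.zeros(B, 1, latent_dim, requires_grad=True)
    inner = ((F(torch.cat([coords, z0.expand(-1, N, -1)], -1)) - x) ** 2).mean()
    (grad,) = torch.autograd.grad(inner, z0, create_graph=True)
    z = -grad.squeeze(1)                   # (B, latent_dim) feature vectors
    return nn.functional.cross_entropy(clf(z), y)   # y: (B,) class labels
```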
And we sort of know this: it's like taking a variational autoencoder and simply using its encoder as a feature producer for a classifier, which generally doesn't work very well. But you can do it right here, and the cool thing is that you can actually use the implicit representation network f to give you features for the entire data sample via z. So you're freed from the coordinate representation here and you get a latent vector back. That is how you would use an implicit neural network to do classification — a pretty cool derivation, I think. They also make some empirical claims, which I don't want to go into too much, but there are certain practical advantages to doing things like this: you can have very few parameters to represent an entire data set, the interpolations work nicely as you can see, and I think they generally claim that this trains fast — after three seconds it already has a lot of information about the data set and does sensible things. Okay. So the code is available, and in fact I'll probably intersperse into this video a segment where we actually test the hypotheses I stated: first, that we can probably start with something other than the constant zero, and second, that we can probably improve by doing multiple steps of gradient descent in the inner loop. This might be somewhere earlier in the video, and if not, it comes at the end — like right now. Okay. So I'll see you next time. Bye bye.
[ { "end": 6.92, "start": 0, "text": " Hi there, today we'll look at gradient origin networks by Sam Bond Taylor and Chris G Wilcox" }, { "end": 8.96, "start": 6.92, "text": " of Durham University." }, { "end": 14.72, "start": 8.96, "text": " So on a high level, this paper trains implicit representation networks, but not on single" }, { "end": 17.66, "start": 14.72, "text": " data points, but on entire data set." }, { "end": 21.54, "start": 17.66, "text": " It does so by using a latent encoding of each data point." }, { "end": 27, "start": 21.54, "text": " And it doesn't obtain that encoding through an explicit encoder, but by simply looking" }, { "end": 33.480000000000004, "start": 27, "text": " at the gradient of the latent variable when initialized at the origin." }, { "end": 38.480000000000004, "start": 33.480000000000004, "text": " So it's a bit of a weird formulation, and I've seen this paper upvoted on Reddit, and" }, { "end": 44.28, "start": 38.480000000000004, "text": " the top comments would always say like, I don't really get it, I don't really get it." }, { "end": 49.28, "start": 44.28, "text": " And I thought, you know, maybe I'm completely wrong, but I can just give my opinion kind" }, { "end": 51.760000000000005, "start": 49.28, "text": " of what's going on in this paper." }, { "end": 57.64, "start": 51.76, "text": " Now also, most people on Reddit or a lot did say, I don't really get it, but here is what" }, { "end": 59.099999999999994, "start": 57.64, "text": " I think is going on." }, { "end": 62.96, "start": 59.099999999999994, "text": " And then listing something and that's there is where I stopped reading." }, { "end": 69, "start": 62.96, "text": " So as to not be kind of as to form my own opinion, I like to kind of understand papers" }, { "end": 70.14, "start": 69, "text": " by myself." }, { "end": 72.72, "start": 70.14, "text": " So again, maybe I'm completely wrong." }, { "end": 75.82, "start": 72.72, "text": " But here is my opinion." }, { "end": 82.24, "start": 75.82, "text": " If you like opinions, hit the like button and subscribe if you aren't yet." }, { "end": 87.03999999999999, "start": 82.24, "text": " And yeah, share this video out maybe that helps someone else understand." }, { "end": 92.75999999999999, "start": 87.03999999999999, "text": " So this paper is a very short paper, it is four pages." }, { "end": 100.35999999999999, "start": 92.75999999999999, "text": " And it's a dense paper, it definitely can warrant it definitely can warrant making a" }, { "end": 101.88, "start": 100.35999999999999, "text": " longer paper out of it." }, { "end": 105.19999999999999, "start": 101.88, "text": " That being said, it's an archive paper for now." }, { "end": 111.68, "start": 105.2, "text": " So you know, there's nothing wrong with archiving kind of unfinished work." }, { "end": 115.60000000000001, "start": 111.68, "text": " But we're just going to look at it and try to understand it." }, { "end": 116.60000000000001, "start": 115.60000000000001, "text": " Okay." }, { "end": 122.76, "start": 116.60000000000001, "text": " So the abstract says this paper proposes a new type of implicit generative model that" }, { "end": 129.24, "start": 122.76, "text": " is able to quickly learn a latent representation without an explicit encoder." }, { "end": 133.92000000000002, "start": 129.24, "text": " So for that, you need to know what an implicit generative model is." 
}, { "end": 139.48, "start": 133.92, "text": " And I've covered one type of implicit generative models, specifically the type that they're" }, { "end": 142.39999999999998, "start": 139.48, "text": " using here, what they're called siren." }, { "end": 146.44, "start": 142.39999999999998, "text": " So sirens are implicit representation networks." }, { "end": 147.83999999999997, "start": 146.44, "text": " And I've made a video about sirens." }, { "end": 149.83999999999997, "start": 147.83999999999997, "text": " So if you don't know what that is, go look it up." }, { "end": 156.82, "start": 149.83999999999997, "text": " But very quickly, a siren will is a neural network to represent a single data point." }, { "end": 163.5, "start": 156.82, "text": " So each data point in a data set is represented by its own neural network." }, { "end": 167.16, "start": 163.5, "text": " And the neural network, so this might be a bit foreign to you." }, { "end": 170.2, "start": 167.16, "text": " But usually you have some kind of image, right?" }, { "end": 175.12, "start": 170.2, "text": " And it's simply represented as an as an array of RGB coordinates, right?" }, { "end": 180.44, "start": 175.12, "text": " It's it's simply an array of this is like one zero point five and so on." }, { "end": 182.1, "start": 180.44, "text": " So all the pixels are in this array." }, { "end": 186.28, "start": 182.1, "text": " This is the explicit representation of that data point." }, { "end": 191.32, "start": 186.28, "text": " Now, this here is a long list, and it has some regularities to it." }, { "end": 196.6, "start": 191.32, "text": " So that's why you can also think of an implicit representation of the data point." }, { "end": 199.32, "start": 196.6, "text": " The implicit representation works as follows." }, { "end": 204.78, "start": 199.32, "text": " You imagine again your image, your image is made up of pixels, and these pixels are on" }, { "end": 206.32, "start": 204.78, "text": " X and Y coordinates." }, { "end": 209.1, "start": 206.32, "text": " So this pixel right here would be zero zero." }, { "end": 212.95999999999998, "start": 209.1, "text": " This pixel right here would be zero one, and so on." }, { "end": 220.12, "start": 212.95999999999998, "text": " A siren is or generally an implicit representation network is a network that takes in any X and" }, { "end": 222.14000000000001, "start": 220.12, "text": " Y coordinate as the input." }, { "end": 228.84, "start": 222.14000000000001, "text": " So the input itself is the numerical X and Y coordinate of that picture, and it passes" }, { "end": 234.12, "start": 228.84, "text": " it through a neural network and outcomes the RGB value." }, { "end": 240.20000000000002, "start": 234.12, "text": " OK, so an entire picture is represented by this neural network." }, { "end": 244, "start": 240.20000000000002, "text": " The neural network maps each coordinate to its RGB value." }, { "end": 251, "start": 244, "text": " And here you can see that a single picture can become an entire data set for this neural" }, { "end": 252, "start": 251, "text": " network." }, { "end": 255.86, "start": 252, "text": " In fact, it has to, because for a different picture, of course, there is a different mapping" }, { "end": 260.08, "start": 255.86, "text": " from X and Y coordinates to RGB coordinates." }, { "end": 261.92, "start": 260.08, "text": " But this allows you to do multiple things." 
}, { "end": 267.26, "start": 261.92, "text": " So first of all, this neural network can be smaller than the explicit representation." }, { "end": 271.08, "start": 267.26, "text": " Second of all, it can capture some regularity in the data." }, { "end": 279.2, "start": 271.08, "text": " Usually sirens have sine waves as nonlinearities in the neural network here, which is also" }, { "end": 284.91999999999996, "start": 279.2, "text": " a bit special, but lends itself very well to capture natural signals, because natural" }, { "end": 290.44, "start": 284.91999999999996, "text": " signals are often repeated at different scales and derivatives of themselves and so on." }, { "end": 294.56, "start": 290.44, "text": " So I've covered this all in my video." }, { "end": 300.15999999999997, "start": 294.56, "text": " And also this allows you to have a continuous representation rather than a discrete representation." }, { "end": 302.20000000000005, "start": 300.16, "text": " Like here, you just have each pixel." }, { "end": 305.36, "start": 302.20000000000005, "text": " Now you have a continuous representation." }, { "end": 312.36, "start": 305.36, "text": " All right, so these are implicit representation models or implicit generative models, or these" }, { "end": 318.64000000000004, "start": 312.36, "text": " neural networks right here that map from coordinates to two colors." }, { "end": 323.92, "start": 318.64000000000004, "text": " Now what's the problem with this is, as we said, you need one neural network per data" }, { "end": 324.92, "start": 323.92, "text": " point." }, { "end": 333.04, "start": 324.92, "text": " Now, the idea that these people here go with is that can't we do kind of the same thing," }, { "end": 338.08000000000004, "start": 333.04, "text": " but except we have one neural network per data point, we want to have the same neural" }, { "end": 341.52000000000004, "start": 338.08000000000004, "text": " network for the entire data set." }, { "end": 348.96000000000004, "start": 341.52000000000004, "text": " So again, they want to have a neural network that somehow outputs RGB coordinates." }, { "end": 351.12, "start": 348.96000000000004, "text": " But now it's not for a single image." }, { "end": 352.48, "start": 351.12, "text": " Now we have a data set." }, { "end": 357.92, "start": 352.48, "text": " Okay, and the data set has many images like this is image i, this is image j, this is" }, { "end": 359.12, "start": 357.92, "text": " image k." }, { "end": 365.76, "start": 359.12, "text": " So what we could do is we could simply tell the neural network the x and y coordinate" }, { "end": 370.20000000000005, "start": 365.76, "text": " that we where we would like the RGB values to know." }, { "end": 376.44, "start": 370.20000000000005, "text": " And we could also tell it which image it is right, k, or i or j." }, { "end": 383.6, "start": 376.44, "text": " And this will give us a neural network right here that can represent the entire data set" }, { "end": 388.76, "start": 383.6, "text": " because it always can see, ah, I want of image j, I want these and these x y coordinate doesn't" }, { "end": 394.32, "start": 388.76, "text": " help you very much though, because it still has to learn for each image individually," }, { "end": 397.84, "start": 394.32, "text": " how to encode it, how to produce it." 
}, { "end": 404.76, "start": 397.84, "text": " What's much more interesting is, if you kind of mix this with the kind of old style, the" }, { "end": 407, "start": 404.76, "text": " kind of old style generative models." }, { "end": 412.18, "start": 407, "text": " So in old style generative models, let's consider for example, an auto encoder." }, { "end": 416.4, "start": 412.18, "text": " So in an auto encoder, what you would do is you would take your image and you would put" }, { "end": 419.24, "start": 416.4, "text": " it through an encoder." }, { "end": 422.84, "start": 419.24, "text": " And this encoder will give you a latent variable z." }, { "end": 426.14, "start": 422.84, "text": " And then you would put it through a decoder again." }, { "end": 428.58, "start": 426.14, "text": " And that would give you an image." }, { "end": 434.88, "start": 428.58, "text": " So your generative model now is this part right here, and this z variable is your latent" }, { "end": 437.4, "start": 434.88, "text": " encoding of this data point." }, { "end": 445, "start": 437.4, "text": " Now, if you train these models correctly, be this a be this a an auto encoder or a variational" }, { "end": 450.32, "start": 445, "text": " auto encoder, or the green part can actually just be a GAN, right?" }, { "end": 459.08, "start": 450.32, "text": " If you train this correctly, then this z right here will be sort of a a latent encoding of" }, { "end": 463.84, "start": 459.08, "text": " the what the what of the information in the image itself." }, { "end": 464.84, "start": 463.84, "text": " Okay." }, { "end": 466.4, "start": 464.84, "text": " And that can generalize." }, { "end": 472.48, "start": 466.4, "text": " So now I can input a picture that the model has never seen during training." }, { "end": 480.2, "start": 472.48, "text": " And the encoder will map it to a latent representation that sort of makes sense that is able to reconstruct" }, { "end": 482.76, "start": 480.2, "text": " the image that I've put in." }, { "end": 490.03999999999996, "start": 482.76, "text": " Okay, so the your hope with these latent representation is is that there is some kind of data manifold" }, { "end": 496.4, "start": 490.03999999999996, "text": " somewhere in hidden in the in the entire space of parameters." }, { "end": 501.84, "start": 496.4, "text": " And as long as you're on that data manifold, you will produce a sensible data point." }, { "end": 504.08, "start": 501.84, "text": " And this is kind of a continuous and so on." }, { "end": 510.96, "start": 504.08, "text": " So even though you've only seen a few during training, if you have a new one during testing," }, { "end": 517.12, "start": 510.96, "text": " then you can sort of it will be mapped to a correct place on the data manifold and it" }, { "end": 519.98, "start": 517.12, "text": " will produce a data point again." }, { "end": 524.54, "start": 519.98, "text": " And you've seen this right, you've seen these interpolations in GANs where you can interpolate" }, { "end": 527.92, "start": 524.54, "text": " in latent space, and, and so on." }, { "end": 534.66, "start": 527.92, "text": " The problem here is that, you know, in so in GANs, we sample these things right here." }, { "end": 536.24, "start": 534.66, "text": " So that's a different story." }, { "end": 543.68, "start": 536.24, "text": " But in VAEs, we need this encoder or in auto encoders, we need this encoder to obtain a" }, { "end": 546.76, "start": 543.68, "text": " latent representation for a given data point." 
}, { "end": 552.68, "start": 546.76, "text": " In GANs, there is no way if we have an image, there is no way to obtain the corresponding" }, { "end": 554.3199999999999, "start": 552.68, "text": " Z variable." }, { "end": 559.5400000000001, "start": 554.32, "text": " If we don't have an encoder, right, and that's the problem we're tackling right here." }, { "end": 564.6, "start": 559.5400000000001, "text": " So here, what we want to do is we want to give the X and Y, we want to give the Z, we" }, { "end": 571.44, "start": 564.6, "text": " say we have some way of obtaining a latent representation of one of the image right here." }, { "end": 575.2800000000001, "start": 571.44, "text": " And from that, we want to generate the RGB variables." }, { "end": 583.22, "start": 575.2800000000001, "text": " Now the question is, think of again, the question is, how do we obtain the Z variable without" }, { "end": 589.64, "start": 583.22, "text": " having without having access to the encoder?" }, { "end": 592.36, "start": 589.64, "text": " And that's that's the problem of this paper." }, { "end": 596.52, "start": 592.36, "text": " And this paper proposes a solution." }, { "end": 603.88, "start": 596.52, "text": " So they say this is achieved with an implicit neural network that takes as inputs points" }, { "end": 608.64, "start": 603.88, "text": " in the coordinate space, alongside a latent vector initialized with zero." }, { "end": 610.36, "start": 608.64, "text": " So that's the model that we saw." }, { "end": 614.08, "start": 610.36, "text": " That's this, this is sorry about that." }, { "end": 621.08, "start": 614.08, "text": " This is this right here, it takes in the coordinates, this is the coordinates, and it takes in the" }, { "end": 623.44, "start": 621.08, "text": " latent vector Z." }, { "end": 629.72, "start": 623.44, "text": " Now, this whole point with it being initialized at zeros will get will get to that in one" }, { "end": 630.72, "start": 629.72, "text": " second." }, { "end": 631.72, "start": 630.72, "text": " Okay." }, { "end": 636.32, "start": 631.72, "text": " For the fact right now is just that the represent the implicit neural network also takes the" }, { "end": 637.72, "start": 636.32, "text": " identity of the image." }, { "end": 641.78, "start": 637.72, "text": " So each image, the image is always going to have the same Z." }, { "end": 646.6600000000001, "start": 641.78, "text": " And then we sort of say which x and y coordinate of that image we want." }, { "end": 648.72, "start": 646.6600000000001, "text": " So the Z is per image." }, { "end": 653.88, "start": 648.72, "text": " And then each image has all the x and y coordinates of, you know, itself." }, { "end": 656.12, "start": 653.88, "text": " So yeah." }, { "end": 661.76, "start": 656.12, "text": " So if yeah, you you I think you can follow." }, { "end": 667.1600000000001, "start": 661.76, "text": " They go on they say the gradients of the data fitting loss with respect to this zero vector" }, { "end": 671.64, "start": 667.16, "text": " are jointly optimized to act as latent points that capture the data manifold." }, { "end": 677.4399999999999, "start": 671.64, "text": " So this is where this is where I already got lost reading the first time through the results" }, { "end": 681.8399999999999, "start": 677.4399999999999, "text": " show similar characteristics to auto encoders, but with fewer parameters and the advantages" }, { "end": 684.76, "start": 681.8399999999999, "text": " of implicit representation networks." 
}, { "end": 690.18, "start": 684.76, "text": " Okay, so we'll actually we'll, we'll jump to this right here." }, { "end": 696.6, "start": 690.18, "text": " So this is the this is the comparison between a variational auto encoder and the gradient" }, { "end": 697.6, "start": 696.6, "text": " origin network." }, { "end": 704.64, "start": 697.6, "text": " So in a variational auto encoder, what you would do is you would have this explicit encoder" }, { "end": 709.32, "start": 704.64, "text": " right here, as we said, and in the variational auto encoder, you don't obtain the latent" }, { "end": 714.48, "start": 709.32, "text": " representation directly, you actually obtain the distribution in terms of the mean and" }, { "end": 717.72, "start": 714.48, "text": " standard deviation of the latent representation." }, { "end": 722.6, "start": 717.72, "text": " And then you sample from that distribution to obtain that latent representation." }, { "end": 728.12, "start": 722.6, "text": " I think the point here is simply to show that you first of all, you do need an encoder," }, { "end": 729.44, "start": 728.12, "text": " which you do need to train." }, { "end": 733.16, "start": 729.44, "text": " And second of all, it's kind of a complicated process to get that latent representation" }, { "end": 735.48, "start": 733.16, "text": " for the data point x." }, { "end": 738.48, "start": 735.48, "text": " And then you need to decoder that generates an image." }, { "end": 744.5600000000001, "start": 738.48, "text": " And then you have the loss right here that compares the two that is used to train the" }, { "end": 747.52, "start": 744.5600000000001, "text": " encoder and the decoder." }, { "end": 755.96, "start": 747.52, "text": " Whereas in the gradient origin networks, what you do is you start you basically have a function" }, { "end": 763.56, "start": 755.96, "text": " f and the function f it's a bit weird right here, the function f uses two things." }, { "end": 768.24, "start": 763.56, "text": " So this here is that z, which is termed zero here." }, { "end": 773.88, "start": 768.24, "text": " But in fact, it's the latent representation of the image, which is derived from the image" }, { "end": 774.88, "start": 773.88, "text": " itself." }, { "end": 780.6, "start": 774.88, "text": " And I don't really know, so I guess you can hear you can input this x is derived from" }, { "end": 786.72, "start": 780.6, "text": " the image itself by some way that doesn't require parameters that is not learned." }, { "end": 792.52, "start": 786.72, "text": " And it also takes in these coordinates, and it produces that image." }, { "end": 799.5, "start": 792.52, "text": " Now let's disentangle two things right here, what we're going to see is equally applicable" }, { "end": 802.08, "start": 799.5, "text": " to non implicit neural networks." }, { "end": 807.36, "start": 802.08, "text": " So for the rest of this paper, I'm not saying it's going to work as well, maybe it's going" }, { "end": 810.5400000000001, "start": 807.36, "text": " to work specifically well with implicit neural networks." }, { "end": 813.96, "start": 810.5400000000001, "text": " But we need to differentiate the these two things." }, { "end": 818.88, "start": 813.96, "text": " So the first thing is explicit versus implicit." }, { "end": 827.6800000000001, "start": 818.88, "text": " Okay, we're simply going to view these as functions that take a z and give you an x." 
}, { "end": 833.2199999999999, "start": 827.68, "text": " Okay, if this is this is most notably the explicit version, the implicit version is" }, { "end": 839.1999999999999, "start": 833.2199999999999, "text": " simply that we're going to take a z along with all the x and y of the image." }, { "end": 846.92, "start": 839.1999999999999, "text": " And we're going to obtain the R, g and b values of all the images, right, which is equal to" }, { "end": 849.78, "start": 846.92, "text": " the x." }, { "end": 854.8, "start": 849.78, "text": " So this this entire set of RGB values is equal to the x, and we input the entire set right" }, { "end": 855.8, "start": 854.8, "text": " here." }, { "end": 862.28, "start": 855.8, "text": " So essentially, it's simply a function that takes in a latent representation of an image" }, { "end": 865.68, "start": 862.28, "text": " and gives you back a image." }, { "end": 871.78, "start": 865.68, "text": " The second thing, which is an entirely different thing, in my opinion, is how do we obtain" }, { "end": 873.74, "start": 871.78, "text": " a z from an x?" }, { "end": 877, "start": 873.74, "text": " So how do we get to have an image?" }, { "end": 881.16, "start": 877, "text": " How do we obtain the corresponding latent representation?" }, { "end": 884.52, "start": 881.16, "text": " And such that such that." }, { "end": 890.48, "start": 884.52, "text": " So this must be such that this function right here, the function that gives you the x from" }, { "end": 893.16, "start": 890.48, "text": " the z will reproduce the x." }, { "end": 894.16, "start": 893.16, "text": " Okay." }, { "end": 901.24, "start": 894.16, "text": " So how do we obtain the correct latent representation for any for any input data point?" }, { "end": 902.8, "start": 901.24, "text": " Two different things." }, { "end": 904, "start": 902.8, "text": " Don't." }, { "end": 909.78, "start": 904, "text": " So I think they're not dependent on each other, except, as I said, they might work especially" }, { "end": 911.8, "start": 909.78, "text": " well together or something like this." }, { "end": 912.8, "start": 911.8, "text": " All right." }, { "end": 916.3199999999999, "start": 912.8, "text": " So this becomes a lot easier right now in this formula." }, { "end": 920.12, "start": 916.3199999999999, "text": " So this is the thing ultimately that they optimize." }, { "end": 926.9599999999999, "start": 920.12, "text": " They optimize the this thing and it's introduced like I don't know why they limited themselves" }, { "end": 928.12, "start": 926.9599999999999, "text": " to four pages here." }, { "end": 931.0999999999999, "start": 928.12, "text": " And again, this is work in progress, as I understand it." }, { "end": 934.78, "start": 931.0999999999999, "text": " But it is it is not it's like cold water." }, { "end": 941.12, "start": 934.78, "text": " It's like, you know, an expressive neural network can be trained in this space to mimic" }, { "end": 944.08, "start": 941.12, "text": " this by minimizing the gradient origin network loss function." }, { "end": 945.24, "start": 944.08, "text": " That's that's it." }, { "end": 947.52, "start": 945.24, "text": " That's what you that's what you get." }, { "end": 950.12, "start": 947.52, "text": " And then you get the loss thrown in your face." }, { "end": 951.76, "start": 950.12, "text": " But let's deconstruct it." }, { "end": 956.38, "start": 951.76, "text": " So this g thing right here, what's it?" 
}, { "end": 958.36, "start": 956.38, "text": " This is the loss that you minimize." }, { "end": 966.44, "start": 958.36, "text": " Okay, you can see that this is simply an integral of this loss function over your entire coordinate" }, { "end": 967.44, "start": 966.44, "text": " space." }, { "end": 969.76, "start": 967.44, "text": " So see here is the entire coordinate space." }, { "end": 975.98, "start": 969.76, "text": " So this is for a given for a given image, right for a given image f x, you would minimize" }, { "end": 979.68, "start": 975.98, "text": " this actually across your across your entire data set." }, { "end": 986.72, "start": 979.68, "text": " So you would minimize the parameters of f f here is going to be your generator neural" }, { "end": 993.06, "start": 986.72, "text": " network, your siren, whatever you minimize over the parameters of f across your entire" }, { "end": 994.06, "start": 993.06, "text": " data set." }, { "end": 997.72, "start": 994.06, "text": " Okay, so this is your standard loss function." }, { "end": 1000.96, "start": 997.72, "text": " And this is some across your entire data set." }, { "end": 1002.24, "start": 1000.96, "text": " Cool." }, { "end": 1007.8000000000001, "start": 1002.24, "text": " So what are you going to minimize, you're going to minimize each data point consists" }, { "end": 1014.44, "start": 1007.8000000000001, "text": " of an integral over the coordinate space, which you can't see of this loss function" }, { "end": 1015.44, "start": 1014.44, "text": " right here." }, { "end": 1020.14, "start": 1015.44, "text": " Now, this is simply due to the fact that this is an implicit representation." }, { "end": 1025.48, "start": 1020.14, "text": " If this were an explicit representation, it would simply be the loss function of that" }, { "end": 1027.78, "start": 1025.48, "text": " data point, okay." }, { "end": 1030.08, "start": 1027.78, "text": " So don't don't be scared by the integral." }, { "end": 1033.08, "start": 1030.08, "text": " I'm usually scared by integrals, I never get them." }, { "end": 1037.4, "start": 1033.08, "text": " And then I try to talk to them and people be like, do you think you know a remany an" }, { "end": 1039.44, "start": 1037.4, "text": " integral or a little big integral?" }, { "end": 1047.6200000000001, "start": 1039.44, "text": " And I'm like, okay, but in in this case, this is this simply means that you want the loss" }, { "end": 1055.04, "start": 1047.6200000000001, "text": " of each of the coordinates and you want to sum them up, right, which is the same as simply" }, { "end": 1059.06, "start": 1055.04, "text": " the the normal loss function with respect to a data point." }, { "end": 1063.6399999999999, "start": 1059.06, "text": " This right here is the data point itself." }, { "end": 1068.3799999999999, "start": 1063.6399999999999, "text": " As you can see, this is the this is your natural signal." }, { "end": 1072.12, "start": 1068.3799999999999, "text": " So this is the function that you don't know." }, { "end": 1078.32, "start": 1072.12, "text": " This is the true image function that maps the coordinates to the RGB space." }, { "end": 1083.32, "start": 1078.32, "text": " In the case of explicit representation, this here is simply x." }, { "end": 1088, "start": 1083.32, "text": " Okay, and forget about this integral for now." }, { "end": 1089.1599999999999, "start": 1088, "text": " Cool." 
}, { "end": 1094.8799999999999, "start": 1089.1599999999999, "text": " So we have a loss between x and whatever this is right here." }, { "end": 1098.8, "start": 1094.8799999999999, "text": " This is a bit too long and whatever this is right here, you can see the loss function" }, { "end": 1099.8, "start": 1098.8, "text": " between two things." }, { "end": 1101.1599999999999, "start": 1099.8, "text": " So what is this thing?" }, { "end": 1105.48, "start": 1101.1599999999999, "text": " The loss function, I can tell you the one they use in this particular paper is the L" }, { "end": 1106.48, "start": 1105.48, "text": " two loss." }, { "end": 1113.68, "start": 1106.48, "text": " This is simply the reconstruction loss between a data point and its its reconstruction." }, { "end": 1117.64, "start": 1113.68, "text": " Okay, so this part on the right is what's going to make the reconstruction." }, { "end": 1124.1200000000001, "start": 1117.64, "text": " You can see, yes, our F here is going to be our siren, our neural network that will take" }, { "end": 1125.64, "start": 1124.1200000000001, "text": " in a Z." }, { "end": 1131.44, "start": 1125.64, "text": " So F is one of these function explicit or implicit that takes in a Z and gives you x" }, { "end": 1134.8, "start": 1131.44, "text": " the the reconstruction." }, { "end": 1139.2, "start": 1134.8, "text": " Now the question is, what does F take in?" }, { "end": 1141.8799999999999, "start": 1139.2, "text": " F takes in two things." }, { "end": 1146.9199999999998, "start": 1141.8799999999999, "text": " First of all, the coordinates concatenated with the thing on the right." }, { "end": 1153.72, "start": 1146.9199999999998, "text": " And you remember, we said that instead of giving x y to the implicit representation," }, { "end": 1162.1399999999999, "start": 1153.72, "text": " we now give x y and z where z is the latent vector of the image we're trying to reconstruct." }, { "end": 1169.66, "start": 1162.14, "text": " So if we were to see this as a non implicit method, we can simply leave away this right." }, { "end": 1176.24, "start": 1169.66, "text": " So we as we leave away the x and y coordinates in a in a GAN or a VAE, we simply give it" }, { "end": 1177.24, "start": 1176.24, "text": " this thing right here." }, { "end": 1184.24, "start": 1177.24, "text": " Again, we're trying to disentangle the implicit network, the implicit generator from how we" }, { "end": 1186.7800000000002, "start": 1184.24, "text": " are going to obtain the Z." }, { "end": 1188.7, "start": 1186.7800000000002, "text": " So this is not important." }, { "end": 1192.18, "start": 1188.7, "text": " So what remains is this quantity right here." }, { "end": 1196.92, "start": 1192.18, "text": " So this must be our Z for the image." }, { "end": 1199.18, "start": 1196.92, "text": " Okay, this thing." }, { "end": 1201.72, "start": 1199.18, "text": " So what's this thing?" }, { "end": 1204.56, "start": 1201.72, "text": " I'm running slowly out of colors." }, { "end": 1208.32, "start": 1204.56, "text": " This thing is going to be somehow the negative gradient of something." }, { "end": 1211.92, "start": 1208.32, "text": " Again, you have the integral right here of the loss function." }, { "end": 1215.04, "start": 1211.92, "text": " This again is x." }, { "end": 1218.18, "start": 1215.04, "text": " This here again, we can leave this away." }, { "end": 1224.52, "start": 1218.18, "text": " We can leave away the integral and you'll start to see kind of a repetitive thing." 
}, { "end": 1232.6000000000001, "start": 1224.52, "text": " So this is going to be the gradient somehow of your loss function with that." }, { "end": 1236.78, "start": 1232.6000000000001, "text": " Again, there is x and then there is f of z zero." }, { "end": 1240.3200000000002, "start": 1236.78, "text": " So this is somehow an x to an x hat as well." }, { "end": 1241.92, "start": 1240.3200000000002, "text": " But it's a special x hat." }, { "end": 1247.1200000000001, "start": 1241.92, "text": " Let's call it x hat prime or x hat zero." }, { "end": 1252.56, "start": 1247.12, "text": " Because the input is not z, but the input is now z zero." }, { "end": 1257.6, "start": 1252.56, "text": " Okay, this is kind of a complicated thing." }, { "end": 1262.6799999999998, "start": 1257.6, "text": " So I'm going to explain what's going on right here." }, { "end": 1263.76, "start": 1262.6799999999998, "text": " Maybe in drawing." }, { "end": 1269.3999999999999, "start": 1263.76, "text": " So what you want to do is you want to start out with z zero, which is an initial guess" }, { "end": 1271.3999999999999, "start": 1269.3999999999999, "text": " of what your latent representation is." }, { "end": 1275.36, "start": 1271.3999999999999, "text": " You do it without looking even at the image, at the data point." }, { "end": 1277.36, "start": 1275.36, "text": " You simply start with one." }, { "end": 1280.08, "start": 1277.36, "text": " And there are multiple ways to do this." }, { "end": 1286.4599999999998, "start": 1280.08, "text": " And this paper right here simply says we're going to see zero is just going to be a constant" }, { "end": 1290.6, "start": 1286.4599999999998, "text": " value zero, the constant value zero." }, { "end": 1296.3999999999999, "start": 1290.6, "text": " That's why it's called gradient origin networks, because you always start with your z zero," }, { "end": 1300.4399999999998, "start": 1296.3999999999999, "text": " your initial guess of your latent representation is the origin." }, { "end": 1301.6799999999998, "start": 1300.4399999999998, "text": " Okay." }, { "end": 1309.8, "start": 1301.68, "text": " Then you use F, your neural network to obtain a estimate, a first estimate of what your" }, { "end": 1310.96, "start": 1309.8, "text": " image could look like." }, { "end": 1316.48, "start": 1310.96, "text": " Again, you have not looked at the image, you're simply taking the z zero and you produce an" }, { "end": 1319.0800000000002, "start": 1316.48, "text": " image." }, { "end": 1327.5600000000002, "start": 1319.0800000000002, "text": " Then you somehow somehow obtain a better representation z." }, { "end": 1333.32, "start": 1327.56, "text": " And that you use your F again to obtain x hat." }, { "end": 1340.2, "start": 1333.32, "text": " And then from that x hat, you can now compare this to your x and that will give you your" }, { "end": 1342.3999999999999, "start": 1340.2, "text": " loss that you back propagate." }, { "end": 1348.8799999999999, "start": 1342.3999999999999, "text": " So two things here, you can see you use F twice, which means that your loss, if you" }, { "end": 1353.56, "start": 1348.8799999999999, "text": " back propagate it, you must somehow back propagate to both of these things." }, { "end": 1358.1599999999999, "start": 1353.56, "text": " Okay, so this is the first the first thing if you back propagate." }, { "end": 1360.56, "start": 1358.1599999999999, "text": " The second thing is what's this thing right here?" 
}, { "end": 1365.1599999999999, "start": 1360.56, "text": " How are we going to obtain somehow a better z?" }, { "end": 1371.34, "start": 1365.1599999999999, "text": " And the better z is going to be obtained by basically looking at the gradient." }, { "end": 1384.08, "start": 1371.34, "text": " So you've seen that we have a gradient of z zero of the loss of x and f of z zero." }, { "end": 1388.36, "start": 1384.08, "text": " That's that thing here is going to be your z." }, { "end": 1392, "start": 1388.36, "text": " z equals that." }, { "end": 1393, "start": 1392, "text": " What does it mean?" }, { "end": 1399.9199999999998, "start": 1393, "text": " It basically means that so you've tried to produce an image, but this is the real image" }, { "end": 1405.72, "start": 1399.92, "text": " that you want to get and the loss measures how far apart you are from that real image." }, { "end": 1413.42, "start": 1405.72, "text": " How would you need to change your initial guess in order to make that loss go down?" }, { "end": 1417.46, "start": 1413.42, "text": " So the negative here is to make the loss go down because otherwise it would make the loss" }, { "end": 1418.46, "start": 1417.46, "text": " go up." }, { "end": 1425.5800000000002, "start": 1418.46, "text": " Okay, so it basically simply says how do you need to change your z zero in order to decrease" }, { "end": 1433.46, "start": 1425.58, "text": " the loss in order to get a better z for representing this particular image right here." }, { "end": 1442.56, "start": 1433.46, "text": " And in the paper here is where I kind of disagree because in the paper they say that this in" }, { "end": 1451.74, "start": 1442.56, "text": " a single step this gives you the correct z or something like this." }, { "end": 1455.1999999999998, "start": 1451.74, "text": " And I don't agree." }, { "end": 1463.38, "start": 1455.2, "text": " They say with respect to the origin we obtain a latent vector that minimizes the reconstruction" }, { "end": 1470.3600000000001, "start": 1463.38, "text": " loss is obtained in a single step thereby playing the similar role to an explicit encoder." }, { "end": 1471.3600000000001, "start": 1470.3600000000001, "text": " So this is true." }, { "end": 1472.9, "start": 1471.3600000000001, "text": " This is kind of like an encoder, right?" }, { "end": 1477.92, "start": 1472.9, "text": " You simply ask what z would I need to put in in order to make this representation be" }, { "end": 1483.32, "start": 1477.92, "text": " a better sorry in order to make the latent representation be a better latent representation" }, { "end": 1485.32, "start": 1483.32, "text": " for the particular image x." }, { "end": 1492.12, "start": 1485.32, "text": " However, if you compare so what is this?" }, { "end": 1496.72, "start": 1492.12, "text": " This is essentially gradient descent in the latent space, right?" }, { "end": 1502.58, "start": 1496.72, "text": " And the fact that we look at the explicit gradient is only because they started at the" }, { "end": 1504.52, "start": 1502.58, "text": " zero point right here." }, { "end": 1511.04, "start": 1504.52, "text": " The fact that they started at the zero point means that here they can just leave away the" }, { "end": 1512.04, "start": 1511.04, "text": " following." }, { "end": 1515.52, "start": 1512.04, "text": " So if we were to do gradient descent, what you would do is you would say this my z is" }, { "end": 1520.1, "start": 1515.52, "text": " going to be equal to z zero minus this thing, right?" 
}, { "end": 1525.6399999999999, "start": 1520.1, "text": " Now it looks much more like gradient descent in the latent space because you have some" }, { "end": 1529.44, "start": 1525.6399999999999, "text": " initial guess and then you update it using the gradient." }, { "end": 1531.42, "start": 1529.44, "text": " Now there is no learning rate right here." }, { "end": 1535.28, "start": 1531.42, "text": " So the learning rate is one in this case." }, { "end": 1543.08, "start": 1535.28, "text": " So this is and again, the z zero because it's zero, you can just leave it away." }, { "end": 1551.8, "start": 1543.08, "text": " So this is simply one single step of gradient descent in the latent space in order to get" }, { "end": 1554.56, "start": 1551.8, "text": " a better z right here." }, { "end": 1559.56, "start": 1554.56, "text": " However, this is not a this is doesn't it doesn't guarantee you that in the single step" }, { "end": 1564.52, "start": 1559.56, "text": " you're actually going to find the correct zero even an appropriate z simply means that" }, { "end": 1570.68, "start": 1564.52, "text": " you're going to find a better z than z zero for that particular image." }, { "end": 1574.24, "start": 1570.68, "text": " And this can work right." }, { "end": 1580.28, "start": 1574.24, "text": " And again, because you back propagate to both of the F's, you say you basically say I want" }, { "end": 1586.96, "start": 1580.28, "text": " my neural network first of all to reconstruct the data point better from a given latent" }, { "end": 1594.68, "start": 1586.96, "text": " representation and I also want my neural network to give me a latent representation basically" }, { "end": 1598.6000000000001, "start": 1594.68, "text": " to help my latent to help this procedure." }, { "end": 1601.68, "start": 1598.6000000000001, "text": " You back propagate through the gradient descent procedure." }, { "end": 1609.44, "start": 1601.68, "text": " So you say I want my neural network to help me obtain a better latent representation if" }, { "end": 1612.56, "start": 1609.44, "text": " I do one step of gradient descent." }, { "end": 1615.8600000000001, "start": 1612.56, "text": " So therefore it's not just pure gradient descent in that space." }, { "end": 1621.58, "start": 1615.86, "text": " It actually the back propagation makes it such that your neural network also supports" }, { "end": 1626.76, "start": 1621.58, "text": " that supports obtaining a good representation in one step." }, { "end": 1633, "start": 1626.76, "text": " Okay, now that we've disentangled this, basically, you can see two things." }, { "end": 1637.84, "start": 1633, "text": " First of all, you could probably get an even better representation by doing multiple steps" }, { "end": 1642.56, "start": 1637.84, "text": " of gradient descent right here, maybe adjusting the learning rate a bit." }, { "end": 1646.1599999999999, "start": 1642.56, "text": " It depends right because you have to back propagate through all the gradient descent" }, { "end": 1647.1599999999999, "start": 1646.1599999999999, "text": " steps." }, { "end": 1652.3999999999999, "start": 1647.1599999999999, "text": " But pretty sure you could probably improve this by doing multiple steps." }, { "end": 1656.44, "start": 1652.3999999999999, "text": " Second of all, it doesn't really matter that this is a constant zero." 
}, { "end": 1661.48, "start": 1656.44, "text": " It gives you know, there's a cool name gradient origin networks, but you could probably start" }, { "end": 1669.12, "start": 1661.48, "text": " with any constant or even here's the thing even non constant initial points, you could" }, { "end": 1671.6, "start": 1669.12, "text": " sample them from a distribution and so on." }, { "end": 1681.9599999999998, "start": 1671.6, "text": " Okay, so let's change like let's imagine changing z zero to be sampled from some normal distribution." }, { "end": 1685.7199999999998, "start": 1681.9599999999998, "text": " And then it looks much more like a game, right?" }, { "end": 1687.76, "start": 1685.7199999999998, "text": " Alright, so here we go." }, { "end": 1693.76, "start": 1687.76, "text": " I've cloned the repo and I ran the code once just to make sure that the data is downloaded" }, { "end": 1695.08, "start": 1693.76, "text": " and everything." }, { "end": 1698.08, "start": 1695.08, "text": " And the code is, you know, pretty, pretty easy." }, { "end": 1703.52, "start": 1698.08, "text": " So there is one file, and I didn't do it in the colab because the colab was, I think," }, { "end": 1705.32, "start": 1703.52, "text": " a bit slow for me." }, { "end": 1708.1999999999998, "start": 1705.32, "text": " I don't know if I've caught a wrong runtime." }, { "end": 1714.1999999999998, "start": 1708.1999999999998, "text": " But essentially, there is a bunch of setup code, they know these siren layers and so" }, { "end": 1715.1999999999998, "start": 1714.1999999999998, "text": " on." }, { "end": 1720.54, "start": 1715.1999999999998, "text": " And then you have the real deal thing right here." }, { "end": 1721.84, "start": 1720.54, "text": " So you have the step." }, { "end": 1724.12, "start": 1721.84, "text": " So we do 500 steps." }, { "end": 1730.04, "start": 1724.12, "text": " And in each step, we as you can see right here, we start with zeros as z, then we put" }, { "end": 1733.8799999999999, "start": 1730.04, "text": " this into f concatenated with the coordinates." }, { "end": 1738.36, "start": 1733.8799999999999, "text": " So the coordinates is like a kind of a mesh grid type thing." }, { "end": 1744.4399999999998, "start": 1738.36, "text": " We obtain the inner loss right here, we do a gradient with respect so of the inner loss" }, { "end": 1746.2399999999998, "start": 1744.4399999999998, "text": " with respect to z." }, { "end": 1749.2399999999998, "start": 1746.2399999999998, "text": " And then the negative gradient that's going to become our outer z." }, { "end": 1757.48, "start": 1749.24, "text": " So this z up here is z zero, and this z down here is going to be our true z from the paper." }, { "end": 1763.44, "start": 1757.48, "text": " We are going to concatenate that again, with the coordinates to obtain the g, which is" }, { "end": 1766.36, "start": 1763.44, "text": " the kind of reconstruction of x." }, { "end": 1772.28, "start": 1766.36, "text": " And then our outer loss is going to be simply this reconstruction loss right here." }, { "end": 1775.56, "start": 1772.28, "text": " And then we're going to backward to all of the parameters." }, { "end": 1776.56, "start": 1775.56, "text": " Okay." }, { "end": 1782.52, "start": 1776.56, "text": " So first hypothesis is that this here is simply kind of gradient descent." }, { "end": 1786.56, "start": 1782.52, "text": " So what we should be able to do is first, let's run let's run this." 
}, { "end": 1792.04, "start": 1786.56, "text": " So I've run this like that." }, { "end": 1795.84, "start": 1792.04, "text": " So this is shipping it to a GPU server." }, { "end": 1802.32, "start": 1795.84, "text": " And as you will be able to see, the loss will be output." }, { "end": 1807.3999999999999, "start": 1802.32, "text": " And it's going to kind of decrease the loss over the course of 500 steps." }, { "end": 1816.6399999999999, "start": 1807.3999999999999, "text": " And we can also look at the samples." }, { "end": 1822.36, "start": 1816.6399999999999, "text": " So while that's happening, what we can do is we can actually already prepare what we" }, { "end": 1823.36, "start": 1822.36, "text": " want to do." }, { "end": 1827.08, "start": 1823.36, "text": " So if this is really gradient descent, we should be basically just able to do this z" }, { "end": 1830.4199999999998, "start": 1827.08, "text": " minus this gradient right here, because it's zeros." }, { "end": 1834.8400000000001, "start": 1830.42, "text": " We would simply expect this to yield the same loss." }, { "end": 1841.28, "start": 1834.8400000000001, "text": " So we're going to do this, and then we're going to ship this off to the server again." }, { "end": 1843.2, "start": 1841.28, "text": " Sorry." }, { "end": 1845.24, "start": 1843.2, "text": " So we were here." }, { "end": 1849.28, "start": 1845.24, "text": " And okay, the logs failed." }, { "end": 1850.28, "start": 1849.28, "text": " All right." }, { "end": 1852.92, "start": 1850.28, "text": " So this is called images." }, { "end": 1856.8200000000002, "start": 1852.92, "text": " I have this thing set up such that it's called logs." }, { "end": 1864.04, "start": 1856.82, "text": " But you can basically see that the loss right here was from 24 going to down to about 13" }, { "end": 1866.32, "start": 1864.04, "text": " or so over the course of training." }, { "end": 1874.82, "start": 1866.32, "text": " So by subtracting z minus the gradient, we there really shouldn't be any change, right?" }, { "end": 1878.34, "start": 1874.82, "text": " Because z is zero at the beginning." }, { "end": 1880.46, "start": 1878.34, "text": " So again, we're going to run this." }, { "end": 1885.96, "start": 1880.46, "text": " And while it's running, we're going to prepare the different things." }, { "end": 1892.44, "start": 1885.96, "text": " So my hypothesis is that we can maybe we could make this z here pretty much anything." }, { "end": 1894.26, "start": 1892.44, "text": " So let's do it." }, { "end": 1896.32, "start": 1894.26, "text": " Let's put it into ones." }, { "end": 1902.16, "start": 1896.32, "text": " Again, you see that the loss, I guess, you know, we get an idea of kind of the noisiness" }, { "end": 1904.4, "start": 1902.16, "text": " of this thing." }, { "end": 1908.42, "start": 1904.4, "text": " And 2119, and so on." }, { "end": 1914.72, "start": 1908.42, "text": " We can in fact, over here, we might be able to if we ship it to a different GPU, might" }, { "end": 1916.32, "start": 1914.72, "text": " be able to run two things in parallel." }, { "end": 1924.68, "start": 1916.32, "text": " So this now is when we just start with ones instead of zeros." }, { "end": 1926.88, "start": 1924.68, "text": " So let's see how that happens." }, { "end": 1928.3600000000001, "start": 1926.88, "text": " While that's the case." }, { "end": 1934.76, "start": 1928.3600000000001, "text": " So you can see right here that we ended up at also about 1413." 
}, { "end": 1940.26, "start": 1934.76, "text": " This pretty much is the same if you can we can look at the images that it's produced." }, { "end": 1946.04, "start": 1940.26, "text": " So the reconstructions look kind of like this of fashion MNIST, the samples kind of look" }, { "end": 1949.48, "start": 1946.04, "text": " like this." }, { "end": 1952.94, "start": 1949.48, "text": " And the interval interpolations, you can look at those as well." }, { "end": 1957.08, "start": 1952.94, "text": " But we're mainly interested also in the in the kind of loss right here." }, { "end": 1961.44, "start": 1957.08, "text": " You can see that with the ones, pretty much the same thing is happening." }, { "end": 1970.28, "start": 1961.44, "text": " So let's say we actually change this to a normal distribution." }, { "end": 1972.66, "start": 1970.28, "text": " What does that do?" }, { "end": 1978.56, "start": 1972.66, "text": " And while that's happening, we're going to revert this to the original zeros." }, { "end": 1982.8400000000001, "start": 1978.56, "text": " And we're going to investigate what happens if we just do more than one step of gradient" }, { "end": 1984.1200000000001, "start": 1982.8400000000001, "text": " descent." }, { "end": 1987.48, "start": 1984.1200000000001, "text": " So in order to do that, it's actually pretty easy." }, { "end": 1989.8400000000001, "start": 1987.48, "text": " So this here is the gradient descent step." }, { "end": 1993.4399999999998, "start": 1989.84, "text": " What we can do is we can simply double that." }, { "end": 1998.1999999999998, "start": 1993.4399999999998, "text": " So now if this is correct, I'm pretty sure this is correct." }, { "end": 2004.6399999999999, "start": 1998.1999999999998, "text": " So the normal initialized isn't really the hit right here, as you can see." }, { "end": 2005.6399999999999, "start": 2004.6399999999999, "text": " Wow." }, { "end": 2006.6399999999999, "start": 2005.6399999999999, "text": " Okay." }, { "end": 2010.08, "start": 2006.6399999999999, "text": " The normal isn't." }, { "end": 2016.24, "start": 2010.08, "text": " Maybe it's because it's too large." }, { "end": 2017.24, "start": 2016.24, "text": " I'm not sure." }, { "end": 2021.84, "start": 2017.24, "text": " The other thing is deterministic, so that's going to be a lot easier." }, { "end": 2027.96, "start": 2021.84, "text": " We can quickly go back and let's go ones." }, { "end": 2031.64, "start": 2027.96, "text": " Let's go to normal." }, { "end": 2038.92, "start": 2031.64, "text": " And let's multiply it with a tiny 0.01 or so." }, { "end": 2041.2, "start": 2038.92, "text": " I just want to see whether this works." }, { "end": 2043.04, "start": 2041.2, "text": " I have no big hopes." }, { "end": 2044.1200000000001, "start": 2043.04, "text": " Okay." }, { "end": 2050.68, "start": 2044.12, "text": " So we are here again, and we're going to make this into two different things." }, { "end": 2053.2799999999997, "start": 2050.68, "text": " Two steps of gradient descent." }, { "end": 2055.3599999999997, "start": 2053.2799999999997, "text": " All right." }, { "end": 2058.3199999999997, "start": 2055.3599999999997, "text": " So now we have two steps of gradient descent." }, { "end": 2060.48, "start": 2058.3199999999997, "text": " And let's see whether that helps." }, { "end": 2062.18, "start": 2060.48, "text": " Ah, okay." }, { "end": 2068.68, "start": 2062.18, "text": " So the normal distribution already helps or is not worse." 
}, { "end": 2073.5, "start": 2068.68, "text": " We simply initialized it with too big of a variance." }, { "end": 2079.52, "start": 2073.5, "text": " The 0.01 seems to be some kind of magic number for normal distributions and neural networks." }, { "end": 2086.28, "start": 2079.52, "text": " So on the right side over here, and you can see we're a bit off, but I guess with a bit" }, { "end": 2088.6, "start": 2086.28, "text": " of tuning you could do that." }, { "end": 2092.24, "start": 2088.6, "text": " And it gets down to about the same loss as you saw." }, { "end": 2098.96, "start": 2092.24, "text": " If we look at the images that this produced, I'm going to guess they seem a bit worse," }, { "end": 2101.32, "start": 2098.96, "text": " but it kind of works." }, { "end": 2105.44, "start": 2101.32, "text": " On the right side, however, if you do more than one step of gradient descent, wah, wah," }, { "end": 2106.44, "start": 2105.44, "text": " wee wah." }, { "end": 2109.48, "start": 2106.44, "text": " You see, we already started lower losses." }, { "end": 2114.7200000000003, "start": 2109.48, "text": " And since this is gradient descent, we can also, you know, there's no need why the learning" }, { "end": 2115.84, "start": 2114.7200000000003, "text": " rate should be one." }, { "end": 2126.52, "start": 2115.84, "text": " So let's try to divide it by a generous three and then by maybe, you know, it's a six, like" }, { "end": 2131, "start": 2126.52, "text": " a decreasing learning rate seems like a rather good idea." }, { "end": 2136.56, "start": 2131, "text": " And yeah, let's just take the two steps with the decreasing learning rate." }, { "end": 2137.56, "start": 2136.56, "text": " Oops." }, { "end": 2143.16, "start": 2137.56, "text": " So you can see that the loss now is way down just because we did two steps of gradient" }, { "end": 2146.76, "start": 2143.16, "text": " descent and the reconstructions, I'm going to guess they're almost perfect." }, { "end": 2150.28, "start": 2146.76, "text": " So we're now, I guess we're overfitting a bit." }, { "end": 2155.22, "start": 2150.28, "text": " So this is now trading off kind of power of the encoder decoder and so on." }, { "end": 2161.48, "start": 2155.22, "text": " But ultimately, yeah, so let's just for the last part, just try to have this gradient" }, { "end": 2166, "start": 2161.48, "text": " descent with the decreasing step size and see where that gets us if that gets us to" }, { "end": 2171.4599999999996, "start": 2166, "text": " even a lower reconstruction loss." }, { "end": 2175.9599999999996, "start": 2171.4599999999996, "text": " And that will be our investigation into the code right here." }, { "end": 2177.9599999999996, "start": 2175.9599999999996, "text": " Okay." }, { "end": 2181.16, "start": 2177.9599999999996, "text": " Do do do do do." }, { "end": 2185.12, "start": 2181.16, "text": " Okay, we start with 19." }, { "end": 2188, "start": 2185.12, "text": " Maybe we're as good as before." }, { "end": 2190.48, "start": 2188, "text": " That's fine, you know." }, { "end": 2197.7999999999997, "start": 2190.48, "text": " But I hope I hope that kind of gives a bit of evidence to my point that this is basically" }, { "end": 2205.3199999999997, "start": 2197.7999999999997, "text": " reversing a generator by using gradient descent, which has been around for a while." }, { "end": 2211.7599999999998, "start": 2205.3199999999997, "text": " And I happen to know someone who who once attempted to write a paper about it." 
}, { "end": 2216.8, "start": 2211.76, "text": " So yeah, but it's it's with implicit networks, which are pretty cool." }, { "end": 2220.5200000000004, "start": 2216.8, "text": " So you know, maybe this might work especially well with them given that the gradient of" }, { "end": 2225.0400000000004, "start": 2220.5200000000004, "text": " a siren is a gradient and is a siren, and so on." }, { "end": 2229.32, "start": 2225.0400000000004, "text": " Yep, as you can see, this works as well decreasing learning rate." }, { "end": 2230.8, "start": 2229.32, "text": " And now you can go nuts." }, { "end": 2231.8, "start": 2230.8, "text": " Oh, nine." }, { "end": 2232.8, "start": 2231.8, "text": " Wow." }, { "end": 2235.0800000000004, "start": 2232.8, "text": " This is the lowest loss we've gotten so far." }, { "end": 2236.0800000000004, "start": 2235.0800000000004, "text": " Right?" }, { "end": 2237.0800000000004, "start": 2236.0800000000004, "text": " Yeah." }, { "end": 2239.1200000000003, "start": 2237.0800000000004, "text": " So pretty cool." }, { "end": 2241.92, "start": 2239.12, "text": " Interpolations look like things." }, { "end": 2243.24, "start": 2241.92, "text": " These are the best samples." }, { "end": 2246.1, "start": 2243.24, "text": " I think these are the best samples we've seen today." }, { "end": 2247.1, "start": 2246.1, "text": " Maybe not." }, { "end": 2248.2799999999997, "start": 2247.1, "text": " I'm not sure." }, { "end": 2250.4, "start": 2248.2799999999997, "text": " Let's look at the interpolations quickly." }, { "end": 2254.3199999999997, "start": 2250.4, "text": " Yeah, this looks like interpolations." }, { "end": 2256.88, "start": 2254.3199999999997, "text": " I mean, if you squint." }, { "end": 2258.96, "start": 2256.88, "text": " Okay, this was it for coding." }, { "end": 2261.3199999999997, "start": 2258.96, "text": " See ya." }, { "end": 2269.1200000000003, "start": 2261.32, "text": " Now, GANs have come with encoders before or it much more looks like a variational auto" }, { "end": 2270.7200000000003, "start": 2269.1200000000003, "text": " encoder as well." }, { "end": 2273.6400000000003, "start": 2270.7200000000003, "text": " The difference here is we replace the encoder." }, { "end": 2277.2200000000003, "start": 2273.6400000000003, "text": " So this here is our encoder, right?" }, { "end": 2281.3, "start": 2277.2200000000003, "text": " This is our implicit encoder is simply gradient descent." }, { "end": 2284.56, "start": 2281.3, "text": " This has also been done before for GANs." }, { "end": 2289.84, "start": 2284.56, "text": " So people train GANs and then they try to find the latent representation by back propagating." }, { "end": 2296.08, "start": 2289.84, "text": " And some people even do this while training." }, { "end": 2302.8, "start": 2296.08, "text": " They do gradient descent and then either do or do not back prop through the GAN, through" }, { "end": 2304.8, "start": 2302.8, "text": " the gradient descent procedure." }, { "end": 2314, "start": 2304.8, "text": " So in a way or another, this is kind of sort of like those ideas, not saying it is equal." }, { "end": 2318.28, "start": 2314, "text": " And again, there could be like some special interaction because you actually back prop" }, { "end": 2322.1400000000003, "start": 2318.28, "text": " through both these things and there could be some special interaction because these" }, { "end": 2324.32, "start": 2322.1400000000003, "text": " are implicit neural networks." 
}, { "end": 2329.6000000000004, "start": 2324.32, "text": " However, I very much view these as two different things." }, { "end": 2335.7400000000002, "start": 2329.6000000000004, "text": " The cool, there is a rather cool derivation of that where you can say, okay, you can also" }, { "end": 2339.1200000000003, "start": 2335.7400000000002, "text": " use it as a classifier by basically doing this." }, { "end": 2342.88, "start": 2339.1200000000003, "text": " And now hope you can understand this much better." }, { "end": 2348.52, "start": 2342.88, "text": " So what we'll have is we'll have the classification loss for sample X is going to be your cross" }, { "end": 2351.6, "start": 2348.52, "text": " entropy loss between two things." }, { "end": 2352.96, "start": 2351.6, "text": " Okay." }, { "end": 2356.96, "start": 2352.96, "text": " Well, can you please go down again?" }, { "end": 2357.96, "start": 2356.96, "text": " Thanks." }, { "end": 2364.32, "start": 2357.96, "text": " So your cross your loss between two things is going to be the loss between your label" }, { "end": 2365.32, "start": 2364.32, "text": " Y." }, { "end": 2366.32, "start": 2365.32, "text": " So that's one thing." }, { "end": 2371.2000000000003, "start": 2366.32, "text": " And usually you have the feature, the logits on this side, right?" }, { "end": 2374.96, "start": 2371.2, "text": " Now you can see right here, you have an F that's probably that something that gives" }, { "end": 2378.52, "start": 2374.96, "text": " you the logits from your features." }, { "end": 2383.8799999999997, "start": 2378.52, "text": " And here your features aren't going to be the data point itself, but your features are" }, { "end": 2388.7599999999998, "start": 2383.8799999999997, "text": " going to be the Z variable that comes with the data point." }, { "end": 2392.46, "start": 2388.7599999999998, "text": " So basically you use this as a feature producer." }, { "end": 2399.12, "start": 2392.46, "text": " And the feature producer is made by again, minimizing this reconstruction loss." }, { "end": 2404.7999999999997, "start": 2399.12, "text": " Now I'm not sure this is going to work really well for classifiers because classifiers generally" }, { "end": 2407.6, "start": 2404.7999999999997, "text": " don't require you to reconstruct things." }, { "end": 2415, "start": 2407.6, "text": " And we know this, you know, people try to, this is like you were to have a variational" }, { "end": 2421, "start": 2415, "text": " autoencoder and then simply use that encoder as a feature producer for a classifier, which" }, { "end": 2422.96, "start": 2421, "text": " generally doesn't work very well." }, { "end": 2426.7999999999997, "start": 2422.96, "text": " But you know, you can, you can do it right here." }, { "end": 2433.1200000000003, "start": 2426.8, "text": " And the cool thing is that you can actually use the implicit representation network F" }, { "end": 2438.6400000000003, "start": 2433.1200000000003, "text": " to give you features for the entire data sample Z." }, { "end": 2443.5600000000004, "start": 2438.6400000000003, "text": " So you're kind of freed from the coordinate representation here and you get kind of a" }, { "end": 2447.7000000000003, "start": 2443.5600000000004, "text": " latent vector back." }, { "end": 2453.1600000000003, "start": 2447.7000000000003, "text": " So this is how you would use an implicit neural network in order to do classification." 
}, { "end": 2457.48, "start": 2453.16, "text": " That's I think, you know, pretty, pretty cool derivation of this." }, { "end": 2463.72, "start": 2457.48, "text": " So here they make some empirical claims, which I don't, I don't want to go too much into," }, { "end": 2467.7599999999998, "start": 2463.72, "text": " but there are certain advantages, certain practical advantages of doing things like" }, { "end": 2468.7599999999998, "start": 2467.7599999999998, "text": " this." }, { "end": 2474.52, "start": 2468.7599999999998, "text": " Like you can have very, very few parameters to represent an entire set of data." }, { "end": 2481.04, "start": 2474.52, "text": " The interpolations here work nicely as you can see." }, { "end": 2486.7599999999998, "start": 2481.04, "text": " And I think generally they make the claim that this trains fast and you can see after" }, { "end": 2492.64, "start": 2486.7599999999998, "text": " three seconds, it already has a lot of information about the data set and it does some sensible" }, { "end": 2494.12, "start": 2492.64, "text": " things." }, { "end": 2495.62, "start": 2494.12, "text": " Okay." }, { "end": 2505.72, "start": 2495.62, "text": " So the code is available and in fact, I'll probably enter, inter parse into this video" }, { "end": 2509.66, "start": 2505.72, "text": " a let's actually test our hypotheses, right?" }, { "end": 2511.48, "start": 2509.66, "text": " Let's test these hypotheses that I said." }, { "end": 2516.2, "start": 2511.48, "text": " So first hypothesis is probably we can start with something else than the constant zero" }, { "end": 2521.8999999999996, "start": 2516.2, "text": " and second hypothesis is we can probably improve by doing multiple steps of gradient descent" }, { "end": 2523.68, "start": 2521.8999999999996, "text": " in the inner loop." }, { "end": 2528.68, "start": 2523.68, "text": " Yes, I, this might be somewhere in this video." }, { "end": 2531.72, "start": 2528.68, "text": " And if not, it comes at the end like right now." }, { "end": 2532.72, "start": 2531.72, "text": " Okay." }, { "end": 2533.72, "start": 2532.72, "text": " So I'll see you next time." }, { "end": 2540.48, "start": 2533.72, "text": " Bye bye." } ]
x6T1zMSE4Ts
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
NVAE: A Deep Hierarchical Variational Autoencoder (Paper Explained)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "gan", "vae", "kl", "elbo", "autoencoder", "variational", "latent", "sampling", "hierarchical", "scales", "faces", "mnist", "cifar10", "swish", "batch norm", "generative", "nvidia", "mixed precision", "memory", "deep", "layers", "depthwise convolutions", "cnn", "convolutional", "generation", "generative model" ]
VAEs have been traditionally hard to train at high resolutions and unstable when going deep with many layers. In addition, VAE samples are often more blurry and less crisp than those from GANs. This paper details all the engineering choices necessary to successfully train a deep hierarchical VAE that exhibits global consistency and astounding sharpness at high resolutions. OUTLINE: 0:00 - Intro & Overview 1:55 - Variational Autoencoders 8:25 - Hierarchical VAE Decoder 12:45 - Output Samples 15:00 - Hierarchical VAE Encoder 17:20 - Engineering Decisions 22:10 - KL from Deltas 26:40 - Experimental Results 28:40 - Appendix 33:00 - Conclusion Paper: https://arxiv.org/abs/2007.03898 Abstract: Normalizing flows, autoregressive models, variational autoencoders (VAEs), and deep energy-based models are among competing likelihood-based frameworks for deep generative learning. Among them, VAEs have the advantage of fast and tractable sampling and easy-to-access encoding networks. However, they are currently outperformed by other models such as normalizing flows and autoregressive models. While the majority of the research in VAEs is focused on the statistical challenges, we explore the orthogonal direction of carefully designing neural architectures for hierarchical VAEs. We propose Nouveau VAE (NVAE), a deep hierarchical VAE built for image generation using depth-wise separable convolutions and batch normalization. NVAE is equipped with a residual parameterization of Normal distributions and its training is stabilized by spectral regularization. We show that NVAE achieves state-of-the-art results among non-autoregressive likelihood-based models on the MNIST, CIFAR-10, and CelebA HQ datasets and it provides a strong baseline on FFHQ. For example, on CIFAR-10, NVAE pushes the state-of-the-art from 2.98 to 2.91 bits per dimension, and it produces high-quality images on CelebA HQ as shown in Fig. 1. To the best of our knowledge, NVAE is the first successful VAE applied to natural images as large as 256×256 pixels. Authors: Arash Vahdat, Jan Kautz Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://discord.gg/4H8xxDF BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher Parler: https://parler.com/profile/YannicKilcher
Alright, hi there. Have a look at these faces right here. By now you're probably used to seeing computer-generated faces of really high quality, but usually those faces come from a generative adversarial network. The faces right here, however, are from a variational autoencoder. Now, variational autoencoders are fundamentally different from GANs, and traditionally they've been a bit harder to scale up to high-resolution images with very detailed, sharp output. This paper attempts to build such a VAE for these high-resolution, large datasets, and it basically details everything you need to do to get a VAE like this. The paper is called NVAE (short for Nouveau VAE; I don't know how to pronounce that), a deep hierarchical variational autoencoder, by Arash Vahdat and Jan Kautz of NVIDIA. As I said, on a high level this paper is about how to build a deep hierarchical variational autoencoder. It is a combination of already existing techniques combined in a clever way, plus a listing of all the engineering efforts you need to actually make this work. And there is not one thing where you can say: ah, this is the thing that really made it work. Rather, each of these techniques stacks and stacks until they reach a model that surpasses the state of the art on these datasets, and they are also able to apply it to an entirely new high-quality image dataset. These, again, are some of the samples from that model, and as you can see they look very crisp, very sharp, and also very, let's say, real. So, really briefly: variational autoencoders. This paper attempts to build a variational autoencoder; what is that? You need to start with what an autoencoder is. For an autoencoder, say you have an image dataset. You take an image and you train a model that consists of an encoder, which maps your image to a lower-dimensional, compressed space that you call the latent space Z, and a decoder, which you train to go from the latent space back to the image space. You then train those two models such that the distance between the output and the input is minimized. This is called the reconstruction loss, and you train the encoder and the decoder to minimize it, hoping that the latent space will thereby learn something about the data. Now, an advanced, probabilistic version of this is the variational autoencoder, where we say: we don't want the encoder to just output the latent code directly; we interpret this in a probabilistic fashion. The encoder becomes a probabilistic function that outputs a distribution over latent codes. We take our same image and, in a basically Bayesian way of thinking about it, we want a distribution over latent codes corresponding to that image. So our encoder here is not going to output Z; it's going to output mu and sigma. It would be ideal if it could output an entire arbitrary distribution, but we make the assumption that it is a normal distribution, so the encoder outputs the mean and the standard deviation of that normal distribution. Now, how are you going to feed this into the decoder? If you just feed mu, you are back to the normal autoencoder, so that doesn't work.
What you do instead is actually instantiate that normal distribution with the mu and the sigma: you plug them in, you sample one sample from that normal distribution, and you feed that sample into your decoder. The decoder again outputs an image from that sample, you compare this with the reconstruction loss, and now you train the entire process, encoder and decoder, to reproduce these images correctly. Now, if you only do that, the model will basically regress to a standard autoencoder. Why is that? Well, estimating a distribution is harder than estimating just the latent code, at least for the training dataset. So if you don't pay attention, what the encoder is going to do is say: oh well, if I just make the standard deviation as small as possible, like 10 to the minus 10, a very small number, then that normal distribution will basically be a spike around my mean. This will always give essentially the same Z, so it won't be a distribution at all; it will just be a Dirac, and I'm back to the original autoencoder, which I don't want. I want my probabilistic framework so that I can compute likelihoods and so on; there are various advantages to having a probabilistic view of the data rather than just a model that produces it. And that's why in a VAE there is not only the reconstruction objective but also a second objective, where we impose a regularization: the encoder's distribution should be as close as possible to a standard normal distribution. I guess you could choose the prior differently, but regularly you say: okay, encoder, I don't want you to go too far away from a standard normal distribution; do what you have to do to make the loss small, but don't stray too far. That's the balance in the VAE. And as you can imagine, if you have a normal distribution and you sample these Z vectors, then inputting the same x a bunch of times gives you different Z's: Z1, Z2, Z3, because there is a sampling procedure right here. So if your decoder is reasonably smooth, it will output different images, and these images will always be compared to that same input image. You're training the whole architecture to reconstruct that one input image from several different latents; there is an interaction there, I think (I'm not an expert on VAEs). And the reconstruction loss here is usually something like the L2 loss. So in terms of how this affects the images: if I have different outputs that are sort of the same but sort of different, and each of them has to be L2-close to the same target image, then one option I have is to make them all kind of blurry, because blurring lowers the expected L2 penalty. I believe that's one of the explanations I've heard for why VAEs usually produce blurry images, and this has been a problem for a long time: everything comes out kind of blurry. So here, the hierarchical VAE comes to the rescue.
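Before moving on, to make the two objectives concrete, here is a minimal sketch of a plain, non-hierarchical VAE training loss in PyTorch. The encoder and decoder signatures are assumptions for illustration, not NVAE's actual code.

```python
import torch

def vae_loss(encoder, decoder, x):
    mu, log_sigma = encoder(x)       # the encoder outputs a distribution, not a code
    eps = torch.randn_like(mu)
    z = mu + log_sigma.exp() * eps   # reparameterized sample z ~ N(mu, sigma^2)
    x_hat = decoder(z)

    recon = ((x_hat - x) ** 2).sum()  # reconstruction term (L2 here)
    # KL( N(mu, sigma^2) || N(0, 1) ): the regularizer that stops the encoder
    # from collapsing sigma -> 0 back into a plain autoencoder.
    kl = 0.5 * (mu ** 2 + (2 * log_sigma).exp() - 2 * log_sigma - 1).sum()
    return recon + kl
```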
So how are they going to battle this problem? By building a hierarchical variational autoencoder, and this is how it works. By the way, once you've trained your VAE, you can simply sample from this prior here, because it's close enough to the standard normal (or you can, I guess, learn the prior of your data distribution, and so on), and just use the generative part, the decoder right here, to produce images. So you can sample from a VAE like you would sample from a GAN. Okay, so here we'll look at a model that can combat those problems. On the right side you can see the model that you would ultimately sample from: this is going to be your decoder, the generative model right here. What they do is very similar to NVIDIA's progressive GAN paper (ProGAN, I think), which operates on different scales and which, I believe, was the first to introduce these high-quality face datasets, at least. Here we're going to do a very similar trick. The idea is that we start out with a learned quantity (you can also view it as just a kind of zero vector) and we sample our noise, where the noise is shaped like an image; we can do that, we can reshape vectors into images. So conceptually it's just a small grid of noise, say 16 by 16, though I think they actually start with eight by eight or something like this. We sample noise like that, and then a neural network, a residual neural network, produces an image out of that noise; it maps the noise to the image. So this is your decoder part. But then you're not done. You would then upscale that image (that can happen inside the neural network, I believe, or it is upsampled from the beginning), you enlarge these things, you upscale your network, and you go higher and higher in the hierarchy of noises. So this is a hierarchical model. Down here they describe it: the model consists of 36 groups of latent variables, starting from 8 by 8 and scaled up to 128 by 128, with two residual cells per latent variable group. So you continuously scale up your images, and each time you add another bunch of these noises. That means the uppermost residual model can capture the coarse details of the image. That part is going to be blurry, because it's a VAE, but it's going to be blurry at that coarse scale. Then you upsample it and let another model add the next layer of features on top of that; you can see this is kind of a residual connection right here. You sample again and let yet another neural network add more features at a higher resolution. So even though each VAE stage can be blurry at its own scale, the output gets upscaled and additional details get added at every level, and that's why in their samples you'll see they're not super blurry at all.
Though I have to say something right here: if you look at these images and compare them (later in the paper they're compared to previous methods, shown down there), the previous methods are clearly worse in certain ways. The symmetries of the faces aren't really preserved, there are no long-range dependencies, and the hair details are often missing, whereas NVAE's samples are pretty crisp. But if you look at the skin of these people, and at the image composition and the shadows, it looks like the people are cardboard cutouts, like there are multiple flat layers. Am I the only one who sees this? It's like a plastic cutout, and then the face is again a plastic cutout, and the faces are so smooth. I mean, look at this; these are almost too pretty. You can just look at this for hours. Maybe it only seems like this to me, but if you look at the skin and the color, the bottom ones almost feel like actual photographs, while these faces all look like porcelain. This might actually be an effect of the VAE: the lines and edges are not blurry, but the skin texture might sit just one scale too fine, and that's where we now see the blurriness. Or it might just be me; I have no idea. It was simply popping out to me as the main difference: the samples are much more crisp and beautiful, but they also look like puppets. Alright, let's get back to the model, because once we've decided that we want such a hierarchical model, what we need to do is simply build a VAE for each of these hierarchy levels. The uppermost level here is a regular VAE: we have a noise distribution, we sample from it, and we generate this particular scale of the image. So how do we get that noise? This down here is our encoder, and this is our decoder. We have our encoder, a series of neural networks, and we get our latent encoding Z1 through the usual VAE encoding method. Now the interesting part is how we get Z2. You might just think, well, we'll go one layer up, but Z2, as you can see here, depends on Z1 during sampling. So during inference we also have to make Z2 depend on Z1, and that's why we first need to go to Z1 and actually produce a sample. Our method of inferring the latent codes already includes sampling from those latent codes: you sample, and you do the same thing as you would do on the right (in fact, these models are shared), and then Z2 depends on Z1 in this procedure, because you go here, and here, and here. Z3 in turn would depend on Z2 and Z1, and you have a properly hierarchically factorized model. So this is called a hierarchical VAE; it pretty much works like a VAE, except that it is hierarchical, and you need to have this bottom-up and this top-down model in your encoder.
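In code, the top-down generative pass described here looks roughly like the following. This is a schematic sketch: the per-scale prior networks, the decoder blocks, the upsample step, and the way z is combined with the running features are all assumptions standing in for NVAE's actual implementation.

```python
import torch

def sample_hierarchical(priors, blocks, upsample, h0):
    # h0 is a learned constant input at the coarsest scale (e.g. 8x8).
    h = h0
    for prior, block in zip(priors, blocks):
        mu, log_sigma = prior(h)   # p(z_i | z_<i): conditioning flows through h
        z = mu + log_sigma.exp() * torch.randn_like(mu)
        h = block(h + z)           # residual-style combination (illustrative)
        h = upsample(h)            # move on to the next, finer scale
    return h                       # the final feature map gets decoded to pixels
```

During training, the encoder's bottom-up pass would supply the matching q distribution at each step, which is exactly where the dependency of Z2 on a sampled Z1 shows up.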
And so now there are a bunch of questions with respect to this hierarchical VAE. The problem is that you have not only one sampling procedure but sampling procedure upon sampling procedure upon sampling procedure, and I guess this can get unstable pretty quickly. So the rest of the paper is about how to get this to work. I think one of the main ingredients they use are residual connections. We know that residual connections act as a sort of gradient-flow highway for training very deep networks; we've seen this with residual networks in CNNs, where you have an input, some computation in the form of a neural network (or, in this case, a sampling procedure through a distribution), and an output, and the residual connection allows you to skip part of that. As you can see, residual connections are used here in both the encoders and the decoders. In fact, you always take the lower scale, and you don't transform it into the upper scale directly; you sample noise, and then you add the lower scale and the new part together. So it's really an additive model in a hierarchical fashion. Okay, the pluses might not be purely additive; they could also be more general combinations, I might be wrong there. In any case, they use residual cells in a lot of places in their generator and in their encoder: you can see right here there is a residual cell for the generative model and a residual cell for the encoder. As for the exact design of these residual cells, you can see they use batch norm, then one-by-one convolutions to go up to a higher channel count before doing the depthwise-separable five-by-five convolutions. Five by five because they need a larger receptive field; they make that clear. However, a large receptive field means many parameters, which would make the model too big and take too much memory. So they use depthwise-separable convolutions, which simply means you don't mix the channels during the spatial convolution: you go up in channels, do a depthwise convolution, and go down in channels again. All of these are kind of hacks to make it work. They also use batch norm and a Swish nonlinearity, as you can see here, and likewise in the encoder. In the text they stress the importance of the ordering: first the batch norm and then the convolution works better than the other way around, and so on. There is a lot of engineering work in here. You even have to hack the batch norm itself, because batch norm has these running statistics, and people have observed that in VAEs, if during inference and sampling you use the training-mode behavior, where you normalize only within the batch, it works better than using the running averages. So they modify the momentum parameter of batch norm such that the running statistics can catch up faster with the batch statistics. There's a lot of engineering in here; a lot of things you have to get right to get something like this to work, apparently.
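Here is roughly what such a generative residual cell looks like, following the described ordering (batch norm first, then convolution) and the expand / depthwise / contract pattern. The expansion factor and the exact layer order are read off the paper's figure as best I can, so treat the details as approximate rather than as NVAE's exact code.

```python
import torch.nn as nn

class SqueezeExcite(nn.Module):
    def __init__(self, c, r=16):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(c, max(c // r, 1), 1), nn.ReLU(),
            nn.Conv2d(max(c // r, 1), c, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.gate(x)  # channel-wise reweighting

class GenerativeCell(nn.Module):
    def __init__(self, c, expand=6):
        super().__init__()
        ce = c * expand
        self.net = nn.Sequential(
            nn.BatchNorm2d(c),
            nn.Conv2d(c, ce, 1),              # 1x1: go up in channels
            nn.BatchNorm2d(ce), nn.SiLU(),    # SiLU is PyTorch's Swish
            nn.Conv2d(ce, ce, 5, padding=2, groups=ce),  # depthwise 5x5:
            nn.BatchNorm2d(ce), nn.SiLU(),    # large receptive field, few params
            nn.Conv2d(ce, c, 1),              # 1x1: back down in channels
            nn.BatchNorm2d(c),
            SqueezeExcite(c),
        )

    def forward(self, x):
        return x + self.net(x)  # residual connection around the whole cell
```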
And yeah, the paper just goes on in this style. You can see they use the Swish activation, and they use squeeze-and-excitation blocks, which are another form of residual block that was introduced quite a long time ago but is still being used. So that's the architecture: residual cells there and there, reducing the memory requirements. They say they use two tricks to reduce memory. First, they do mixed-precision training using a new NVIDIA library (given that they're from NVIDIA, they get to try these things out first). Second, they fuse batch norm and Swish and store only one feature map for the backward pass instead of two, recomputing the rest. This trick is known as gradient checkpointing and requires recomputing batch norm in the backward pass. Honestly, I believe future deep learning frameworks should just take care of that for you instead of you having to do this kind of stuff. They also say they need to tame the unbounded KL term. The KL term is what makes the distribution the encoder outputs close to the distribution you want; it's the regularization term. You can see here it's a KL divergence between q, which is what your encoder outputs (the distribution over the latent code for the image x), and your prior, which you say should be a normal distribution; in this case, a hierarchical normal distribution. And they have a special characterization here, because it's hierarchical: I'm going to have a hierarchy of normal distributions. This is my top level, and I sample one sample right here; then in the next layer I have a normal distribution around that sample, and I sample from that, and so on. So in my hierarchical prior, the distribution at each layer always depends on the sample from the layer above. Now, for the encoder to match this, it has to produce a z for the first layer, then a z for the second layer, and so on, and each must match its corresponding distribution. If it doesn't match the first distribution exactly and samples somewhere slightly off, then the base of the next distribution is already shifted: the distribution to match is now a shifted normal distribution. And that's why their encoder only outputs the delta to the prior, as you can see here. They define q of z in a given layer as a normal distribution whose mean is mu_i, the prior's mean for that layer, plus a delta mu, and whose sigma is the prior's sigma times a delta sigma that you output. So you're saying: you're not supposed to output the actual distribution, you're supposed to output the difference of your distribution to the prior. In layer 0 that's the same thing, because the prior is zero-mean and unit-variance, so the shift here will be zero and the scale here will be one.
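One nice consequence of this residual parameterization is that the prior's mean cancels out of the KL term entirely. For diagonal Gaussians with q = N(mu_i + delta_mu, (sigma_i * delta_sigma)^2) and p = N(mu_i, sigma_i^2), the per-dimension KL works out to 0.5 * (delta_mu^2 / sigma_i^2 + delta_sigma^2 - 2 * log(delta_sigma) - 1). A sketch (the tensor names are mine, not NVAE's):

```python
import torch

def residual_kl(delta_mu, log_delta_sigma, sigma_p):
    # KL( N(mu_p + dmu, (sigma_p * dsigma)^2) || N(mu_p, sigma_p^2) ).
    # mu_p drops out entirely: the encoder only needs to keep its *relative*
    # shift and scale small, which is what stabilizes deep hierarchies.
    delta_sigma_sq = (2 * log_delta_sigma).exp()
    return 0.5 * ((delta_mu / sigma_p) ** 2
                  + delta_sigma_sq
                  - 2 * log_delta_sigma
                  - 1).sum()
```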
But in all the upper layers this makes it easier; that's one trick to keep the repeated sampling from hurting you as much. The other trick they employ is spectral regularization, a regularization where you penalize the top singular value per layer. You can compute that with power iteration; people have done this before. And you can also build in some normalizing flows. As it stands, if we sample the different latents within a level, we sample all of them at once: they depend on the upper layers in the hierarchy, but not on each other. If we introduce a flow, we basically make them all connected to each other and model a single joint distribution over them. But I don't want to go too much into this, because it doesn't gain that much; they say you can just build it in if you want. Okay, so these are all the things that they at least list in the method section; there are a lot more they had to do. But ultimately, as you can see right here, on four of these five datasets they achieve state of the art. In fact, on this one dataset no one else has tried, but at least on the other datasets they are very competitive, as you can see right here. They compare this, first of all, to other models, both with and without autoregressive flows, and they come pretty close to the autoregressive models. An autoregressive model would be one that generates one pixel at a time, conditioned on the other pixels. This model doesn't do that; it generates all pixels at once, so it's not autoregressive. But as you can see, it beats all the other non-autoregressive models, and it gets pretty close to the best autoregressive models, which are down here. They are still better, but the gap is shrinking, is what they say. Cool, so that's the main result. Then they have ablations, where basically, as I said, all of these things contribute a little bit each to building this bigger and bigger and deeper variational autoencoder. So it's hard to say what exactly makes this work, because all of it makes it work, and I guess they just kept going until they beat state of the art, or until they ran out of tricks. Again, these are the samples that we looked at, and I do want to spend some time in the appendix right here, because I think what they do there is pretty interesting. First of all, they show that their model doesn't simply remember the training samples: these here are always the nearest neighbors from the training set, and the model's samples are fairly far away from them. Though, maybe it's just me, but the ones on the left just look like more idealized humans, very smooth, like designer babies. They also show that if you use batch norm as you would regularly, where you keep the running statistics from training, you get into a degenerate case when you sample at lower temperatures. The temperature you sample at describes the width of the Gaussian you ultimately sample from. If you do sample at low temperature, they have this method to readjust the batch norm statistics, which I don't want to go into here, but you can read it up; it basically fixes the problem.
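The largest singular value per layer can be estimated cheaply with power iteration, which is the standard way this kind of penalty is implemented. A generic sketch; the penalty weight and the bookkeeping that keeps u persistent across training steps are left to the caller, and none of this is NVAE's exact code.

```python
import torch
import torch.nn.functional as F

def spectral_penalty(weight, u, n_iter=1):
    # weight: a 2D parameter; conv kernels are reshaped to (out_channels, -1).
    # u: current estimate of the top left singular vector.
    w = weight.reshape(weight.size(0), -1)
    with torch.no_grad():
        for _ in range(n_iter):
            v = F.normalize(w.t() @ u, dim=0)  # power-iteration updates
            u = F.normalize(w @ v, dim=0)
    sigma = u @ w @ v  # approximate top singular value, differentiable in w
    return sigma       # the training loss adds lam * (sum over all layers)
```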
This low-temperature batch norm issue is apparently a problem that other people have observed as well, and their method is one that manages to fix it. Lastly, there are some more samples right here. And this right here is honestly one of the most interesting parts. Since they have this hierarchical model, z1 gives you an image, then there's z2, then z3, and so on, and you continuously upscale and hierarchically add the features. Here they ask: what happens if we sample z1 once, fix it, and then only sample the other latents conditioned on z1? Where you see "top scale fixed", you can see there is considerable variation in the image, but no really large-scale variation. The general face stays constant, but details change: the hair kind of moves over the image, the color changes, and the mouth looks slightly different, as far as I can see, though I might be imagining things there. And if you progressively fix the top two, three, or four scales, there are fewer and fewer, and ever smaller, details that change. As the paper puts it: they operate at five scales, starting from 8 by 8 up to 128 by 128; in each row they fix the samples at a number of top scales and sample from the rest of the hierarchy. The long-range global structure is mostly captured at the top of the hierarchy, in the 8-by-8 groups. The second scale applies some global modifications, such as changing the eyes, hair color, skin tone, and the shape of the face. The bottom groups capture mostly low-level variations; however, the lowest scale can still make some subtle long-range modifications. For example, the hair color is slightly modified when sampling only from the lowest scale in the last row, which is potentially enabled by the larger receptive field of their depthwise-separable residual cell. I don't know, the hair color changes slightly, maybe; my eyes have seen too many faces today. But what's certainly the case is that their model exhibits much better global unity compared to these other samples, where you can pretty clearly see that the different sides of the faces have little to do with each other, and so on. And this is the benefit you get from doing this hierarchically: you have one part of your model that's responsible for the global shape of the image and keeps it consistent, and other parts that are responsible for the details. Okay, so I hope this was something that interested you. As I said, it's an engineering paper; there are lots of things described, and there is not one single jumping idea. I guess residual connections are pretty important, and these depthwise convolutions save memory, but all of the other things you have to do to build something like this are pretty interesting too. Yeah, I hope you gained something from it, and I'll see you next time.
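As a footnote to the scale-fixing experiment above, that procedure is easy to express with the schematic sampler sketched earlier; again, the function names are mine and this is not NVAE's API.

```python
import torch

def resample_below(priors, blocks, upsample, h0, fixed_z, n_fixed):
    # Reuse stored latents for the first n_fixed scales, sample the rest:
    # global structure stays fixed while the finer details vary.
    h = h0
    for i, (prior, block) in enumerate(zip(priors, blocks)):
        mu, log_sigma = prior(h)
        if i < n_fixed:
            z = fixed_z[i]  # frozen top-of-hierarchy latent
        else:
            z = mu + log_sigma.exp() * torch.randn_like(mu)
        h = upsample(block(h + z))
    return h
```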
[ { "end": 5.6000000000000005, "start": 0, "text": " Alright, hi there. Have a look at these faces right here. So you're probably used by now to seeing" }, { "end": 11.200000000000001, "start": 5.6000000000000005, "text": " computer-generated faces of really high quality, but probably you're used to seeing these faces" }, { "end": 17.64, "start": 11.200000000000001, "text": " coming from a generative adversarial network. However, these faces right here are from a" }, { "end": 23, "start": 17.64, "text": " variational autoencoder. Now, variational autoencoders are fundamentally different than GANs," }, { "end": 29.64, "start": 23, "text": " and traditionally they've been a bit harder to scale up to high-resolution images and give sort" }, { "end": 37.6, "start": 29.64, "text": " of very detailed, sharp output. This paper right here attempts to build such a VAE for these high" }, { "end": 45.64, "start": 37.6, "text": " resolution large data set. And it basically details everything you need to do to get a VAE like this." }, { "end": 51.92, "start": 45.64, "text": " So the paper is called NVAE or NVAE, I don't know how to pronounce that, a deep hierarchical" }, { "end": 58.2, "start": 51.92, "text": " variational autoencoder by Arash Wadat and Jan Kautz of NVIDIA. As I said, on a high level," }, { "end": 65.64, "start": 58.2, "text": " this paper is about how to build a deep hierarchical variational autoencoder, which is sort of a" }, { "end": 72.12, "start": 65.64, "text": " combination of already existing techniques combined in a clever way, and then listing all" }, { "end": 78.60000000000001, "start": 72.12, "text": " the engineering efforts that you need to do to actually make this work. And there is not one" }, { "end": 83.12, "start": 78.60000000000001, "text": " thing where you can say, ah, this is the thing that really made it work. But each of these" }, { "end": 90.86, "start": 83.12, "text": " techniques is going to stack and stack and stack until they reach a model that surpasses the state" }, { "end": 96.80000000000001, "start": 90.86, "text": " of the art on these data sets. And they are also able to apply this to an entirely new high quality" }, { "end": 103.4, "start": 96.80000000000001, "text": " image data set. So these again are some of the samples from that model. And as you can see," }, { "end": 114.92, "start": 103.4, "text": " they look very, very crisp, very sharp, and also very, let's say, real. Yeah. So really briefly," }, { "end": 121.04, "start": 114.92, "text": " variational autoencoders. So this paper attempts to build a variational autoencoder. What is it?" }, { "end": 126.28, "start": 121.04, "text": " For that, you need to start with what an autoencoder is. So an autoencoder traditionally," }, { "end": 131.84, "start": 126.28, "text": " let's say you have an image data set, and you take an image and you train a model that consists of" }, { "end": 138.12, "start": 131.84, "text": " an encoder that maps your image to a lower dimensional space, a compressed space, which" }, { "end": 145.68, "start": 138.12, "text": " you call the latent space Z. And then you train a decoder to, again, go from the latent space back" }, { "end": 152.76, "start": 145.68, "text": " to the image space. And then you train those two models such that the distance between the output" }, { "end": 159.88, "start": 152.76, "text": " and the input is minimized. Okay, this is called the reconstruction loss. 
And you train the encoder" }, { "end": 166.12, "start": 159.88, "text": " and the decoder to minimize that reconstruction loss. And thereby, you hope that this latent space" }, { "end": 173.56, "start": 166.12, "text": " will learn something about the data. Now, a sort of advanced version of this and a probabilistic" }, { "end": 179.48, "start": 173.56, "text": " version of this is the variational autoencoder, where we say, what we want to do is we don't want" }, { "end": 187.51999999999998, "start": 179.48, "text": " the encoder to just output directly the latent code, but we interpret this in a probabilistic" }, { "end": 193.96, "start": 187.52, "text": " fashion. So the encoder is now a probabilistic function that outputs a distribution over latent" }, { "end": 200.8, "start": 193.96, "text": " codes. So we take our same image. And what we want to do is we want a Bayesian, basically," }, { "end": 205.76000000000002, "start": 200.8, "text": " it's a Bayesian way of thinking of it, we want a distribution over latent codes corresponding to" }, { "end": 213.64000000000001, "start": 205.76000000000002, "text": " that image. So our encoder here is not going to output Z, but it's going to output mu and sigma." }, { "end": 218.48, "start": 213.64, "text": " So it would be ideal if you could output an entire distribution, but we're going to make" }, { "end": 223.48, "start": 218.48, "text": " some assumptions here that that is a normal distribution. And it's going to output the mean" }, { "end": 231.67999999999998, "start": 223.48, "text": " and the standard deviation of that normal distribution. And then you actually, because" }, { "end": 236.64, "start": 231.67999999999998, "text": " now you how you're going to feed this into the decoder, if you just feed mu, you are back to the" }, { "end": 242.32, "start": 236.64, "text": " normal autoencoder. So that doesn't work. What you do is you actually instantiate that normal" }, { "end": 249.32, "start": 242.32, "text": " distribution with the mu and the sigma. So you plug that in here, you sample one sample from that" }, { "end": 256.76, "start": 249.32, "text": " normal distribution. And then you feed that sample into your decoder. Again, your decoder outputs" }, { "end": 262.92, "start": 256.76, "text": " an image from that sample. And you compare this with the reconstruction loss. And now you train" }, { "end": 273.72, "start": 262.92, "text": " the entire process. So you train the encoder and the decoder to reproduce these images correctly." }, { "end": 281.88, "start": 273.72, "text": " Now, if you only do that, then the model will basically regress to a standard autoencoder." }, { "end": 287.64, "start": 281.88, "text": " Why is that? Well, what's pretty easy for the... You can see that estimating the distribution is" }, { "end": 294.91999999999996, "start": 287.64, "text": " harder than estimating just the latent code, at least for the training data set, right? So if you" }, { "end": 300.91999999999996, "start": 294.91999999999996, "text": " don't pay attention, what's going to what the encoder is going to do is it's going to say," }, { "end": 309.32, "start": 300.91999999999996, "text": " oh, well, if I just make this here, my latent code, and if I just make this as small as possible," }, { "end": 318.92, "start": 309.32, "text": " like zero, or like one to the minus 10, 10, that's still one, 10 to the minus one. That's not that" }, { "end": 327.24, "start": 318.92, "text": " small. 
10 to the minus 10, 11, 12, okay, a very small number, then that normal distribution will" }, { "end": 337.8, "start": 327.24, "text": " basically be just spiky around the thing around my mean. And so this here will always be kind of the" }, { "end": 344.6, "start": 337.8, "text": " same Z. So it won't be a distribution at all. It will just be this dirac. And I'm back to the" }, { "end": 349.8, "start": 344.6, "text": " original autoencoder, which I don't want. I want my probabilistic framework so I can compute" }, { "end": 354.28000000000003, "start": 349.8, "text": " likelihoods and so on. There are various advantages to having a probabilistic view" }, { "end": 362.2, "start": 354.28000000000003, "text": " of the data rather than just a model that produces it. Okay, and that's why in a VAE," }, { "end": 367.96, "start": 362.2, "text": " there is not only the objective, not only this objective, the reconstruction objective, but there" }, { "end": 376.76, "start": 367.96, "text": " is a second objective where we say that we impose a regularization. And the regularization is that" }, { "end": 386.52, "start": 376.76, "text": " this here is as close as possible to a standard normal distribution. And I guess you can choose" }, { "end": 393.4, "start": 386.52, "text": " that the prior but in regularly you say, okay, this here, I don't want you encoder, I don't want" }, { "end": 398.91999999999996, "start": 393.4, "text": " you to go far away from a standard normal distribution, like do what you have to do to" }, { "end": 406.76, "start": 398.91999999999996, "text": " make the loss small, but don't go away too far. Alright, so that's the kind of balance in the VAE." }, { "end": 412.91999999999996, "start": 406.76, "text": " And as you can imagine, if you have a normal distribution, and you sample these Z vectors here," }, { "end": 418.52000000000004, "start": 412.92, "text": " and the reconstruction loss is always the same. So if you input the same x here a bunch of times," }, { "end": 425.16, "start": 418.52000000000004, "text": " you'll get different Z's, right? You get Z1, Z2, Z3, because it's sampled from this distribution," }, { "end": 432.44, "start": 425.16, "text": " there's a sampling procedure right here. So if your discriminator here is kind of smooth," }, { "end": 439.16, "start": 432.44, "text": " then it will output different images. Now these images will always be compared to that same input" }, { "end": 446.28000000000003, "start": 439.16, "text": " image, right? So you're training this whole architecture to always reconstruct that input" }, { "end": 453.16, "start": 446.28000000000003, "text": " image from different images. So there is an interaction, I think, I think that's what's" }, { "end": 460.76000000000005, "start": 453.16, "text": " happening. I guess I'm not an expert on VAEs. But this here usually is something like the L2 loss." }, { "end": 467.72, "start": 460.76000000000005, "text": " So in terms of how this affects the images, if I have different images that are sort of the same," }, { "end": 475.72, "start": 467.72, "text": " but sort of different, and I have to make it L2 loss close to this image right here, then one" }, { "end": 483.40000000000003, "start": 475.72, "text": " option I have is to make them kind of blurry. So if I make all of them kind of blurry in the L2 loss," }, { "end": 490.20000000000005, "start": 483.40000000000003, "text": " that will give me a lower penalty. 
So that's, I believe that's one of the explanations I heard" }, { "end": 496.6, "start": 490.20000000000005, "text": " at some point why VAEs produce usually blurry images. And that's been a problem for a long time" }, { "end": 506.36, "start": 496.6, "text": " that everything's kind of kind of blurry. So here, the VA, the hierarchical VAE comes to the rescue." }, { "end": 513.88, "start": 507.32000000000005, "text": " So how they are going to battle this problem is by doing a hierarchical variational auto encoder." }, { "end": 519.64, "start": 513.88, "text": " And this is how it works. So you start off, this is your generator, by the way, once you've trained" }, { "end": 526.2, "start": 519.64, "text": " your VAE, right, once you've trained it, you can simply sample from your prior from this here," }, { "end": 531.4000000000001, "start": 526.2, "text": " because that's, you know, close enough to this, or you can, I guess, learn the prior of your data" }, { "end": 536.5200000000001, "start": 531.4000000000001, "text": " distribution, and so on. And you can just use the generator right here, the generative part," }, { "end": 543.88, "start": 536.5200000000001, "text": " this part right here is your generator in order to produce images. So you can sample from a VAE," }, { "end": 552.84, "start": 544.5200000000001, "text": " like you could sample from a GAN. Okay, so here, we'll look at a model that could combat those" }, { "end": 559.1600000000001, "start": 552.84, "text": " things. On the right side, you can see the model that you would ultimately sample from. So this is" }, { "end": 566.12, "start": 559.1600000000001, "text": " going to be your decoder, okay, this generative model right here. And what they do is it's very" }, { "end": 573.48, "start": 566.12, "text": " similar to if you, Nvidia also had this paper about this GAN, where they are on different scales," }, { "end": 579.24, "start": 573.48, "text": " like progressive, I think, prog GAN, which was the first that introduced actually this high quality" }, { "end": 585.96, "start": 579.24, "text": " face data sets, I believe, at least. So here, we're going to do a very, very similar trick." }, { "end": 593.32, "start": 585.96, "text": " So the idea is that we start out, this is a learned quantity, but you can also view it as" }, { "end": 598.84, "start": 593.32, "text": " just kind of the zero vector, we start out with our noise, we sample our noise, but our noise is" }, { "end": 604.6800000000001, "start": 598.84, "text": " going to be, it's going to be, let's say it's in the shape of an image, we can do that, we can" }, { "end": 613.0799999999999, "start": 604.68, "text": " reshape images, right. So it's going to just be a 16 entry vector. And it's going to be shaped like" }, { "end": 621, "start": 613.0799999999999, "text": " this, okay, we sample noise like this. And then we produce an image of 16 by 16 from it, I think" }, { "end": 627.4, "start": 621, "text": " they start with eight by eight or something like this. But in conceptually, you do that, then you" }, { "end": 633.24, "start": 627.4, "text": " have a neural network, this is a residual neural network produce an image out of that noise, right," }, { "end": 640.04, "start": 633.24, "text": " it maps the noise to the image. So this is your, this is your D, your discriminator part. But then" }, { "end": 646.6800000000001, "start": 640.04, "text": " you're not done. 
What you do is you would actually upscale that image, or that can happen in the" }, { "end": 653.08, "start": 646.6800000000001, "text": " neural network, I believe, or you are up sampled from the beginning, and you enlarge these things." }, { "end": 658.36, "start": 654.12, "text": " But what you would do is you would upscale your neural network." }, { "end": 669.5600000000001, "start": 658.36, "text": " And you go higher. And so on. So you go higher and higher and higher in the hierarchy of noises. So" }, { "end": 676.84, "start": 669.5600000000001, "text": " this is a hierarchical model. Oh, yeah, down here. So they start, they start from, as you can see," }, { "end": 684.9200000000001, "start": 677.4, "text": " it consists of 36 groups, in their case, of latent variables, starting from eight by eight," }, { "end": 695.4, "start": 684.92, "text": " scaled up to 128 to 128, with two residual cells per latent variable groups. Okay, so you continuously" }, { "end": 703, "start": 695.4, "text": " scale and scale and scale up your your your images, and each time you add another bunch of these" }, { "end": 711.56, "start": 703, "text": " noises right here. So that means that in this model, you can, the uppermost residual model can" }, { "end": 717.2399999999999, "start": 711.56, "text": " sort of get the coarse details of the image. And that's going to be blurry, because it's a VAE," }, { "end": 723, "start": 717.2399999999999, "text": " but it's going to be blurry in that coarse scale. And then you up sample it and you let another" }, { "end": 730.04, "start": 723, "text": " model add on top of that, the next layer of the next layer of features, you can see this is kind" }, { "end": 737.0799999999999, "start": 730.04, "text": " of a residual connection right here. And you, again, sample, and you let another neural network" }, { "end": 744.44, "start": 737.08, "text": " up sample, sorry, you know, let another neural network add more features in a higher resolution." }, { "end": 751.48, "start": 744.44, "text": " So even though each VAE can be blurry in its own scale, it will be upscaled and there," }, { "end": 758.6800000000001, "start": 752.84, "text": " there will be additional details added. And that's why in their samples, you will see that" }, { "end": 765, "start": 758.6800000000001, "text": " they're not super blurry at all. Though, I have to say something right here, if you look at" }, { "end": 772.92, "start": 765, "text": " these images, and you compare them, so later they compare them, they, like, they're almost look like" }, { "end": 780.44, "start": 772.92, "text": " puppets, right. So here you compare it to these are these are previous methods down there. Now," }, { "end": 787.56, "start": 780.44, "text": " you know, to say that they're pretty, they're, you can see they're clearly kind of worse in that you" }, { "end": 794.28, "start": 787.56, "text": " can hear the symmetry of the faces aren't really given also the symmetries. Here, you can see that" }, { "end": 800.6, "start": 794.28, "text": " there are no symmetries. Here, there are no long range dependencies. The hair details are often" }, { "end": 807.72, "start": 800.6, "text": " missing as compared to like here, this is pretty crisp. 
But if you look at like the skin of people," }, { "end": 812.76, "start": 807.72, "text": " and can just kind of the image composition in shadows, it looks like these, these people are" }, { "end": 819.9599999999999, "start": 812.76, "text": " like cardboard cutouts here, they have like these multiple layers where I mean, I'm I the only one" }, { "end": 825.8000000000001, "start": 819.96, "text": " that just sees this, this is like a plastic cutout. And then the face is again, like a plastic cutout" }, { "end": 833.8000000000001, "start": 825.8000000000001, "text": " and the faces are so smooth. I mean, look at this. These are like, too pretty. Like you can just look" }, { "end": 840.76, "start": 833.8000000000001, "text": " at this for hours. This is so like the diff. It maybe it just seems like this to me, but the" }, { "end": 848.6800000000001, "start": 840.76, "text": " difference if you kind of look at the skin and it almost feels like the bottom ones are actual real" }, { "end": 858.28, "start": 848.68, "text": " photographs in just in terms of the faces and the the kind of the color, just the smoothness is just" }, { "end": 865.8, "start": 858.28, "text": " all look like porcelain. This might actually be an effect of the VAE, right? Because it's not blurry," }, { "end": 873.56, "start": 865.8, "text": " right? As a you know, the the lines and so on. But the the skin texture might just be one one scale" }, { "end": 879.16, "start": 873.56, "text": " too much here. And that's where we now see the blurriness or it might just be that I don't I" }, { "end": 888.76, "start": 879.16, "text": " don't know. Okay, I have no idea. This just this just somehow whereas was popping out to me as the" }, { "end": 894.76, "start": 888.76, "text": " main difference, like they are much more crisp and so on and much more beautiful, but also they look" }, { "end": 905.16, "start": 894.76, "text": " like puppets. Yeah. Alright, so let's get back to the model right here because so once we decided" }, { "end": 910.2, "start": 905.16, "text": " that we want such a hierarchical model, what we need to do is we need to simply build a VAE for" }, { "end": 917.72, "start": 910.2, "text": " each of these hierarchies, right? So the the uppermost thing here is a regular VAE noise," }, { "end": 924.52, "start": 917.72, "text": " right? We have a noise, we sample from it, and we generate this particular scale of image." }, { "end": 930.1999999999999, "start": 924.52, "text": " Okay, so how do we get that noise? We simply this is this down here is our this is our encoder and" }, { "end": 936.68, "start": 930.1999999999999, "text": " this is our decoder. We simply have our encoder, this is a series of neural networks, and we get" }, { "end": 946.12, "start": 936.68, "text": " our latent encoding, right? So the Z is obtained through the the kind of VAE encoding method. Okay," }, { "end": 951, "start": 946.12, "text": " now the interesting part is how do we get Z two, and you might just think, well, we'll just go" }, { "end": 957.56, "start": 951, "text": " like one layer up here, but Z two, as you can see here, it depends on Z one during sampling." }, { "end": 963.16, "start": 957.56, "text": " So during inference, we have also have to have that Z two depends on Z one. And that's why we" }, { "end": 969.96, "start": 963.16, "text": " first need to go to Z one and actually produce a sample. 
So our method of inferring the latent" }, { "end": 979.08, "start": 969.96, "text": " codes includes already sampling from those latent codes, right? So you sample, and you do the same" }, { "end": 985.24, "start": 979.08, "text": " thing as you would do in the right. In fact, these models are shared. And then you can see that Z two" }, { "end": 993.1600000000001, "start": 985.24, "text": " now depends on Z one in this procedure, because you go here, and you go here and here. So Z two" }, { "end": 999.8000000000001, "start": 993.1600000000001, "text": " depends on Z one, Z three, in turn would depend on Z two and Z one. And you have a properly" }, { "end": 1008.2, "start": 999.8000000000001, "text": " hierarchically factorized model right here. Okay, so this, this is called a hierarchical VAE, it" }, { "end": 1013.96, "start": 1008.2, "text": " pretty much works like a VAE, except that it is hierarchical. And you need to do here need to have" }, { "end": 1021.6400000000001, "start": 1013.96, "text": " this bottom up and this top down model in order in your encoder. And so now there are a bunch of" }, { "end": 1027.32, "start": 1021.6400000000001, "text": " questions with respect to the hierarchical VAE. The problem here is that you have not only one" }, { "end": 1032.3600000000001, "start": 1027.32, "text": " sampling procedure, but you have sampling procedure upon sampling procedure upon sampling" }, { "end": 1038.1999999999998, "start": 1032.36, "text": " procedure. And this can get pretty unstable, I guess pretty quickly. So the rest of the paper is" }, { "end": 1045.7199999999998, "start": 1038.1999999999998, "text": " going to be how to get this to work. So the main, I think one of the main parts they do in order to" }, { "end": 1052.36, "start": 1045.7199999999998, "text": " get this to work in order to get this to train our residual connections. So we know that residual" }, { "end": 1061.32, "start": 1052.36, "text": " connections are kind of a sort of a gradient flow highway in order in order to to train very deep" }, { "end": 1068.28, "start": 1061.32, "text": " networks. And we've already seen this with residual networks in CNNs, where you have an input," }, { "end": 1073.8, "start": 1068.28, "text": " and you have some computation in form of a neural network, or in this case, a sampling procedure" }, { "end": 1079.96, "start": 1073.8, "text": " through a distribution, and you have an output and the residual connection would allow you to skip" }, { "end": 1088.2, "start": 1080.52, "text": " part of that, as you can see, used here in both the encoders and the decoders. So in the encoders," }, { "end": 1092.6000000000001, "start": 1088.2, "text": " you have residual connections and also in the decoders right here, you can see you have residual" }, { "end": 1099.48, "start": 1092.6000000000001, "text": " connections. In fact, you always take that lower scale, and you don't transform it into an upper" }, { "end": 1107.56, "start": 1099.48, "text": " scale, you actually sample noise, and then you add the lower scale and the upper scale together." }, { "end": 1116.8400000000001, "start": 1109.64, "text": " So it's really an additive model in a hierarchical fashion, even okay, the the pluses might actually" }, { "end": 1124.76, "start": 1116.84, "text": " not be okay, the pluses can also be combination, I guess. I guess that that I might be wrong," }, { "end": 1132.36, "start": 1124.76, "text": " and they can actually be combinations. 
In any case, they use residual networks in in a in a lot of" }, { "end": 1139.8, "start": 1132.36, "text": " cases in their generative and in their generator and in their encoder. You can see right here," }, { "end": 1145.9599999999998, "start": 1139.8, "text": " there is a residual cell for the generative model and a residual cell for the encoder. Now," }, { "end": 1151.88, "start": 1145.96, "text": " the exact method of these residual cell, you can see that they use batch norm, then they use one" }, { "end": 1159.88, "start": 1151.88, "text": " by one convolutions in order to go to a higher to a higher channel number before they do the depth" }, { "end": 1168.1200000000001, "start": 1159.88, "text": " separated five by five convolutions. So five by five, because you need a larger receptive field," }, { "end": 1173.48, "start": 1168.1200000000001, "text": " they make that clear, they need a large receptive field. However, the large receptive field means" }, { "end": 1180.6, "start": 1173.48, "text": " many parameters means their model would be too big and too much memory. So they do the depth" }, { "end": 1185.72, "start": 1180.6, "text": " separated convolutions, which simply means that you don't mix the channels during the convolutions." }, { "end": 1191.56, "start": 1185.72, "text": " So you go up the channels, you do a depth separated convolution and go down the channels again." }, { "end": 1197.48, "start": 1191.56, "text": " All of these are kind of hacks to make it work, right? Then also they have batch norm and a swish" }, { "end": 1204.44, "start": 1197.48, "text": " non-linearity as you can see here. And then here as well in the encoder, they also say in the text," }, { "end": 1209.8, "start": 1204.44, "text": " like they stress the importance, we found that first the batch norm and then the convolution" }, { "end": 1215, "start": 1209.8, "text": " is better than the other way around and so on. So this there's a lot of engineering work that" }, { "end": 1222.92, "start": 1215, "text": " went into this right here. So you see there's batch norm. And also you have to kind of hack" }, { "end": 1227.88, "start": 1222.92, "text": " the batch norm, because in batch norm you have these training parameters and people have observed" }, { "end": 1235.88, "start": 1227.88, "text": " that in VAEs. If you during inference, during sampling, if you use the training way where you" }, { "end": 1241.4, "start": 1235.88, "text": " only regularize within the batch, it's better than if you use the running averages. So you kind of" }, { "end": 1247, "start": 1241.4, "text": " have to hack that. We modify the momentum parameter of batch norm such that running statistic can" }, { "end": 1252.44, "start": 1247, "text": " catch up faster with the batch statistics. There's a lot of engineering in here. Like there's a lot" }, { "end": 1258.1200000000001, "start": 1252.44, "text": " of things that you have to get right to get something like this to work apparently. And" }, { "end": 1264.52, "start": 1259.56, "text": " yeah, this the paper, the paper just goes on in this style. So you can see they use the swish" }, { "end": 1270.92, "start": 1264.52, "text": " activation. They use squeeze and excitation blocks, which are another form of residual" }, { "end": 1277.88, "start": 1270.92, "text": " blocks that were introduced quite a long time ago, but still being used as you can see. And" }, { "end": 1284.44, "start": 1277.88, "text": " yeah, so that's the architecture. 
So you can see they have residual cells there, residual cells" }, { "end": 1291.5600000000002, "start": 1284.44, "text": " here, reducing the memory requirements. They say they use two tricks. First of all, we they do" }, { "end": 1296.92, "start": 1291.5600000000002, "text": " mixed precision using a cool new Nvidia library. Given that they're from Nvidia, they get to try" }, { "end": 1304.2, "start": 1296.92, "text": " these things out first. And second of all, they also to reduce the memory, we fuse batch norm and" }, { "end": 1309.72, "start": 1304.2, "text": " swish and we store only one feature map for the backward pass instead of two. And they have to" }, { "end": 1315.4, "start": 1309.72, "text": " then recompute this trick is known as gradient checkpointing and cries, recomputing batch norm" }, { "end": 1322.28, "start": 1315.4, "text": " in the backward pass. I believe like future deep learning frameworks should just take care of that" }, { "end": 1330.8400000000001, "start": 1322.28, "text": " for you, instead of you having to do this kind of stuff. Honestly, so they also need to, they hear" }, { "end": 1340.36, "start": 1330.84, "text": " they say taming the unbounded KL term. So the KL term is what makes the distribution that the encoder" }, { "end": 1346.36, "start": 1340.36, "text": " outputs close to that distribution that you want like that normal distribution. So this is the" }, { "end": 1352.6, "start": 1346.36, "text": " regularization term, you can see here, it's a KL divergence between q, which is what your encoder" }, { "end": 1360.28, "start": 1352.6, "text": " outputs, you can see that's the, the latent code for the image x, between the two, the two, the" }, { "end": 1368.76, "start": 1360.28, "text": " two, the two, the two. And between that and between your prior which you say it should be, it should" }, { "end": 1375.72, "start": 1368.76, "text": " be a like a normal distribution. In this case, it should be a hierarchical normal distribution. And" }, { "end": 1384.52, "start": 1377.6399999999999, "text": " they have a special characterization here where they say, because it's hierarchical, right. So" }, { "end": 1389.48, "start": 1384.52, "text": " So I'm going to have a hierarchy of normal distributions. This is my top" }, { "end": 1395.04, "start": 1389.48, "text": " hierarchy and then I'm going to sample one sample right here. And then in" }, { "end": 1400.52, "start": 1395.04, "text": " the next layer I'm going to have a normal distribution around that sample" }, { "end": 1406.72, "start": 1400.52, "text": " right here and I'm going to sample from that and so on. So my hierarchical" }, { "end": 1411.04, "start": 1406.72, "text": " normal distribution is going to be always where the next distribution in" }, { "end": 1416.44, "start": 1411.04, "text": " the next layer is dependent on the distribution in the hierarchy." }, { "end": 1424.3999999999999, "start": 1416.44, "text": " And they have a special parameterization where in order for the encoder to produce" }, { "end": 1429.84, "start": 1424.3999999999999, "text": " that, so the encoder has to produce a z of the first layer and then a z of the" }, { "end": 1434.8799999999999, "start": 1429.84, "text": " second layer and so on, in order for the encoder to reproduce that and to be close" }, { "end": 1440.58, "start": 1434.8799999999999, "text": " it must match this distribution and it must match this distribution. 
So if it" }, { "end": 1444.78, "start": 1440.58, "text": " doesn't match this distribution correctly it will kind of sample" }, { "end": 1450.1999999999998, "start": 1444.78, "text": " somewhere else a bit. And then that distribution, that base, will already be" }, { "end": 1454.8799999999999, "start": 1450.1999999999998, "text": " shifted right here. So it thinks that the distribution to match is now this" }, { "end": 1460, "start": 1454.8799999999999, "text": " normal distribution. So you can see that the base is already shifted and that's" }, { "end": 1467.48, "start": 1460, "text": " why their encoder only outputs the delta to the, as you can see here, it" }, { "end": 1476.24, "start": 1467.48, "text": " only outputs the delta to the prior. We define here, we define the Q of the z in" }, { "end": 1482.72, "start": 1476.24, "text": " a given layer as the normal distribution of the mu i, with mu i, that's your" }, { "end": 1488.76, "start": 1482.72, "text": " prior, see that's your prior of that layer, plus a delta mu. And also the" }, { "end": 1496.52, "start": 1488.76, "text": " sigma is the sigma from the prior times a delta sigma that you output. So you're" }, { "end": 1501.48, "start": 1496.52, "text": " kind of saying you're not supposed to output the actual distribution, you're" }, { "end": 1506.6399999999999, "start": 1501.48, "text": " supposed to output the difference of distribution to the prior. Now in layer" }, { "end": 1511.96, "start": 1506.6399999999999, "text": " 0 that's the same thing, right, because the prior is going to be zero mean and" }, { "end": 1519.6, "start": 1511.96, "text": " unit variance. So that's this here, this here will be zero and this here will be" }, { "end": 1525.44, "start": 1519.6, "text": " one. But in all the upper layers this is going to make it easier. So that's one" }, { "end": 1531.16, "start": 1525.44, "text": " trick you have to make this repeated sampling not hurt you as much. The other" }, { "end": 1537.04, "start": 1531.16, "text": " trick they employ here is special regular, sorry, spectral regularization," }, { "end": 1542.4, "start": 1537.04, "text": " which is a regularization where you regularize the top singular value per" }, { "end": 1547.3600000000001, "start": 1542.4, "text": " layer. You can use that, you can compute that with a power iteration, people have" }, { "end": 1553.0800000000002, "start": 1547.3600000000001, "text": " been done doing this before, and also you can build in some normalizing flows." }, { "end": 1560.1999999999998, "start": 1553.08, "text": " So here if we sample the different layers, what we're" }, { "end": 1564.08, "start": 1560.1999999999998, "text": " going to do is we're going to sample all of these things at once, right, they're" }, { "end": 1569.46, "start": 1564.08, "text": " dependent on the upper layer in the hierarchy, but we'll sample them all at" }, { "end": 1575.56, "start": 1569.46, "text": " once. And that means they are not sort of connected to each other. Now if we" }, { "end": 1580.1999999999998, "start": 1575.56, "text": " introduce a flow we'll basically make them all connected to each other and" }, { "end": 1586.04, "start": 1580.2, "text": " build like a singular distribution of them. But I don't want to go too" }, { "end": 1590.24, "start": 1586.04, "text": " much into this because it doesn't gain that much, they say you can just build" }, { "end": 1596.48, "start": 1590.24, "text": " that in if you want. 
Okay so these are all the things that at least they list" }, { "end": 1603.16, "start": 1596.48, "text": " in the method section. Now there are like a lot more that they have" }, { "end": 1610.0800000000002, "start": 1603.16, "text": " to do, but ultimately as you can see right here on these on four of these" }, { "end": 1614.52, "start": 1610.08, "text": " five datasets they achieve state-of-the-art. In fact okay on this" }, { "end": 1620, "start": 1614.52, "text": " dataset no one else has tried, but at least on the other datasets they are" }, { "end": 1628.24, "start": 1620, "text": " very very competitive as you can see right here. And they compare this to, first" }, { "end": 1635.96, "start": 1628.24, "text": " of all, to other models and even other models with and without auto" }, { "end": 1641.96, "start": 1635.96, "text": " regressive flows. And they come pretty close to these auto regressive models. So" }, { "end": 1647.2, "start": 1641.96, "text": " an auto regressive model would be one that generates like one pixel at a time" }, { "end": 1651.92, "start": 1647.2, "text": " conditioned on the other pixels. This model doesn't do that, this model" }, { "end": 1658.24, "start": 1651.92, "text": " generates all pixels at once, so it's not auto regressive. But as you can see it" }, { "end": 1667.8, "start": 1658.24, "text": " beats all the other non or auto regressive models and it gets" }, { "end": 1673.8, "start": 1667.8, "text": " pretty close to the best auto regressive models which are down here. They are" }, { "end": 1681.84, "start": 1673.8, "text": " still better, but the gap is kind of shrinking is what they say. Cool so" }, { "end": 1686.92, "start": 1681.84, "text": " that's the main result. Then they have ablations where they basically, as I said," }, { "end": 1691.8400000000001, "start": 1686.92, "text": " all of these things kind of contribute a little bit, a little bit, a little bit, a" }, { "end": 1696.4, "start": 1691.8400000000001, "text": " little bit to building this bigger and bigger and deeper variational auto" }, { "end": 1702.24, "start": 1696.4, "text": " encoder. So it's hard to say what exactly makes this work because all of it makes" }, { "end": 1708.8400000000001, "start": 1702.24, "text": " it work. And I guess they just kept going until they beat state-of-the-art or until" }, { "end": 1713.4, "start": 1708.8400000000001, "text": " you know they ran out of tricks. Again these are the samples that we looked at" }, { "end": 1718.44, "start": 1713.4, "text": " and I do want to spend some time in the appendix right here because I think it's" }, { "end": 1725.88, "start": 1718.44, "text": " pretty interesting what they do. So first of all they show that their" }, { "end": 1731.0800000000002, "start": 1725.88, "text": " model doesn't remember the training samples. As you can see right here these" }, { "end": 1735.92, "start": 1731.0800000000002, "text": " are always the nearest neighbor from the training sample so the model is fairly" }, { "end": 1745.44, "start": 1735.92, "text": " you know fairly far away from the training samples. But yeah I mean okay" }, { "end": 1752.6000000000001, "start": 1745.44, "text": " maybe it's just me but the left they just look like more kind of more ideal" }, { "end": 1763.96, "start": 1752.6000000000001, "text": " idealized humans like very smooth humans like designer babies. 
Here they show that" }, { "end": 1771.28, "start": 1763.96, "text": " if you use batch norm as you would use it I think regularly where you keep these" }, { "end": 1777.1200000000001, "start": 1771.28, "text": " running stats or you do the batch norm from training then you get into" }, { "end": 1782, "start": 1777.1200000000001, "text": " this kind of degenerate case if you sample at lower temperatures. So the" }, { "end": 1785.4, "start": 1782, "text": " temperature that you sample from describes the width of the Gaussian" }, { "end": 1790.56, "start": 1785.4, "text": " that you ultimately want to sample from. And if you do they have this method to" }, { "end": 1796.1599999999999, "start": 1790.56, "text": " readjust the batch norm statistics which I don't want to go into here but you can" }, { "end": 1801.12, "start": 1796.1599999999999, "text": " you can read it up to basically fix that problem. It is a problem that" }, { "end": 1807.32, "start": 1801.12, "text": " apparently other people have observed as well and their method apparently is you" }, { "end": 1816.32, "start": 1807.32, "text": " know is a is one that manages to do that. Okay lastly there are some more samples" }, { "end": 1823.8, "start": 1816.32, "text": " right here. And yeah this right here this is honestly this is one of the I think" }, { "end": 1828.6799999999998, "start": 1823.8, "text": " one of the most interesting things where they go and since they have this" }, { "end": 1835.24, "start": 1828.6799999999998, "text": " hierarchical model right so here is like z1 it gives right and so that give gets" }, { "end": 1839.36, "start": 1835.24, "text": " you like an image and then there's z2 and that gets you an image and then" }, { "end": 1844.08, "start": 1839.36, "text": " there's z3 and so on and you continuously upscale and hierarchically" }, { "end": 1851.4399999999998, "start": 1844.08, "text": " add the features. Here they say what if what happens if we if we sample z1 once" }, { "end": 1857.12, "start": 1851.4399999999998, "text": " and then we fix it and then we only sample the other ones conditioned on z1" }, { "end": 1863.28, "start": 1857.12, "text": " and here see where you see top scale fixed and you can see there is" }, { "end": 1869.9199999999998, "start": 1863.28, "text": " considerable variation in the image but there is there is not really a large" }, { "end": 1877.04, "start": 1869.92, "text": " scale variation. 
Okay so the general face keeps constant but there are details" }, { "end": 1882.04, "start": 1877.04, "text": " changing as you can see so here the hair is kind of going over the image the" }, { "end": 1888.52, "start": 1882.04, "text": " color is changing here there are a lot of changes the mouth looks slightly" }, { "end": 1894.52, "start": 1888.52, "text": " different as far as I can see but I might be hallucinating here and then if" }, { "end": 1900.12, "start": 1894.52, "text": " you fix continuously the top two scales or the top three scales right here top" }, { "end": 1905.24, "start": 1900.12, "text": " four scales you can see that there are more and more just little details that" }, { "end": 1914.08, "start": 1905.24, "text": " change more and more so yeah so this is we they are operating at five scales" }, { "end": 1920.48, "start": 1914.08, "text": " starting from 8 by 8 up to 128 to 128 in each row we fix the samples at a number" }, { "end": 1925.3600000000001, "start": 1920.48, "text": " of top scales and we sample from the rest of the hierarchy as we can see the" }, { "end": 1930.16, "start": 1925.3600000000001, "text": " long-range global structure is mostly recorded at the top of the hierarchy in" }, { "end": 1936.16, "start": 1930.16, "text": " the 8 by 8 dimensional groups the second scale does apply at some global motive" }, { "end": 1939.96, "start": 1936.16, "text": " does apply some global modifications such as changing eyes hair color skin" }, { "end": 1944.2, "start": 1939.96, "text": " tone the shape of the face the bottom groups capture mostly low-level" }, { "end": 1948.96, "start": 1944.2, "text": " variations however the lowest scale can still still make some subtle long-range" }, { "end": 1953.52, "start": 1948.96, "text": " modifications for example the hair color is slightly modified when we are only" }, { "end": 1957.96, "start": 1953.52, "text": " sampling from the lowest scale in the last row this is potentially enabled" }, { "end": 1962.96, "start": 1957.96, "text": " because of the larger receptive field in our depth wise separate separable" }, { "end": 1975, "start": 1962.96, "text": " residual cell yeah I don't the hair color changes okay slightly maybe I don't" }, { "end": 1984.28, "start": 1975, "text": " know my my eyes are too many faces okay but you know what's certainly the case" }, { "end": 1990.36, "start": 1984.28, "text": " is that their models exhibit much better kind of global unity compared to these" }, { "end": 1994.72, "start": 1990.36, "text": " other samples where you can pretty clearly see like the different sides of" }, { "end": 1999.56, "start": 1994.72, "text": " the faces have little to do with each other and so on and this is the benefit" }, { "end": 2002.64, "start": 1999.56, "text": " that you get from doing this hierarchically so you have part of your" }, { "end": 2007.92, "start": 2002.64, "text": " model that's responsible for kind of the global shape of the image and then that" }, { "end": 2012.48, "start": 2007.92, "text": " keeps it consistent and then you have other parts that are responsible for the" }, { "end": 2021.2, "start": 2012.48, "text": " details okay so I hope this was something to you know that interested you I" }, { "end": 2026.4, "start": 2021.2, "text": " myself it's as I said it's it's an engineering paper so there is lots of" }, { "end": 2030.64, "start": 2026.4, "text": " things described there is not like one jumping idea I guess residual" }, { "end": 2034.66, "start": 2030.64, "text": " connections 
are pretty important and these depth wise convolutions save" }, { "end": 2040.2800000000002, "start": 2034.66, "text": " memory and but also all of the all of the other things that you have to do to" }, { "end": 2047.1000000000001, "start": 2040.2800000000002, "text": " build something like this are pretty pretty interesting yeah I I hope you" }, { "end": 2062.6, "start": 2047.1, "text": " gained something from it and I'll see you next time" } ]
Jqvb7jp4Nm8
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Addendum for Supermasks in Superposition: A Closer Look (Paper Explained)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "supsup", "supermasks", "lottery ticket", "lottery ticket hypothesis", "gradient", "entropy", "surplus", "superfluous neurons", "lifelong learning", "multitask learning", "catastrophic forgetting", "continuous learning", "binary mask", "random network", "optimization", "hopfield network", "gradient descent", "superposition" ]
I take a closer look at "Supermasks in Superposition" after I've already done a video on it. Specifically, I look at: 1. The intuition and theoretical justification behind the G objective, 2. Whether Supermasks and Superposition can be viewed as two distinct ideas and 3. The Paper's Broader Impact Statement. OUTLINE: 0:00 - Intro & Overview 2:00 - SupSup Recap 4:00 - In-Depth Analysis of the G Objective 20:30 - Superposition without Supermasks 25:40 - Broader Impact Statement 36:40 - Conclusion 37:20 - Live Coding Part 1 on SupSup: https://youtu.be/3jT1qJ8ETzk My Code: https://colab.research.google.com/drive/1bEcppdN6qZRpEFplIiv41ZI3vDwDjcvC?usp=sharing Paper: https://arxiv.org/abs/2006.14769 Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://discord.gg/4H8xxDF BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher Parler: https://parler.com/profile/YannicKilcher
Hi there! Today we'll look at Supermasks in Superposition again. So this is part two on this paper by Mitchell Wortsman and Vivek Ramanujan, and here's the reason why there's a part two: after yesterday's video on this paper, I couldn't sleep, because I really felt that I had left out some important aspects that I wanted to touch on during the video. Sometimes during videos I look at the clock and realize, oh crap, the video is already like an hour long, and I know people are watching on 2x speed anyway, but it's still too long and I need to wrap it up really soon. And what I felt were pretty important messages about this paper got lost.

So specifically, I want to address three different things. First of all, they have a formal analysis, well, not formal, but a kind of more rigorous analysis of what their modified G objective does, and I also want to give some intuition on that, because I felt I really hadn't done a good job of that. The second part is about the two different ideas right here, the supermasks and the superposition: my opinion is that these are two separate things that really have nothing to do with each other, and I think that didn't really come through in the last video. And the third one is the broader impact statement of this paper, which I usually kind of gloss over and go like, haha, but here there is an important point to it, so we'll get to that.

Alright, so again, not a new paper today, I realize this, but I think it's worth diving deeper into this paper, which is a very cool paper, so don't get me wrong right here; I just feel I mostly hadn't done a good job of explaining it. I was literally lying awake.

Okay, so let's go to the first point. If you haven't seen the first video: Supermasks in Superposition basically says that we want to do lifelong learning. Lifelong learning is the task where you have a bunch of tasks in sequence and you learn them one after the other, and the goal is to not forget old tasks once you learn new tasks. This model does that by always building one of these supermasks for each task, which is applied to the same randomly initialized base neural network each time, and, you know, by keeping the supermask around, you won't forget the task. Then, at inference time, if you're given the task ID, you just retrieve the mask; if you're not given the task ID, you can do this superposition trick, where you apply all the masks in superposition, and then you look at the gradient of an entropy function in order to decide which task reduces the entropy the most, so which task is the most certain about a particular data point, and you infer that that's the task you're going to go with.

Now, instead of the entropy, which is, you know, well reasoned, they have this other objective they call G, and G, it's really strange, looks at the superfluous neurons. So they also add these superfluous neurons, these s-neurons, right here, and the G objective will only look at the s-neurons in order to decide whether or not a given task is the correct one, and it's basically just the logsumexp of the s-neurons. We had some intuition about them all being small and so on, about them acting like outlier detectors, but there is an entire chapter in the appendix where the authors do a more in-depth theoretical analysis of that, which, you know, isn't necessary for them to do, so I really enjoyed reading it, and it gave me a better intuition of what this G objective does.
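Before we get to that analysis, here is the whole inference procedure as a code sketch, just so we have a concrete picture in mind. This is my own minimal PyTorch version for a single masked layer, with made-up names and shapes; it is not the authors' implementation:

import torch

def infer_task(x, W, masks, n_superfluous):
    # W: the fixed random weight matrix; masks: one binary supermask per task;
    # the last n_superfluous output neurons are the s-neurons
    k = len(masks)
    alphas = torch.full((k,), 1.0 / k, requires_grad=True)
    mixed = sum(a * m for a, m in zip(alphas, masks))             # all masks in superposition
    logits = x @ (W * mixed)                                      # one masked layer, for simplicity
    g = torch.logsumexp(logits[:, -n_superfluous:], dim=1).sum()  # G on the s-neurons only
    g.backward()
    return int(alphas.grad.argmin())                              # most negative dG/dalpha wins

One forward pass and one backward pass through the superposed model, and the gradient on the alphas gives you a vote for the task; no iteration needed.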
Okay, so on to that analysis. Here they say: the aim is not to formally prove properties of the algorithm; rather, we hope that a more mathematical language may prove useful in extending intuition. Again, that's pretty cool. So they start off by saying that your neural network is basically this phi right here, and the W are the last layer's weights, which compute your logits. So y is not going to be your class, y is going to be your logits, and p is going to be the probability vector over your classes, which, if you calculate it via a softmax, is going to be the following expression right here. If you have a mask, then, at least in the last layer, you can write it as this right here: you multiply the mask by the last layer's weights, and that gives you your logits. They initialize the weights right here, actually, they have no bias term, and they initialize the weights by this constant, so plus or minus this constant. It's not really necessary to do that, but they do it right here; it makes the analysis a bit easier, and I guess it also just works well. If you have these masks in superposition, of course, you want to add all of the masks with their respective alpha weighting factors, then multiply by the weights, and that gives you your logits. Note that this doesn't necessarily have to be only the last layer's weights; you can view it as any weights of the neural network, if you formulate this phi correctly. So don't think that they only apply the mask to the last layer; they apply the mask to the entire network.

Alright, now the important part here is what happens if we look at the derivative of G with respect to one of the alphas, and take the maximum negative derivative of that G, which is that mysterious function that only looks at the superfluous neurons. What they want, the principle they construct this G by, is this: we want a function G that mimics the supervised loss, a function G that behaves like the supervised loss we would have if we had the label. And that's pretty cool, because for the supervised loss you need all the information: you need the label, you need the task ID. So the supervised loss is unavailable, but we want a function G whose gradient mimics the gradient of the supervised loss.

So they go about constructing this right here. They say, first lemma: it's possible to construct a function G such that the gradient matches the gradient from the supervised loss for all s-neurons, so for all these superfluous neurons. Specifically, we want the gradient with respect to the logits to be equal; if the gradient with respect to the logits is equal, that means the gradient to all the rest of the network is equal, because the rest of the network gets its gradient through the logits. So: the gradient of G through the logits is equal to the gradient of the supervised loss through the logits for all the superfluous neurons, and zero otherwise. The "zero otherwise" part is pretty easily done; in math, you know, you simply set it to zero, and in the actual code you can achieve it like this, where m indicates the superfluous neurons: those get multiplied through, and the other ones are detached, so there is no gradient flowing through them. This is the property that we only look at the superfluous neurons. And now we're going to show that the gradients are actually equal.
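In code, I imagine that construction looks something like this; this is my reconstruction of the trick, not necessarily line for line what they do:

import torch

def g_objective(logits, s_mask):
    # s_mask: 1.0 on the superfluous output neurons, 0.0 on the real class neurons;
    # gated equals logits in value, but gradient only flows through the s-neurons
    gated = logits * s_mask + (logits * (1.0 - s_mask)).detach()
    return torch.logsumexp(gated, dim=-1).sum()

Value-wise nothing changes, but since the derivative of logsumexp with respect to logit k is just the softmax at k, the gradient that comes out of this is the softmax on the s-neurons and zero everywhere else.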
So, if you had the supervised loss, meaning if you had the label, then this would be your cross-entropy loss. The cross-entropy divides into this part, where you need the label, and this part here, where you don't need the label. Now you can pretty much say: look, the label is certainly never going to be one of the superfluous neurons, because the superfluous neurons are superfluous, they are never the correct neuron. So the label term is always going to be zero for the neurons we look at, because wherever that part of the gradient flows, that's not where the indicator is one. So the gradient of any superfluous neuron is just this thing right here, and that's exactly why they build the function G the way they do: the function G, if you derive it, has that same gradient as the supervised loss for the superfluous neurons. So it's sort of magic, but it's not, you know, it's not magic.

Now they need two more assumptions to get the following properties, because now we want G to identify the correct task. We've already constructed G; now we want to show that if we take the gradient with respect to the alphas, then, if we do it for a wrong task, a task that is not the task of the particular data point that goes into computing G, we'll get a value that's probably lower than zero. However, if we derive with respect to the alpha of the correct task, then we get a negative gradient that's higher than zero. So we're now going to prove that this, with high probability, really allows us to distinguish the correct task from the wrong tasks, and we need two assumptions right here.

Assumption one: we assume that the mask learned on task i will be independent from the data of task j; if the data is from task j, then these are independent random variables. So it sort of means that the tasks themselves are kind of independent. It's not exactly the same requirement, but you can think of the case of permuted MNIST or so, where this is pretty much given, except maybe if you consider things like the overall frequency of brightness and so on. But if you have independent tasks, I think this is given: the features and the mask from task i are independent variables if the data is from task j.

The second assumption you need is that a negative weight and a positive weight are equally likely to be masked out. This, again, you can think of as some regularity that is certainly going to be given in a randomly initialized neural network. Note that this is about the features being zero-mean, which will be the case for zero-mean random features. Remember, before, I said this is your random neural network, and then you mask that, and so on; if this is a randomly initialized neural network, you can make a case that the expected features will be zero. It doesn't need to be the case, but you can construct it such that it is.

So if you have those two things, then you can prove the following: if the data x comes from task j, then, when you derive by an alpha that's not of task j, you get a number that's smaller than zero in expectation. And here the crucial part is how you reframe this gradient; you reframe and reframe and reframe, and what you'll see is that it decomposes into a sum.
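Written out in my own notation (so the indexing here is mine, not necessarily the paper's): the superposed logits are

$$y_v = \sum_u \phi_u(x)\, W_{uv} \sum_i \alpha_i M^{(i)}_{uv},$$

so by the chain rule

$$\frac{\partial G}{\partial \alpha_i} = \sum_v \frac{\partial G}{\partial y_v} \sum_u \phi_u(x)\, W_{uv}\, M^{(i)}_{uv},$$

and the sign analysis is then done term by term on the expectation of each summand.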
Each element of that sum is going to be greater than or equal to zero, which means that the whole thing is greater than or equal to zero, which means the negative of it is smaller than zero; the per-element claim is Lemma H1. So now we're going to look at Lemma H1 to get an intuition of what's going on right here. Lemma H1 says: if j is the true task and i is not equal to j, then this quantity here is greater than zero. Alright, I restarted my tablet, and we are back.

So what's the intuition behind why this quantity would be greater than or equal to zero? Honestly, to make it a bit easier, I first want to look at the case where i equals j, so where j is the true task and i equals j, and then we can sort of think of the opposite, of why this should be smaller than or equal to zero. So consider this: this here is a feature of the network at unit u, and the edge uv connects that to the neuron v, and the mask at that edge uv is either zero or one, depending on the training. The weight right here is going to come from the initialization, but the mask is going to be zero or one depending on whether that feature contributes, sorry, whether this entire term here contributes positively to the task or not.

The secret right here, why we can make a claim about this being greater or lower than zero, is that the mask can only be zero or one; it cannot be negative one. So if the mask is zero, then obviously this term is going to be zero. However, if the mask is one, what does that mean? It means that this entire feature right here, let's call it f, is positively contributing to this particular neuron right here. If the mask is one, it means that adding more of that feature makes that logit go up. So if the mask is one during training, it means the feature positively contributes to the task.

Now, if we look at the gradient of this function G with respect to the logit, and the function G basically just measures how high these superfluous logits are, then why do we find a negative interaction there? Because if you look at the neural network, and you forward pass, and this particular feature is important, and you look at the loss G, and you backward pass through the logits, if that gradient is smaller than zero, that means there is a negative interaction right here. That basically means that if we make this feature higher, then in this case we make this G function go lower. And that is the case for the correct task, because if this is the correct task and the mask is learned adequately, the mask should assign a low weight to the superfluous neurons whenever the input features are of that task. So it makes sense that this here would be a negative number: if the mask deems the feature important in a positive sense, we want that, when the feature goes up, G goes down, and that is exactly why we have the negative interaction right here. The negative sign comes from this factor being negative.

I hope this sort of makes sense: if the mask is one, the mask basically says that if that feature goes up, the loss goes down. Now, G is a measure of the superfluous neurons, and the superfluous neurons should be small if the loss is small. So if this data is really from the task and this feature is really useful, that means that if we increase the feature, the G function should go down, and therefore this product here is most likely going to be negative.
And the contrary case is analogous: if i is not the task of the data point, the mask can again be either 0 or 1. If it's 0, the quantity is 0. If it's 1, well, since i is not the correct task, this feature is good for a different task; the mask of that different task says the feature is useful there, and we have no reason to believe that increasing it would decrease the loss on this particular data point from this task. So it's kind of the inverse reasoning. If you look at the actual derivation, it's fairly long and goes over the cases of the interactions between the initialization and the mask; the initialization can be positive or negative, as you can see right here. I think the intuition is simply that the superfluous neurons react differently to a data point of the trained task than to anything else, because they have been explicitly made to decrease for that task under that particular mask: when the data point doesn't match the mask, there is no reason for the logits of the superfluous neurons to be low, and when it does match, there is ample reason for them to be low. I hope that makes sense; it is a bit more of an intuition, but if you really want to dig into it, look at the derivation.

Okay, the second point I want to make is that the masks and the superposition don't really have anything to do with each other, and, as I've said throughout the video, remember that these tasks are super easy. So let me make it clear with this diagram. The supermasks are simply a way to train a neural network, in a crude way. I don't like this distinction between mask and network all that much, because ultimately what you're doing is simply training a neural network in a kind of weird way: you provide a severely over-parameterized network, and the mask gets to choose which weights to keep, rather than you getting to adjust the weights. If you adjust the weights, you usually end up more accurate than with a mask; it's sort of like a quantized neural network that you train. That's the supermask part, and again, I don't think it's important that the underlying network is always the same; the only advantage you get is that it saves space, because the masks are very small. The superposition, on the other hand, the idea that you overlay all of the masks together, look at the gradient of the entropy, and check which of the mixing factors the gradient pulls on the most, that's a different idea. And the question is: isn't that independent of the masks? The hypothesis would be that if I simply train, say, three different neural networks for three different tasks, I could do the same superposition trick: just add all of them with their respective alphas, look at the entropy, calculate the gradient of the entropy with respect to each of the alphas, and then decide which task it is. No masks needed; simply mix neural networks in superposition.
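That hypothesized trick, written out as a minimal sketch, assuming some `mix_forward(x, alphas)` that combines the per-task parameters, whether masks or full networks, with the given mixing weights (the names are mine):

```python
import torch
import torch.nn.functional as F

def infer_task(mix_forward, x, num_tasks):
    # Superpose all tasks' parameters with equal weights, then pick the
    # task whose mixing coefficient most strongly reduces the entropy.
    alphas = torch.full((num_tasks,), 1.0 / num_tasks, requires_grad=True)
    logits = mix_forward(x, alphas)
    log_p = F.log_softmax(logits, dim=-1)
    entropy = -(log_p.exp() * log_p).sum()
    entropy.backward()
    # Most negative gradient = steepest entropy decrease = inferred task.
    return int(torch.argmin(alphas.grad))
```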
So I did it. Their code is available, big props for that, and I tried it; it takes very few changes, and I'm going to append my live coding of this at the end of this video, so if you're interested in watching that, you can do so. The outcome: if I train the masks, and mind you, I did this super quickly and probably initialized things wrongly, you get to about 92 percent accuracy on each of the tasks, and on the average. If I train actual neural networks instead, I get a higher accuracy, 93-point-something. The exact number doesn't matter; it's just higher. So that's hypothesis one: training masks is just a way of training neural networks. The fact that the masks and actual network training land that close together is, I think, a testament to how easy these tasks are, to how easy MNIST is. I'm also going to hypothesize that as the tasks get harder, and I don't mean 10-class ImageNet, I mean 1000-class ImageNet, these masks will degrade severely versus training the actual neural network. I might be wrong; you can over-parameterize really heavily and they will still work okay. In any case, I trained these neural networks, they reached higher accuracy, and then I did the exact same thing: I laid them in superposition to determine which task a data point is from, and I achieved the exact same result. In their example they report one hundred percent task-classification accuracy, and I reached exactly that; the code worked. I'm not going to try to scale this up to 250 or 2500 tasks, but I'm going to assume that with some tuning it would work about equally well. You could make the argument that the masks, being sparser, might be differentiated from each other more accurately. Maybe, but it's not a qualitative difference. So these two things really are two separate ideas that find their way together in this paper but ultimately have not much to do with each other.
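Concretely, the "mix real networks" variant amounts to something like this sketch, with a single linear layer for illustration; the actual changes are in the live-coding session appended below:

```python
import torch

def mix_forward_factory(task_weights):
    # task_weights: one weight matrix per task, all of the same shape,
    # each trained separately on its own task.
    def mix_forward(x, alphas):
        # Superpose the trained networks instead of superposing masks.
        w = sum(a * wt for a, wt in zip(alphas, task_weights))
        return x @ w.t()
    return mix_forward

# Usage: infer_task(mix_forward_factory(ws), x, num_tasks=len(ws))
```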
At least, that's what I can tell. I might be wrong here, and I might be wrong with respect to their G objective and whatnot, but I think these are two cool ideas that can be applied independently. So the last thing I want to look at is their broader impact statement, and there is a reason I keep tracking these. This here is fundamental research, fundamental machine learning research: we work on architecture, and the particular multitask learning task isn't really important, as long as we have tasks that are uncorrelated and of the same hardness. And I've made the point that it's really important for this method that the tasks are equally hard; that plays a role right here. They do describe some of this in the conclusion: a limitation they observed has to do with task-identity inference when models are not well calibrated, models that are overly confident for the wrong task. So, to infer the correct task, you look at the entropies of the models across tasks and select the model that is most sure. This only works if the tasks are equally hard. If one task is much, much harder than another, the model of the easier task is always going to say "well, I'm really confident about this one", simply because its task is easier; trained neural networks are generally overconfident, and you're going to misclassify a lot of the tasks. So what does this have to do with the broader impact statement? Look at what they say there: a goal of continual learning is to solve many tasks with a single model; however, it is not exactly clear what qualifies as a single model; therefore a concrete objective has become to learn many tasks as efficiently as possible; we believe SupSup is a useful step in this direction; however, there are consequences to more efficient models, both positive and negative. This is sort of what the community does. There are three patterns I've seen so far in broader impact statements. First, some people say "this is not applicable to us", and I agree for most fundamental research, since the broader impact statement is supposed to describe how this particular method will influence broader society; "not applicable" is completely valid for most of these research papers, because guess what, you can use any method to do good or to do bad. The second method is that you just write generic statements about how one can do good and bad, and usually you can't relate them to the particular method in the paper. If your method is, I don't know, a faster convergence rate for SGD, then you just go one level up: optimization can be used for good and for bad. That's still a bit vague, so you go up further: optimization enables more machine learning, and machine learning can be used to do good and bad, for example face recognition and things like this. You just go up the levels, and that's essentially what they do here, and what most people have defaulted to: okay, our model means one can train more efficient models, and then they simply highlight what more efficient models can do. Efficient models require less compute; a model might run on an end device; if models are more efficient, large-scale research is not limited to wealthier institutions. By the way, I believe the broader impact statement should be about the impact on society and not really on the research community itself, so that last point is a bit shaky, and that's not just my opinion; I'm trying to reflect everything I've read in the guidance about what a broader impact statement should be. There is also method three, which is to simply tell me more about your paper in the broader impact statement. I guess that's the smart method, because the broader impact statement can come before the references, so it's in the main part, and reviewers are required to read it, unlike the appendix. So I guess the smart authors will try to smuggle more information about their model into the broader impact statement; whether that's actually smart is a different discussion. But here it's already the default: people simply go level up, level up, level up until they can say something generic. And they do go on to highlight and discuss the negative consequences of models which can efficiently learn many tasks, and of efficient models in general.
They write, roughly, that when models are more efficient they are also more available and less subject to regulation and study; that, for instance, when a high-impact model is released by an institution it will hopefully be accompanied by a model card analyzing the bias and intended use of the model, whereas if anyone is able to train a powerful model this may no longer be the case, resulting in a proliferation of models with harmful biases or intended uses; and that, taking the United States for instance, bias can be harmful as models show disproportionately more errors for already marginalized groups, furthering existing deeply rooted structural racism. This is basically a statement about technology in general. So why do I pick on this particular broader impact statement? They even cite this here: the Gender Shades paper, where people went and examined commercial face-recognition APIs. I've only glanced at the paper, but as I understand it they divided people up by, I think, gender and race, so you get four groups; I find those two axes somewhat arbitrary, but okay, you can do that. And they discovered that these commercial APIs have different accuracy for the different groups. The point being: if these commercial APIs are offered for all humans, they should work equally well for all humans. Now you may ask what this has to do with this paper. Well, this paper is in the business of multitask learning, and it is very viable to frame, say, face recognition on different groups of people as a multitask learning problem: you have group one, group two, group three; you build a good model for each group; and at inference time, given an image, you first infer which group it is from and then apply the appropriate classifier. That would be a hypothetical classifier built with this method. And what do we know about this method? It fails when the tasks aren't equally hard. Specifically, if for one group, say group three, the task is way harder because you have less data (one of the main problems being that the datasets are not equally balanced), then that task becomes de facto harder and its model ends up less sure. Which means it's a double whammy: not only is that model itself less accurate, but an input image of a person from group three is also less likely to be routed to the correct model in the first place. I've had my share of comments on the video I made, and I still maintain that societal bias can come about via datasets; but for all the people saying there are models that exaggerate existing biases: if there was ever any applicability of these broader impact statement guidelines, this would be the paper. Right here is an actual system where, if I have different classifiers and combine them with this method, the weaker one gets punished twice.
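To see the failure mode in numbers, here is a toy illustration of that calibration problem; the logits are made up, and this shows only the mechanism, not the paper's experiments:

```python
import torch
import torch.nn.functional as F

def entropy(logits):
    # Shannon entropy (in nats) of the softmax distribution.
    log_p = F.log_softmax(logits, dim=-1)
    return -(log_p.exp() * log_p).sum().item()

# A confident model (easy task, plenty of data) and an under-trained
# model (harder task, less data), both scoring the same input.
easy_task_model = torch.tensor([4.0, 0.1, 0.2])
hard_task_model = torch.tensor([1.0, 0.6, 0.4])

print(entropy(easy_task_model))  # ~0.2: low entropy, gets selected
print(entropy(hard_task_model))  # ~1.1: loses, even when it is correct
```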
It double-punishes the classifier that is less sure, the one that is less accurate, because that is also going to be the one with the higher entropy, and therefore the one not selected when I feed in a data point of that particular task. Now, I'm not criticizing the method here. By all means, it's a cool method, and you can recognize that this happens and try to calibrate accordingly. But if there was ever a straight ball for a broader impact statement, this would be it. And I'm not saying these authors avoided it for some reason. Look, it's been what, not even half a year since we started with these general broader impact statements, and everybody is already defaulting to "technology good, technology bad". People aren't even thinking, and that is one of the reasons why I find these broader impact statements to be not such a good idea: there is a default answer, and people just put it there, even when there is an actual, immensely obvious case to discuss, for which they even cited the basis. So that's my take on this. Again, I enjoyed this paper: the code is available, everything about it is good, even the fact that these are, in my view, two separate ideas. They're combined in a cool way, they're analyzed formally in theory, there's intuition given, all good. So don't get me wrong, this is not trashing the paper; I just felt I had something more to say. And I think that was it. I'll see you next time with a new paper.

Okay, so our goal here is going to be to change this code to not use masks as mixtures, but to actually use neural networks with real weights as mixtures, in superposition with each other. What we're going to do is train the different neural networks and then use this superposition trick to figure out which task a data point came from. So let's have a look at the code. There's a bunch of helper code, and if you go down through everything you'll see that this is the permuted-MNIST dataset: each task is basically a random permutation of MNIST's pixels. You execute, I believe, this cell here, and then you train the model. Right now it's set to five tasks, but I guess that's going to be enough for now. If we get a good signal here, I'd guess scaling it up to whatever, 200 or 2000 tasks, is a matter of engineering, plumbing and tuning, though I might be wrong there. So this is training; I had a short look at the code beforehand, but I haven't actually run this yet. The model is built here: you see, it's a multitask fully-connected model with several layers, and it's built out of these MultitaskMaskLinear modules. The MultitaskMaskLinear module is defined right here: it's basically a linear layer, as you can see it's derived from a linear module, with a parameter num_tasks and a parameter called scores, which, I'm going to guess, are these masks, and which always get multiplied by the weights in the forward pass. And indeed, in forward you can see it: if we know the task ID, down here, we get that task's subnet and multiply it with the weights; if we don't know the task ID, we go via these alphas. This is the superimposed case.
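Pieced together from this description, the layer looks roughly like the following sketch; this is not the repository's code verbatim, and the binarization is simplified to a sign threshold:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultitaskMaskLinearSketch(nn.Linear):
    def __init__(self, in_features, out_features, num_tasks):
        super().__init__(in_features, out_features, bias=False)
        # One real-valued score tensor per task; the 0/1 masks are
        # derived from these scores.
        self.scores = nn.ParameterList(
            [nn.Parameter(0.01 * torch.randn(out_features, in_features))
             for _ in range(num_tasks)]
        )

    def forward(self, x, task=None, alphas=None):
        if task is not None:
            # Known task: use that task's mask (training goes through a
            # straight-through estimator, sketched further below).
            subnet = (self.scores[task] > 0).float()
        else:
            # Unknown task: superpose all masks, weighted by the alphas,
            # so gradients can flow back to the alphas.
            subnet = sum(a * (s > 0).float()
                         for a, s in zip(alphas, self.scores))
        return F.linear(x, self.weight * subnet)
```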
The alphas are initialized to one over the number of tasks; we then multiply each task's mask by its alpha, sum them up, and that gives us this combined subnet mask. So we need to know what this self.stacked is: self.stacked is built right here in this cache-masks routine, which simply stacks this get_subnet output for all of the tasks. Our plan is that this subnet is going to be the actual weights of the neural network, not just the mask, and then we don't need to multiply it with the weight at all; honestly, we can just forget about the weight and train the subnet directly. For the subnet, you have this get_subnet thing, and that's an autograd function, which basically means that in the forward pass you discretize, and in the backward pass it acts as a straight-through estimator. So our first task is to change that. And this here should be done now; my laptop has stopped breathing. We've trained five tasks, and now we can run inference. This is with the task given: real quick, you can see task one 92 percent, then 92, 92, 92, an overall performance of 92.44 percent. When the task is not given, there are two things to evaluate: how good we are overall, and whether we infer the tasks correctly, the latter of course being a prerequisite. And we have one hundred percent task-inference accuracy. We could evaluate the overall number here too, but you can already see from last time's output that there's no difference from the performance with the task given, since it's always able to infer the task. We want to check the same things after our change. So first of all, this get_subnet, where the scores are discretized: given that these scores are going to end up being our actual weights, we don't discretize; we simply return the scores. That makes the function pretty pointless as it stands, but we'll keep it, just to stay as close as possible to the original. Then mask_init, where the mask is initialized: this is Kaiming uniform with some extras, but since we now want to train a normal neural network, let's initialize it as we usually would. So, what does nn.init have... what's usual... Xavier normal, that sounds about right. All right, the scores get that. This could break everything, mind you; if you initialize wrongly, you get dumb results. Then there's this signed-constant function. Where is that used? It's also initializing something; it calculates the gain and then... this doesn't seem important, so we'll just keep it, why not. Oh, and this part is for the weight anyway, and we won't use the weight of this layer at all; we'll just use our own "weights", the scores. Then we have this stacked: we get the scores, and I'm pretty happy with that. With this mask_init we make our parameters, so these are going to be our different neural networks that we train. This all looks good; the alphas look good. Now the only thing we still want to do, honestly, is to use not the weight times the subnet here, but the subnet as such. And... that should be it. Do we now train actual neural networks? I have my doubts, honestly; like, there should be more to it. But this should be it. Yeah, let's just try it.
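And this is the autograd function in question, as a hedged reconstruction; the repository's actual version may binarize differently, for instance by top-k on the scores:

```python
import torch

class GetSubnet(torch.autograd.Function):
    @staticmethod
    def forward(ctx, scores):
        # Forward: discretize the real-valued scores into a 0/1 mask.
        return (scores > 0).float()

    @staticmethod
    def backward(ctx, grad_output):
        # Backward: straight-through estimator, i.e. act as if the
        # discretization were the identity and pass gradients through.
        return grad_output

# The live-coding change amounts to returning `scores` unchanged in
# forward, so the "mask" becomes a real-valued weight tensor.
```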
We're going to get a mistake somewhere, like a crash... nope, nope, okay, it's actually training. So, for real: what made these scores a mask was that we discretized them right there; we're not doing that anymore, so we're just training floats. And we're also not multiplying them by the weight; we're just using those floats directly, which means we're basically training a plain neural network. I was worried about the bias, but as you can see here, bias is always set to False, so it's zero; we're fine. So we're training five different neural networks for five different tasks, and according to my hypothesis these masked things are just a kind of crude, quantized way of training neural networks; if my hypothesis is correct, this here will probably turn out even slightly better than the masked version. Okay, last task training... I'm starting to breathe... good laptop, fast laptop, very nice, come on, come on... and we're done. So again we have an average top-1 performance of 92-point... wait, did I even... oh no, I ran the old cell right here; that's the exact same number as last time, so we need to run inference again. With the task ID given, we're at 93.9 percent, so we increased slightly, which is nice given that we initialized terribly. What does it say about our task-inference accuracy? Maybe there's a mask issue here... set model task... the alphas are... no, we're good: task-inference accuracy one hundred percent. And with task inference at one hundred percent, I'm going to guess the no-task-ID evaluation gives the exact same number, the 93-point-something percent. And yes, 93.9 percent. So I'm going to say it right here: the supermasks and the superposition really are two separate ideas. The paper sounds cool combining the supermask and the superposition, but the inference using the superposition and the entropy to decide the task is really one idea, and training different supermasks is another. The advantage of using supermasks is of course that the per-task state is way smaller, so you can store it much more easily, but that really has nothing to do with the superposition. All right, and I'm going to guess this also works for 200 tasks and whatnot, the higher numbers of tasks. So I think that's it, and we're done here. Yeah.
[ { "end": 5.46, "start": 0, "text": " Hi there! Today we'll look at super masks in superposition again. So this is part" }, { "end": 10.14, "start": 5.46, "text": " two of this paper by Mitchell Wurtzman and Vivek Ramanujan and here's the" }, { "end": 16.54, "start": 10.14, "text": " reason why there's a part two. So after yesterday's video on this paper I" }, { "end": 21.98, "start": 16.54, "text": " couldn't sleep because I really felt that I had left out some important" }, { "end": 26.26, "start": 21.98, "text": " aspects that I wanted to touch on during the video. Now sometimes during videos I" }, { "end": 30.92, "start": 26.26, "text": " look at the clock and I realize like oh crap the video is already like an hour" }, { "end": 36.52, "start": 30.92, "text": " long and I know people are watching on 2x speed anyway but still it's like too" }, { "end": 40.760000000000005, "start": 36.52, "text": " long and I need to wrap it up really soon. And what I felt were pretty" }, { "end": 44.92, "start": 40.760000000000005, "text": " important messages about this paper got lost. So specifically I want to address" }, { "end": 50.760000000000005, "start": 44.92, "text": " three different things. First of all they have like a formal analysis, not a formal" }, { "end": 56.14, "start": 50.760000000000005, "text": " but a kind of more rigorous analysis of what their modified G objective does." }, { "end": 60.72, "start": 56.14, "text": " And I also want to give some intuition in that because I felt I really" }, { "end": 69.04, "start": 60.72, "text": " had done a good job at that. The second part is that the two different ideas" }, { "end": 76, "start": 69.04, "text": " right here being the super masks and the superposition and I think my opinion is" }, { "end": 80.52, "start": 76, "text": " sort of that these are two separate things and they really have nothing to" }, { "end": 84.64, "start": 80.52, "text": " do with each other and I think that didn't really come through last video." }, { "end": 89.72, "start": 84.64, "text": " And the third one being the broader impact statement of this paper which I" }, { "end": 96.04, "start": 89.72, "text": " you know I usually kind of gloss over it and go like haha but I hear there is an" }, { "end": 103.24000000000001, "start": 96.04, "text": " important point to it so yeah we'll get to that. Alright so again not a new paper" }, { "end": 107.32, "start": 103.24000000000001, "text": " today I realized this but I think it's worth kind of diving deeper into this" }, { "end": 113.48, "start": 107.32, "text": " paper which is a very cool paper you know so don't don't get me wrong right" }, { "end": 117.96000000000001, "start": 113.48, "text": " here and I feel mostly I haven't done a good part at explaining it." }, { "end": 126.48, "start": 117.96000000000001, "text": " Like literally lying awake. Okay so let's go to the first point. 
We had this so if" }, { "end": 130.68, "start": 126.48, "text": " you hadn't seen the first video super masks and superposition basically says" }, { "end": 135.64000000000001, "start": 130.68, "text": " that we want to do lifelong learning and we want to do lifelong learning by" }, { "end": 140.68, "start": 135.64000000000001, "text": " lifelong learning is the task where you have a bunch of tasks in sequence and" }, { "end": 145, "start": 140.68, "text": " you learn them in sequence so one after the other and basically the goal is to" }, { "end": 150.8, "start": 145, "text": " not forget tasks once you learn new tasks and this model does it by" }, { "end": 156.12, "start": 150.8, "text": " always building one of these super masks for each task that is applied to the" }, { "end": 162.64000000000001, "start": 156.12, "text": " same randomly initialized base neural network each time and you know by" }, { "end": 166.88, "start": 162.64000000000001, "text": " keeping the super mask around you won't forget the task and then at inference" }, { "end": 170.76, "start": 166.88, "text": " time if you're given the task and just retrieve the mask if you're not given" }, { "end": 175.44, "start": 170.76, "text": " the tasks you can do this superposition trick where you apply all the masks in a" }, { "end": 180.76, "start": 175.44, "text": " superposition and then you look at sort of the gradient of an entropy function" }, { "end": 185.84, "start": 180.76, "text": " in order to decide which task reduces the entropy the most so which task is" }, { "end": 192.2, "start": 185.84, "text": " the most certain about a particular data point and that you you kind of infer" }, { "end": 198.56, "start": 192.2, "text": " that that's the task you're gonna go with so instead of the entropy which is" }, { "end": 203.95999999999998, "start": 198.56, "text": " you know well reasoned they had this other objective they call a G and G" }, { "end": 210.72, "start": 203.95999999999998, "text": " basically looks at the it's really strange it looks at the superfluous" }, { "end": 214.48, "start": 210.72, "text": " neurons so they also add these superfluous neurons these S neurons" }, { "end": 223.44, "start": 214.48, "text": " right here and they they the G objective will only look at the S neurons in order" }, { "end": 228.07999999999998, "start": 223.44, "text": " to decide whether or not that's the correct task and it's basically just the" }, { "end": 232.83999999999997, "start": 228.07999999999998, "text": " log some X of the S neurons and we had some intuition about them being you know" }, { "end": 237.32, "start": 232.83999999999997, "text": " all small and so on them being like outlier detectors but there is an entire" }, { "end": 241.88, "start": 237.32, "text": " chapter in the appendix where the authors do a sort of more in-depth" }, { "end": 249.44, "start": 241.88, "text": " theoretical analysis of that which you know I it's not not necessary to do this" }, { "end": 255.92, "start": 249.44, "text": " for them so I really enjoy I enjoyed reading that and that gave me sort of" }, { "end": 263.8, "start": 255.92, "text": " the better intuition of what this G objective does so here they say the aim" }, { "end": 268.88, "start": 263.8, "text": " is not to formally prove properties of the algorithm rather we hope that a more" }, { "end": 275.56, "start": 268.88, "text": " mathematical language may prove useful in extending intuition okay so again" }, { "end": 279.4, "start": 275.56, "text": " that's that's pretty cool so 
they start off by saying you have your neural" }, { "end": 287.12, "start": 279.4, "text": " network is basically W and the the sorry the it's it's this Phi right here and" }, { "end": 293.6, "start": 287.12, "text": " the W are the last layers weights which compute your log it's so Y is going not" }, { "end": 297.6, "start": 293.6, "text": " to be your class but Y is going to be your log it's and P is going to be the" }, { "end": 303.84000000000003, "start": 297.6, "text": " probability vector over your class which if the you calculate this via a softmax" }, { "end": 311.88, "start": 303.84000000000003, "text": " is going to be the following expression right here if you have a mask right then" }, { "end": 318.24, "start": 311.88, "text": " at least in the last layer you can in you can infer it as this right here so" }, { "end": 324.16, "start": 318.24, "text": " you multiply the mask by the last these weights and then that gives you your" }, { "end": 331.04, "start": 324.16, "text": " log it's so they say here with they initialize the weights right here" }, { "end": 335.14000000000004, "start": 331.04, "text": " actually they initialize the they have no bias term and they initialize the" }, { "end": 340.32000000000005, "start": 335.14000000000004, "text": " weights by this constant so plus minus this constant it's not really necessary" }, { "end": 344.88, "start": 340.32000000000005, "text": " to do that but they do it right here it makes the analysis also a bit easier I" }, { "end": 349.84000000000003, "start": 344.88, "text": " guess it just works more well if you have these masks in superposition of" }, { "end": 354.2, "start": 349.84, "text": " course you want to add all of these masks with their respective alpha weighting" }, { "end": 363.76, "start": 354.2, "text": " factor then multiply by the weights and that gives you your log it's so note" }, { "end": 367.79999999999995, "start": 363.76, "text": " that this this doesn't necessarily only have to be the last layers weights" }, { "end": 373.67999999999995, "start": 367.79999999999995, "text": " right here you can view that as any sort of weights of the neural network if you" }, { "end": 378.76, "start": 373.67999999999995, "text": " formulate this Phi correctly so you don't think that they only apply the" }, { "end": 383.56, "start": 378.76, "text": " mask to the last layer they do apply the mask to the entire thing all right now" }, { "end": 390.15999999999997, "start": 383.56, "text": " the the important part here is what happens if we look at the derivative of" }, { "end": 396, "start": 390.15999999999997, "text": " G with respect to one of the alphas and take the maximum negative derivative of" }, { "end": 401.59999999999997, "start": 396, "text": " that G which is that mysterious function that only looks at the at the at the" }, { "end": 407.64, "start": 401.59999999999997, "text": " superfluous neurons so what they want they kind of construct this G by" }, { "end": 416.12, "start": 407.64, "text": " principle what they say is we want a function G that mimics the supervised" }, { "end": 422, "start": 416.12, "text": " loss right we want a function G that is kind of equal like the supervised loss" }, { "end": 429.15999999999997, "start": 422, "text": " if we had the task ID right and that's that's pretty cool because you know the" }, { "end": 435.4, "start": 429.15999999999997, "text": " the supervised loss you sort of need all the information you need the label you" }, { "end": 441.79999999999995, "start": 435.4, "text": " 
need you need all the all you need the task ID so the supervised loss is" }, { "end": 449.32, "start": 441.79999999999995, "text": " unavailable but we want a function G that in its gradient mimics the supervised" }, { "end": 455.56, "start": 449.32, "text": " loss so they go about constructing this right here they say lemma first lemma" }, { "end": 459.2, "start": 455.56, "text": " it's possible to construct a function G such that the gradient matches the" }, { "end": 463.47999999999996, "start": 459.2, "text": " gradient from the supervised loss for all s neurons so for all these" }, { "end": 469.20000000000005, "start": 463.48, "text": " superfluous neurons specifically we want that the gradient with respect to the" }, { "end": 473.12, "start": 469.20000000000005, "text": " log it's if the gradient to the log it's is equal that means the gradient to all" }, { "end": 477.42, "start": 473.12, "text": " the rest of the network is equal because the rest of the network goes through the" }, { "end": 481, "start": 477.42, "text": " log it's right the gradient through the log it's is equal to the gradient of the" }, { "end": 486.68, "start": 481, "text": " supervised loss to the log it's for all the superfluous neurons and zero" }, { "end": 492.44, "start": 486.68, "text": " otherwise so they say the zero otherwise is pretty easily done in math you know" }, { "end": 498.88, "start": 492.44, "text": " simply set it to zero and in the actual code which you can achieve like this" }, { "end": 504.8, "start": 498.88, "text": " where M indicates the superfluous neurons so this is just they said just" }, { "end": 510.28, "start": 504.8, "text": " multiplied here and the other ones are detached so there is no gradient" }, { "end": 516.04, "start": 510.28, "text": " flowing this is the property that we only look at the superfluous neurons and" }, { "end": 523.92, "start": 516.04, "text": " now we are going to show that the gradient is going to be equal so they" }, { "end": 530.76, "start": 523.92, "text": " say if you had the supervised loss which means if you had the label then this" }, { "end": 537.0799999999999, "start": 530.76, "text": " would be your cross entropy loss okay so you cross it divides into this part" }, { "end": 540.9599999999999, "start": 537.0799999999999, "text": " where you need the label and then this part here you don't need the label now" }, { "end": 548.2800000000001, "start": 540.96, "text": " you can pretty much say look the label is certainly going to be one of" }, { "end": 553.36, "start": 548.2800000000001, "text": " not the superfluous neurons because the superfluous neurons are superfluous they" }, { "end": 559.32, "start": 553.36, "text": " are never the correct neuron so this is always going to be you know not the not" }, { "end": 563.44, "start": 559.32, "text": " the neurons we look at so the gradient certainly this is always going to be" }, { "end": 570, "start": 563.44, "text": " zero because we never we wherever the gradient is flowing that's not where the" }, { "end": 579.92, "start": 570, "text": " where this is one so the gradient of any superfluous neuron is just this thing" }, { "end": 586.36, "start": 579.92, "text": " right here and that's exactly why they build the function G so the function G" }, { "end": 591.8, "start": 586.36, "text": " has this exact gradient the function G if you derive it has that same gradient" }, { "end": 600.3599999999999, "start": 591.8, "text": " as the supervised loss for the superfluous neurons okay so it's sort of" }, { 
"end": 606.24, "start": 600.3599999999999, "text": " magic but it's not you know it's not magic so they need two more assumptions" }, { "end": 611.24, "start": 606.24, "text": " here to have to get the following properties so the for the first" }, { "end": 619.56, "start": 611.24, "text": " property now because now we want to have G be identifying the correct task so" }, { "end": 624.04, "start": 619.56, "text": " we've already constructed G now we want to show that if we really do this the" }, { "end": 631.0799999999999, "start": 624.04, "text": " gradient with respect to the alphas then if we do it for a wrong task for the" }, { "end": 636.68, "start": 631.0799999999999, "text": " tasks that it's not the task of that particular data point that goes into" }, { "end": 642.5999999999999, "start": 636.68, "text": " computing G then we'll get a value that's probably lower than zero however" }, { "end": 648.8399999999999, "start": 642.5999999999999, "text": " if we plug in if we derive but with respect to the alpha of the correct task" }, { "end": 655.6800000000001, "start": 648.84, "text": " then we get a gradient a negative gradient that's higher than zero okay so" }, { "end": 660.12, "start": 655.6800000000001, "text": " we're now going to prove that this with high probability really allows us to" }, { "end": 666.8000000000001, "start": 660.12, "text": " distinguish the correct task from the wrong task and we need two assumptions" }, { "end": 670.88, "start": 666.8000000000001, "text": " right here the assumption one is we assume that the mask learn on task I" }, { "end": 677, "start": 670.88, "text": " will be independent from the data from task J if the task data is from task J" }, { "end": 683.88, "start": 677, "text": " then this are independent random variables okay so it sort of means that" }, { "end": 691.4, "start": 683.88, "text": " the the tasks themselves are kind of independent but it's not it's it's not" }, { "end": 696.02, "start": 691.4, "text": " the same requirement but you can think of in in the case of permuted M nest or" }, { "end": 703.2, "start": 696.02, "text": " so this is some it's given except if you consider this kind of frequency of" }, { "end": 708.24, "start": 703.2, "text": " brightness and so on but if you have independent task I think that this is" }, { "end": 714.6400000000001, "start": 708.24, "text": " given that means that the features right here and the masks are independent" }, { "end": 720.9200000000001, "start": 714.6400000000001, "text": " variable if if the data is from tax J then the features and the mask from task" }, { "end": 725.6400000000001, "start": 720.9200000000001, "text": " I are independent variable sorry the second assumption you need is that we" }, { "end": 729.44, "start": 725.6400000000001, "text": " assume that a negative weight and a positive weight are equally likely to be" }, { "end": 736.24, "start": 729.44, "text": " masked out okay so this again you can think of with some regularity this is" }, { "end": 743.08, "start": 736.24, "text": " certainly going to be to be given in a randomly initialized neural network note" }, { "end": 749.48, "start": 743.08, "text": " that when the features are 0 which will be the case for 0 mean random features" }, { "end": 755.32, "start": 749.48, "text": " yeah so um yeah before I said this was your neural network this is your random" }, { "end": 761.8000000000001, "start": 755.32, "text": " neural network right and then you mask that and so on if this is a randomly" }, { "end": 766.84, 
"start": 761.8000000000001, "text": " initialized neural network then you can make a case that the expected features" }, { "end": 775.12, "start": 766.84, "text": " of those will be 0 it doesn't need to be the case but you can you can construct" }, { "end": 779.48, "start": 775.12, "text": " it such that it is so if you have the two things right if you have those two" }, { "end": 786.8000000000001, "start": 779.48, "text": " things then you can prove the following if the data X comes from task J then" }, { "end": 792.32, "start": 786.8000000000001, "text": " when you derive by an alpha that's not of task J you get a number that's" }, { "end": 799.72, "start": 792.32, "text": " smaller than zero in expectation and here the crucial part is you reframe" }, { "end": 807.12, "start": 799.72, "text": " this gradient you reframe reframe reframe and what you'll see is that this" }, { "end": 815.6, "start": 807.12, "text": " here comes out so this is a sum and each element of the sum is going to be" }, { "end": 819.36, "start": 815.6, "text": " greater or equal to zero which means that this thing is greater or equal to" }, { "end": 824.72, "start": 819.36, "text": " zero which means the negative thing is smaller than zero in lemma H1 now we're" }, { "end": 829.76, "start": 824.72, "text": " going to look at lemma H1 to get an intuition of what's going on right here" }, { "end": 837.8, "start": 829.76, "text": " so lemma H1 says if J is the true task and I is not equal to J then this" }, { "end": 843.86, "start": 837.8, "text": " quantity here is greater than zero all right I restarted my tablet and we are" }, { "end": 851.08, "start": 843.86, "text": " back so what's kind of the the intuition behind why this quantity here would be" }, { "end": 857.3199999999999, "start": 851.08, "text": " greater or equal to zero and honestly in order to make it a bit easier I first" }, { "end": 865.08, "start": 857.32, "text": " want to look at whenever I equals J so whenever J is the true task and then I" }, { "end": 871.6, "start": 865.08, "text": " equals J then we can sort of think of the opposite like why why this should be" }, { "end": 878.6800000000001, "start": 871.6, "text": " smaller or equal to zero so consider this this is the run the feature of the" }, { "end": 887.7199999999999, "start": 878.68, "text": " network of you right and then the EUV connects that to the to the mask at point" }, { "end": 896.8, "start": 887.7199999999999, "text": " V and the mask at point at that point UV is either zero or one depending on the" }, { "end": 902.3599999999999, "start": 896.8, "text": " training so this this Xi right here that's going to be the from the" }, { "end": 907.76, "start": 902.3599999999999, "text": " initialization but the mask is going to be zero or one depending on whether that" }, { "end": 912.96, "start": 907.76, "text": " feature contributes sorry whether this entire thing here contributes" }, { "end": 919.08, "start": 912.96, "text": " positively to the task or not so the secret right here why we can make a" }, { "end": 924.76, "start": 919.08, "text": " claim that this is greater or lower than zero is going to be that the mask can" }, { "end": 932.76, "start": 924.76, "text": " only be zero or one it cannot be negative one right so if the mask is" }, { "end": 938.2, "start": 932.76, "text": " zero then obviously this thing is going to be zero however if the mask is one" }, { "end": 943.72, "start": 938.2, "text": " what does it mean if the mask is one that means that this this entire feature" 
}, { "end": 953.72, "start": 943.72, "text": " right here let's call it F is positively impacting is positively contributing to" }, { "end": 961.72, "start": 953.72, "text": " this particular neuron right here so if the mask is one this is this it means" }, { "end": 967.44, "start": 961.72, "text": " the addition of that feature more of that feature makes that log it go up okay" }, { "end": 977.2, "start": 967.44, "text": " so if the mask is one during training it means that the feature positively" }, { "end": 981.4, "start": 977.2, "text": " contributes to the task so if we look at the gradient with respect to this" }, { "end": 985.88, "start": 981.4, "text": " function with respect to the the log it and the function basically means it's" }, { "end": 995.4399999999999, "start": 985.88, "text": " just measure measures how high these superfluous log it's are then what why" }, { "end": 1001.12, "start": 995.4399999999999, "text": " do we find a negative interaction there because if you look at the neural" }, { "end": 1007.64, "start": 1001.12, "text": " network and you forward pass and this particular feature is important and you" }, { "end": 1013.8, "start": 1007.64, "text": " look at the loss G and you backward pass through the log it's if it is smaller" }, { "end": 1020.12, "start": 1013.8, "text": " than zero that means there there is a negative interaction right here so that" }, { "end": 1028.3999999999999, "start": 1020.12, "text": " basically means that if we make this feature higher then in this case we make" }, { "end": 1036.96, "start": 1028.3999999999999, "text": " this G function go lower okay and that is the case for the correct task because" }, { "end": 1044.16, "start": 1036.96, "text": " if this is the correct task and the mask is learned adequately that means it" }, { "end": 1050.8400000000001, "start": 1044.16, "text": " should assign a low weight to the superfluous neuron whenever the input" }, { "end": 1057.68, "start": 1050.8400000000001, "text": " features you know are of that task and so it makes sense that this here would" }, { "end": 1063.72, "start": 1057.68, "text": " be a negative number because what we want if the mask deems the feature" }, { "end": 1068.92, "start": 1063.72, "text": " important in a positive sense we want that if the feature goes up G goes down" }, { "end": 1076.08, "start": 1068.92, "text": " and that is exactly why we have the negative interaction right here right so" }, { "end": 1081.48, "start": 1076.08, "text": " the negative comes from this being negative I hope this sort of makes sense" }, { "end": 1086.56, "start": 1081.48, "text": " so if the mask is one the mask says basically if that feature goes up the" }, { "end": 1091.84, "start": 1086.56, "text": " loss goes down now G is a measure of the superfluous neurons the superfluous" }, { "end": 1098.08, "start": 1091.84, "text": " neurons should be small if the loss is small so if this is really from the task" }, { "end": 1102.8799999999999, "start": 1098.08, "text": " and this feature is really useful that means if we increase the feature the G" }, { "end": 1107.9599999999998, "start": 1102.8799999999999, "text": " function should go down and therefore this product here is going to be most" }, { "end": 1116.56, "start": 1107.9599999999998, "text": " likely negative okay and the contrary is you know analogous right here if this is" }, { "end": 1123.44, "start": 1116.56, "text": " not of this task and the mass can either be 0 or 1 right if it's 0 then this" }, { "end": 1130.36, "start": 
1123.44, "text": " quantity is 0 however if it's 1 it's more likely that the that there the" }, { "end": 1137.12, "start": 1130.36, "text": " feature here because it's I is not the correct task which basically means that" }, { "end": 1143.8799999999999, "start": 1137.12, "text": " this feature it is for a different task it is good for a different task so the" }, { "end": 1147.96, "start": 1143.88, "text": " mask of that different task says it's good right here and we have no reason to" }, { "end": 1152.7600000000002, "start": 1147.96, "text": " believe that this would decrease the loss of the loss of this particular data" }, { "end": 1160.16, "start": 1152.7600000000002, "text": " point in this task so it's kind of the inverse reasoning if you look at the" }, { "end": 1167.4, "start": 1160.16, "text": " actual derivation here it's fairly long and it goes over the cases of the" }, { "end": 1171.5600000000002, "start": 1167.4, "text": " interactions between actually this initialization and the mask so the" }, { "end": 1178.48, "start": 1171.56, "text": " initialization can be positive or negative as you can see right here and I" }, { "end": 1186.08, "start": 1178.48, "text": " think I just think that the the intuition here is that the superfluous" }, { "end": 1192.46, "start": 1186.08, "text": " neurons react differently to a data point of the trained task because they" }, { "end": 1199.96, "start": 1192.46, "text": " have been kind of made to decrease for that task and for that particular mask" }, { "end": 1204.72, "start": 1199.96, "text": " as they do for when the data point doesn't match the mask when the data" }, { "end": 1210.4, "start": 1204.72, "text": " point doesn't match the mask there is no reason for the logits of the superfluous" }, { "end": 1216.7, "start": 1210.4, "text": " neurons to be low and if the data point task does match the mask there is ample" }, { "end": 1222.8400000000001, "start": 1216.7, "text": " reasons for those to be low I hope that sort of makes sense it is sort of it's a" }, { "end": 1227.14, "start": 1222.8400000000001, "text": " bit more of an intuition but if you really want to dig into it look at the" }, { "end": 1234.72, "start": 1227.14, "text": " derivation right here okay second point is the fact that the masks and the super" }, { "end": 1238.88, "start": 1234.72, "text": " positions don't really have to do anything with each other and that's you" }, { "end": 1242.8400000000001, "start": 1238.88, "text": " know I've said throughout the video like remember these tasks are super easy yada" }, { "end": 1249.16, "start": 1242.8400000000001, "text": " yada yada so let me make it clear in this in this diagram right here the" }, { "end": 1255.5600000000002, "start": 1249.16, "text": " super masks these are simply a way to train a neural network in a crude way" }, { "end": 1260.12, "start": 1255.56, "text": " right I don't think there is you know this distinction between mask and" }, { "end": 1265.76, "start": 1260.12, "text": " network I don't really like that much because ultimately what you're doing is" }, { "end": 1271.6399999999999, "start": 1265.76, "text": " simply you're training a neural network in a kind of weird way okay the fact" }, { "end": 1276.52, "start": 1271.6399999999999, "text": " that you always use the same underlying you know great neural network doesn't" }, { "end": 1281.9199999999998, "start": 1276.52, "text": " really matter right here it's still what you do in this super mask training is" }, { "end": 1285.12, "start": 
1281.9199999999998, "text": " you provide a severely over parameterized network and then the mask" }, { "end": 1289.36, "start": 1285.12, "text": " simply gets to choose which weights to mix rather than you get to adjust the" }, { "end": 1294.08, "start": 1289.36, "text": " weights if you adjust the weights you usually get more accurate than with the" }, { "end": 1299.1599999999999, "start": 1294.08, "text": " mask but it's sort of like a quantized neural network that you train right here" }, { "end": 1303.12, "start": 1299.1599999999999, "text": " so that's the super mask thing again I don't think it's important that the" }, { "end": 1306.84, "start": 1303.12, "text": " underlying network is always the same the only advantage you have is it saves" }, { "end": 1314.36, "start": 1306.84, "text": " space because these masks are very small the super masks on the other hand this" }, { "end": 1321.3999999999999, "start": 1314.36, "text": " idea that you overlay all of the masks together and then you look at where this" }, { "end": 1327.8, "start": 1321.3999999999999, "text": " at the gradient of the entropy and you look at which of the of the mixing" }, { "end": 1333.36, "start": 1327.8, "text": " factors the gradient poles the most that's a different idea and the question" }, { "end": 1337.52, "start": 1333.36, "text": " here is wouldn't that isn't that independent does really depend on the" }, { "end": 1343.8799999999999, "start": 1337.52, "text": " masks or doesn't it and the you know the hypothesis would be that if I simply" }, { "end": 1348.72, "start": 1343.88, "text": " train you know three different neural networks for three different tasks could" }, { "end": 1352.8400000000001, "start": 1348.72, "text": " I not do the same superposition trick like could I not just add all of them" }, { "end": 1358.1200000000001, "start": 1352.8400000000001, "text": " with a respective alpha look at the entropy calculate the gradient with" }, { "end": 1362.2800000000002, "start": 1358.1200000000001, "text": " respect to each of the alphas of the entropy and then decide which task it is" }, { "end": 1368, "start": 1362.2800000000002, "text": " you know don't need masks simply mix neural networks in superposition so I" }, { "end": 1372.96, "start": 1368, "text": " did it and I actually tried their code is available so big props for their" }, { "end": 1378.16, "start": 1372.96, "text": " code being available I tried their code it's actually very few changes and I'm" }, { "end": 1384.72, "start": 1378.16, "text": " going to append my live coding of this at the end of this video so if you want" }, { "end": 1388.64, "start": 1384.72, "text": " to if you are interested in watching that you can do so but you know the" }, { "end": 1393.1200000000001, "start": 1388.64, "text": " outcome is if I train neural networks and I have I've you know done super quick" }, { "end": 1398, "start": 1393.1200000000001, "text": " and initialize them wrongly probably and all but if I train these neural net if I" }, { "end": 1403.48, "start": 1398, "text": " train the masks you get to like 92 percent accuracy in their tasks in each" }, { "end": 1407.24, "start": 1403.48, "text": " of the tasks and then also in the average if I train the actual neural" }, { "end": 1412, "start": 1407.24, "text": " networks I get to a higher accuracy like 93 something it doesn't matter it's just" }, { "end": 1418.36, "start": 1412, "text": " higher okay so that's hypothesis one is the training masks is just a way of" }, { "end": 1422.7, "start": 
1418.36, "text": " training neural networks the fact that the masks and the network training" }, { "end": 1428.2, "start": 1422.7, "text": " itself are that close I think is a testament to how easy these tasks are" }, { "end": 1434.0800000000002, "start": 1428.2, "text": " like how easy eminent amnest is I'm going to also hypothesize that if the" }, { "end": 1438.04, "start": 1434.0800000000002, "text": " task gets harder and harder and I don't mean 10 class image net I mean a" }, { "end": 1444.68, "start": 1438.04, "text": " thousand class image net then these masks are going to degrade severely" }, { "end": 1448.2, "start": 1444.68, "text": " versus training the actual neural network I might be wrong I mean you can" }, { "end": 1453.16, "start": 1448.2, "text": " over parameter eyes really heavily and they will still work okay but in any" }, { "end": 1455.92, "start": 1453.16, "text": " case I trade the train these neural networks and they reached higher" }, { "end": 1461, "start": 1455.92, "text": " accuracy and then I did the exact same thing I laid them in superposition to" }, { "end": 1465.56, "start": 1461, "text": " determine what task it is and I could achieve the exact same result so here in" }, { "end": 1469.6000000000001, "start": 1465.56, "text": " their example they have a hundred percent task classification accuracy and" }, { "end": 1475.16, "start": 1469.6000000000001, "text": " I reached the exact same thing code worked I'm not going to try to scale" }, { "end": 1482.3200000000002, "start": 1475.16, "text": " this up to 250 or 2500 in tasks but I'm going to assume that with you know" }, { "end": 1488.0400000000002, "start": 1482.3200000000002, "text": " tuning and stuff that it's going to work about equally well you could make an" }, { "end": 1492.8400000000001, "start": 1488.0400000000002, "text": " argument that the masks being sparser they might be differentiated from each" }, { "end": 1500.48, "start": 1492.8400000000001, "text": " other more accurately but I'm not sure maybe but it's it's not a cool it's not" }, { "end": 1507.04, "start": 1500.48, "text": " a qualitative difference right so these two things are really two separate ideas" }, { "end": 1513.2, "start": 1507.04, "text": " that find their way together in this paper but ultimately have not much to do" }, { "end": 1523.34, "start": 1513.2, "text": " with each other okay at least that's from what I can tell I might I might be" }, { "end": 1528.3600000000001, "start": 1523.34, "text": " wrong here and I might be wrong with respect to their G objective and whatnot" }, { "end": 1536.12, "start": 1528.36, "text": " and you know but I think that that these are two cool ideas but they can be" }, { "end": 1543.28, "start": 1536.12, "text": " applied independently so the last thing I want to look at is their broader impact" }, { "end": 1549.28, "start": 1543.28, "text": " statement right here now there is a reason so usually I kind of track these" }, { "end": 1552.8799999999999, "start": 1549.28, "text": " broader impact statement because I think this this is this here is sort of" }, { "end": 1556.52, "start": 1552.8799999999999, "text": " fundamental research right this is fundamental machine learning research" }, { "end": 1560.8, "start": 1556.52, "text": " we do architecture the multitask learning task isn't really important as" }, { "end": 1565.4, "start": 1560.8, "text": " long as we have kind of the same tasks right here uncorrelated and so on the" }, { "end": 1568.8799999999999, "start": 1565.4, "text": " same 
hardness and I've also made the point that it's really important for" }, { "end": 1574.24, "start": 1568.8799999999999, "text": " these tasks to be the same hard for this to work in this place a role right here" }, { "end": 1580.92, "start": 1574.24, "text": " so um and they do they do describe some of this in this conclusion with you know" }, { "end": 1586.44, "start": 1580.92, "text": " limitation that we observed has to do with task identity inference when model" }, { "end": 1591.3600000000001, "start": 1586.44, "text": " are not well calibrated models that are overly confident for the wrong task okay" }, { "end": 1600.4, "start": 1591.3600000000001, "text": " so in order for them to infer the correct task they the sort of so if you" }, { "end": 1605.92, "start": 1600.4, "text": " look at your entropy of the models for the tasks that means you're gonna" }, { "end": 1611.98, "start": 1605.92, "text": " select the model that is the most sure about the task this only works if the" }, { "end": 1617.84, "start": 1611.98, "text": " tasks are equally hard okay if one task is much much harder than the other task" }, { "end": 1622.04, "start": 1617.84, "text": " this other task is always going to say well I'm really confident about this one" }, { "end": 1625.84, "start": 1622.04, "text": " because the task is just easier it's going to be it's going to train in neural" }, { "end": 1630.88, "start": 1625.84, "text": " networks is generally more confident and you're going to misclassify a lot of the" }, { "end": 1637.2, "start": 1630.88, "text": " tasks so so here what does this have to do with the broader impact statement if" }, { "end": 1645.48, "start": 1637.2, "text": " you look at the broader impact statement what they say right here so they say a" }, { "end": 1649.76, "start": 1645.48, "text": " goal of continue learning self-manage tasks with a single model however it is" }, { "end": 1653.24, "start": 1649.76, "text": " not exactly clear what qualifies as a single model therefore a concrete" }, { "end": 1658.52, "start": 1653.24, "text": " objective has become to learn many tasks as efficiently as possible we believe" }, { "end": 1662.64, "start": 1658.52, "text": " that subs up is a useful step in this direction however there are consequences" }, { "end": 1667.44, "start": 1662.64, "text": " to more efficient models both positive and negative so this is sort of what the" }, { "end": 1672.1200000000001, "start": 1667.44, "text": " community does so there are three things that I've seen so far in broader impact" }, { "end": 1676.64, "start": 1672.1200000000001, "text": " statement first you some people say this is not applicable to us which I agree" }, { "end": 1682.0400000000002, "start": 1676.64, "text": " for most fundamental research broader it like the broader impact statement is" }, { "end": 1687.1200000000001, "start": 1682.0400000000002, "text": " supposed to be what does this particular method how will this influence broader" }, { "end": 1694.36, "start": 1687.12, "text": " society so not applicable completely valid for most of these research papers" }, { "end": 1702.2399999999998, "start": 1694.36, "text": " because guess what you can use any method to do good or to do bad and that's" }, { "end": 1708.6399999999999, "start": 1702.2399999999998, "text": " that's the second second part second method is basically you you just change" }, { "end": 1713.76, "start": 1708.6399999999999, "text": " a generic statements how you can do good and bad and usually you can't relate it" }, { "end": 
1718.96, "start": 1713.76, "text": " to your particular method in the paper right because your method is I don't" }, { "end": 1725.96, "start": 1718.96, "text": " know like my faster convergence rate of SGD but and and so what you do is you" }, { "end": 1730.3, "start": 1725.96, "text": " just go one level up you go up the levels it's always like optimization can" }, { "end": 1733.28, "start": 1730.3, "text": " be used for good and for bad I mean that's still kind of a bit vague and" }, { "end": 1737.8799999999999, "start": 1733.28, "text": " then you go up further well optimization can do more machine learning and machine" }, { "end": 1742.24, "start": 1737.8799999999999, "text": " learning can be used to do good and bad for example face recognition and things" }, { "end": 1745.76, "start": 1742.24, "text": " like this so you just go up the levels and that's what they essentially do here" }, { "end": 1750.84, "start": 1745.76, "text": " and that's what you know most people have defaulted to it's like okay so you" }, { "end": 1756.04, "start": 1750.84, "text": " know our model here is you know we it basically one can train more efficient" }, { "end": 1760.16, "start": 1756.04, "text": " models and then they simply highlight what more efficient models can do" }, { "end": 1764.18, "start": 1760.16, "text": " efficient models require less compute if there's a model by we run on the end" }, { "end": 1769.24, "start": 1764.18, "text": " device if models are more efficient than large-scale research is not limited to" }, { "end": 1774.36, "start": 1769.24, "text": " wealthier institutions by the way I also the broader impact statement I believe" }, { "end": 1779.32, "start": 1774.36, "text": " should be the impact on society and not really on the research community itself" }, { "end": 1787.6, "start": 1779.32, "text": " so I also this this is a bit shaky with respect to because I'm really regarding" }, { "end": 1791.92, "start": 1787.6, "text": " what the broader impact statement should be this is not my opinion I'm I'm trying" }, { "end": 1797.1200000000001, "start": 1791.92, "text": " to reflect everything I've read of guidance about what the broader impact" }, { "end": 1803.2399999999998, "start": 1797.12, "text": " statement should be by the way there is also method method three which is to" }, { "end": 1806.4399999999998, "start": 1803.2399999999998, "text": " simply tell me more about your paper in the broader impact statement which I" }, { "end": 1810.6399999999999, "start": 1806.4399999999998, "text": " guess is the smart method because the broader impact statement can be before" }, { "end": 1815.2399999999998, "start": 1810.6399999999999, "text": " before the references so it's in the main part and people are required to" }, { "end": 1819.36, "start": 1815.2399999999998, "text": " read it not like the appendix reviewers are not required to read the appendix" }, { "end": 1822.76, "start": 1819.36, "text": " reviewers are required to read the broader impact statement so I guess the" }, { "end": 1827.68, "start": 1822.76, "text": " smart authors will just try to cloak more information about their model in" }, { "end": 1832.08, "start": 1827.68, "text": " terms of a broader impact statement I guess well whether that's smart is a" }, { "end": 1837.96, "start": 1832.08, "text": " different discussion but here they just it's it's already defaulting right these" }, { "end": 1843.64, "start": 1837.96, "text": " it's already the default people simply go level up level up level up until we" }, { 
"end": 1848.8799999999999, "start": 1843.64, "text": " can you know say something generic and we will also highlight and discuss the" }, { "end": 1852.72, "start": 1848.88, "text": " negative consequences of models which can efficiently learn many tasks and" }, { "end": 1857.0400000000002, "start": 1852.72, "text": " efficient models in general when models are more efficient they're also more" }, { "end": 1861.2800000000002, "start": 1857.0400000000002, "text": " available and less subject to regularization as a study and study of" }, { "end": 1866.0400000000002, "start": 1861.2800000000002, "text": " result for instance when a high-impact model is released an institution will" }, { "end": 1871.44, "start": 1866.0400000000002, "text": " hopefully be accompanied by a model card analyzing the bias and intended use of" }, { "end": 1877.16, "start": 1871.44, "text": " the model by contrast if anyone is able to train a powerful model this may no" }, { "end": 1881, "start": 1877.16, "text": " longer be the case resulting in a proliferation of model with harmful" }, { "end": 1885.8000000000002, "start": 1881, "text": " biases or intended use taking the United States for instance bias can be harmful" }, { "end": 1890.72, "start": 1885.8000000000002, "text": " as models show disproportionately more errors for already marginalized groups" }, { "end": 1896.8000000000002, "start": 1890.72, "text": " furthering existing deeply rooted structural racism this this is like well" }, { "end": 1904.88, "start": 1896.8000000000002, "text": " technology this is basically a statement about technology and so why why do I" }, { "end": 1911.88, "start": 1904.88, "text": " have a particular not issue but why do I pick this broader impact statement they" }, { "end": 1917.96, "start": 1911.88, "text": " even Rick this here this is this gender shades paper right where people went and" }, { "end": 1922.8000000000002, "start": 1917.96, "text": " they looked at these commercial API's for face recognition I I think that's" }, { "end": 1929.0800000000002, "start": 1922.8000000000002, "text": " the paper yeah gender shades so if you have a face" }, { "end": 1937.6, "start": 1929.08, "text": " recognizer they realized they divided people up by I think gender and race so" }, { "end": 1943.78, "start": 1937.6, "text": " you know like they built four groups or I haven't I haven't I've just looked at" }, { "end": 1947.8, "start": 1943.78, "text": " the paper but in my understanding that they divided people up into groups which" }, { "end": 1952.9199999999998, "start": 1947.8, "text": " I find arbitrary to have the these two axes race and gender but okay you can do" }, { "end": 1958.6399999999999, "start": 1952.9199999999998, "text": " that and they discovered that these commercial API's have different accuracy" }, { "end": 1964.16, "start": 1958.64, "text": " for the different groups right and that basically our point is that you know" }, { "end": 1967.2800000000002, "start": 1964.16, "text": " these commercial API's if they're offered for all humans they should work" }, { "end": 1974.68, "start": 1967.2800000000002, "text": " equally well for all humans now now you may be see what it has to do with this" }, { "end": 1982.4, "start": 1974.68, "text": " paper well this paper is in the business of doing multitask learning so it is" }, { "end": 1989, "start": 1982.4, "text": " very viable to actually frame the the task for example like this is an example" }, { "end": 1994.96, "start": 1989, "text": " if you frame the task of 
multitask learning like face recognition on" }, { "end": 1999.2800000000002, "start": 1994.96, "text": " different groups of people as a multitask learning problem you have you" }, { "end": 2005.3600000000001, "start": 1999.2800000000002, "text": " know group group one right here group two group three and then if at inference" }, { "end": 2009.5600000000002, "start": 2005.3600000000001, "text": " time so you can build you know good models for each of the group at" }, { "end": 2012.72, "start": 2009.56, "text": " inference time you're given an image and you're trying to a fur first which" }, { "end": 2017.4199999999998, "start": 2012.72, "text": " group is that from and then take the appropriate classifier that would be you" }, { "end": 2022.72, "start": 2017.4199999999998, "text": " know that would be a good a hypothetical classifier for this thing now what do we" }, { "end": 2030.3999999999999, "start": 2022.72, "text": " know about this thing this thing is fails if the tasks aren't equally hard" }, { "end": 2038.6799999999998, "start": 2030.3999999999999, "text": " also in in specifically if if for one group let's say for group three the the" }, { "end": 2043.24, "start": 2038.68, "text": " task is way harder because you have less data I guess the one of the main" }, { "end": 2048.36, "start": 2043.24, "text": " problems there is that the data sets are not equally balanced if you have less" }, { "end": 2054.2400000000002, "start": 2048.36, "text": " data for that then the task becomes de facto harder and the model is less sure" }, { "end": 2062.2400000000002, "start": 2054.2400000000002, "text": " about the task which means that it's a double whammy so not only is the model" }, { "end": 2068.56, "start": 2062.2400000000002, "text": " itself less accurate but these the input data point if the person is actually" }, { "end": 2074.32, "start": 2068.56, "text": " of group three is less likely to be classified correctly into the correct" }, { "end": 2081.16, "start": 2074.32, "text": " model at to begin with so you know for all the for all I I've had my my share" }, { "end": 2086.16, "start": 2081.16, "text": " of of comments on the video I made and I still maintain that societal bias can" }, { "end": 2091.2799999999997, "start": 2086.16, "text": " comes about by data set but for all the people saying there are models that" }, { "end": 2098.32, "start": 2091.2799999999997, "text": " exaggerate existing biases in models this would be like if there is any ever" }, { "end": 2103.04, "start": 2098.32, "text": " any applicability of these broader impact statement guidelines this would" }, { "end": 2108.36, "start": 2103.04, "text": " be the paper right it's this right here is an actual system that if I have" }, { "end": 2113.48, "start": 2108.36, "text": " different classifiers and I combine them with this method it will double punish" }, { "end": 2120.04, "start": 2113.48, "text": " the classifier that is less sure that is less accurate because that is also going" }, { "end": 2124.88, "start": 2120.04, "text": " to be the one with the higher entropy therefore not as much selected if I give" }, { "end": 2130.76, "start": 2124.88, "text": " a data point of that particular task and so this is like a I'm not criticizing" }, { "end": 2134.76, "start": 2130.76, "text": " the method here like by all means like this is a cool method where you can" }, { "end": 2139.32, "start": 2134.76, "text": " recognize that this happens and try to calibrate accordingly but if there was" }, { "end": 2145.2000000000003, 
"start": 2139.32, "text": " ever any straight ball for a broader impact statement I would you know this" }, { "end": 2151.32, "start": 2145.2000000000003, "text": " is it and this I'm not I'm not trying I'm not saying that these these authors" }, { "end": 2157.2400000000002, "start": 2151.32, "text": " didn't do that for a reason I believe that look it's been whatever not even" }, { "end": 2161.36, "start": 2157.2400000000002, "text": " half a year since we've started with these general broader impact statements" }, { "end": 2168.1200000000003, "start": 2161.36, "text": " and everybody is already defaulting to simply say technology good technology" }, { "end": 2177.8, "start": 2168.1200000000003, "text": " bad that's that's the people aren't even thinking and so this right this is one" }, { "end": 2183.6800000000003, "start": 2177.8, "text": " of the reasons why I simply find these broader impact statements to be not that" }, { "end": 2188.2400000000002, "start": 2183.6800000000003, "text": " like not a good idea because there is a default answer and people are just" }, { "end": 2194.2400000000002, "start": 2188.2400000000002, "text": " putting it here even in like when there is an actual obvious immensely obvious" }, { "end": 2202.36, "start": 2194.2400000000002, "text": " thing that they even they even cited like the basis for that so you know" }, { "end": 2210.8, "start": 2202.36, "text": " that's sort of my take on this I again I enjoyed this paper the code is is" }, { "end": 2215.96, "start": 2210.8, "text": " available everything is good about this this paper I'm not even the fact that" }, { "end": 2218.48, "start": 2215.96, "text": " these are you know I think these are kind of two separate ideas they're" }, { "end": 2225.4, "start": 2218.48, "text": " combined cool they're analyzed formally in theory there's intuition given all" }, { "end": 2232.34, "start": 2225.4, "text": " good so don't get me wrong this is not like trashing this paper it's just" }, { "end": 2240.44, "start": 2232.34, "text": " I felt I had something more to say and I think that was it so yeah I'll see you" }, { "end": 2247.2000000000003, "start": 2240.44, "text": " next time with the new paper okay so our goal here is going to be to change this" }, { "end": 2254.36, "start": 2247.2000000000003, "text": " code to not use masks as mixtures but actually use neural networks with real" }, { "end": 2260.04, "start": 2254.36, "text": " weights as as mixtures and in superposition with each other okay so" }, { "end": 2264.2, "start": 2260.04, "text": " what we're going to do is we're going to train the different neural networks and" }, { "end": 2270, "start": 2264.2, "text": " then use this kind of superposition trick to figure out which task a data" }, { "end": 2277.04, "start": 2270, "text": " point came from so let's have a look at the code right here and there's a bunch" }, { "end": 2283.4, "start": 2277.04, "text": " of helper code and if we go down through everything you'll see that this is the" }, { "end": 2288.96, "start": 2283.4, "text": " MNIST permuted data set so each each task is basically a random permutation" }, { "end": 2297.2, "start": 2288.96, "text": " of MNIST and if you execute I believe this here and then you train the model" }, { "end": 2303.32, "start": 2297.2, "text": " and right now it's for five tasks but I guess that's going to be enough for now" }, { "end": 2311.32, "start": 2303.32, "text": " yeah so if we get some good signal here I guess it's a matter of of doing kind" }, { "end": 
2316.32, "start": 2311.32, "text": " of engineering and plumbing and tuning if until you get it up to whatever 200" }, { "end": 2323, "start": 2316.32, "text": " or 2000 tasks though I might be wrong there so this is training and I" }, { "end": 2329.44, "start": 2323, "text": " shortly sort of had a look at the code but I haven't actually tried this yet so" }, { "end": 2336.2000000000003, "start": 2329.44, "text": " the thing the model is built here you see this is multi task fully connected" }, { "end": 2341.6400000000003, "start": 2336.2000000000003, "text": " which has these different layers right here and it's built by these multi task" }, { "end": 2349.56, "start": 2341.64, "text": " mask linear models now the multi task mask linear models are defined right" }, { "end": 2354.6, "start": 2349.56, "text": " here so it's basically a linear model as you can see it's derived from a linear" }, { "end": 2361.2799999999997, "start": 2354.6, "text": " from a linear module and it has a parameter called num tasks and then it" }, { "end": 2369.7599999999998, "start": 2361.2799999999997, "text": " has a parameter scores which I guess is are these these masks right here and the" }, { "end": 2375.44, "start": 2369.76, "text": " scores I'm going to guess are always going to be multiplied by the weights" }, { "end": 2381.44, "start": 2375.44, "text": " here in the forward so you can see they're in forward you get the weights" }, { "end": 2389.2000000000003, "start": 2381.44, "text": " from the alphas yeah yeah this is the superimposed alright so if we know the" }, { "end": 2396.0400000000004, "start": 2389.2000000000003, "text": " task ID down here we get this subnet and we are going to multiply it with the" }, { "end": 2401.4, "start": 2396.04, "text": " weights if we don't know the task ID we want to get these alphas so the alphas" }, { "end": 2408.2799999999997, "start": 2401.4, "text": " are going to be one over the number of tasks at the beginning we're then going" }, { "end": 2417.4, "start": 2408.2799999999997, "text": " to multiply each of the alphas with the weights and with that we're going to get" }, { "end": 2423.8, "start": 2417.4, "text": " this subnet mask right here so we need to know what this self dot stacked is so" }, { "end": 2429.0800000000004, "start": 2423.8, "text": " this self dot stacked is getting right here in this cache mask or simply" }, { "end": 2435.0800000000004, "start": 2429.0800000000004, "text": " stacking this this get subnet for all of the things so our plan is going to be" }, { "end": 2440.32, "start": 2435.0800000000004, "text": " that this subnet right here is going to be the actual weights of the neural" }, { "end": 2447.2000000000003, "start": 2440.32, "text": " network okay and not just the not just the mask and then we don't need to" }, { "end": 2451.6400000000003, "start": 2447.2000000000003, "text": " actually multiply it with the weight we can just just forget about the weight" }, { "end": 2458.8799999999997, "start": 2451.64, "text": " honestly and just train the subnet so for the subnet as you can see here you" }, { "end": 2464.7599999999998, "start": 2458.8799999999997, "text": " have this get subnet thing and that's an autograd function which basically means" }, { "end": 2468.64, "start": 2464.7599999999998, "text": " in the forward pass you want to discretize it and in the backward pass" }, { "end": 2473.56, "start": 2468.64, "text": " this is a straight through estimator so our first task is going and this here" }, { "end": 2478.8399999999997, 
"start": 2473.56, "text": " should be done now my laptop has stopped breathing so we've trained five tasks" }, { "end": 2485, "start": 2478.84, "text": " and now we can run inference on that so this is when the task is given real" }, { "end": 2493.84, "start": 2485, "text": " quick you can see task one 92 percent 92 percent 92 percent 92 percent so we have" }, { "end": 2501.04, "start": 2493.84, "text": " a an overall performance of 92.44 percent then when the task is not given" }, { "end": 2507.32, "start": 2501.04, "text": " we have two things to evaluate whether or not basically how good we are overall" }, { "end": 2511.92, "start": 2507.32, "text": " and whether or not we get the tasks correct of course the tasks are at this" }, { "end": 2518.2000000000003, "start": 2511.92, "text": " pre requirement so we have a hundred percent task inference accuracy okay so" }, { "end": 2522.76, "start": 2518.2000000000003, "text": " we don't we don't okay we can we could evaluate this here but you can already" }, { "end": 2527.52, "start": 2522.76, "text": " see the output from last time there is like no difference from the performance" }, { "end": 2532.7200000000003, "start": 2527.52, "text": " of the when the task is given it's always being able to infer the task we" }, { "end": 2537.3599999999997, "start": 2532.72, "text": " want to check out the same thing so we want to change first of all this get" }, { "end": 2542.3199999999997, "start": 2537.3599999999997, "text": " subnet this is where it's these scores are discretized now given that these" }, { "end": 2547.2799999999997, "start": 2542.3199999999997, "text": " scores are going to be and to end up being our actual weights we want we" }, { "end": 2550.9199999999996, "start": 2547.2799999999997, "text": " don't do that we simply return the scores now this is a this is pretty" }, { "end": 2558.3999999999996, "start": 2550.9199999999996, "text": " pointless right now but we'll keep it just to be as close as possible to the" }, { "end": 2569.12, "start": 2558.4, "text": " to that now mask in it this is where we initialize the mask now this is climbing" }, { "end": 2575.2400000000002, "start": 2569.12, "text": " uniform and it has some thing but we want probably we want to train the" }, { "end": 2583.78, "start": 2575.2400000000002, "text": " neural network to be initialized you know as we know it so let's try what what" }, { "end": 2590.0800000000004, "start": 2583.78, "text": " our other initialize function so in it dot what do we have here do we have" }, { "end": 2599.6800000000003, "start": 2590.0800000000004, "text": " what's usual I don't even know normal Savi that that sounds about right that" }, { "end": 2609.28, "start": 2599.6800000000003, "text": " sounds about right all right all right so scores and yeah let's try this this" }, { "end": 2612.88, "start": 2609.28, "text": " could this could I break everything right if you initialize wrongly you get" }, { "end": 2623.96, "start": 2612.88, "text": " like dumb results so okay signed constant yada yada yada where is that" }, { "end": 2632.6, "start": 2623.96, "text": " used huh okay that's also initializing something so we calculate the gain and" }, { "end": 2641.2400000000002, "start": 2632.6, "text": " then okay this doesn't seem good we'll just keep it hey why not" }, { "end": 2651.04, "start": 2641.24, "text": " why not why not just keep it at that all right so cool oh yeah this is for the" }, { "end": 2655.4799999999996, "start": 2651.04, "text": " weight anyway we won't use the weight 
at all of this layer we'll just use our own" }, { "end": 2661.52, "start": 2655.4799999999996, "text": " weights so here we have these stacked okay we get the scores that's all good" }, { "end": 2667.9599999999996, "start": 2661.52, "text": " like I'm pretty happy with that I'm pretty happy with this mask in it that" }, { "end": 2670.8799999999997, "start": 2667.9599999999996, "text": " we make our parameters so these are going to be our different neural" }, { "end": 2678.6, "start": 2670.88, "text": " networks that we train this all looks good the alphas look good now the only" }, { "end": 2686.52, "start": 2678.6, "text": " thing we want to do honestly is just to have not the weight times the subnet" }, { "end": 2695.84, "start": 2686.52, "text": " here but the subnet as such like this is this it do we now train actual neural" }, { "end": 2704.56, "start": 2695.84, "text": " networks I I have my doubts honestly like there should be no this should be" }, { "end": 2716.2000000000003, "start": 2704.56, "text": " it hmm yeah yeah let's just try it like we're gonna get a mistake somewhere like" }, { "end": 2726.8399999999997, "start": 2716.2, "text": " crash nope nope okay all right actually training so for real like these scores" }, { "end": 2733.4399999999996, "start": 2726.8399999999997, "text": " right here the fact what made them a mask is that we discretize them right" }, { "end": 2737.3599999999997, "start": 2733.4399999999996, "text": " here so we made them into a mask right here we're not doing that anymore so" }, { "end": 2740.96, "start": 2737.3599999999997, "text": " we're just training floats and then we're also not multiplying it by the" }, { "end": 2745.9199999999996, "start": 2740.96, "text": " weight we are just using those floats which means that we are using the" }, { "end": 2751.8, "start": 2745.92, "text": " basically a neural network and then here the bias I was worried about the bias" }, { "end": 2757.96, "start": 2751.8, "text": " but the bias is always zero as you can see here so the bias is always false" }, { "end": 2763.44, "start": 2757.96, "text": " yeah so we're training five different neural networks for five different tasks" }, { "end": 2770.48, "start": 2763.44, "text": " and you know according to my hypothesis these masked things are just kind of" }, { "end": 2778.4, "start": 2770.48, "text": " crude quantized ways are of training neural networks and if if my hypothesis" }, { "end": 2783.8, "start": 2778.4, "text": " is correct this here is going to turn out probably even better than this" }, { "end": 2789.44, "start": 2783.8, "text": " masked thing okay so last task training right here" }, { "end": 2798.2, "start": 2789.44, "text": " I'm starting to breathe good laptop fast laptop very nice come on come on come" }, { "end": 2808.6, "start": 2798.2, "text": " on and we're done so again we have an average top one performance of 92 point" }, { "end": 2815.08, "start": 2808.6, "text": " is this even did I even oh no I ran this right here okay like that's the exact" }, { "end": 2820.08, "start": 2815.08, "text": " same number it was last time so we need to run inference again and if we're" }, { "end": 2829.2, "start": 2820.08, "text": " given the task ID then we are at 93.9% so we increase slightly which might just be" }, { "end": 2835.72, "start": 2829.2, "text": " due to the fact that we initialize terribly terribly okay so what does it" }, { "end": 2840.18, "start": 2835.72, "text": " say about our task inference accuracy maybe there's some mask here set 
model" }, { "end": 2849.7599999999998, "start": 2840.18, "text": " task the alphas are to one nope no we're good we're good task inference accuracy" }, { "end": 2855.88, "start": 2849.76, "text": " 100% and I'm going to guess well with the task inference accuracy being 100%" }, { "end": 2861.48, "start": 2855.88, "text": " I'm going to guess this here will give us the exact same number I like the 93" }, { "end": 2870.0800000000004, "start": 2861.48, "text": " point some percent so yeah 93.9% so I'm you know I'm going to say right here" }, { "end": 2876.6000000000004, "start": 2870.0800000000004, "text": " that the on the super masks and the superposition really are two separate" }, { "end": 2885.56, "start": 2876.6, "text": " ideas right you it's it's because the paper is like it sounds cool and all" }, { "end": 2890.88, "start": 2885.56, "text": " with the supermask and superposition but this inference using the superposition" }, { "end": 2897.04, "start": 2890.88, "text": " and then the entropy to decide is really one idea and training different super" }, { "end": 2901.2799999999997, "start": 2897.04, "text": " math the the advantage in using supermask is of course that the model is" }, { "end": 2910.5600000000004, "start": 2901.28, "text": " way smaller so you can remember it much more easily but also you know that it's" }, { "end": 2913.76, "start": 2910.5600000000004, "text": " really different if there's there's nothing to do with the superposition" }, { "end": 2920.28, "start": 2913.76, "text": " yeah all right so I'm going I'm going to guess this also works for you know 200" }, { "end": 2927.88, "start": 2920.28, "text": " tasks and whatnot the higher order of tasks so I think that's it and we're" }, { "end": 2931.56, "start": 2927.88, "text": " done here yeah" } ]
3jT1qJ8ETzk
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
SupSup: Supermasks in Superposition (Paper Explained)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "supsup", "supermasks", "lottery ticket", "lottery ticket hypothesis", "gradient", "entropy", "surplus", "superfluous neurons", "lifelong learning", "multitask learning", "catastrophic forgetting", "continuous learning", "binary mask", "random network", "optimization", "hopfield network", "gradient descent", "superposition" ]
Supermasks are binary masks of a randomly initialized neural network that result in the masked network performing well on a particular task. This paper considers the problem of (sequential) Lifelong Learning and trains one Supermask per Task, while keeping the randomly initialized base network constant. By minimizing the output entropy, the system can automatically derive the Task ID of a data point at inference time and distinguish up to 2500 tasks automatically. OUTLINE: 0:00 - Intro & Overview 1:20 - Catastrophic Forgetting 5:20 - Supermasks 9:35 - Lifelong Learning using Supermasks 11:15 - Inference Time Task Discrimination by Entropy 15:05 - Mask Superpositions 24:20 - Proof-of-Concept, Task Given at Inference 30:15 - Binary Maximum Entropy Search 32:00 - Task Not Given at Inference 37:15 - Task Not Given at Training 41:35 - Ablations 45:05 - Superfluous Neurons 51:10 - Task Selection by Detecting Outliers 57:40 - Encoding Masks in Hopfield Networks 59:40 - Conclusion Paper: https://arxiv.org/abs/2006.14769 Code: https://github.com/RAIVNLab/supsup My Video about Lottery Tickets: https://youtu.be/ZVVnvZdUMUk My Video about Supermasks: https://youtu.be/jhCInVFE2sc Abstract: We present the Supermasks in Superposition (SupSup) model, capable of sequentially learning thousands of tasks without catastrophic forgetting. Our approach uses a randomly initialized, fixed base network and for each task finds a subnetwork (supermask) that achieves good performance. If task identity is given at test time, the correct subnetwork can be retrieved with minimal memory usage. If not provided, SupSup can infer the task using gradient-based optimization to find a linear superposition of learned supermasks which minimizes the output entropy. In practice we find that a single gradient step is often sufficient to identify the correct mask, even among 2500 tasks. We also showcase two promising extensions. First, SupSup models can be trained entirely without task identity information, as they may detect when they are uncertain about new data and allocate an additional supermask for the new training distribution. Finally the entire, growing set of supermasks can be stored in a constant-sized reservoir by implicitly storing them as attractors in a fixed-sized Hopfield network. Authors: Mitchell Wortsman, Vivek Ramanujan, Rosanne Liu, Aniruddha Kembhavi, Mohammad Rastegari, Jason Yosinski, Ali Farhadi Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://discord.gg/4H8xxDF BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher Parler: https://parler.com/profile/YannicKilcher
Hi there, today we'll look at Supermasks in Superposition by Mitchell Wortsman, Vivek Ramanujan, et al. So on a high level this paper tackles the problem of sequentially learning many many tasks without catastrophic forgetting by leveraging these things called super masks. A super mask is basically a binary mask that you lay over a randomly initialized neural network to make the masked network perform better than a random initialization. They will train these masks for each of the tasks that they consider, and then at inference time they can recover the task that the data is from, and therefore kind of do this lifelong multitask learning better than the baselines that they compare against. In fact, they can do better without knowing the task than the baselines can with knowing the task. So that's pretty cool. This is a pretty dense paper in terms of content and we won't go over everything in the paper, but we'll go over the ideas and what I think makes them work. So stick around if you want to know that. Also consider sharing this video out, tell your friends about it and subscribe if you haven't, it helps. So yeah, cool. So let's dive in. We present the super masks in superposition model, capable of sequentially learning thousands of tasks without catastrophic forgetting. So the term catastrophic forgetting comes from the world of this kind of sequential multitask learning where you have a model. Let's say this is your model, the black box, and you let it learn on a task. Let's say this is an image recognition task. So you have a data set and you let it run on this data set. You learn the data set, maybe it's CIFAR 10, right? So this is CIFAR 10. Cool. And now the model can do CIFAR 10 pretty well. Then you also want to learn a different task. You want to learn MNIST. Okay, so you have MNIST and you want to learn that one. So your hope is that your final model can do both. So you'll take this one and you simply train it on MNIST as well. And then, you know, there's this kind of pre-training and fine-tuning and so on. So your hope would be that at the end it can do both. But then you want another one. You want ImageNet. Okay, now ImageNet is a pretty big data set. So you take your model and you also train it on ImageNet. And with time, the model is always going to be very good at the task you just learned, but it is going to forget the tasks that you learned previously. This is the catastrophic forgetting problem. You might ask, why don't I just train on all the tasks equally, like at the same time? And that's a valid question. You can do that. But in this task description, it's necessary that we learn the tasks one after another, because, you know, maybe we get this data in this year and it's pretty big data; we can't just afford to retrain on all the data all the time. We want to kind of continuously integrate our knowledge. This is very important in the field of lifelong learning, where the hope is you can build a system that continuously integrates experience, but doesn't forget the old experience. Okay, and the experience might come from new data sets and so on, but you don't want to forget the old ones. So catastrophic forgetting is one of the main problems in this field of research of lifelong learning. And this paper is going to tackle this. How? Sort of. So if you think of what you could do right here, you could simply not use the same model, right?
You could simply train a different model for each task and just keep them around. Right. And at test time, you need some way of deciding. So there are two different scenarios at test time. So you learn all of these models, and then at test time, there's an image. And it could be that I tell you that this image, by the way, that's an MNIST image. So you just grab this model and you apply it. Very cool. Or it could be that I don't tell you where the image is from. Like I have no clue. Then you need a way to decide where it comes from. But once you do decide where it comes from, it's again pretty easy. Once you think, I think this is an MNIST thing, you can apply this one. So you could technically do that, but it's very unhelpful because these models, they can be large. Right. First of all, they can be large, so that means it costs you to store those. And second of all, there might actually be some overlap; like CIFAR 10 and ImageNet are both natural images, so they might benefit from each other's features in some way. Now, what we're going to do here is we're sort of going to do this separate models approach. Namely, we're going to build these super masks. So super masks are the second thing that we're going to combine here. Our approach uses a randomly initialized fixed base network, and for each task finds a sub network, a super mask, that achieves good performance. So what's a super mask? A super mask comes from these kinds of papers about the lottery ticket hypothesis. And one of these papers basically conjectured and then showed evidence that if you have a network that is randomly initialized, just like this is your neural network, the gray thing, then there is a way to mask it. Masking basically means that you either activate or inactivate connections. So you have your network and you simply multiply it by a binary mask that for each connection is a one or a zero. So here is like zero, zero, zero, zero. This is a one. This is zero, zero, zero. This is a one. So the network isn't going to be zeros and ones, but it's going to be multiplied: each connection is going to be multiplied by a zero or a one, which means wherever there's a one, whatever weight that connection had, that will be the value of the weight of the connection. If it is a zero, whatever weight that connection had, it will be pinned to zero, so there will be no signal flowing. So this paper established that if you take a randomly initialized neural network, there is a way to mask it, and you can find those masks where, if you mask in a particular way, the network will already perform better than random on a given task. So there is a way to solve MNIST by using a randomly initialized neural network and then simply masking it cleverly, and the masked network will have a good accuracy on MNIST. OK, and they found that. And I've made a video about that. And the sort of intuition behind the super masks, and this is just my intuition, is that, you know, MNIST, this is what I'm guessing, MNIST is a relatively easy task. In fact, most of the tasks they're considering in these papers are relatively easy. And if you have a randomly initialized neural network, basically what you have around is a bunch of weights. Right. So if I have my two layers right here, then each connection here is a number, like point two five; this is, you know, seven; this is negative three, and so on.
They here are going to consider weights that are initialized in a very special way. But ultimately, you just have a bunch of random weights lying around. And if the task is super easy, let's say, and the neural network is sufficiently overparameterized, there might be many, many ways of achieving your goal. So rather than being able to adjust the weights like you would do when you train the neural network, where you would actually change those numbers, you get away with simply selecting the combination of weights that will give you a good performance. So it's sort of a mix of dropout and vector quantization. In vector quantization, you also get away with quantizing the vectors to a given precision. And here the task is easy enough such that, by simple overparameterization and selection among the weights that you have around (you can't mix arbitrarily, but you can mix with zero or one), you get good enough. OK. So this is sort of my hypothesis. My hypothesis would be that the harder the task, the harder it gets to find super masks that perform well. That's what I think. But nevertheless, for the tasks they're considering here, you can find these super masks. And there is a way to do that by using gradient descent, even though the super masks are discrete. So what we're going to do is we're going to use the same randomly initialized neural network for each of the tasks. Right. So this is like CIFAR 10, this is MNIST, this is ImageNet. We're going to use the same gray network, but we're going to find an individual mask for each of those tasks on top of the same network. And they're all going to perform relatively well, according to the super mask conjecture. Now, again, this is not surprising. And the fact that we always use the same randomly initialized network isn't really necessary, but in this case they say, OK, we always use the same one. And then we only need to store the mask for each task. The mask is much simpler than the weights because, you know, a 32 bit floating point number is 32 bits, while a masking bit is only one bit. So we save basically a factor of 32 in our models. But essentially, it's not the case that we are training the same model in some continual learning fashion. It's much more akin to training one model per task and then inferring the task; it's just that we do it in a much more crude way. So it's more like learning a compressed model per task. I find it's a better way to look at it than continuous learning. In any case, you learn these super masks, and then here is the hard bit. The easy bit is: if I tell you which task the inference data point, the test data point, comes from, you have a pretty easy time classifying it. You simply select the mask accordingly, you run a forward pass, and that's it. If I don't tell you where the test data point comes from, that's the hard part. Now, they need a way to decide where the data point comes from. And they have sort of multiple ideas here. But the main idea, the first idea is that if you have trained these individual models for the individual tasks, then (there is no thorough explanation of this here, it's an assumption that you make) the correct model should be very confident.
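To make the super mask idea concrete before moving on, here is a minimal sketch of what such a masked layer could look like in PyTorch. This is my own illustration, not the authors' code: the actual implementation uses a per-layer top-k mask at a chosen sparsity and a special signed-constant initialization, while this sketch just thresholds the scores at their median.

import torch
import torch.nn as nn
import torch.nn.functional as F

class GetSubnet(torch.autograd.Function):
    @staticmethod
    def forward(ctx, scores):
        # forward: discretize the real-valued scores into a binary mask
        return (scores >= scores.median()).float()

    @staticmethod
    def backward(ctx, grad_output):
        # backward: straight-through estimator, gradient flows to the scores unchanged
        return grad_output

class SupermaskLinear(nn.Module):
    def __init__(self, in_features, out_features):
        super().__init__()
        # fixed random weights, never trained
        self.weight = nn.Parameter(torch.randn(out_features, in_features),
                                   requires_grad=False)
        # trainable real-valued scores, one per connection, that define the mask
        self.scores = nn.Parameter(torch.randn(out_features, in_features))

    def forward(self, x):
        mask = GetSubnet.apply(self.scores)
        # only the masked-in random weights carry signal
        return F.linear(x, self.weight * mask)

Only the scores receive gradients here; the random weights stay frozen, which is exactly what makes storing one mask per task so cheap.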
So I'm going to take my image of the test set and I'm going to feed it through model one. Note that this idea is, at its core, separate from the masks: it simply says, if I have three different models that I have trained for three different tasks and now I get an input and I don't know which task it's from, I can feed it to each one of them and look at the output distributions. So maybe my output distribution right here, this is as you can see three output neurons, it's a three class classifier right here, my output distribution is somewhat here like this. And here it's like this. And here it's like... I shouldn't do that, I got a comment, you know who you are... and here it's like this. OK, so which one would you pick? And their answer here is we should pick this one, because it has very low entropy. So this middle model here is very, very sure about this data point. It's very sure about its prediction, because the distance of the top prediction to all the other predictions is so high. It's very confident in its prediction. Whereas here you can see that the distance is not too high; also here, the distance between the highest and the others is not too high. So they say we are going to pick the model, or the mask in this case, for which the output entropy is the lowest. And that is a heuristic for now, but it tends to work pretty well. And it has a bit to do with how relatively difficult your tasks are. So your tasks need to be kind of equally difficult, otherwise this can get a little bit out of hand. But there are ways to solve it, and they allude to that in the kind of future work section. But in this case, if the tasks are equally hard, and they consider tasks that are equally hard, then the entropy is a good measure of how confident these things are, and therefore we can check which task it is by using the entropy as a heuristic. All right, so we're left with simply trying each of the masks and then taking the one that has the lowest entropy. Now, they say this is costly, because if we've learned a thousand tasks, we need to try each of the thousand masks in order to do that. So they go for something else. And this is the second word in the title, this superposition word. So instead of doing that, what they'll do is they'll use a superposition of masks. And actually, I find the picture more descriptive than the formula, but the formula is simply this: the masked weights are W multiplied elementwise by the weighted sum of all the masks, that is W ⊙ (sum over i of alpha_i M_i). So what they'll do is they'll say, why don't we just overlap all of the masks? We'll have all of these masks M_i, one for each task, and we'll mix them with coefficients alpha_i, where alpha is initialized at one over K, K being the number of tasks. OK, we'll just mix them, and then we'll multiply them by the weights of the neural network, and that is the neural network we input our image into. OK, so what does that give us? That basically gives us a mix of all the networks. Like, it's pretty safe to say that the entire network is going to be in there, and maybe sometimes multiple times; like, if multiple masks use the same weight, it's going to be in there with a higher weight, and so on. So that's what you see right here: all the masks are overlapped in superposition with each other. Now, what does the output give you? By itself, not much: the output gives you kind of the average prediction of the network.
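As a sketch (my own, with hypothetical names), the naive version of this selection rule, trying each mask and scoring by output entropy, could look like this:

import torch

def infer_task_naive(x, masked_nets):
    # masked_nets: one masked network per task; pick the most confident one
    best_task, best_entropy = None, float("inf")
    for i, net in enumerate(masked_nets):
        p = torch.softmax(net(x), dim=-1)
        h = -(p * p.clamp_min(1e-12).log()).sum(dim=-1).mean()  # output entropy
        if h.item() < best_entropy:
            best_task, best_entropy = i, h.item()
    return best_task  # the task whose mask yields the lowest entropy

This is exactly the linear-time procedure the superposition trick is designed to avoid.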
So this here is going to give you kind of the average prediction of all of the networks, which isn't very helpful. But of course, what we can do is look at the gradients of this. So from this we calculate the entropy, which is here denoted H, and we backpropagate it to the alphas, and we calculate the gradient of the entropy with respect to each of the alphas. What does that give us? So what's the intuition here? The intuition is: if I change my alpha a bit, how does the entropy change? So basically, this gives you the sensitivity of the entropy to these alpha parameters. So if this is high, what does it mean? It means that this mask right here has a big influence on the entropy. Specifically, if I were to increase the alpha, then the entropy would increase; and if I were to decrease the alpha, then the entropy would decrease. That's kind of what the gradient gives you. And remember, we want the mask with the lowest entropy, the one where we're very, very sure. So if you see right here, this is the formalism. First, we associate each of the k learned supermasks with a coefficient alpha, initially set to 1 over k. Each alpha can be interpreted as the belief that supermask m is the correct mask, equivalently the belief that the current unknown task is task i. The model output is then computed with a weighted superposition of all learned masks, which is this thing right here. The correct mask should produce a confident, low-entropy output. Therefore, we recover the correct mask by finding the coefficients alpha which minimize the output entropy H. OK. So, yes, we want the task with the lowest entropy. So if we look at the gradient right here, the gradient basically tells us how each of the masks will influence the entropy. And if we simply select the alpha where the gradient here is the most negative number (so we want this to be as low as possible; not zero, but as negative as possible), then we know that if we increase the contribution of this mask, the entropy will go down the most. OK. And again, our hypothesis here is that minimum entropy means most confident prediction, which, if all tasks are equally hard, probably means that the data point is from the task where we have the lowest entropy. So what's the deal here? They show in this graph right here that this is much faster. If we were to evaluate each mask individually and measure its entropy, then our time in the forward pass simply increases linearly with the number of tasks, because we need to try out each of these masks. However, if we do what they're doing here, we mix these masks, run one forward pass, do backprop, and they consider two strategies. So what you can do is gradient descent on these alphas, which takes a number of steps to converge. Or you can actually do a single step: you just observe the gradient, and by the gradient you recognize which one has the lowest gradient, and that's the one you pick. So where's the catch here? There are two catches, actually. First of all, this here is a convex combination, right? This is a convex combination.
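Before the catches, here is a minimal sketch (my own, single-layer, hypothetical names) of the one-step procedure just described; the actual model does this with per-layer masks inside an MLP:

import torch
import torch.nn.functional as F

def one_shot_task_inference(x, weight, masks):
    # masks: list of K binary masks, one per learned task
    K = len(masks)
    alphas = torch.full((K,), 1.0 / K, requires_grad=True)
    mixed = sum(a * m for a, m in zip(alphas, masks))       # superposition of masks
    p = torch.softmax(F.linear(x, weight * mixed), dim=-1)  # one forward pass
    entropy = -(p * p.clamp_min(1e-12).log()).sum(dim=-1).mean()
    entropy.backward()                                      # one backward pass
    # pick the mask whose increased weight would lower the entropy the most
    return torch.argmin(alphas.grad).item()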
Now, as said, the problem isn't convex at all. But if you simply take this convex combination, multiply it, and then look at the gradient, you sort of assume that the problem is a kind of convex, nicely shaped problem. And if you then observe these gradients with respect to the alphas, you make assumptions about the problem that might not be true. So you kind of heuristically approximate the importance of these masks. That's the first thing. The second thing, of course, is that you are still implicitly trying all the models; you're just not trying them explicitly. You're implicitly trying all the models because, when you do this combination right here, your auto differentiation library will actually keep track of what the individual models contribute; it's just done per layer. So, of course, this W here is a multi layer perceptron, which means that if you have multiple layers, there's W one and there's W two, and your masks are also split per layer: a mask for layer one, a mask for layer two, and so on. So your auto differentiation package needs to keep track of: OK, mask one of layer one goes here with this alpha, mask one of layer two goes here with this alpha, and so on; it needs to keep track of this graph. It's just that this is highly optimized, and you only need to do it layer by layer. So the contribution of alpha i, via mask i of layer one and mask i of layer two, will not be explicit in the next layer; it will be implicit as an average across the layer. Right. So, again, in each layer you assume a convex combination of all the alphas and propagate that forward. And therefore, if you look at the next layer, you can only view what the mask of layer two does in terms of a convex combination of layer one. So you make multiple approximations, and you rely on the optimization of your auto differentiation library to keep track of these different things and do operations in parallel. And in the case where you do it linearly, I'm going to guess you simply do it as a sequential operation, but it's going to be exact. So that's the trade off. All right. So we now know how we can figure out which task the data is from, and let's see how that works. So in this first experiment, we are looking at split ImageNet. Split ImageNet simply takes the ImageNet data set, which is a thousand class data set, and distributes it into 100 different tasks, each a 10 class classification task. Now note two things. First, in split ImageNet, each task is approximately as hard as the other tasks. Right. It's still ImageNet classification, and it's the same number of labels; each task is about the same hardness, you can make that assumption. And second of all, the tasks are actually pretty easy. Right. It's hard to distinguish ImageNet into a thousand classes, but if you split that task, I'm going to bet that with these high resolution images and a 10 class classification, it's going to be relatively easy. So all the conditions are met, at least for my hypothesis to hold. And on the right side, you can see split CIFAR 100, which does the same thing to CIFAR 100: it subdivides it into different, very small classification tasks. You can see the results.
The upper bound here is where you train a single model for each of the tasks. That gets you to an average accuracy of 92 percent. So on ImageNet, 92 percent is pretty good; of course, again, this is 10 class, which makes the numbers a lot different. With SupSup, you get to this pretty good 88 percent accuracy. This is the super masks in superposition method. This here is a baseline that also does lifelong learning. Now, they have these annotations right here: GG, which, yes, GG, haha. So the first letter will always tell you whether the task ID is given during training, and the second letter will tell you whether the task ID is given during testing. So this here simply evaluates whether or not this masking is feasible, which, as you can see, it is. So we know which mask to train during training, and we know which mask to retrieve during testing. So there is nothing of these entropy gradients here, none of it. This simply evaluates the viability of the masking approach, which, as you can see, is pretty viable, and it's more viable than these baselines. Same thing on CIFAR 100 right here. Since I guess it's an easier problem, they also evaluate the number of bytes, which they can control. So they can control the number of bytes in their model by simply increasing or decreasing the required sparsity of their mask. So you can change your mask by saying how sparse you want it. And of course, if you want it more sparse, you get a worse model, because you have fewer ones in your budget to make your model perform well. But you can see that if they do it with this baseline model, this BatchE, you severely underperform with regard to the upper bound right here. The upper bound again is where you train a model per task, and separate heads here is another kind of dummy baseline where you train a different head for each of the tasks with a common trunk. That gets you pretty much nowhere. With the SupSup algorithm, you do get almost to the performance of the upper bound. And in fact, if you do this transfer approach right here, you do get there. The transfer approach simply means that you do these tasks in succession, right? You do task one, okay, done. You do task two, okay, done. And for each one, you train a mask: this is mask one, mask two. The transfer approach simply says, if I start task three, I'm going to start mask three such that its initial values are basically a running average of the masks that I have already considered, or an average. So there is some amount of transfer going on, simply to initialize the values. It's actually astounding that this helps you so much. But with this, if you look at the actual numbers, I believe you can get like a tiny bit higher than training a single model for each of the tasks. Okay, so this sort of establishes the viability of training the different masks for the different tasks, which, again, I think is not surprising, because essentially you're training a different model per task; it's just that you train a very crude model that you can store very efficiently. Now you might object and say, hey, don't I need to store the underlying randomly initialized network? And the answer is yes and no. Actually, you only need to store the random seed to produce it. So, checkmate. Yeah. So here they explain this one shot algorithm, where they simply look at the gradient of the entropy.
You pick the mask with the maximum negative gradient of the entropy. They also have this binary algorithm: they say that if the tasks are harder to differentiate, this assumption behind the convex-combination trick might not hold. So they have a binary algorithm where they do a binary search, because they want to circumvent the necessity of evaluating each mask by itself, since that takes long. So they do something in between. Right here, they do this convex combination and evaluate the gradient, but then they don't just take the single most negative gradient — they eliminate half of the masks. You can see: whenever a mask's gradient is above the median, they eliminate it, and then they start over with this new, reduced set of alphas. So in each of these steps, they eliminate half of the masks and then recompute — because, since it is not a convex problem, the ordering might actually be different in the second, third, and fourth step. Of course, this is halfway between the one-shot algorithm and trying each mask by itself; it's kind of a compromise. I mean, they really try not to try each mask individually, because that's one of their contributions, but then they probably realized that if you just do it one-shot, sometimes it doesn't work. So they go in between, which is a pretty cool idea. I've sketched this elimination loop in code below.

All right, next experiments. We're now in this situation, and you see a number of things. First of all, they've added a new baseline, this PSP, and you can see that the baselines operate in this GG regime: the baselines are given the task during training and given the task during evaluation. The upper bound here in gray is where you train a model for each task. You assume that's an upper bound because you assume the tasks are kind of unrelated to each other — which is not the case, so there is actually potential to beat the upper-bound baseline. SupSup here operates in a different regime, namely the regime where you're given the task during training, but during testing you're not given the task. And the 'u' here basically means that you assume the labels of the tasks are not shared. So in this case, if you split MNIST into two tasks, the first task is zero, one, two, three, four, and the second task is five, six, seven, eight, nine. Each task has the same number of labels, so you always have five output neurons: one, two, three, four, five. If the image is, say, a five, that would be task one, label zero. If your network now predicts label zero correctly, but predicts the image to come from task zero — the zero-to-four task — you count that as a mistake. You say: well, you've predicted the right output neuron, but you've told me it comes from the wrong task, so I'm going to count that as a mistake. So there really isn't a way for the network to get around predicting the wrong task, or to share label information — you assume that the labels are not shared, they're unshared. So SupSup here has a significantly harder task than the baselines; keep that in mind. And remember, we are not given the task at inference time.
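Here is the elimination loop I promised a moment ago, in the same sketchy style — the median split and the re-mixing are as described, but the details around them are my assumptions:

```python
def binary_task_inference(x):
    candidates = torch.arange(k)
    while len(candidates) > 1:
        # uniform alphas over the surviving masks only
        a = torch.zeros(k)
        a[candidates] = 1.0 / len(candidates)
        a.requires_grad_(True)
        m1 = torch.einsum("k,kij->ij", a, M1)
        m2 = torch.einsum("k,kij->ij", a, M2)
        H = entropy(torch.relu(x @ (W1 * m1).T) @ (W2 * m2).T)
        H.backward()
        g = a.grad[candidates]
        # keep the half with the more negative entropy gradients, then
        # recompute -- the ordering can change, since the problem isn't convex
        candidates = candidates[g <= g.median()]
    return candidates.item()
```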
Now we apply our heuristic, where we go and look at which of the mask entropies is the lowest — respectively, we use this one-shot algorithm where we look at the gradients. And you can see this is on permuted MNIST. In permuted MNIST, you take MNIST and you simply permute the pixels. It sounds crazy, but you simply permute the pixels, and that gives you a new task. So you can come up with an almost infinite number of tasks, because there are — what — twenty-eight times twenty-eight pixels, so you can permute them in seven hundred eighty-four factorial different ways, which gives you essentially infinitely many tasks. So you can modulate the number of tasks. Here you can see the number of tasks learned increasing. At the beginning, these baselines — especially this one — are doing fairly well, actually on par with the upper bound, when you only have ten different tasks. However, after that they quickly degrade, whereas SupSup keeps its performance. And this doesn't only mean that it correctly predicts the output neuron; it also correctly predicts which task — which permutation — was applied to the digit, simply by looking where the entropy is lowest. So that's pretty cool, and honestly, it's kind of surprising. On the left, this is a LeNet architecture; on the right, it's a fully connected network. Now, the fully connected network performing better here is sort of expected: first of all, MNIST is really easy and can actually be solved with a fully connected network, and second of all, permuted MNIST especially doesn't really conform to the assumptions of convolutional neural networks anymore. Again, keep in mind, these tasks are very easy. For the fully connected network, each permutation kind of looks the same, because it doesn't care whether pixels are next to each other — each pixel is simply a separate input. It's just that it cannot learn much about the other tasks from one task; that's the nature of permuted MNIST.

All right. And then in this experiment right here — and this is the sort of crown experiment — they do this permuted MNIST, but they go up to 2500 tasks, 2500 different permutations. But now they have an additional thing: again they have SupSup, where it needs to predict the correct permutation, but they also compare it with this NN regime. In the NN regime, not only are you not given the task label at testing time, you are actually not even given the task label at training time. But here the outputs are shared: since you have no way of knowing which task it is — you're never given it — as long as you predict the correct class, you're good. So it's always a 10-class classification problem; you're just never told which permutation was applied.

First of all, I want to say that these shared labels could actually contribute to the success of this algorithm, because even though you permute the pixels, you can still do things like count the frequency of light pixels versus dark pixels in MNIST, and that might already give you a very big hint. Or, you know, the simple correlation of two pixels — though that one is a task-specific thing.
But the frequency of light pixels versus dark pixels will already give you a big boost in accuracy, and now you can actually share that feature — it will be the same for every permutation, so it's something you can share between tasks. One way you could maybe eliminate that — I'm not sure — is to randomize the number of light pixels while keeping the classes the same, but that's going to be complicated. Just keep it in mind.

However, how does the algorithm even decide which task it's in? They have a heuristic right here as well. They say: if task identity is entirely unknown, even during training, and if SupSup is uncertain about the current task identity, it is likely that the data does not belong to any task seen so far. When this occurs, a new supermask is allocated, and the number of tasks learned so far is incremented. So they go with the same principle: if we get a new training sample, we evaluate it against all the masks we have so far — or we use our one-shot algorithm to approximate which mask gets us a low entropy. If none of the masks gets us a low entropy, we decide this must be some kind of unseen task, and we allocate a new mask for it. And that heuristic, as you can see, performs fairly well. Where was our graph? Our graph was down here. In fact, it performs pretty much on par with the setting where you know the task during training and just not during inference — up until the very last bit, when you really get into the high-task regime, where I guess things start getting confusing. So this heuristic might start to break down there, but it might just be a matter of how they tune their constants: they have to define a threshold where they say, okay, if the entropy is higher than this threshold, we allocate a new task, and this might be tuned to fix that. Again, these tasks are very, very easy, so keep that in mind.

Okay. So that was basically the experimental part of the paper. Now they consider different extensions to it, and also some ablations, which are pretty interesting. Here they say they're going to up the hardness of the task with rotated MNIST, and their model also does pretty well on the rotated MNIST task, where the differences between the tasks are simply that the images are rotated by multiples of 10 degrees. That's a tiny rotation: if you have a number three and rotate it by 10 degrees — I can't even draw a rotation that subtle. And SupSup must correctly predict which task the image is from, or it will not be counted as correct. The fact that it performs pretty well — and the fact that there are rotation settings where it outperforms the baseline that is actually given the rotation, that is, given the task at inference time — is pretty remarkable. Again, I believe this is due to the fact that these tasks are so easy, and therefore the entropy just spikes when you hit the correct mask, because each mask latches onto very easy features for its task. So I'm going to guess that the tasks are generally solvable by maybe correlating two pixels.
If this pixel correlates with that pixel and the correlation is high, it's a three; if the correlation is low, it's something else. And if you rotate the image, it's just not the case anymore that those two pixels have high correlation, so if you predict using this correlation, you get a pretty low confidence. And I'm going to guess that if you have discrete tasks and the input is from this task, your confidence will just spike, because the task is so easy and because all the tasks are about equally hard — if you can find this correlation here, you can find it over there; it's simply going to be two different pixels in that task. So as you try the masks, whenever you hit the one where you can predict pretty confidently with those two pixels, your confidence is going to spike, your entropy is going to drop, and you know it's that task.

They also compare the one-shot algorithm on a baseline: this baseline, which normally always has to be given the task, they augment with their one-shot algorithm to select the task. And it turns out they can make it perform fairly well — not on par with SupSup, interestingly, but actually better than it was performing before.

So they have different extensions right here, and some of them are pretty important. One important thing they do is these superfluous neurons, and that's sort of hidden in the text. Here, for example, you see in the output that they have a LeNet model using output size 500. Now, there are only 10 different labels in the MNIST task, right? Also in the permuted MNIST task, there are 10 different labels — I mean, there are a total of 25,000 labels if you have 2500 tasks, but each head only needs output size 10. However, their network here has output size 500, which is surprising. They say right here — and we're going to get to the Hopfield network at the very end, for those who are still around, because I think that should be its own paper — they say: it could use an output of size L, where L is the actual number of labels per task, though we find in practice that it helps significantly to add extra neurons to the final layer. Specifically, we consider outputs p in R^s, where s is higher than L, and refer to the neurons past L as superfluous neurons.

So let's try to make sense of this. They have a neural network, and let's say it's a three-class classification task. So you have three classes, and that's what you would normally build. They simply add a bunch of neurons on top, which means they also add all of the connections from the previous layer to those neurons. But still, the classes can only ever be 0, 1, or 2 — the extra classes never appear during training. They claim this helps their procedure. They simply say: we observe that it helps. I thought about it a bit, and we might be able to guess why it makes sense. So if we train our model with these too-many neurons, what happens? Well, our label is always going to be among the first three neurons. Let's say our label is one; that results in a one-hot vector like this. Now, what are we training in this last layer? We're training logits — pre-softmax outputs.
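To make this concrete, here's what such a head looks like in the running sketch, with the paper's sizes — 10 real labels, 500 outputs. The trunk and the training loop are omitted, and the names are mine:

```python
import torch.nn as nn

n_labels, n_outputs = 10, 500        # s > L: 490 superfluous neurons
head = nn.Linear(d_hid, n_outputs)

logits = head(torch.randn(4, d_hid))            # a batch of 4 trunk features
targets = torch.randint(0, n_labels, (4,))      # labels only ever hit neurons 0..9

# cross-entropy normalizes over all 500 logits, so every training step pushes
# the 490 superfluous logits down -- they only ever receive "be small" gradients
loss = nn.functional.cross_entropy(logits, targets)
loss.backward()
```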
So our cross-entropy loss is going to push the correct logit up and all of the others down at every single training point. The top three here get pushed up and down depending on the label; however, all of the superfluous ones down here are only ever pushed down during the entire training, so they are going to be exceptionally low numbers.

Now, if we then look at the entropy of this — honestly, I think you could achieve something similar by using a different temperature parameter in the softmax, or in the entropy that you consider. But why can this help? It helps with inferring which task the input is coming from. If you consider a head where you only have the three real outputs — without this extra bit down here — and you look at the entropy, it's going to look fairly confident for the correct task, and for the other tasks maybe not as confident. However, with the superfluous neurons, if the input is from the correct task, I'm going to guess these logits kind of stay the same, because they're really low. But if it's from an incorrect task, the model is not sure, and not being sure about the output also means that it allocates a lot more probability mass to these superfluous neurons, because it has never seen this particular kind of training example — you're just going to distribute your probability mass across these things because you've not been trained on that kind of input. It's important to see that this holds for the correct task, which they always label j; for any other, incorrect task, the model has never seen data like this. So these superfluous neurons sort of act like an outlier class, without you explicitly training an outlier class. You do train them — you make them small — but, importantly, you always make them small from data points that come from their particular task. Now, if you input a data point from a different task, they have less reason to be small, because this is an outlier data point. So you get much more fluctuation there, and therefore the entropy is going to be even higher. This is how I make sense of the fact that these additional superfluous neurons help: they act as a kind of outlier detector for the training data set of that particular task — and that works because you have different training data for each task.

They go further and say it actually works even better if, instead of this entropy heuristic, we consider another heuristic. Quote: accordingly, we consider an objective G, which encourages the s neurons to have large negative values, and can be used as an alternative to the entropy in equation four. They analyze G down in the appendix, so let's quickly look at what G is — sorry, this is about to load right here. And it's very interesting to see what G is. Or is it? Yes. So y are the logits, and G is this expression right here — in fact, this expression with a bit of a modification: G is going to be the log-sum-exp of the logits. So it's somewhat like the entropy.
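Here's my reading of that G in the running sketch. I've already baked in the detach trick that I get to in a second, and whether this matches the paper's exact formula is my assumption — treat it as a rough reconstruction:

```python
def G(logits, n_labels=10):
    # detach the real logits so that the gradient of G (e.g. w.r.t. the alphas)
    # flows exclusively through the superfluous neurons past position n_labels
    real = logits[..., :n_labels].detach()
    sup = logits[..., n_labels:]
    # log-sum-exp over all outputs: a flatness measure, somewhat like entropy
    return torch.logsumexp(torch.cat([real, sup], dim=-1), dim=-1).sum()
```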
And what we're going to consider is the gradient of G — specifically, the gradient of G with respect to our alphas. And the condition here, with this detach operation, is that the gradient of G should equal the gradient of the loss function for all v that are superfluous neurons, and zero otherwise. So we detach the gradient of G for all the real neurons — for all the actual logits of the output classes — and we only consider the gradient flowing through the superfluous neurons. So if we take the gradient, in the last layer it is only going to flow through the superfluous neurons. And that's why we don't need the entropy: the entropy always considers, sort of, the difference between the correct label and the other labels, and we are pretty sure that the correct label is not among our superfluous neurons.

So what does the log-sum-exp of these outputs represent? It's sort of a flatness measure — again, kind of like the entropy, except we don't have a correct label here. If one of the logits is very high, or if they're generally high, this will be high. However, if they're all very small and also pretty equal, the log-sum-exp will be very small. So this is an alternative where we basically only look at the superfluous neurons and ask: are these superfluous neurons all very small, with none of them saying "I'm the correct label"? Then we can be pretty sure that over in the real outputs there is some confidence. However, if they are generally larger, maybe unequal, that means we're not very confident, because these are our outlier classes — they shouldn't be large at all. So an alternative to looking at the entropy of the output distribution is to build such superfluous neurons and then look at those, and only those — the gradient through only those — in order to decide which task the input is from. It's an interesting idea, I have to say. Maybe one could achieve sort of the same thing with a temperature parameter, or by building an explicit outlier detection, but it's generally an interesting idea for outlier detection; I've never really seen anything like this, though I also haven't really looked.

So here they show the importance of this. You've seen in the experiments before that there was sometimes this H objective and sometimes this G objective — in both cases, you have superfluous neurons. Before, you actually saw that they have 500 output neurons for a task that only needs 10 output classes. So this tells me that these superfluous neurons are pretty important for them, and this is probably one of the things that makes this work. You're kind of setting up a trap: the wrong models run into this trap of assigning a lot of weight to these outlier classes, and only the correct model has been trained not to do that on the particular data you're considering. I don't think it comes through in the paper too much that this is, I guess, one of the main factors making it work. And you can see right here, they actually do an experiment.
So — I don't want to be too mean — they say: look, if we train with just 25 output neurons, and this is permuted MNIST, so the necessary amount would be 10, you can see how quickly performance degrades right here. However, as we go up and train with 100 and 200, we get better and better. In fact, if we train with this G objective, it always sort of outperforms the H objective. Interestingly, the more output neurons you have, the smaller this difference seems to be — but maybe the relative error difference is the same; I can't tell from here.

Yeah. So this isn't all. There is also this Hopfield network going on, where they say: okay, essentially we're actually training different models, right? We're not really superimposing all of these models; we're training a different mask for each of the tasks and kind of remembering the masks and so on. Can we also build a system where we actually only have one model? And that's what they do right here, where they build a Hopfield network, which is basically just a big matrix — this is the Hopfield network — and then they encode the masks in this Hopfield network. Specifically, the Hopfield network is of size d squared, and it is able to encode two to the d different binary strings, and it does so in a fuzzy way. You can prove that if you construct the Hopfield network like this, where z is a binary string, you can recover the binary strings by gradient descent in the Hopfield network. And obviously, the more binary strings you encode, the less you get back out — it's not magic; you can't store that many bits in a thing that doesn't have that many bits. But, again, this uses gradient descent, and it can do so with surprising accuracy. Remember, though, that the masks are bits while the Hopfield network's entries are floating-point numbers, so the comparison I just made isn't entirely fair. I don't want to go into the Hopfield networks further, because I really feel this should be its own paper. I guess they just want to show that it's also possible to compress these masks into one thing, such that I can't make the argument anymore that, hey, all you're doing is training different models for different tasks.

All right. All in all, a pretty cool paper — and, as I said, a pretty dense one. I invite you to read it: they have a big appendix with more experiments where they explain everything in detail. From this, I don't really take the method, but the ideas are very interesting, and I am excited to see where this goes in the future. All right, I'll see you next time. Bye bye.
[ { "end": 7, "start": 0, "text": " Hi there, today we'll look at super masks in superposition by Mitchell Wirtzman, Vivek Ramanujan at AL." }, { "end": 15, "start": 7, "text": " So on a high level this paper tackles the problem of sequentially learning many many tasks without catastrophic forgetting" }, { "end": 24, "start": 15, "text": " by leveraging these things called super masks. A super mask is basically a binary mask that you unlace over a randomly initialized neural network" }, { "end": 29, "start": 24, "text": " to make the mask network perform better than a random initialization." }, { "end": 39, "start": 29, "text": " They will train these masks for each of the tasks that they consider and then at inference time they can recover the task that the data is from" }, { "end": 47, "start": 39, "text": " and therefore kind of do this lifelong multitask learning better than the baselines that they compare against." }, { "end": 56, "start": 47, "text": " In fact they can do better without knowing the task than the baselines can with knowing the task. So that's pretty pretty cool." }, { "end": 70, "start": 56, "text": " This is a pretty dense paper in terms of content and we won't go over everything in the paper but we'll go over the ideas and what kind of what I think makes them work." }, { "end": 80, "start": 70, "text": " So stick around if you want to know that. Also consider sharing this video out, tell your friends about it and subscribe if you haven't, it helps." }, { "end": 93, "start": 80, "text": " So yeah cool. So let's dive in. We present the super masks in superposition model capable of sequentially learning thousands of tasks without catastrophic forgetting." }, { "end": 102, "start": 93, "text": " So the term catastrophic forgetting comes from the world of this kind of sequential multitask learning where you have a model." }, { "end": 108, "start": 102, "text": " Let's say this is your model, the black box, and you let it learn on a task. Let's say this is an image recognition task." }, { "end": 117, "start": 108, "text": " So you have a data set and you let it run on this data set. You learn the data set, maybe it's CIFAR 10, right? So this is CIFAR 10. Cool." }, { "end": 125, "start": 117, "text": " And now the model can do CIFAR 10 pretty well. Then you also want to learn a different task. You want to learn MNIST." }, { "end": 136, "start": 125, "text": " Okay, so you have MNIST and you want to learn MNIST and you want to learn that one. So your hope is that your final model can do both." }, { "end": 145, "start": 136, "text": " So you'll take this one and you simply train it on MNIST as well. And then, you know, we know there's this kind of fine tuning pre training and so on." }, { "end": 151, "start": 145, "text": " So your hope would be that at the end it can do both. But then you want another one. You want ImageNet." }, { "end": 157, "start": 151, "text": " Okay, now ImageNet is a pretty big data set. So you take your model and you also train it on ImageNet." }, { "end": 167, "start": 157, "text": " And with time, the model is always going to be very good at the task you just learned, but it is going to forget the tasks that you learned previously." }, { "end": 175, "start": 167, "text": " This is the catastrophic forgetting problem. You might ask, why don't I just train on all the tasks equally, like at the same time?" }, { "end": 184, "start": 175, "text": " And that's a valid question. You can do that. 
But this in the task description here, it's necessary that we learn the task one after another," }, { "end": 193, "start": 184, "text": " because, you know, maybe we get this data in this year and then it's pretty big data. We can't just afford to retrain on all the data all the time." }, { "end": 200, "start": 193, "text": " We want to kind of continuously integrate our knowledge. This is very important in the fields of lifelong learning," }, { "end": 209, "start": 200, "text": " where you want to kind of the hope is you can build a system that continuously integrates experience, but doesn't forget the old experience." }, { "end": 215, "start": 209, "text": " Okay, and the experience might come from new data sets and so on, but you don't want to forget the old ones." }, { "end": 222, "start": 215, "text": " So catastrophic forgetting is one of the main problems in these types of research in this field of research of lifelong learning." }, { "end": 231, "start": 222, "text": " And this paper is going to tackle this. How? It's sort of. So if you think of what could you do right here," }, { "end": 241, "start": 231, "text": " what you could do is you could simply not use the same model, right? You could simply train the different models for each task and just keep them around." }, { "end": 249, "start": 241, "text": " Right. And at, you know, test time, you need some way of deciding. So there are two different scenarios in at test time." }, { "end": 253, "start": 249, "text": " So you learn all of these models. And then at test time, there's an image." }, { "end": 259, "start": 253, "text": " And it could be that I tell you that this image, by the way, that's an MNIST image." }, { "end": 264, "start": 259, "text": " So you just grab this model and you apply it. Very cool." }, { "end": 269, "start": 264, "text": " Or it could be that I don't tell you what image it is. Like I have no clue." }, { "end": 276, "start": 269, "text": " Then you need a way to decide where it comes from. But once you do decide where it comes from, it's again pretty easy." }, { "end": 281, "start": 276, "text": " Once you think, I think this is an MNIST thing, you can apply this one." }, { "end": 289, "start": 281, "text": " So you could technically do that, but it's very unhelpful because these models, they can be large. Right." }, { "end": 297, "start": 289, "text": " First of all, they can be large. So that means it costs you to store those." }, { "end": 302, "start": 297, "text": " And second of all, there might actually be some overlap, like C for 10 and ImageNet are both natural images." }, { "end": 307, "start": 302, "text": " So they might benefit from each other's feature in some way." }, { "end": 313, "start": 307, "text": " Now, what we're going to do here is we're sort of going to do this separate models approach." }, { "end": 318, "start": 313, "text": " Namely, we're going to use these. We're going to build these super masks." }, { "end": 322, "start": 318, "text": " So super masks are the second thing that we're going to combine here." }, { "end": 326, "start": 322, "text": " Our approach uses a randomly initialized fixed base network." }, { "end": 332, "start": 326, "text": " And for each task, find a sub network, a super mask that achieves good performance." }, { "end": 339, "start": 332, "text": " So what's a super mask? A super mask comes from these kind of papers about lottery ticket hypothesis." 
}, { "end": 352, "start": 339, "text": " And one of these papers discovered basically or conjectured and then showed in evidence that if you have a network that is randomly initialized," }, { "end": 359, "start": 352, "text": " just like this is your neural network, the gray thing, and there is a way to mask it," }, { "end": 365, "start": 359, "text": " which means masking basically means that you either activate or inactivate connections." }, { "end": 372, "start": 365, "text": " So you have your network and you simply multiply it by a binary mask that for each connection is a one or a zero." }, { "end": 380, "start": 372, "text": " So the one so here is like zero, zero, zero, zero, zero. This is a one. This is zero, zero, zero. This is a one." }, { "end": 384, "start": 380, "text": " So the network isn't going to be zeros and ones, but it's going to be multiplied." }, { "end": 389, "start": 384, "text": " Each connection is going to be multiplied by a zero or a one, which means wherever there's a one," }, { "end": 395, "start": 389, "text": " whatever weight that connection had, that will be the value of the weight of the connection." }, { "end": 404, "start": 395, "text": " If it is a zero, whatever weight that connection had, it will be it will be pinned to zero." }, { "end": 412, "start": 404, "text": " So there will be no signal flowing. So this paper established that if you take a randomly initialized neural network," }, { "end": 419, "start": 412, "text": " there is a way to mask it. And you can find those masks where if you mask in a particular way," }, { "end": 423, "start": 419, "text": " the network will already perform better than random on a given task." }, { "end": 429, "start": 423, "text": " So there is a way to solve MNIST by using a randomly initialized neural network and then simply masking it cleverly." }, { "end": 434, "start": 429, "text": " And then the mask network will have a good accuracy on MNIST." }, { "end": 447, "start": 434, "text": " OK, and they found that. And I've made a video about that. And the sort of intuition behind the super masks is this is just my intuition." }, { "end": 453, "start": 447, "text": " Is that, you know, MNIST, this is what I'm guessing. MNIST is a relatively easy task." }, { "end": 459, "start": 453, "text": " In fact, most of the tasks they're considering in these papers are relatively easy." }, { "end": 466, "start": 459, "text": " And if you have a randomly initialized neural network, basically what you have around is a bunch of weight. Right." }, { "end": 475, "start": 466, "text": " So if if I have my two layers right here and then each connection here is is a number like point two five." }, { "end": 479, "start": 475, "text": " This is, you know, seven. This is negative three and so on." }, { "end": 486, "start": 479, "text": " They're going to consider they here are going to consider weights that are initialized in a very special way." }, { "end": 489, "start": 486, "text": " But ultimately, you just have a bunch of random weights lying around." }, { "end": 501, "start": 489, "text": " And if the task is super easy, let's say, and the the neural network is sufficiently overparameterized, there might be many, many ways of achieving your goal." }, { "end": 510, "start": 501, "text": " So rather than being able to adjust the weights like you would do when you train the neural network, you would actually change those numbers." 
}, { "end": 517, "start": 510, "text": " You get away with simply selecting the combination of weights that will give you a good performance." }, { "end": 525, "start": 517, "text": " So in it's kind of it's sort of a mix of drop out and vector quantization." }, { "end": 531, "start": 525, "text": " So in vector quantization, you also you get away with quantizing the vectors to given precision." }, { "end": 542, "start": 531, "text": " And here the task is easy enough such that by simple overparameterization and selecting of the weights that you have around, mixing them correctly by simply." }, { "end": 547, "start": 542, "text": " So you can't mix arbitrarily, but you can mix with zero or one." }, { "end": 550, "start": 547, "text": " You get good enough. OK." }, { "end": 560, "start": 550, "text": " So this is sort of my hypothesis. My hypothesis would be that the harder the task, the the harder it gets to find super masks that perform well." }, { "end": 568, "start": 560, "text": " That's what I think. But nevertheless, to say for the tasks they're considering here, you can find these super masks." }, { "end": 574, "start": 568, "text": " And there is a way to do that by using gradient descent, even though the super masks are discrete." }, { "end": 582, "start": 574, "text": " So what we're going to do is we're going to use the same randomly initialized neural network for each of the tasks." }, { "end": 587, "start": 582, "text": " Right. So this is like C for 10. This is MNIST. This is ImageNet." }, { "end": 598, "start": 587, "text": " We're going to use the same gray network, but we're going to find an individual mask for each of those networks, for each of those tasks on top of the same network." }, { "end": 604, "start": 598, "text": " And they're all going to perform relatively well, according to the super mask conjecture." }, { "end": 607, "start": 604, "text": " Now, again, this is not surprising." }, { "end": 617, "start": 607, "text": " And the fact that we always use the same randomly initialized network, you know, isn't really it's not really necessary that we always use the same." }, { "end": 620, "start": 617, "text": " But in this case, they say, OK, we always use the same." }, { "end": 625, "start": 620, "text": " And then we only need to store the mask for each task." }, { "end": 634, "start": 625, "text": " The mask is much simpler than the weights because, you know, a 32 bit floating point number is 32 bits, while a masking bit is only one bit." }, { "end": 638, "start": 634, "text": " So we save basically a factor of 32 in our models." }, { "end": 649, "start": 638, "text": " But essentially, essentially, right, it's not the case that we are training the same model and some continue learning." }, { "end": 660, "start": 649, "text": " It's much more akin to training a training one model per task and then inferring the task." }, { "end": 666, "start": 660, "text": " And just that we do it in a much more crude way. So it's more like learning a compressed model per task." }, { "end": 672, "start": 666, "text": " I find it's a better way to look at it than than continuous learning." }, { "end": 674, "start": 672, "text": " In any case, you learn these super masks." }, { "end": 678, "start": 674, "text": " And then here is the the the hard bit." }, { "end": 686, "start": 678, "text": " The easy bit is if I tell you which tasks the inference data point, the test data point comes from, you have a pretty easy time classifying it." 
}, { "end": 691, "start": 686, "text": " You simply select the mask accordingly, you run forward pass, and that's it." }, { "end": 696, "start": 691, "text": " If I don't tell you where the test data point comes from, that's the hard part." }, { "end": 703, "start": 696, "text": " Now, they need a way to decide where the data point comes from." }, { "end": 710, "start": 703, "text": " And the idea that they have right here, they have sort of multiple ideas." }, { "end": 727, "start": 710, "text": " But the main idea, the first idea is that if you have trained these individual models for the individual tasks, then OK, there is not good explanation here." }, { "end": 731, "start": 727, "text": " Then the correct model should be very confident." }, { "end": 733, "start": 731, "text": " This is an assumption that you make." }, { "end": 746, "start": 733, "text": " So I'm going to take my image of the test set and I'm going to feed it through the model one, which, you know, you have to separate this idea is separate from the masks at its core." }, { "end": 755, "start": 746, "text": " It's simply saying if I have three different models that I have trained for three different tasks and now I get an input, I don't know which one it's from." }, { "end": 761, "start": 755, "text": " I can simply feed it to each one of them and I can look at the output distribution." }, { "end": 767, "start": 761, "text": " So maybe my output distribution right here, this is as you can see, three output neurons." }, { "end": 769, "start": 767, "text": " It's a three class classifier right here." }, { "end": 773, "start": 769, "text": " My output distribution is somewhat here like this." }, { "end": 777, "start": 773, "text": " And here it's like this." }, { "end": 782, "start": 777, "text": " And here it's like I shouldn't do that." }, { "end": 783, "start": 782, "text": " I got a comment." }, { "end": 787, "start": 783, "text": " You know who you are." }, { "end": 791, "start": 787, "text": " And here it's like this." }, { "end": 794, "start": 791, "text": " OK, so which one would you pick?" }, { "end": 803, "start": 794, "text": " And their answer here is we should pick this one because of it has very low entropy." }, { "end": 808, "start": 803, "text": " So this middle model here is very, very sure about this data point." }, { "end": 817, "start": 808, "text": " It's very sure about its prediction because it the distance basically of the top prediction to all the other predictions is so high." }, { "end": 819, "start": 817, "text": " It's very confident in its prediction." }, { "end": 824, "start": 819, "text": " Whereas here you can see that the distance is not too high." }, { "end": 828, "start": 824, "text": " Also here, the distance between the highest and the others is not too high." }, { "end": 839, "start": 828, "text": " So they say we are going to pick the model or the mask in this case for which the output entropy is the highest." }, { "end": 844, "start": 839, "text": " And that is a heuristic for now, but it tends to work pretty well." }, { "end": 849, "start": 844, "text": " And it has a bit to do with how relatively difficult your tasks are." }, { "end": 854, "start": 849, "text": " So your tasks need to be kind of equally difficult." }, { "end": 860, "start": 854, "text": " Otherwise, it's not otherwise this can get a little bit a little bit out of hand." }, { "end": 862, "start": 860, "text": " But there are ways to solve it." 
}, { "end": 864, "start": 862, "text": " And they allude to that in the kind of future work section." }, { "end": 875, "start": 864, "text": " But in this case, if the tasks are equally hard and they consider tasks that are equally hard, then the entropy is a good measure of how confident these things are." }, { "end": 882, "start": 875, "text": " And therefore we can check which task it is by using the entropy as a heuristic." }, { "end": 890, "start": 882, "text": " All right, so we're left with simply trying each of the masks and then decide taking the one that has the highest entropy." }, { "end": 901, "start": 890, "text": " Now, they say this is costly because if we've learned a thousand tasks, we need to try each of the thousand masks in order to do that." }, { "end": 903, "start": 901, "text": " So they go for something else." }, { "end": 908, "start": 903, "text": " And this is the second word in the title, this superposition word." }, { "end": 915, "start": 908, "text": " So instead of doing that, what they'll do is they'll use a superposition of masks." }, { "end": 921, "start": 915, "text": " And actually, the picture also I find more descriptive than the formula." }, { "end": 923, "start": 921, "text": " I can write down the formula down here." }, { "end": 929, "start": 923, "text": " So what they'll do is they'll say, why don't we just overlap all of the masks?" }, { "end": 936, "start": 929, "text": " So we'll have all of these masks and I for one for each tasks and we'll initialize them with coefficients." }, { "end": 941, "start": 936, "text": " Alpha I will just mix them like this and alpha here." }, { "end": 946, "start": 941, "text": " It's initialized in one over K, where K is the number of tasks." }, { "end": 952, "start": 946, "text": " OK, we'll just mix them and then we'll multiply them by the weight of the neural network." }, { "end": 961, "start": 952, "text": " And that's will that neural network is where we input our image into." }, { "end": 964, "start": 961, "text": " OK, so what does that give us?" }, { "end": 968, "start": 964, "text": " That basically gives us a mix of all the networks." }, { "end": 976, "start": 968, "text": " Like it's pretty safe to say that the entire network is going to be in there and maybe sometimes multiple times." }, { "end": 983, "start": 976, "text": " Like if multiple masks use the same weight, it's going to be in there with a higher weight and so on." }, { "end": 985, "start": 983, "text": " So that's what you see right here." }, { "end": 989, "start": 985, "text": " You can see that all the masks are overlapped in superposition with each other." }, { "end": 991, "start": 989, "text": " Now, what does the output give you?" }, { "end": 992, "start": 991, "text": " The output gives you nothing." }, { "end": 996, "start": 992, "text": " The output gives you kind of the average prediction of the network." }, { "end": 1003, "start": 996, "text": " So this here is going to give you kind of the sort of the average prediction of all of the networks, which isn't very helpful." }, { "end": 1010, "start": 1003, "text": " But of course, what we can do is we can look at the gradients of this." }, { "end": 1021, "start": 1010, "text": " So if we from this calculate the entropy, which is here denoted H, and we calculate, we back propagate this." }, { "end": 1032, "start": 1021, "text": " So we back propagate this to the alphas and we calculate the gradient of the entropy with respect to each of the alphas." 
}, { "end": 1033, "start": 1032, "text": " What does that give us?" }, { "end": 1035, "start": 1033, "text": " So what's the intuition here?" }, { "end": 1042, "start": 1035, "text": " The intuition is if I change my alpha a bit, how does the entropy change?" }, { "end": 1048, "start": 1042, "text": " So basically, this gives you the sensitivity of the entropy to these alpha parameters." }, { "end": 1052, "start": 1048, "text": " So if this is high, what does it mean?" }, { "end": 1058, "start": 1052, "text": " It means that this mask right here has a big influence on the entropy." }, { "end": 1067, "start": 1058, "text": " Specifically, if I were to increase the alpha, then the entropy would increase." }, { "end": 1072, "start": 1067, "text": " OK. And if I were to decrease the alpha, then the entropy would decrease." }, { "end": 1076, "start": 1072, "text": " That's the kind of what the gradient gives you." }, { "end": 1081, "start": 1076, "text": " Now, did I say before we want the one with the highest entropy?" }, { "end": 1086, "start": 1081, "text": " I'm pretty sure we want the one with the lowest entropy." }, { "end": 1090, "start": 1086, "text": " We want the one where we're very, very, very sure." }, { "end": 1095, "start": 1090, "text": " Right. I might have said that absolutely wrong." }, { "end": 1105, "start": 1095, "text": " So if you see right here, this is the formalism." }, { "end": 1111, "start": 1105, "text": " First, we associate each of the k learned supermasks with a coefficient alpha initially set to 1 over k." }, { "end": 1116, "start": 1111, "text": " Each alpha can be interpreted as the belief that supermask m is the correct mask," }, { "end": 1121, "start": 1116, "text": " equivalently the belief that the current unknown task is task i." }, { "end": 1128, "start": 1121, "text": " The model output is then computed with a weighted superposition of all learned tasks, which is this thing right here." }, { "end": 1133, "start": 1128, "text": " The correct mask should produce a confidence low entropy output." }, { "end": 1136, "start": 1133, "text": " Therefore, we recover the correct mask." }, { "end": 1140, "start": 1136, "text": " We find the coefficients alpha, which minimize the output entropy h." }, { "end": 1148, "start": 1140, "text": " OK. So, yes, we want the task with the lowest entropy, of course, not with the highest entropy." }, { "end": 1157, "start": 1148, "text": " So if we look at the gradient right here, the gradient basically tells us how each of the masks will influence the different the entropy." }, { "end": 1165, "start": 1157, "text": " And if we simply select the alpha where the gradient here is the most negative number." }, { "end": 1172, "start": 1165, "text": " So we want this to be as low as possible, not zero, but negative as high as possible." }, { "end": 1183, "start": 1172, "text": " Then we know that if we increase this, the contribution of this mask, then the entropy will go down the most." }, { "end": 1197, "start": 1183, "text": " OK. And again, our hypothesis here is that maximum entropy, sorry, minimum entropy means most confident prediction means that the if all tasks are equally hard," }, { "end": 1203, "start": 1197, "text": " it probably means that the data point is from the task where we have the lowest entropy." }, { "end": 1207, "start": 1203, "text": " So what's the what's the deal here?" }, { "end": 1211, "start": 1207, "text": " Like they show in this graph right here, they show this is much faster." 
}, { "end": 1222, "start": 1211, "text": " So if we if we were to evaluate each mask individually and measure its entropy, of course, with the number of tasks, we'll simply linearly increase our time in the forward pass," }, { "end": 1225, "start": 1222, "text": " because we need to try out each of these masks." }, { "end": 1238, "start": 1225, "text": " However, if we do what they're doing here, we simply run one, we mix these ones, we run one forward pass, we do back prop and they consider two strategies." }, { "end": 1246, "start": 1238, "text": " So what you can do is you can do gradient descent on these alphas, which takes a number of steps to converge." }, { "end": 1248, "start": 1246, "text": " Or you can actually do a single step." }, { "end": 1255, "start": 1248, "text": " So you just observe the gradient and by the gradient, you recognize which one has the lowest gradient." }, { "end": 1257, "start": 1255, "text": " And that's the one you pick." }, { "end": 1258, "start": 1257, "text": " So where's the catch here?" }, { "end": 1267, "start": 1258, "text": " The catch is that if you do something like this, if you do something like this, there are two catches, actually." }, { "end": 1273, "start": 1267, "text": " First of all, this here is a convex combination, right?" }, { "end": 1275, "start": 1273, "text": " This is convex combination." }, { "end": 1278, "start": 1275, "text": " And the problem isn't convex at all." }, { "end": 1290, "start": 1278, "text": " But if you simply take this convex combination, multiply it and then look at the gradient, you sort of assume that the problem is a kind of a convex, nicely shaped problem." }, { "end": 1300, "start": 1290, "text": " And if you then observe these gradients with respect to the alphas, you make assumptions about the problem that might not be true." }, { "end": 1305, "start": 1300, "text": " So you lose, you kind of heuristically approximate the importance of these masks." }, { "end": 1307, "start": 1305, "text": " That's the first thing." }, { "end": 1321, "start": 1307, "text": " The second thing, of course, is that it's you still you still are implicitly saving your still are implicitly trying all the models, but you're just not trying them explicitly." }, { "end": 1335, "start": 1321, "text": " You're implicitly trying all the models because when you do this combination right here, your auto differentiation library will actually keep track of what the individual models contribute." }, { "end": 1337, "start": 1335, "text": " It's just that per layer." }, { "end": 1349, "start": 1337, "text": " So, of course, this here, this W is multi layer perceptron, which means that if you have multiple layers, you know, there's W one and there's W two." }, { "end": 1358, "start": 1349, "text": " And you have your alphas and your alphas are also, you know, you can distribute them into these." }, { "end": 1364, "start": 1358, "text": " Sorry, your masks are also mask for layer one mask for layer two and so on." }, { "end": 1368, "start": 1364, "text": " So your auto differentiation package needs to keep track of." }, { "end": 1375, "start": 1368, "text": " OK, mask one goes here with this alpha mask to the layer two goes here with this alpha." }, { "end": 1378, "start": 1375, "text": " And there is there." }, { "end": 1381, "start": 1378, "text": " So it needs to keep track of this graph." }, { "end": 1388, "start": 1381, "text": " It's just that this is highly optimized and you also need to you only need to do it layer by layer." 
}, { "end": 1398, "start": 1388, "text": " So the contribution of alpha of mask one, this is maybe alpha eye of mask eye one mask eye two." }, { "end": 1405, "start": 1398, "text": " The contribution of the alpha eye will not be explicit in this layer." }, { "end": 1410, "start": 1405, "text": " It will be implicit as an average across the layer." }, { "end": 1417, "start": 1410, "text": " Right. So, again, this is you assume in each layer, you assume a convex combination of all the alphas and propagate them." }, { "end": 1419, "start": 1417, "text": " And propagate that forward." }, { "end": 1431, "start": 1419, "text": " And therefore, if you look at the next layer, you can only view what mask two does mask of layer two does as in terms of a convex combination of layer one." }, { "end": 1441, "start": 1431, "text": " So you make multiple approximations and you rely on the optimization of your auto differentiation library to keep track of these different things and do operations in parallel." }, { "end": 1453, "start": 1441, "text": " And in the case where you do it linearly, I'm going to guess you simply do it as a sequential operation, but it's going to be exact." }, { "end": 1455, "start": 1453, "text": " So that's the trade off." }, { "end": 1462, "start": 1455, "text": " All right. So we now know how we can figure out where the task is from." }, { "end": 1465, "start": 1462, "text": " And let's see how that works." }, { "end": 1470, "start": 1465, "text": " So in this first task, we are looking at split image net." }, { "end": 1480, "start": 1470, "text": " Split image net simply it takes the image net data set, which is a thousand class data set, and it distributes it into 100 different tasks." }, { "end": 1483, "start": 1480, "text": " Each is a 10 class classification task." }, { "end": 1485, "start": 1483, "text": " Now note two things." }, { "end": 1488, "start": 1485, "text": " First thing is that split image net." }, { "end": 1494, "start": 1488, "text": " Each task is approximately as hard as each other as the other tasks." }, { "end": 1495, "start": 1494, "text": " Right." }, { "end": 1503, "start": 1495, "text": " It's still image net classification and it's the same number of the of it's the same number of labels." }, { "end": 1508, "start": 1503, "text": " And each task is about the same hardness." }, { "end": 1509, "start": 1508, "text": " You can make that assumption." }, { "end": 1513, "start": 1509, "text": " And second of all, the tasks are actually pretty, pretty easy." }, { "end": 1514, "start": 1513, "text": " Right." }, { "end": 1518, "start": 1514, "text": " It's hard to distinguish image net into a thousand classes." }, { "end": 1526, "start": 1518, "text": " But if you split that task, I'm going to bet that you have these high resolution images and you have a 10 class classification." }, { "end": 1529, "start": 1526, "text": " It's going to be relatively easy." }, { "end": 1533, "start": 1529, "text": " So all our conditions are met for at least for my hypothesis to hold." }, { "end": 1542, "start": 1533, "text": " And you can see on the right side, you can see split C for 100, which does the same thing to C for 100." }, { "end": 1548, "start": 1542, "text": " It subdivides it into different, very small class classification tasks." }, { "end": 1550, "start": 1548, "text": " You can see the results." }, { "end": 1554, "start": 1550, "text": " The upper bound here is where you train a single model for each of the tasks." 
}, { "end": 1558, "start": 1554, "text": " That gets you to average accuracy of 92 percent." }, { "end": 1562, "start": 1558, "text": " So on image net, 92 percent." }, { "end": 1564, "start": 1562, "text": " It's pretty, it's pretty good." }, { "end": 1571, "start": 1564, "text": " Of course, this is again, this is 10 class, which makes the numbers a lot different with the subs." }, { "end": 1578, "start": 1571, "text": " So subs up, you get to this pretty good 88 percent accuracy." }, { "end": 1581, "start": 1578, "text": " This is this super masks in superposition." }, { "end": 1586, "start": 1581, "text": " This here is a baseline that also does lifelong learning." }, { "end": 1591, "start": 1586, "text": " Now, they have these annotations right here." }, { "end": 1594, "start": 1591, "text": " Gigi, which yes, Gigi, haha." }, { "end": 1602, "start": 1594, "text": " But so the first letter will always tell you whether the task ID is given during training." }, { "end": 1607, "start": 1602, "text": " And the second letter will tell you whether the task ID is given during testing." }, { "end": 1614, "start": 1607, "text": " So this here simply evaluates whether or not this masking is feasible, which you can see here it is." }, { "end": 1622, "start": 1614, "text": " So this will we know which mask to train during training and we know which mask to retrieve during testing." }, { "end": 1626, "start": 1622, "text": " So there is nothing of this entropy gradients here." }, { "end": 1627, "start": 1626, "text": " None of it." }, { "end": 1638, "start": 1627, "text": " This simply evaluates the viability of the masking approach, which as you can see, it's pretty viable and it's more viable than these baselines." }, { "end": 1643, "start": 1638, "text": " This same thing on the CIFAR 100 right here." }, { "end": 1650, "start": 1643, "text": " So you can see they also evaluate since I guess it's an easier problem, they also evaluate the number of bytes which they can control." }, { "end": 1658, "start": 1650, "text": " So they can control the number of bytes in their model by simply increasing or decreasing the required sparsity of their mask." }, { "end": 1664, "start": 1658, "text": " So you can change your mask by saying how sparse you want it." }, { "end": 1676, "start": 1664, "text": " And of course, if you want it more sparse, you get a worse model because you have less less ones in your budget to make your model perform well." }, { "end": 1688, "start": 1676, "text": " But you can see that if they do it with these baseline model, this batch E, you severely underperform with regard to the upper bound right here." }, { "end": 1701, "start": 1688, "text": " The upper bound again is where you train a model per task and separate heads here is another kind of dummy baseline where you train a different head for each of the tasks with a common trunk." }, { "end": 1704, "start": 1701, "text": " That gets you pretty much nowhere." }, { "end": 1710, "start": 1704, "text": " With the sub sub algorithm, you do get almost to the performance of the upper bound." }, { "end": 1715, "start": 1710, "text": " And in fact, if you do this transfer approach right here, you do get there." }, { "end": 1720, "start": 1715, "text": " The transfer approach simply means that so you do these tasks in succession, right?" }, { "end": 1724, "start": 1720, "text": " You do task one. Okay, done. You do task two. Okay, done." }, { "end": 1730, "start": 1724, "text": " And for each one, you train a mask. 
Okay, for each one you train this is mask one, mask two." }, { "end": 1745, "start": 1730, "text": " The transfer approach simply says if I start task three, I'm going to start the mask three, my initial weights basically are going to be a running average of the masks that I have already considered or an average." }, { "end": 1752, "start": 1745, "text": " There is some amount of transfer going on simply to initialize the weights." }, { "end": 1755, "start": 1752, "text": " It's actually astounding that this helps you so much." }, { "end": 1766, "start": 1755, "text": " But with this, if you look at the actual numbers, I believe you can get like a tiny bit higher than the training a single model for each of the tasks." }, { "end": 1782, "start": 1766, "text": " Okay, so this sort of establishes the viability of training the different masks for the different tasks, which I again, I think it is not surprising because essentially you're training a different model per task." }, { "end": 1791, "start": 1782, "text": " And it's just the fact that you do a very crude model and that you can store very efficiently." }, { "end": 1796, "start": 1791, "text": " Now you might object and say, hey, don't I need to store the underlying randomly initialized network?" }, { "end": 1801, "start": 1796, "text": " And the answer is yes and no. Actually, you only need to store the random seed to produce it." }, { "end": 1806, "start": 1801, "text": " So checkmate. Yeah, they do." }, { "end": 1812, "start": 1806, "text": " So here they explain this one shot algorithm where they simply look at the gradient of the entropy." }, { "end": 1821, "start": 1812, "text": " You can see with the maximum negative gradient of the entropy, they also have this binary algorithm." }, { "end": 1831, "start": 1821, "text": " If the task where they say with the task is harder to differentiate this kind of assumption of the convex combination thing does might not hold." }, { "end": 1848, "start": 1831, "text": " So what they do is they have this binary algorithm where they do a binary search where they simply want to circumvent the necessity to evaluate each of the masks by itself because that takes long." }, { "end": 1853, "start": 1848, "text": " So they do something in between where they do this binary algorithm." }, { "end": 1866, "start": 1853, "text": " This is right here where they do this convex combination, they evaluate the gradient, but then they don't just take the highest of the negative gradients." }, { "end": 1869, "start": 1866, "text": " They eliminate half of them." }, { "end": 1878, "start": 1869, "text": " So you can see whenever it's lower than the median, they eliminate it and then they start off with this new set of reduced alphas." }, { "end": 1892, "start": 1878, "text": " So in each of these steps, they eliminate half of the masks and then they recompute again because because it is not a convex problem, the order might actually be different in the second and third and fourth step." }, { "end": 1901, "start": 1892, "text": " Of course, this is simply this is like halfway towards between this one shot algorithm and trying each mask by itself." }, { "end": 1904, "start": 1901, "text": " It's kind of a compromise." }, { "end": 1912, "start": 1904, "text": " I mean, they make it they really try to not not try each mask once because it's one of their contributions." }, { "end": 1917, "start": 1912, "text": " Right. But then they probably realized if we just do it one shot, sometimes it doesn't work." 
}, { "end": 1921, "start": 1917, "text": " So they're going between, which is, you know, it's a pretty cool idea." }, { "end": 1924, "start": 1921, "text": " All right. Next experiments." }, { "end": 1928, "start": 1924, "text": " We're now in this situation and you see you see a number of things." }, { "end": 1936, "start": 1928, "text": " So first of all, we have a new added a new baseline, this PSP, and you can see that the baselines operating this G.G. regime." }, { "end": 1943, "start": 1936, "text": " So the baselines are given the task during training and given the task during evaluation." }, { "end": 1948, "start": 1943, "text": " You see the upper bound here in gray is where you train a model for each task." }, { "end": 1956, "start": 1948, "text": " And you assume that's an upper bound because you assume the tasks are kind of unrelated to each other, which is not the case." }, { "end": 1962, "start": 1956, "text": " So there is actually potential to beat the to beat the upper bound baseline." }, { "end": 1966, "start": 1962, "text": " And subs up here you see operates in a different regime." }, { "end": 1974, "start": 1966, "text": " Namely, there's this regime of you're given the task during training, but then during testing, you're not given the task." }, { "end": 1981, "start": 1974, "text": " OK. And this you here, it basically means that the labels you assume that the labels of the tasks are not shared." }, { "end": 2000, "start": 1981, "text": " So in in this case, if you predict, if you predict like if you split MNIST into always two class, if you split MNIST into two tasks, you predict the first task is zero, one, two, three, four." }, { "end": 2003, "start": 2000, "text": " The second task is five, six, seven, eight, nine." }, { "end": 2008, "start": 2003, "text": " OK. And you have the same amount of labels. So you always have five output neurons. Right." }, { "end": 2011, "start": 2008, "text": " So you have one, two, three, four, five output neurons." }, { "end": 2021, "start": 2011, "text": " If you if the image here is like a five, that would be task task one label zero." }, { "end": 2032, "start": 2021, "text": " Right. If your network now predicts label zero correctly, but predicts the the image to come from task one, you count it as a mistake." }, { "end": 2040, "start": 2032, "text": " You say, well, you know, you've predicted the right output neuron, but you've told me it comes from task zero from from the zero to four." }, { "end": 2042, "start": 2040, "text": " So I'm going to count that as a mistake." }, { "end": 2052, "start": 2042, "text": " So it's really there isn't there isn't a way for the network to kind of get around predicting the wrong tasks or kind of share information." }, { "end": 2058, "start": 2052, "text": " So you assume that the labels are not shared or unshared." }, { "end": 2065, "start": 2058, "text": " Yeah. So it's the subs up here has a significantly harder task than the baselines." }, { "end": 2067, "start": 2065, "text": " Keep keep that in mind." }, { "end": 2073, "start": 2067, "text": " And now we are applying our because we we are not given the task at inference time." }, { "end": 2081, "start": 2073, "text": " Now we're applying our heuristic where we go and look at which of the mask entropies is the lowest." }, { "end": 2086, "start": 2081, "text": " Respectively, we use this actually this one shot algorithm where we look at the gradients." 
}, { "end": 2091, "start": 2086, "text": " And you can see this is on permuted MNIST in permuted MNIST." }, { "end": 2097, "start": 2091, "text": " What you do is you take MNIST and you simply permute the pixels." }, { "end": 2103, "start": 2097, "text": " And this it sounds crazy, but you simply permute the pixels and that gives you a new task." }, { "end": 2108, "start": 2103, "text": " So you can come up with like almost an infinite number of tasks because there are what?" }, { "end": 2117, "start": 2108, "text": " Twenty eight times, twenty eight pixels. So you can commute them seven hundred and eighty four factorial times," }, { "end": 2122, "start": 2117, "text": " which gives you like infinitely many tasks. And so you can modulate." }, { "end": 2125, "start": 2122, "text": " So here you can see the number of tasks learned increases." }, { "end": 2136, "start": 2125, "text": " And at the beginning, this baselines, especially this baseline, is doing fairly well, actually, on par with the upper bound when you only have ten different tasks." }, { "end": 2153, "start": 2136, "text": " However, after that quickly degrades, however, this subs up here, it keeps it keeps its performance, which it so this doesn't only mean that it correctly predicts the output neuron." }, { "end": 2163, "start": 2153, "text": " It also correctly predicts which task, which permutation was applied to the digit simply by looking where the entropy is high." }, { "end": 2166, "start": 2163, "text": " Right. So that's pretty cool." }, { "end": 2172, "start": 2166, "text": " And, you know, it's it's actually kind of surprising to be to be honest." }, { "end": 2176, "start": 2172, "text": " So on the left, this is a L'Onet architecture on the right." }, { "end": 2183, "start": 2176, "text": " It's a fully connected network. Now, the fully connected network here performing better is sort of expected." }, { "end": 2187, "start": 2183, "text": " First of all, MNIST is really easy and can actually be solved with a fully connected network." }, { "end": 2199, "start": 2187, "text": " And second of all, especially permuted MNIST, I guess, doesn't really conform to the to the assumptions of convolutional neural networks anymore." }, { "end": 2202, "start": 2199, "text": " Again, keep in mind, these tasks are very easy." }, { "end": 2216, "start": 2202, "text": " Yeah. So so especially for the fully connected network, of course, each permutation kind of looks the same because it's it doesn't care at the beginning" }, { "end": 2221, "start": 2216, "text": " that each pixels are next to each other. Simply each pixel is a different thing." }, { "end": 2228, "start": 2221, "text": " It's just the fact that it cannot it cannot learn from one tasks much about the other tasks." }, { "end": 2233, "start": 2228, "text": " That's why you that's the nature of permuted MNIST." }, { "end": 2243, "start": 2233, "text": " All right. And then in this experiment right here, and this is the sort of crown experiment, they learn they do this permuted MNIST," }, { "end": 2250, "start": 2243, "text": " but they go up to 2500 tasks, 2500 different permutations." }, { "end": 2255, "start": 2250, "text": " But so but now they have an additional thing right here." }, { "end": 2261, "start": 2255, "text": " So again, they have this sub sub where it needs to predict the correct permutation," }, { "end": 2268, "start": 2261, "text": " but also they compare it with a an algorithm that needs that is this NN right here." 
}, { "end": 2275, "start": 2268, "text": " So in this NN, not not only are you not given the task label at testing time," }, { "end": 2279, "start": 2275, "text": " you are actually not even given the task label at training time." }, { "end": 2282, "start": 2279, "text": " But here the outputs are shared." }, { "end": 2292, "start": 2282, "text": " So, you know, since since you have no way of knowing which task it is, you've never given it as long as you predict the correct class." }, { "end": 2295, "start": 2292, "text": " You good. So it's always it's always a 10 class classification problem." }, { "end": 2298, "start": 2295, "text": " It's just not permuted." }, { "end": 2302, "start": 2298, "text": " You're not given the task label here." }, { "end": 2307, "start": 2302, "text": " So first of all, I want to say that this here, the shared labels," }, { "end": 2310, "start": 2307, "text": " it could actually contribute to the success of this algorithm right here," }, { "end": 2315, "start": 2310, "text": " because even though you permute the pixels," }, { "end": 2322, "start": 2315, "text": " you can still sort of do things like count the frequency of light pixels versus dark pixels in MNIST." }, { "end": 2326, "start": 2322, "text": " And that might already give you a very, very big hint." }, { "end": 2333, "start": 2326, "text": " Right. Or, you know, simple correlation of of two pixels, though that's that's a task specific thing." }, { "end": 2341, "start": 2333, "text": " But the the frequency of light pixels versus dark pixels will already give you a big boost in accuracy." }, { "end": 2344, "start": 2341, "text": " And now you can actually share that feature." }, { "end": 2346, "start": 2344, "text": " That feature will always be the same for every permutation." }, { "end": 2353, "start": 2346, "text": " So this is something you can share between tasks. And I would like." }, { "end": 2356, "start": 2353, "text": " So one way I guess you could eliminate that." }, { "end": 2359, "start": 2356, "text": " Well, I don't know. I'm not sure." }, { "end": 2364, "start": 2359, "text": " You kind of have to randomize the number of light pixels, but keep the classes the same." }, { "end": 2367, "start": 2364, "text": " It's going to be complicated. Right." }, { "end": 2373, "start": 2367, "text": " But just keep that in mind. However, how how does the algorithm even decide?" }, { "end": 2379, "start": 2373, "text": " So they have a heuristic right here as well, namely." }, { "end": 2390, "start": 2379, "text": " They say, OK, if we don't have no task identity during training or inference." }, { "end": 2393, "start": 2390, "text": " Where task identity is entirely unknown, even during training," }, { "end": 2396, "start": 2393, "text": " if subs of is uncertain about the current task identity," }, { "end": 2401, "start": 2396, "text": " it is likely that the data does not do not belong to any tasks seen so far." }, { "end": 2407, "start": 2401, "text": " When this occurs, a new super mask is allocated and the number of tasks learned so far is incremented." }, { "end": 2410, "start": 2407, "text": " OK, so they go with the same principle right here." }, { "end": 2417, "start": 2410, "text": " They say if we get a new training sample, we just evaluate it against all the masks that we had so far." }, { "end": 2425, "start": 2417, "text": " Or we do our one shot algorithm to approximate which masks gets us a low entropy." 
}, { "end": 2432, "start": 2425, "text": " If none of the mask gets us a low entropy, then we decide this must be some kind of unseen task." }, { "end": 2437, "start": 2432, "text": " So we're going to allocate a new mask for this unseen tasks." }, { "end": 2443, "start": 2437, "text": " And that heuristic, as you can see, it performs fairly, fairly well." }, { "end": 2447, "start": 2443, "text": " Where was our graph? Our graph was down here." }, { "end": 2453, "start": 2447, "text": " In fact, it performs pretty much on par with where you know the task during training." }, { "end": 2461, "start": 2453, "text": " And just not during during inference up until like here, the very last bit." }, { "end": 2465, "start": 2461, "text": " If you really get into the high task regime." }, { "end": 2469, "start": 2465, "text": " Where I guess it starts getting it starts getting confusing." }, { "end": 2475, "start": 2469, "text": " So this this heuristic might start to break down, but it might just be a fact how they tune their constants." }, { "end": 2481, "start": 2475, "text": " Like they have to define a threshold where they say, OK, if the entropy is somehow higher than this threshold," }, { "end": 2483, "start": 2481, "text": " then we allocate a new a new task." }, { "end": 2487, "start": 2483, "text": " And this might be optimized in order to solve this." }, { "end": 2491, "start": 2487, "text": " Again, these tasks are very, very, very, very easy." }, { "end": 2495, "start": 2491, "text": " So keep keep that in mind." }, { "end": 2503, "start": 2495, "text": " Yeah. OK. So this basically was the experimental part of that paper." }, { "end": 2507, "start": 2503, "text": " Now they consider different extensions to that." }, { "end": 2513, "start": 2507, "text": " I'm not sure how they also consider some ablations, which are pretty interesting." }, { "end": 2521, "start": 2513, "text": " So here they say we are going to up the kind of the hardness of the task with with rotated MNIST" }, { "end": 2525, "start": 2521, "text": " and also their model does pretty well on the rotated MNIST task," }, { "end": 2535, "start": 2525, "text": " where the differences of between the differences between the tasks are simply some of them are rotated by 10 degrees." }, { "end": 2539, "start": 2535, "text": " So that's a tiny rotation in the right." }, { "end": 2543, "start": 2539, "text": " If you have a number three, you kind of rotated by 10." }, { "end": 2547, "start": 2543, "text": " I can't even draw that subtle of a rotation by 10 degrees." }, { "end": 2556, "start": 2547, "text": " And, you know, the subs up must correctly predict which task the images from," }, { "end": 2562, "start": 2556, "text": " or it will not get the it will not get a correct reward." }, { "end": 2568, "start": 2562, "text": " The fact that it performs pretty well and the fact that it has, you know, rotation degrees," }, { "end": 2573, "start": 2568, "text": " where it outperforms the baseline that is actually given the rotation." }, { "end": 2578, "start": 2573, "text": " So it's given the task at inference time is pretty, pretty remarkable." }, { "end": 2582, "start": 2578, "text": " Again, I believe this is due to the fact that these tasks are so easy." }, { "end": 2588, "start": 2582, "text": " And therefore, this entropy, it just spikes when you get the correct thing," }, { "end": 2594, "start": 2588, "text": " because it sort of it sort of latches onto very easy features for each task." 
}, { "end": 2601, "start": 2594, "text": " So I'm going to guess that the tasks are generally solvable by maybe correlating two pixels." }, { "end": 2606, "start": 2601, "text": " Right. If like this pixel correlated with this pixel, if the correlation is high, it's a three." }, { "end": 2609, "start": 2606, "text": " The correlation is low. It's something else. OK." }, { "end": 2617, "start": 2609, "text": " And then if you rotate it, it's just not the case anymore that this pixel and this pixel, the correlation is very high." }, { "end": 2624, "start": 2617, "text": " So if you predict using this correlation, you'll get a pretty low confidence." }, { "end": 2629, "start": 2624, "text": " And I'm going to guess that, yeah, if you have discrete tasks and it's in this task," }, { "end": 2635, "start": 2629, "text": " then your confidence will just spike because the task is so easy and because all the tasks are about equally hard." }, { "end": 2639, "start": 2635, "text": " Because if you can find this correlation here, you can find it over here." }, { "end": 2644, "start": 2639, "text": " It's simply going to be two different two different pixels in this task." }, { "end": 2654, "start": 2644, "text": " And then as you try the masks, whenever you hit the one where you can predict pretty confidently with those two pixels," }, { "end": 2658, "start": 2654, "text": " then your confidence is going to spike, your entropy is going to get down." }, { "end": 2666, "start": 2658, "text": " And, you know, it's that task. They also here they compare." }, { "end": 2676, "start": 2666, "text": " The one shot algorithm. So they they they use their one shot algorithm to and they put it on a baseline." }, { "end": 2682, "start": 2676, "text": " So this baseline where they always actually have to give it the task," }, { "end": 2689, "start": 2682, "text": " they augmented by by their their one shot algorithm to select the task." }, { "end": 2695, "start": 2689, "text": " And it turns out they can make it perform fairly well, not on par with them." }, { "end": 2703, "start": 2695, "text": " Interestingly, but they can make it perform also fairly well, actually better than it was performing before." }, { "end": 2710, "start": 2703, "text": " So they have different extensions right here. And that's some of them are pretty important." }, { "end": 2717, "start": 2710, "text": " The one important thing they do is they have these superfluous neurons and that's sort of hidden." }, { "end": 2723, "start": 2717, "text": " And it's always a bit. So here, for example, you see in the output," }, { "end": 2728, "start": 2723, "text": " they say we have a lunette model using output size 500." }, { "end": 2732, "start": 2728, "text": " Now there are only 10 different labels in the MNIST task, right?" }, { "end": 2736, "start": 2732, "text": " Also in the permuted MNIST task, there are 10 different labels." }, { "end": 2742, "start": 2736, "text": " I mean, there are a total of 25,000 labels if you have 2500 tasks." }, { "end": 2751, "start": 2742, "text": " But the neural network has output size 10. However, their neural network here has output size 500," }, { "end": 2761, "start": 2751, "text": " which is surprising. So they say right here and we're going to get to the Hopfield network at the very end" }, { "end": 2766, "start": 2761, "text": " for those who are still around, because that's I think that should be its own paper." 
}, { "end": 2775, "start": 2766, "text": " But, you know, they say it could use an output of size L where L is the actual number of labels per task," }, { "end": 2782, "start": 2775, "text": " though we find in practice that it helps significantly to add extra neurons to the final layer." }, { "end": 2795, "start": 2782, "text": " Specifically, we consider outputs P in our S. So S is higher than L and refer to the neurons that are past L as superfluous neurons." }, { "end": 2802, "start": 2795, "text": " So let's try to make sense of this. So they have a neural network." }, { "end": 2810, "start": 2802, "text": " And let's say it's a three class classification task, right? So you have three classes and that's what you would do." }, { "end": 2815, "start": 2810, "text": " They simply add a bunch of neurons right here. That means they also they, you know," }, { "end": 2818, "start": 2815, "text": " they add all of the connections from the previous layer to those neurons." }, { "end": 2826, "start": 2818, "text": " But still, the classes can only be either 0, 1 or 2. These classes never appear during training." }, { "end": 2832, "start": 2826, "text": " So they claim this helps during during their procedure." }, { "end": 2842, "start": 2832, "text": " And I I thought about it a bit and we might be able to try to guess why it makes sense." }, { "end": 2850, "start": 2842, "text": " They say they simply say we observe that helps. And I mean, you know, let's let's try to make sense of it." }, { "end": 2856, "start": 2850, "text": " OK, so if we train, if we train our model using these too many neurons, what happens?" }, { "end": 2861, "start": 2856, "text": " Well, our label is always going to be of the top three neurons. So let's say our label is one." }, { "end": 2867, "start": 2861, "text": " This is going to result in a one hot vector like this. Now, what are we training in this layer here?" }, { "end": 2875, "start": 2867, "text": " In this layer here, we're training logits. OK, so pre pre softmax outputs." }, { "end": 2888, "start": 2875, "text": " So our our algorithm, our cross entropy loss is going to push all of these here down during every single training point." }, { "end": 2895, "start": 2888, "text": " It's going to push this one up and all of these down. Now, these three here are going to be pushed up and down depending on the label." }, { "end": 2902, "start": 2895, "text": " However, all of these down here are going to be only pushed down during the entire training." }, { "end": 2915, "start": 2902, "text": " So they are going to be exceptionally low numbers. OK, now, if we then come and we look at the at the entropy of this," }, { "end": 2929, "start": 2915, "text": " the the entropy, I think honestly, this is simply you could achieve the same thing by using a different temperature parameter in the softmax or in the entropy that you consider." }, { "end": 2936, "start": 2929, "text": " Because why can this help? And this helps with inferring which task it's coming from. Right." }, { "end": 2945, "start": 2936, "text": " So if you consider a task where you only have three outputs, so you don't have this bit down here and you look at the entropy," }, { "end": 2953, "start": 2945, "text": " it's going to be you know, it's going to be something something. Sorry, I have to draw this right here." }, { "end": 2963, "start": 2953, "text": " It's going to be like this. It's fairly confident. But if and maybe for the other tasks, it's not going to be as confident." 
}, { "end": 2975, "start": 2963, "text": " You know, it's maybe going to be like this. However, if you have those and if it's of the correct tasks, I'm going to guess this kind of stays the same because they're really low." }, { "end": 2987, "start": 2975, "text": " But if it's of the incorrect tasks, then you're not sure. And you not being sure about the output also means that you allocate a lot more to these things right here." }, { "end": 2993, "start": 2987, "text": " Because you've sort of never seen this particular kind of training examples. So you're not sure." }, { "end": 3004, "start": 2993, "text": " So you're just going to distribute your kind of your probability mass across these things right here because you've not been trained on that kind of input." }, { "end": 3010, "start": 3004, "text": " Right. It's very important to see that this is task. This is the correct task, which they always label J." }, { "end": 3023, "start": 3010, "text": " And for for any other incorrect task, you've never seen data like this. So these things here sort of act like an outlier class without you explicitly training an outlier class." }, { "end": 3032, "start": 3023, "text": " You simply train these things during training. You make them small. But you it's important to notice you always make them small." }, { "end": 3040, "start": 3032, "text": " From a data point that comes from their particular task. OK, that's what you train them for." }, { "end": 3050, "start": 3040, "text": " And now if you input a data point from a different task, they have less reason to be small because this is an outlier data point." }, { "end": 3057, "start": 3050, "text": " So you have much more fluctuations. So you have more fluctuations here. And therefore, the entropy is going to be even higher." }, { "end": 3064, "start": 3057, "text": " All right. This is sort of how I make sense of the fact that these additional superfluous neurons help here." }, { "end": 3072, "start": 3064, "text": " They act as kind of an outlier detector for the training data set of that particular task." }, { "end": 3081, "start": 3072, "text": " Now, because you have different training data for each task, they go further and they say it actually works even better." }, { "end": 3089, "start": 3081, "text": " It works even better if we instead of this entropy heuristic, we consider another heuristic." }, { "end": 3100, "start": 3089, "text": " Accordingly, we consider an objective G, which encourages the S neurons to have large negative values and confused as an alternative to entropy in equation four." }, { "end": 3108, "start": 3100, "text": " So G, they analyze down in the appendix. And we're just quickly going to look at what G is." }, { "end": 3116, "start": 3108, "text": " Sorry, this is about to load right here. And it's very interesting to see what G is." }, { "end": 3125, "start": 3116, "text": " Or is it? Yes. So G is going to be this right here." }, { "end": 3131, "start": 3125, "text": " So why are the logits and then G is this expression right here?" }, { "end": 3143, "start": 3131, "text": " And in fact, it's this expression with the with a bit of a modification. So it's going to be G is going to be the log some X of the logits." }, { "end": 3150, "start": 3143, "text": " Right. So it's this is some this is somewhat like the entropy." }, { "end": 3158, "start": 3150, "text": " And what we're going to consider is the gradient of G. So what we want is the gradient of G with respect to our alphas." 
}, { "end": 3169, "start": 3158, "text": " And the condition here with this detach operation is that." }, { "end": 3182, "start": 3169, "text": " The gradient of G should be, you know, the gradient of the loss function for all V that are superfluous neurons and zero otherwise." }, { "end": 3191, "start": 3182, "text": " So we're going to detach the gradient of G for all the real neurons, for all the actual logits of the output class." }, { "end": 3196, "start": 3191, "text": " And we're only going to consider the gradient flowing through the superfluous neurons." }, { "end": 3211, "start": 3196, "text": " So all of this here is if we take the gradient, it's only going to flow to in these in the last layer through the gradients of the superfluous neurons." }, { "end": 3221, "start": 3211, "text": " OK. And that's why we don't need the entropy, because the entropy always considers the difference, sort of the difference between the correct label and the other labels." }, { "end": 3226, "start": 3221, "text": " We are pretty sure that in our superfluous neurons, we don't have the correct label." }, { "end": 3235, "start": 3226, "text": " OK. So this log the log some X of our of these outputs here, what will they represent?" }, { "end": 3246, "start": 3235, "text": " Well, this is sort of a flatness measure. Again, it's kind of like the entropy, except we don't have a correct label right here." }, { "end": 3257, "start": 3246, "text": " If one of them is very high and the other ones are very low, or if they're generally very high up, then this will be high." }, { "end": 3267, "start": 3257, "text": " However, consider the difference between this and this, where they're all super small and also they're all pretty equal." }, { "end": 3270, "start": 3267, "text": " The log some X will be very small." }, { "end": 3281, "start": 3270, "text": " So this is an alternative where we can basically only look at the superfluous neurons and say, is are these superfluous neurons all very small?" }, { "end": 3287, "start": 3281, "text": " And, you know, none of them basically says I'm the correct label." }, { "end": 3292, "start": 3287, "text": " Then we can be pretty sure that over here there is some confidence." }, { "end": 3306, "start": 3292, "text": " However, if they are sort of kind of larger and generally kind of generally large, maybe unequal, that means we're not very confident because these are our outlier classes." }, { "end": 3309, "start": 3306, "text": " They shouldn't be. They shouldn't be large at all." }, { "end": 3320, "start": 3309, "text": " So an alternative to looking at the entropy of this distribution is to build such superfluous neurons and then look at those and only those." }, { "end": 3325, "start": 3320, "text": " And so the gradient of only those in order to decide which task it's from." }, { "end": 3328, "start": 3325, "text": " It's an interesting idea, I have to say." }, { "end": 3340, "start": 3328, "text": " But maybe one could achieve sort of the same thing with a with a temperature parameter here or by building an explicit outlier detection." }, { "end": 3344, "start": 3340, "text": " But it's generally an interesting idea for outlier detection, I have to say." }, { "end": 3350, "start": 3344, "text": " I've never really seen anything like this, though I also haven't really considered it." }, { "end": 3352, "start": 3350, "text": " So here they show the importance." 
}, { "end": 3358, "start": 3352, "text": " And you've seen in the experiments before that there sometimes was this H objective and also this G objective." }, { "end": 3362, "start": 3358, "text": " So you can look at the entropy, but also you can look at the G." }, { "end": 3365, "start": 3362, "text": " In both cases, you have superfluous neurons." }, { "end": 3374, "start": 3365, "text": " So before you actually saw you have 500 neurons for a task of for a task of 10 that needed 10 output classes." }, { "end": 3380, "start": 3374, "text": " Right. So this tells me that these superfluous neurons are pretty important for them." }, { "end": 3387, "start": 3380, "text": " And it this is probably one of the things that makes this work." }, { "end": 3389, "start": 3387, "text": " Right. These superfluous neurons." }, { "end": 3401, "start": 3389, "text": " So you kind of setting up a trap where for the wrong models, you let it run into this trap of assigning a lot of weight into these outlier classes." }, { "end": 3407, "start": 3401, "text": " And only if the correct model is trained to not do that on the particular data that you're considering." }, { "end": 3414, "start": 3407, "text": " I don't think this comes through in the paper too much that this is one of I guess this is one of the main factors making this work." }, { "end": 3417, "start": 3414, "text": " And you can see right here they actually do an experiment." }, { "end": 3425, "start": 3417, "text": " So I don't want to be too mean where they say, look, if we train with just 25 classes and this is permuted MNIST." }, { "end": 3428, "start": 3425, "text": " So the necessary amount will be 10." }, { "end": 3433, "start": 3428, "text": " So if we train with only 25, you can see how quickly we degrade right here." }, { "end": 3439, "start": 3433, "text": " However, as we go up and train with a hundred and 200, we get better and better." }, { "end": 3448, "start": 3439, "text": " In fact, if we train with this G objective, it always sort of outperforms the H objective." }, { "end": 3453, "start": 3448, "text": " Interestingly, the more output neurons you have, the less this difference seems to be." }, { "end": 3457, "start": 3453, "text": " But maybe the percent difference is the same." }, { "end": 3460, "start": 3457, "text": " The percent error difference is the same." }, { "end": 3462, "start": 3460, "text": " I don't know. I can't tell from here." }, { "end": 3466, "start": 3462, "text": " Yeah. So this isn't all." }, { "end": 3472, "start": 3466, "text": " There is also this Hopfield network going on where they say, OK, OK." }, { "end": 3476, "start": 3472, "text": " So essentially, we're actually training different models, right?" }, { "end": 3478, "start": 3476, "text": " We're not really superimposing all of these models." }, { "end": 3484, "start": 3478, "text": " We're training a different mask for each of the tasks and kind of remembering the masks and so on." }, { "end": 3490, "start": 3484, "text": " Can we also build a model where we actually only have one model?" }, { "end": 3496, "start": 3490, "text": " And that's what they do right here, where they build a Hopfield network, which is basically just a big matrix." }, { "end": 3498, "start": 3496, "text": " This is the Hopfield network." }, { "end": 3502, "start": 3498, "text": " And then they encode the masks in this Hopfield network." 
}, { "end": 3513, "start": 3502, "text": " So specifically, the Hopfield network is of size D squared, where it is able to encode two to the D different binary strings." }, { "end": 3515, "start": 3513, "text": " And it does so in a fuzzy way." }, { "end": 3522, "start": 3515, "text": " But you can prove that if you construct the Hopfield network like this, where Z is a binary string," }, { "end": 3528, "start": 3522, "text": " you can recover the binary strings by gradient descent in the Hopfield network." }, { "end": 3534, "start": 3528, "text": " And obviously, the more binary strings you encode, the less you get out." }, { "end": 3540, "start": 3534, "text": " It's not magic. You can't store that many bits into a thing that doesn't have that many bits." }, { "end": 3550, "start": 3540, "text": " But I believe, you know, again, this is using gradient descent, and it can do so with surprising accuracy." }, { "end": 3555, "start": 3550, "text": " So remember that these here are bits while these here are floating point numbers." }, { "end": 3560, "start": 3555, "text": " So the comparison that I just made isn't entirely fair." }, { "end": 3565, "start": 3560, "text": " But I don't want to go into the Hopfield networks because I really feel this should be its own paper." }, { "end": 3575, "start": 3565, "text": " I guess they just want to show that it's also possible to compress these masks into one thing," }, { "end": 3581, "start": 3575, "text": " such that I can't make the argument anymore that, hey, all you're doing is training different models for different tasks." }, { "end": 3585, "start": 3581, "text": " All right. All in all, pretty cool paper. As I said, pretty dense paper." }, { "end": 3593, "start": 3585, "text": " I invite you to read it. They have a big appendix where they have more experiments and so on and explain everything in detail." }, { "end": 3600, "start": 3593, "text": " All in all, from this, I don't really take the method, but the ideas are very interesting." }, { "end": 3627, "start": 3600, "text": " And I am excited to see where this goes in the future. All right. I'll see you next time. Bye bye." } ]
z_3Qv4In2ac
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
[Live Machine Learning Research] Plain Self-Ensembles (I actually DISCOVER SOMETHING) - Part 1
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "ensemble", "pytorch", "lightning", "cifar10", "github", "vim", "code", "cuda", "gpu", "research", "ml", "ml research", "how to", "implement", "live coding", "python", "self", "distillation", "born again", "deep ensembles", "cnn", "resnet", "vgg", "torchvision", "imagenet" ]
I share my progress of implementing a research idea from scratch. I attempt to build an ensemble model out of students of label-free self-distillation without any additional data or augmentation. Turns out, it actually works, and interestingly, the more students I employ, the better the accuracy. This leads to the hypothesis that the ensemble effect is not a process of extracting more information from labels. OUTLINE: 0:00 - Introduction 2:10 - Research Idea 4:15 - Adjusting the Codebase 25:00 - Teacher and Student Models 52:30 - Shipping to the Server 1:03:40 - Results 1:14:50 - Conclusion Code: https://github.com/yk/PyTorch_CIFAR10 References: My Video on SimCLRv2: https://youtu.be/2lkUNDZld-4 Born-Again Neural Networks: https://arxiv.org/abs/1805.04770 Deep Ensembles: A Loss Landscape Perspective: https://arxiv.org/abs/1912.02757 Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://discord.gg/4H8xxDF BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher Parler: https://parler.com/profile/YannicKilcher
Hey, what's up! So I've had this relatively dumb research idea, and people have been asking me for more coding videos and so on, so I thought, why not do a video where I take a research idea and implement it from scratch, just to show how one would go, or how I would go, about implementing something like this. Now, this was simply meant as sort of a demonstration, but then at the end it actually worked, and so, yeah, that was unexpected, and my initial reaction was just to be like, oh crap, just hold everything, you know, stop video making, develop the idea, write a paper about it, okay. And I was about to do that when I realized that, you know, I'm always the one complaining that research is not transparent enough and people aren't open enough and so on. So I sort of thought I might do a different thing right here, in that I will actually share the process of this non-finished research project. So currently I am in the middle of this, I have no idea whether it's going to work out or not, and that's it. And I think we can do open source software development, you know, completely in the open, whereas with research we're all super scared that people are going to scoop us, and people just keep their work hidden until they're done, and then boom, they put it on arXiv. And I want to get to a world where we collaborate much more in research and it's much more like open source software development. So here is my process of implementing this idea, and it's fairly long, so if you just want to get to the results, you can just skip to the end. I'll put timestamps in; there's this new YouTube chapters thing, so that will be very helpful, I guess. Yeah, and with that being said, I hope you enjoy this. Let me know what you think of videos like this, and I'll see you next time. Hey, what's going on? Today we're going to take a research idea and implement it as fast as we can. So this is not really to show you the best research idea, because it's not, and it's probably been done before, so I have no high hopes here. But this is just to show that if you had some research idea, and you had actually done the literature research and figured out no one has done it yet (which I haven't, because probably someone has done it), how you could take this and get started up initially pretty quickly. And this is just the process that I would go through, and I'm going to go through it with you today, and we're going to try to get this up and running as quickly as possible. So I had this idea that, looking at SimCLR v2, there's a lot still to be done in the space of, let's say, self-teaching, self-distillation and so on. You know, there's mean teacher and then there's whatnot, and this is all usually done in the semi-supervised, very-few-label regime and so on. But we know that these self-supervised techniques can help you in supervised learning, and in SimCLR v2 you do semi-supervised in that you do self-supervised, then fully supervised, then distillation, like self-distillation; there are all these kinds of interleavings. And I thought, okay, what if I just take a pre-trained network that performs really well on something and I self-distill it into a bunch of student models, like a number, like 10 or so, and then that's my ensemble model. Will that perform better than the original model? Like, this is a terrible idea and it's probably not going to work; there's a 99% chance it's not going to work, but let's try to test this today. So I got my drink, I got my carbs, since it's weekend, and we're going to give this a shot.
All right, so first thing, we need some sort of base to go from. In research it's good to build your own stuff, but a lot of times, if you want to be as fast as possible, you want to go as quickly as you can. So here I found this repo, thankfully with an MIT license, so shout out to huyvnphan, I guess, for putting up a repo training these CIFAR-10 models, or training these PyTorch (torchvision) models on CIFAR-10. CIFAR-10 is a small enough data set that we can kind of work with it, and these models are already pre-trained. So I've cloned this repo and we're going to adjust it. So, first of all, there is a download, as you can see here, which the repo says downloads these (I've not done this before, I have no clue how this is going to work out), and this download script here downloads the weights from Box, I hope you can see this, and then I guess you can load the pre-trained weights with pretrained equals true, and yeah, we'll get into all that later. So the first thing we've got to do is get this to run, let's say. So let's look at this download thing first. The download thing is going to have a URL, it's going to use requests to get that URL, and then save this into this state_dicts thing. Now, what I usually want to do is have my folder of code only contain code, not be intermixed with data, because this is the thing that I'm going to ship around to various servers and so on. So I'd rather have the code in one folder and the data in like a central folder. So I'm not really fine with it sort of downloading this right here into the folder that we have. What I'm going to do is change that such that it downloads into a central folder. So first, we already have os, so what we're going to do is get some data path going, which is going to be our home folder, os.path, and I guess I already have a cifar10 folder right here, so we'll use this, and then path.join will join that, and that's where it's going to download. It's not really data, is it? It's more like models. Okay, let's do this. Cool, so the data path, this is the models path, that is where it's going to download. All right, and then it unzips the file again. So here it unzips the file to the current working directory. I don't want this, so I'm going to change that again to the models path. All right: path to zip file, directory to extract to. I think we're fine right now. So this download script is going to download all the weights there. Now, I want this to happen sort of automatically while this is on a server, so what I'm going to do is probably just, well, if this script runs, you can see it runs the main, but in the other script I might just want to do this automatically. So let's go to the test script right here, or let's say we go to the train script; this is probably the main script right here, the train script. So we have to somehow call this other script, probably in the main function. All right, so let's import this other one: import cifar10_download, okay, and here we're going to call that. And does this not download it if it already exists? We have to check that. So a lot of this is just going to be, you know, beating stuff into existence. So if this zip file already exists, we're not going to do anything, right? Which leaves us open: if the unzipping fails, then we're going to be in a kind of dumb path, but you know, we'll risk it. The zip path would be that.
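So, putting the whole adjusted download script together, it looks roughly like this. This is a sketch; the actual Box URL is the one already in the repo's download script, left here as a placeholder, and the exact file name inside the zip may differ:

    import os
    import zipfile

    import requests

    # central models folder in the home directory, kept separate from the code
    MODELS_PATH = os.path.join(os.path.expanduser("~"), "cifar10", "models")
    URL = "https://..."  # placeholder: the Box URL from the repo's download script

    def main():
        os.makedirs(MODELS_PATH, exist_ok=True)
        zip_path = os.path.join(MODELS_PATH, "state_dicts.zip")
        if os.path.exists(zip_path):
            return  # already downloaded, do nothing
        response = requests.get(URL)
        with open(zip_path, "wb") as f:
            f.write(response.content)
        # unzip into the central models path instead of the working directory
        with zipfile.ZipFile(zip_path) as archive:
            archive.extractall(MODELS_PATH)

    if __name__ == "__main__":
        main()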
So that's: if os.path.exists(zip_path), then return. Okay, so we're good in the download script. What else do we need? The data set? I probably already have the data set from torchvision, so that's not going to be an issue. Okay, so here we're going to call cifar10_download.main(). All right, and that should do it. We can't really call that yet; let's actually just run this download script. No such file or directory: we probably need to make that directory, right? Okay, os.makedirs, models path, exist okay, true. Yeah, that should be something. All right, and we're downloading. So this is 2.4 gigabytes, which can, you know, run by itself; let's put that over there, and while that's downloading, let's check out the test script. Actually, let's check out the test script. So this simply takes in this CIFAR10Module and instantiates a trainer, and as you can see, it calls test on it, so this should not be too hard. I'm going to guess this CIFAR10Module is a Lightning module; as you can see right here, it is. We know how Tensor, sorry, PyTorch Lightning works; if you don't know how PyTorch Lightning works, pretty easy: you configure this module right here, you configure a bunch of stuff like the data sets, the training step and so on, and you're good to go. So I guess what we're going to do is change this train script to our needs. Okay, so let's copy that; let's go with train_ensemble, bang. So this is what we're going to change. All right, so first: if the GPUs is a string, then, yada yada, if it's two, then... wow, that's kind of a weird engineering quirk right here. Okay, what I want to do is make the GPU use transparent, so we'll only ever use one GPU. So let's call that cuda and put that to true, and then we'll say, da da da da. Oh, come on, there's like a lot of stuff going on here. And then torch is called torch, I'd hate that; can I import it twice with different names? It's probably not very good, but I'll do it. Okay, so if CUDA is not available, we'll just set cuda to false: if th.cuda.is_available, okay, not; if it's not available, then hparams.cuda equals false. And then we'll set the GPUs to "0," (I guess that's what it expects) if, else None, and that should do it for the GPUs. Okay, the second thing that we need: we're calling fit here, and there is this logs directory where the checkpoints are going to be saved. I'm fine with that; I just want to kind of remove the logs directory at the beginning. So I'll do that: whenever we start this, I'm going to remove the logs directory. This is a controversial move, but you know: rmtree, recursively delete a directory tree, yes, logs. Good. Okay, our download is done, so what do we do next? We might want to just try to test something, and here in the test thing we might want to set the GPUs (I don't have a GPU right here, so None), and the data directory is going to be, yeah, I'll put it... So, nope, nope, nope. Okay, it doesn't find the state dicts and so on. Now we're going to have to fix this; we're going to have to fix the fact that it doesn't load. Okay, and that's probably going to be here in these models. So if I look in the DenseNet, for example, there's this pretrained argument, and what's that going to be? Oh, that's bad. Okay, it has hard-coded the fact that there is this state_dicts directory. Yeah, that's terrible, terrible, terrible, so I guess this is going to be in every single one of these models, and that's not good.
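Just to step back, the main of our train_ensemble so far looks roughly like this. Again, a sketch: the module class and the hparams attribute names are what I'm going with here, not anything official from the repo:

    import shutil

    import torch as th
    from pytorch_lightning import Trainer

    import cifar10_download  # the download module we adjusted above

    def main(hparams):
        # fetch the weights if they aren't there yet (no-op if the zip exists)
        cifar10_download.main()
        # controversial move: recursively delete old logs so every run starts fresh
        shutil.rmtree("logs", ignore_errors=True)

        # transparent GPU use: a single boolean instead of the gpus-string quirk
        if not th.cuda.is_available():
            hparams.cuda = False
        gpus = "0," if hparams.cuda else None

        module = CIFAR10EnsembleModule(hparams)  # our module, sketched further down
        trainer = Trainer(gpus=gpus)
        trainer.fit(module)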
So what we're going to do is probably always load it without the pre-trained weights and then kind of load them ourselves from the correct directory. So what's the correct directory again? We're going to set the model dir; we can probably just take that from the download script, like that, state_dicts, okay. And then we want the architecture; I guess we can actually put the classifier here, right here, that's something we can... so it's going to be the classifier. If you look in the state_dicts directory, I'm going to guess, you know, models, cifar10, state_dicts... we haven't unpacked it. Where have we unpacked it to? Help, help. Oh no, have we unpacked it to here? We have not. So what is in here? Ah, it's the cifar10_models sub-thing, and then state_dicts. Okay, so it's always going to be the architecture plus .pt, so we can, you know, deal with that. So it's going to be cifar10_models, state_dicts, that's fine, and then it's always going to be the architecture plus a .pt. So let's look at one of these models to see how this is loaded. We've seen this here: we simply want to load this state dict in, and here it constructs the thing; let's do proper string interpolation, shall we? Oh, device: where does this device come from? We should check that out. Device is given, device, device, device CPU... where is device given? Okay, DenseNet, device CPU. Oh, I guess device is always CPU, and then we map it to wherever. I'm not entirely sure. So here we say set device; I guess we can just get the device from somewhere. Let's try it out. Okay, so we're going to need this right here: we're going to os.path.join the models path and something that's .pt, and here we're going to get the architecture, which is the classifier. Cool, so that's how we load something. And then the device: maybe we can just go torch.cuda.get_device, is that possible? Let's try. Nope. Ah, okay, nope, no get_device. Device, maybe? Nope. map_location was given. Okay, so we have to figure out where this device comes from. Honestly, here, no, module... there's this get_classifier right here, but it just says pretrained; this device is always CPU. I just can't believe that. I guess I'll believe it; we'll always load to the CPU. Okay, cool, we can do that; I guess PyTorch Lightning will then put it on the GPU for us. Cool. So this is about how far I got when I tried to do this by myself, and now the problems start: missing keys in state dict, a lot of missing stuff. We can't possibly load that. Yeah, no, not going to. So we can't load stuff. What does it do? load_file_name equals, and then let's paste this, and let's put some kind of break point here so we can check it out. Okay, that exists; now, it feels like that should exist. Yeah, that exists. What's the deal, what's the matter here? So we've got model, which is, I guess, a ResNet-18, and we've got this thing that we might want to load. So why doesn't it work? torch.load(load_file_name): see, that works. So that's the state dict. Let's look at its keys; we've got, you know, a bunch of stuff. Okay, so why can't we load that? Well, load_state_dict, state dict, and now: unexpected keys in state dict, missing keys. So this is always prepended with "model." and here it's not. Okay, what do we do about that? I guess this is because we loaded it ourselves. Okay, cool, so our model has the sub-path model, so we need model.model.load_state_dict. Right? Look at us, we made it. So this is testing; I guess this is the ResNet-18 or whatnot, so we can leave that to run by itself.
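So the loading logic we just beat into existence, in one place. Note the "model." prefix thing: the downloaded files store the bare classifier weights, while the Lightning module nests the classifier under self.model, so we load into module.model. Roughly:

    import os

    import torch

    # where the unzipped weights ended up, as observed above
    MODELS_PATH = os.path.join(
        os.path.expanduser("~"), "cifar10", "models", "cifar10_models", "state_dicts"
    )

    def load_pretrained(module, classifier):
        # weights always live at <architecture>.pt and always load to CPU;
        # PyTorch Lightning moves them to the GPU for us later
        load_file = os.path.join(MODELS_PATH, f"{classifier}.pt")
        state_dict = torch.load(load_file, map_location="cpu")
        # the file's keys have no "model." prefix, so target the inner model directly
        module.model.load_state_dict(state_dict)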
So we figured out how to load this stuff; took us a while. Now let's go ahead: we know how to load the models, we know how to load the weights. So this is our teacher model, right? Our teacher model is supposed to load up the weights and then teach the student models. So here, what does this training thing do? We download the thing, we make our GPUs really good, okay, and then we instantiate this module right here, as you can see. So now we're going to check out this module. By the way, the testing is done, and as you can see, there's an accuracy of 93.33, which I'm pretty happy with; this is congruent with what we saw right here for the ResNet-18. Okay, and I guess we can take a ResNet-18 or a ResNet-50; they're both fairly small right here, so a lot of them are going to fit on our GPUs once we use the GPUs. So let's change this module around right here to actually do, let's say, the proper thing that we wanted to do. So here we have self.model, as you can see, and it's get_classifier, and the question is, does it load it pre-trained? So what we want to do is: this is going to be our teacher model, and in this get_classifier we want pretrained to be false, always, right here. We don't want to load the pre-trained weights; instead, what we want to do is load it ourselves. Right, so here pretrained false, and now, from our test script, we're going to take over the code that we used to load this. Okay, all right, so badda bing, badda boom. OS, we don't have OS; that comes along just fine, yep, yep, yep. So now here we're going to have our self.teacher_model load that state dict. All right, so this is it for initialization. Now we also need our student models, of course. So our student models are going to be a bunch of models; so this is going to be a torch, like a module list, there's this module list, torch.nn.ModuleList, right? So I initialize that with a list, and the list is going to be: get me the classifier. And we're just going to go for the same kind of classifiers right now, to really boil it down, to have the same architecture for the students and for the teacher: for bar in range, in range, and here we probably need a flag, so hparams.num_students, okay. So these are going to be our student models. So let's quickly create this num_students thing right here; it'll probably have to be an integer, and we'll go with five students for now. Okay, so we're creating five students, all of them not pre-trained. So are we going to train them from scratch, or do we actually want to take over the weights? We probably don't want to take over the weights; let's just train them from scratch in a distillation mode. I have no clue about this stuff, by the way. Okay, I guess this already concludes what we wanted to do. So this module list: what can we do with it, does anyone know? I don't know. By the way, I'm sorry for the switching between the dark and the bright background; I don't know how to fix that. So, PyTorch nn.ModuleList: it would be nice if we could give them some names, right? So I guess that's just an iterable right here, so probably there's nothing we can do to give them proper names, or we'd have to hack around, and I don't want to do that. So I guess we can just check if that actually computes until here. So let's check it out and try the ensemble.
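Before we run it, here's the initialization we just wrote, as a sketch. get_classifier is the repo's factory function (import path assumed), and everything else is my naming; anticipating the training step, the optimizer only gets the student parameters:

    import os

    import pytorch_lightning as pl
    import torch
    import torch.nn as nn

    from cifar10_models import get_classifier  # the repo's factory (path assumed)

    MODELS_PATH = os.path.join(
        os.path.expanduser("~"), "cifar10", "models", "cifar10_models", "state_dicts"
    )

    class CIFAR10EnsembleModule(pl.LightningModule):
        def __init__(self, hparams):
            super().__init__()
            self.hparams = hparams
            # teacher: same architecture, weights loaded from the downloaded checkpoint
            self.teacher_model = get_classifier(hparams.classifier, pretrained=False)
            state_dict = torch.load(
                os.path.join(MODELS_PATH, f"{hparams.classifier}.pt"), map_location="cpu"
            )
            self.teacher_model.load_state_dict(state_dict)
            # students: same architecture, freshly initialized, trained from scratch
            self.student_models = nn.ModuleList(
                [get_classifier(hparams.classifier, pretrained=False)
                 for _ in range(hparams.num_students)]  # e.g. five students
            )

        def configure_optimizers(self):
            # only the students learn; the teacher stays frozen
            return torch.optim.SGD(
                self.student_models.parameters(), lr=self.hparams.learning_rate
            )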
It doesn't work: dataset not found or corrupted. Okay, so what we'll have to do is change this data directory right here. The data dir is going to be, OS, whatever my cifar10 directory is. No such file or directory: logs. Okay, so logs doesn't exist, so let's actually make it. Still no such file or directory: logs. Why doesn't it make it? No such file or directory: logs. Ah, okay, we need to ignore errors here. And we're good. Okay, so it computes until... the point; you probably can't see that, right? I guess now you can see it. Let's check. Yeah, now you can see it. All right, so where are we? We are at the point right here in our module after we've created the teacher and the students. So if we look at self, technically we should be able to see right here a whole bunch of ResNet-18s, a whole bunch. So here you can see the teacher model, right (and I'm going to guess you can see layer 4), and here you can see the student models. So the student models are going to be in a whole list of models, and now we're going to train them. So, since they're initialized differently, our hope is going to be that they sort of end up at different places. We're going to train them with the same... like, we're going to be really, really stupid about this, okay? All right, so let's be really stupid about it. So what are we going to have to change here? It's our training step, and our training step is actually fine: we'll simply forward, we'll get a loss from that, and then we're going to return that, and that's going to be backpropped. So in our optimizer, wherever we initialize our optimizer, we should probably give it only the student model parameters, right, not the teacher model parameters; it should only train the student models. Okay, and even like that, we should probably always set the teacher model in eval mode, but we'll do that in the forward step right here. So in the forward step, we get images and labels, and here it just runs them forward through the model. We want to change that: we actually want to have teacher predictions, which we're going to get by forwarding this through the teacher model. Now, the criterion, I'm going to guess, is a cross entropy, so the predictions here are actually going to be logits, right? And this is good, except that what we want to do is have a distribution over labels. So after the teacher here runs through... let's put a break point right here and actually look at it; I find it's always easy if you just run until the point where you are in the code and then you can just look at stuff. So here there's, oh, there's a validation sanity check, okay, probably don't want that, and now we have the break right here, and now we can look at teacher_predictions.shape. So that's batch size times 10, and if we look at it, I'm going to guess there are some negative numbers in there, so those are going to be logits. Now we want that to be a softmax over the last dimension, and that's going to be of the same shape, but of course now we're going to have a proper distribution, so if we sum over the last dimension, you should see a bunch of ones. All right, so the teacher predictions are going to be a softmax over the last dimension, and since we don't want to backprop through the teacher, we can do this in an environment of no_grad right here. So we have that, with not being stupid, and we also set the teacher model into eval mode. So I guess that does it. set_train? No. That should do it. I have no idea. Yeah.
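So the forward pass we've built looks roughly like this. And, anticipating the cross-entropy problem that shows up in a minute, I'm sketching it with a manual soft cross-entropy, because nn.CrossEntropyLoss wants class indices, not distributions; the helper name is mine:

    import torch

    def soft_cross_entropy(logits, target_probs):
        # cross entropy against a soft target distribution, averaged over the batch
        log_probs = torch.log_softmax(logits, dim=-1)
        return -(target_probs * log_probs).sum(dim=-1).mean()

    # inside the LightningModule:
    def forward(self, batch):
        images, labels = batch

        # the teacher gives the targets; never trained, never backpropped through
        self.teacher_model.eval()
        with torch.no_grad():
            teacher_predictions = torch.softmax(self.teacher_model(images), dim=-1)

        losses, logits_list = [], []
        for student in self.student_models:
            logits = student(images)
            # the students never see the real labels, only the teacher's distribution
            losses.append(soft_cross_entropy(logits, teacher_predictions))
            logits_list.append(logits)

        # average (not sum), so the number is comparable across student counts
        loss = torch.stack(losses).mean()

        # ensemble prediction: average the raw logits, then argmax
        mean_logits = torch.stack(logits_list, dim=0).mean(dim=0)
        predictions = mean_logits.argmax(dim=-1)
        accuracy = (predictions == labels).float().mean()
        return loss, accuracy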
Let's run it again — we could have done that right there. Okay, so far so good: we have the teacher predictions. Now we need to run the images through the students and use the teacher predictions as labels. So: for student in student_models, we simply run the images through, that gives us the logits, and then we use our loss function on the logits and — not the labels, but the teacher predictions. We never actually use the labels here, as you can see. That's the student loss, and now we have a bunch of losses and we append each one... nope — like this. And our loss is simply going to be the sum of all the student losses — actually, I guess we could make it the average, just so that if we change the number of students we get a better sense of the actual numbers. Okay, I think over here we're good. So our teacher model is not in training mode, but our student models hopefully are in training mode — no, is this the eval pass? I guess this is the validation sanity check pass. Okay, so that's going to be our loss, and now our accuracy. What's our accuracy going to be? We have all these student outputs, and for the prediction we'll simply take the maximum prediction across the students. Pretty easy — but we need to collect the logits, so come on: we'll also append the student logits. So we have a whole bunch of logits right here, and we'll get some predictions out of them. Now the question is: do we want to simply take the mode, or do we actually want to run a softmax over each and then take the average prediction? I'm not super sure, but we can try it in different ways. Right now we might just take the average logit and then run a softmax on top of that, because I'm going to guess the logits are outputs of a linear layer, so they might behave more in a linear fashion than if we were to average the actual probabilities that come out. Maybe. Let's do that. So we take these logits, and we need to somehow concatenate or stack them. They're batch size (256) by number of classes, so we'll just stack them at dimension zero — I guess that's fine — and then take the mean across dimension zero. Those are our final logits, and our predictions are the argmax of those logits in the last dimension. Yep, that should be pretty straightforward — easy as that. The rest should just work by itself, so I'm going to give this another run and see where we hit problems; can't really see how this could ever go wrong. Okay — we actually got a problem: '1D target tensor expected, multi-target not supported'. So the cross entropy loss in PyTorch does not support that. Let's have a look — let me make this a little bigger for you — and let's go to the cross entropy loss; I can't type today. So here we have the cross entropy loss: it 'is useful when training a classification problem with C classes', yada yada yada — okay: the criterion expects a class index as the target.
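Put together, the per-batch logic at this point looks roughly like this (my variable names; criterion is still the stock cross entropy here, which is exactly the line that errors):

```python
        teacher_probs = self.teacher_targets(images)

        losses, all_logits = [], []
        for student in self.student_models:
            student_logits = student(images)
            # the teacher's distribution is the target; the true labels
            # are never used for training
            losses.append(criterion(student_logits, teacher_probs))  # <- errors
            all_logits.append(student_logits)

        loss = sum(losses) / len(losses)  # average, so the scale stays comparable
        # ensemble prediction: average the logits (linear-layer outputs, so
        # arguably more "linear" than probabilities), then take the argmax
        mean_logits = torch.stack(all_logits, dim=0).mean(dim=0)
        preds = mean_logits.argmax(dim=-1)
```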
So what we need is a soft loss; we don't want this cross entropy loss, we actually want soft targets. So what do we do? I think the cross entropy loss is a combination of the log softmax and the NLL loss — can we take the NLL loss, maybe? So the NLL loss right here: the target that this loss expects should also be a class index. No — okay, that's not good either. So next: we somehow need a soft cross entropy loss. Let's search for that: 'pytorch soft cross entropy', 'soft classes' — I guess people do that kind of stuff. The problem with these kinds of losses is that you have to protect yourself against numerical instabilities, so what we want is a function that does that for us. I guess if we do the log softmax, that should take care of it. Okay, this one is TensorFlow... following this thread on the cross entropy loss, I guess people really do just take the log softmax and then multiply, so we should be fine with that. Okay — thanks, Frank. Maybe this has advanced since then, so let's take a last look at the built-in loss functions — and this is a bit too big, I'm sorry, your eyes are going to have to suffer: multi-label soft margin loss, hmm. We don't really want multi-label; we want this, but not with those targets. Okay, I guess we're just going to have to write it ourselves. Ultimately, what is the cross entropy? The cross entropy is simply the probability of the true label times the log probability of the predicted label, as you see right here. So we simply multiply the target times the log probability of the prediction, sum that up, and take the mean across the batch, I guess. Yeah, that should do it — we can implement this; let's do it. This criterion right here is going to be our loss function, and it's only used once, so we can make it a function: it takes student logits and teacher probabilities. How does that work out? We take the log softmax of the student logits — does that exist? log_softmax in functional; okay, we need functional — over the last dimension. Now we have properly normalized student logits; those are the student log probs. Then we simply multiply the teacher probs times the student log probs, and the negative of that is our loss. The question is whether we want to sum across the class dimension or take the mean — I guess the sum should do. All right, that's it — easy as that; why did we search for so long? So we can simply replace the criterion by our loss function. Cool, let's run it again, yada yada yada. Okay — maybe we should have taken a smaller model; it sometimes pays off to start with a really small model, just so you can iterate on these kinds of things fast. Here we have 'dimension out of range' — where is that? In forward, line 78. Let's go there. Here's a max over the predictions — I think some of these things have changed over time in PyTorch, so this code might have been written against an older version — and predictions... oh, it's already an argmax, so I guess we can remove all of this: whether or not that agrees with labels.data, we don't need any of that, and we don't need that .float either.
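For reference, here is that soft-target cross entropy as a self-contained function — including the mean over the batch, which I only end up adding a bit further down when the scalar-loss error shows up:

```python
import torch
import torch.nn.functional as F

def soft_cross_entropy(student_logits: torch.Tensor,
                       teacher_probs: torch.Tensor) -> torch.Tensor:
    # log_softmax keeps the computation numerically stable
    student_log_probs = F.log_softmax(student_logits, dim=-1)
    # cross entropy against a soft target distribution:
    # sum over the class dimension, mean over the batch
    return -(teacher_probs * student_log_probs).sum(dim=-1).mean()
```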
So the accuracy is simply going to be — can't we just do predictions == labels? Yes, and we want the sum of that... actually, we want the float first and then the mean. Yeah, that seems reasonable, so let's do it like this: float, mean. Perfect — how could this be any easier? But this right here is all of it, so: validation step, accuracy, corrects — we'll just look at it once it runs. What I usually do is just run it until it doesn't give me any mistakes anymore, and then I know I've sort of succeeded. Okay, we're pretty close, I feel. It says 'grad can be implicitly created only for scalar outputs', which probably means our loss function is not producing a scalar. So when we return the loss — here we have the sum of the losses divided by the length of the losses — let's check what's up with that. This loss function outputs basically one loss for each data point, so what we need to do, I guess, is call mean on it; the reduction used to happen inside the criterion in the original code, which we've now thrown away — look at the git diff right here. So how does the cross entropy loss reduce when you don't specify anything? Reduction: mean. Okay, so let's reduce with the mean: after we compute the loss, we call mean. And then I'm not so sure we should still divide here, because the learning rate is kind of tuned to the original loss size — I guess we'll be content for now with summing these things up. And over here, I guess we've solved it: our losses was an entire tensor, and now we've fixed that. Okay, let's try it again. In the meantime, what can we do? We've already taken care of the GPU and of the logs. One thing to do with respect to this download stuff: if you have a server and you let a lot of things run in parallel, you want to make sure they don't all download the same stuff at the same time — that's pretty bad. So ideally you have some sort of lock such that they coordinate, and I usually use a file lock for this. I'm going to create that right here: from filelock import FileLock, and then I simply create the lock. You have to give it a file — say 'data.lock'; you just pick some file, and that's the file these processes are going to sync on. Once you have it, you simply wrap all of it in a `with lock:`, so only one process at a time can go into this function (sketched below). All right, that should make us safe.
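The locking pattern, sketched with the filelock package — the lock file name is arbitrary, and the download entry point stands in for the repo's download script:

```python
from filelock import FileLock  # pip install filelock

import cifar10_download  # the download script from earlier (name assumed)

lock = FileLock("data.lock")  # any shared path the processes agree on

def safe_download():
    # only one process at a time gets past this line, so parallel jobs
    # on the same machine never download the same files concurrently
    with lock:
        cifar10_download.main()
```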
And right here we're now training — this is excellent, we are training the students. Now we need to do that on an actual GPU, and I have multiple tools for shipping this to a GPU. First of all, let's try to ship this to one GPU. The way I do that: first I want some sort of unbuffered version of Python, and then I have this tool — do I even have this tool? I do have the tool, okay — so I'm going to call one of our servers, and... we don't know what's going on. Okay: 'cannot import name seed_everything from pytorch_lightning' — is seed_everything some kind of new thing in PyTorch Lightning? Apparently I have it locally. So here: seed_everything with zero — why? I don't need that; we're running without any seed here, we're being really cool. Okay, next mistake — still the same mistake, of course — yep, next mistake; we'll just go through the mistakes: learning rate loggers. So I guess we need to update PyTorch Lightning on the servers, and I'll do that quickly. Okay, I've updated PyTorch Lightning; let's check whether we can actually run something — yeah, we can run something. So this is now downloading on the server, and while that's happening there's another thing we can do: I've sort of made a system for running stuff on servers, which I like a lot, honestly, so I guess we can try that out. First of all I want to delete this... delete this, yes, I guess... cool. And this git folder is a bit annoying; let's restructure, because otherwise it will always ship the git folder along with everything — does it do that? Yeah. I don't like having the code at the top level, so let's quickly make a sources directory and move everything in there: move the cifar10 models into the source directory, move all the Python files into the source directory, clear the logs — and we're much better. Much better. My system requires a config file, and I'm just going to quickly copy one from another project. Okay, we're back, and I've copied that over. As you can see, you basically give it hyperparameters and it blasts the hyperparameters through in kind of a random-search fashion; it's not too sophisticated, but we can work with it. So: cifar10 train ensemble — yes, that's the file, cool. Here we just put all of our hyperparameters. It will remove the logs folder — I'm okay with that, but I want this — bang. Cool. So what do we want? Basically we just want to run this a bunch of times and then average across the runs, right? That's all. Maybe we also want the architecture to change, so let's say the classifier is a ResNet-18, a ResNet-34, or a ResNet-50, just so we have a bunch of stuff to do. Okay — and this has downloaded and is training on the GPU, hopefully. If this works, then we can ship it off. And we'll add this other hyperparameter that I like to use, called rep, which is basically a dummy parameter so that I can repeat the experiment a bunch of times; it has no effect except randomizing things a bit. I guess we can also try to seed stuff, so wherever it says seed_everything we'll just seed with this — we'll call it seed: hparams.seed. Cool. And this is doing something nice, you can see it — this is unbuffered Python output. Thanks. So what other classifiers do we have? We could try a bunch of them — why don't we try all of them? Let's go into this 'rat' file — I don't know why I called it rat, I just wanted some three-letter name — yep, like this. Then we can just take all of that, delete the ones we don't want, and those are going to be all our models; our classifier sweep is going to consist of all of this. I know, I know, I know — I suck at vim, don't tell me. Actually, tell me: I want vim tips. I'm trying to learn something new in vim each week, but it is hard, and I have to make myself actually do it. Let's go with just one repetition so far; if we're not sure, we can still up the number of repetitions — we don't even have the rep parameter right now, it's called seed. All right, so we have different classifiers, and we also have this num_students: let's go with one, with five, and with twenty.
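My run system does this through that config file, but as a plain-Python stand-in, the sweep we just set up amounts to something like this (train_ensemble is a hypothetical launcher, not a function from the repo):

```python
import itertools
import pytorch_lightning as pl

# the grid: every classifier crossed with every ensemble size; the seed
# doubles as a dummy "rep" parameter for independently seeded repetitions
classifiers = ["resnet18", "resnet34", "resnet50"]
num_students = [1, 5, 20]
seeds = [0]  # bump to [0, 1, 2, ...] for more repetitions

for clf, n, seed in itertools.product(classifiers, num_students, seeds):
    pl.seed_everything(seed)
    # train_ensemble(classifier=clf, num_students=n)  # hypothetical launcher
```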
So here we've got one epoch done and we get a validation loss — do we get a validation accuracy? Validating, validating... I have no idea. We'll cancel this right now, and we'll just go ahead and blast this onto our servers, and hopefully that's going to work. I have no idea — is everything fine? Everything's fine. Go. Cool — let me get back to you once this is finished. All right, we're back. I've just written some code to extract the results of that run, and it's pretty interesting what came out. In these plots you see on the x-axis the number of students in the ensemble — remember, these students are all trained from the same teacher. The teacher you can see in orange; that's just the single teacher, for reference. You can see that with one student model, it sometimes underperforms and sometimes outperforms the single teacher, but with more student models there's a pretty monotonic relationship. The reason this line doesn't finish here is that there's not enough space on the GPU for that many student models, but you can see the relationship is fairly monotonic; here there's a bit of a kink. This is really astounding, because these students have all been trained from that single teacher, and they've been trained for exactly as long as the teacher was trained. So they don't have more compute than the teacher; they've been trained from scratch, not from some checkpoint or from the teacher's weights; it's simple distillation from the teacher, with no labels. And the students all run in parallel, so they don't see different data or even different data augmentations — it's the exact same order of the exact same data points going through all of the students, the exact same learning rate schedule, no noise, and so on. So the first thought that came to my mind was: something fishy is going on here. This seems like — come on, there's no new information here. So I thought: hey, the teacher models — I just grabbed them from this repo, from these pretrained checkpoints, and those checkpoints are the ones that performed best on the validation set. So this is sort of a sneaky way in which we could be training on the validation set: we annotate each training data point with this checkpoint's outputs, and the checkpoint was selected for performing especially well on the validation set — that could explain why we see a gain on the validation set. So what I did is retrain all of the teacher models: I retrained them for these 100 epochs and just took the last checkpoint. Everything else is the same — the hyperparameters, the learning rate schedule, and so on; nothing is tuned for any particular model, and the settings are pretty standard — it's not like 0.12589, it's like 0.01 and 100 epochs or so, fairly standard parameters. And I took the last checkpoint without even looking at its performance, to make sure I didn't select something that happened to be especially good on the validation set. The results you see here are actually already the results of that run — and the previous run was almost the same. I was astounded by how well it works, and then I thought, hey, maybe I'm kind of cheating here, so I redid it with teachers that are not specifically selected, and these are already those results. So that's pretty cool. Then I wondered: what happens if I increase the training time — what if I let the students run for longer than the teacher ran? Again,
there's no new information here. You can see that now the green is the teacher, the blue is a hundred epochs, and the orange is 250 epochs, and with that, even one student will outperform the teacher — and many students will outperform it even more. So if you give it more compute, there's lots of headroom to improve. I think this last point on the blue line is just a bit of a weird configuration; I guess if you reran it, it would fall in line. So this is pretty weird, and I have a bunch of questions. First of all, I searched the literature a bit more and came up with a number of papers that do things like this. Usually when people do distillation, they stress the importance of introducing noise — like in the Noisy Student paper — or of data augmentations, or, like SimCLR v2, they use self-distillation in order to label more data, saying it's important to bring more unlabeled data into the process, and so on. None of that really matches what's happening here, and especially not this focus on needing noise during the distillation process to build these ensembles — the same goes for Mean Teacher and things like that. I also found a paper called Born-Again Neural Networks that does something quite similar, but not the same, and not as simple: they distill a teacher into a student with the same architecture, then distill that student into another student, and that one into yet another, and so on, and at the end they note that they can also build an ensemble. Sometimes their ensembles outperform their chain of distillation, sometimes they don't — they don't focus on that part much — and it's way more complicated, since you distill one student after another, and I also think they introduce some variability in the students, like noise or different augmentations. So what we have here seems really, really simple. Now, I want to understand this ensemble effect — it seems pretty weird. So what gives? The first thing we can do is ask how this compares to an ensemble of teacher models. If we actually trained five different teacher models — it's still the same data, but reasonably they might be able to learn more from it: five teachers might learn different things, and if we combine them, their knowledge can overlap in a way where, if one fails to generalize on a data point, the other four can overrule it. Whereas with these self-ensembles, there's no way to learn more from the data, because we can only learn from the teacher, and the teacher is fixed and has seen only that much data. So how does it compare? I rewrote some code — it's just plumbing, and I've released it, it's linked, but really, it's just plumbing, there are no great thoughts in there — so that my students are no longer all trained in parallel. I train each model individually, which means that at most I have two models on the same GPU: one teacher and one student. I make sure the teachers are trained from scratch, and the students are always trained from the same teacher. So the
student ensembles will be exactly the same as we had them before — one teacher is responsible for all the students. Okay, I'll just show you the results. If we look at them — and I've done this for a bunch of models — the blue line is the ensemble of teachers, and on the x-axis you see the number of models. Since I'm no longer training everything on the same GPU but recombining afterwards, I now have the ability to train up to ten models — or actually however many I want. The only real trick in the code is in how I evaluate one of these ensembles: I load a mini-batch, then load the first checkpoint and run the forward pass, load the second checkpoint and run a forward pass, load the third checkpoint and run a forward pass, and so on for all the checkpoints, before moving on to the next mini-batch. That's just for evaluating — it simply seemed easiest with the code I had (a sketch of that loop follows below). You can see right here that the curves are almost overlapping for most models; sometimes the student ensemble wins, sometimes the teacher ensemble wins. Now remember: the teachers are trained on, in effect, ten times as much data — it's always the same data, but they have the opportunity to learn ten times as much information from it — whereas the students are all distilled from that same single teacher, without any noise, without any augmentations except the ones you use during training anyway. I've done this for a hundred epochs, and also for 250 — is this already 250? I think that was a hundred... nope, okay, yeah, that was a hundred epochs — but the 250-epoch plots look very much the same, just a bit better. Now, here's the interesting part about the 250 epochs: the student is still distilled from a teacher model that was trained for a hundred epochs. So all of this makes no sense to me: the student is still distilled from the hundred-epoch teacher, yet if you train the students for 250 epochs in self-distillation and build an ensemble of them — all from that same teacher — and compare that against an ensemble of teachers that were all trained for 250 epochs, which generally outperform hundred-epoch models, they still come out the same. These are pretty crazy results, I think, and my conclusion is that the ensemble effect here is not a function of extracting more information from the data. The ensemble effect might actually have something to do with the loss landscape itself — with exploring different minima; not of the same function, but exploring different functions that describe the same phenomenon. I've also found a paper that explores the loss landscape of deep ensembles, and I will make a video on that — maybe it's out already, maybe it will come out after you see this one; I haven't decided yet in which order I'll release things. But this here is pretty interesting, and we need a name. Self-ensembles are already a thing, but they always come with noise and such, so let's call these something like plain self-ensembles — that sounds like a good name: the act of self-distilling a single model into multiple models, without any noise, any augmentations, anything — you just run as if you were training the model itself — and then building an ensemble of these models by simply averaging the logits.
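To make the checkpoint-swapping evaluation and the logit averaging concrete, here is roughly what that loop looks like — a sketch with my own names, not the released plumbing code:

```python
import torch

@torch.no_grad()
def eval_plain_self_ensemble(model, checkpoint_paths, loader, device="cuda"):
    # one model instance, N checkpoints: for every mini-batch, swap each
    # checkpoint into the model and collect its logits, so at most one
    # ensemble member sits in memory at a time (slow, but simple)
    model.to(device)
    correct, total = 0, 0
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        logits = []
        for path in checkpoint_paths:
            model.load_state_dict(torch.load(path, map_location=device))
            model.eval()
            logits.append(model(images))
        # plain self-ensemble prediction: average the logits, then argmax
        preds = torch.stack(logits).mean(dim=0).argmax(dim=-1)
        correct += (preds == labels).sum().item()
        total += labels.numel()
    return correct / total
```

Reloading every checkpoint for every mini-batch is wasteful, but it keeps the memory footprint at a single model, which is the whole point of not training the ensemble in parallel.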
Plain self-ensembles. All right, so the plan from here is to check this on at least one other dataset. These models — I appreciate that I could get them pretrained, but they're just the ImageNet architectures run on CIFAR-10, so there's no guarantee that they've been tuned in any way, the learning rates or whatnot. So I want to take an ImageNet model — still making sure I don't use any hidden information that would let me cheat on the validation set — and try this on at least one other thing, and see whether it works there as well: whether we can push ImageNet performance simply by doing this trick. That's the plan for now, and I have some other ideas, but I just wanted to let you know. And this is sort of how research works, I guess: you have a dumb idea, it turns out to work, and then you go on. Probably there isn't too much of interest here in the end — maybe it doesn't work on ImageNet, because maybe these models are just undertrained and this somehow made them better or regularized them in a way that usually doesn't apply. There's still so much that can go wrong. But yeah, that was it, and I invite you to check out other papers in this space if you want; it's a pretty interesting space. With that, I don't have much more to say. I hope you enjoyed this — let me know what you think of research-implementation or research-process videos like this. I'm not sure what people expect; I can't make this into a five-minute video of 'woo, I discovered something', because then you'd have no clue what's happening, but maybe an hour or so is also too long. I'm not sure. Yeah, let me know what you think, and I'll see you next time. Bye.
[ { "end": 5.84, "start": 0, "text": " Hey what's up! So I've had this relatively dumb research idea and people" }, { "end": 10.64, "start": 5.84, "text": " have been asking me for more coding videos and so on so I thought why not do" }, { "end": 15.56, "start": 10.64, "text": " a video where I take a research idea and implement it from scratch just to show" }, { "end": 21.6, "start": 15.56, "text": " how one would go or how I would go about implementing something like this. Now" }, { "end": 26.12, "start": 21.6, "text": " this was simply meant as sort of a demonstration but then at the end it" }, { "end": 33.120000000000005, "start": 26.12, "text": " actually worked and so yeah that was unexpected and my initial reaction was" }, { "end": 38.36, "start": 33.120000000000005, "text": " just to be like oh crap just hold everything you know stop video making" }, { "end": 44.8, "start": 38.36, "text": " you develop the idea write a paper about it okay and I was about doing that when" }, { "end": 49.480000000000004, "start": 44.8, "text": " I realized that you know I'm always the one complaining that research is not" }, { "end": 54.92, "start": 49.480000000000004, "text": " transparent enough and people aren't open enough and so on so I sort of" }, { "end": 60.04, "start": 54.92, "text": " thought I might do a different thing right here in that I will actually share" }, { "end": 65.24000000000001, "start": 60.04, "text": " the process of this non-finished research project so currently I am in" }, { "end": 68.68, "start": 65.24000000000001, "text": " the middle of this I have no idea whether it's going to work out or not" }, { "end": 75.2, "start": 68.68, "text": " and that's it and I think we can do open source software development you know" }, { "end": 80.28, "start": 75.2, "text": " completely in the open whereas with research we're all like super scared" }, { "end": 85.16, "start": 80.28, "text": " that people are gonna scoop us and we people just keep it keep their work" }, { "end": 90.8, "start": 85.16, "text": " hidden until they're done and then boom they put it in an archive and all I" }, { "end": 96.28, "start": 90.8, "text": " want to go to a world where we collaborate much more in research and" }, { "end": 104.64, "start": 96.28, "text": " it's much more like open source software development so here is my way here's" }, { "end": 110.04, "start": 104.64, "text": " here's my process of implementing this idea and it's fairly long so if you just" }, { "end": 114.2, "start": 110.04, "text": " want to get to the results you can just skip at the end I'll put time stamps in" }, { "end": 120.48, "start": 114.2, "text": " there's this new YouTube chapter video so that will be very helpful I guess yeah" }, { "end": 124.48, "start": 120.48, "text": " and with that being said I hope you enjoy this let me know what you think of" }, { "end": 131.20000000000002, "start": 124.48, "text": " videos like this and I'll see you next time hey what's going on today we're" }, { "end": 136.84, "start": 131.20000000000002, "text": " going to take a research idea and implement it as fast as we can so this" }, { "end": 141.84, "start": 136.84, "text": " is not really to show you the best research idea because it's not and it's" }, { "end": 147, "start": 141.84, "text": " probably been done before so I have no high hopes here but this is just to show" }, { "end": 150.52, "start": 147, "text": " that if you had like some research idea and you've actually done the literature" }, { "end": 154.76, "start": 150.52, 
"text": " research and figured no one has done that yet which I haven't because probably" }, { "end": 161.44, "start": 154.76, "text": " someone has done that how you could take this and like get started up initially" }, { "end": 166.48000000000002, "start": 161.44, "text": " pretty quickly and this is just the process that I would go through and I'm" }, { "end": 171.23999999999998, "start": 166.48, "text": " going to go through with you today and we're going to try to get this up and" }, { "end": 179.07999999999998, "start": 171.23999999999998, "text": " running as quickly as possible so I had this idea that looking at SimClear v2" }, { "end": 186.2, "start": 179.07999999999998, "text": " there's a lot of things to be done still in the space of let's say self teaching" }, { "end": 190.6, "start": 186.2, "text": " self distillation and so on you know there's mean teacher and then there's" }, { "end": 195.83999999999997, "start": 190.6, "text": " whatnot and this is all usually done in the semi supervised very few label" }, { "end": 200.68, "start": 195.84, "text": " regime and so on but we know that these self supervised techniques can help you" }, { "end": 205.6, "start": 200.68, "text": " and supervised learning and then in SimClear v2 you do semi supervised in" }, { "end": 210.8, "start": 205.6, "text": " that you do self supervised and then fully supervised and then distillation" }, { "end": 215, "start": 210.8, "text": " like self distillation there's there's all these kinds of interleaving stuff" }, { "end": 220.6, "start": 215, "text": " and I thought okay what if I just take a pre-trained network that performs really" }, { "end": 227.12, "start": 220.6, "text": " well on something and I self distill it into a bunch of student models like a" }, { "end": 233.44, "start": 227.12, "text": " number like 10 or so and then I like that's my ensemble model will that" }, { "end": 239.16, "start": 233.44, "text": " perform better than the original model like this is a terrible idea and it's" }, { "end": 243.44, "start": 239.16, "text": " probably not going to work like there's 99% chance it's not going to work but" }, { "end": 250.29999999999998, "start": 243.44, "text": " let's try to test this today so I got my drink I got my carbs since it's" }, { "end": 256.04, "start": 250.3, "text": " weekend and we're going to give this a shot all right so first thing we need" }, { "end": 261.6, "start": 256.04, "text": " some sort of base to go from in research it's good to build your own stuff but a" }, { "end": 266.28000000000003, "start": 261.6, "text": " lot of times if you want to be as fast as possible you want to go as quickly as" }, { "end": 272.88, "start": 266.28000000000003, "text": " you can so here I found this repo thankfully with an MIT license so shout" }, { "end": 282.06, "start": 272.88, "text": " out to who even fun I guess for training for putting up a repo training" }, { "end": 289.12, "start": 282.06, "text": " these C for 10 models or training these PI torch vision models on C for 10 C for" }, { "end": 293.08, "start": 289.12, "text": " 10 is a small enough data set so that we can kind of work with it and these" }, { "end": 298.64, "start": 293.08, "text": " models are already pre-trained so I've cloned this repo and we're going to" }, { "end": 304.64, "start": 298.64, "text": " adjust that so there is a first of all there is a download as you can see here" }, { "end": 309.32, "start": 304.64, "text": " which in this report says it downloads these I've not done this before I 
have no" }, { "end": 314.91999999999996, "start": 309.32, "text": " clue how this is gonna work out in this download script here downloads the" }, { "end": 322.15999999999997, "start": 314.91999999999996, "text": " weights from box and I hope you can see this and then I guess you can load the" }, { "end": 327.8, "start": 322.15999999999997, "text": " the pre-trained weights with pre-trained equals true and yeah we'll get into all" }, { "end": 332.8, "start": 327.8, "text": " that later so the first thing we got to do is get this to run let's say so let's" }, { "end": 339.12, "start": 332.8, "text": " look at this downloads thing first so the download thing is going to have a" }, { "end": 343.68, "start": 339.12, "text": " URL it's going to use requests to get that URL and then save this into this" }, { "end": 349.44, "start": 343.68, "text": " state dicts thing now what I usually want to do is I don't I want my folder" }, { "end": 354.08000000000004, "start": 349.44, "text": " of code to only have code and not to be intermixed with data and code because" }, { "end": 357.74, "start": 354.08000000000004, "text": " this is the thing that I'm gonna ship around to various servers and so on so" }, { "end": 361.88, "start": 357.74, "text": " I'd rather have the code in one folder and then the data and like a central" }, { "end": 369.24, "start": 361.88, "text": " folder so I'm not really fine with this sort of downloading the this right here" }, { "end": 375.2, "start": 369.24, "text": " into into the folder that we have so what I'm going to do is I'm going to" }, { "end": 378.92, "start": 375.2, "text": " change that such that it downloads it into a central folder so first we" }, { "end": 384.84000000000003, "start": 378.92, "text": " already have OS so what we're going to do is we're going to get some like data" }, { "end": 397.84, "start": 384.84, "text": " path going which is going to be our home folder OS path and I guess I also I" }, { "end": 408.64, "start": 397.84, "text": " already have a C for 10 folder right here so we'll use this and then so path" }, { "end": 417.8, "start": 408.64, "text": " dot join will join that and that's going to download it's not really data is it" }, { "end": 427.59999999999997, "start": 417.8, "text": " it's more like models okay let's do this cool so data path is this is the models" }, { "end": 433.4, "start": 427.59999999999997, "text": " that is going to download all right and then it unzips the file again so here it" }, { "end": 437.68, "start": 433.4, "text": " unzips the file to the current working directory I don't want this so I'm going" }, { "end": 446.32, "start": 437.68, "text": " to change that again to the models path all right no directory to path to zip" }, { "end": 451.36, "start": 446.32, "text": " file directory to extract to I think we're fine right now so this download" }, { "end": 457.8, "start": 451.36, "text": " script is going to download the path the all the weights there now I want this to" }, { "end": 463.64, "start": 457.8, "text": " happen sort of automatically while this is in a server or while this is on a" }, { "end": 468.64, "start": 463.64, "text": " server so what I'm going to do is probably just to so if this script runs" }, { "end": 473.28, "start": 468.64, "text": " you can see it runs the main but in the other script I might just want to do" }, { "end": 479.44, "start": 473.28, "text": " this automatically so let's go to the test script right here or let's say we" }, { "end": 484.56, "start": 479.44, "text": " go to the 
train script this is probably the main script right here the train" }, { "end": 496.48, "start": 484.56, "text": " script so we have to somehow call this other script here probably in the main" }, { "end": 502.76, "start": 496.48, "text": " function all right so let's import this other so import C for 10 what was it" }, { "end": 515.84, "start": 502.76, "text": " called C for 10 download okay and here we're going to call that and does this" }, { "end": 520.64, "start": 515.84, "text": " does this not download it if it already exists we have to check that so a lot of" }, { "end": 525.64, "start": 520.64, "text": " this is just going to be you know beating the stuff into beating stuff into" }, { "end": 532.04, "start": 525.64, "text": " into existence so if this zip file already exists we're not going to we're" }, { "end": 541.0799999999999, "start": 532.04, "text": " not going to do anything right which leaves us open if like if the unzipping" }, { "end": 548.04, "start": 541.0799999999999, "text": " fails then we're going to be in a kind of dumb path but you know we'll risk it" }, { "end": 562, "start": 548.04, "text": " zip path would be that so that's if OS path exists zip path then return" }, { "end": 571.48, "start": 562, "text": " okay so we're good in the download script what else do we need the data set" }, { "end": 575.4, "start": 571.48, "text": " I probably already have the data set from torch vision so that's not going to" }, { "end": 586.08, "start": 575.4, "text": " be an issue okay so here we're gonna call C for 10 download dot main all" }, { "end": 593.08, "start": 586.08, "text": " right and that should do we can't really call that yet let's actually just run" }, { "end": 605.76, "start": 593.08, "text": " this download script no no such file or directory probably need to make that" }, { "end": 620.16, "start": 605.76, "text": " probably need to make that directory right okay if OS make theirs models path" }, { "end": 632.16, "start": 622.96, "text": " exist okay true yeah that should be something all right and we're" }, { "end": 638.0799999999999, "start": 632.16, "text": " downloading so this is 2.4 gigabytes which can you know be put by itself" }, { "end": 644.48, "start": 638.0799999999999, "text": " let's put that over there and while that's downloading let's check out the" }, { "end": 652.36, "start": 644.48, "text": " test script actually let's check out the test script so this simply takes in this" }, { "end": 660.68, "start": 652.36, "text": " C for 10 module and instantiates a trainer and as you can see it calls" }, { "end": 664.9599999999999, "start": 660.68, "text": " test on it so this should not be too hard I'm going to guess this C for 10" }, { "end": 671.9599999999999, "start": 664.9599999999999, "text": " module is a lightning module as you can see right here it is we know how tensor" }, { "end": 675.8399999999999, "start": 671.9599999999999, "text": " sorry pytorch lightning works if you don't know how pytorch lightning works" }, { "end": 679.4799999999999, "start": 675.8399999999999, "text": " pretty easy you configure this module right here you configure a bunch of" }, { "end": 685.04, "start": 679.4799999999999, "text": " stuff like the data sets the training step and so on and you're good to go so" }, { "end": 692.52, "start": 685.04, "text": " I guess what we're going to do is we're going to change this train script and" }, { "end": 701.88, "start": 692.52, "text": " change it to our needs okay so let's copy that let's go with train ensemble" }, { 
"end": 709.92, "start": 701.88, "text": " bang so this is what we're going to change all right so first if the GPUs" }, { "end": 718.0799999999999, "start": 709.92, "text": " is a string then the other yada yada if it's two then wow that's that's that's" }, { "end": 725.8, "start": 718.0799999999999, "text": " kind of a weird engineering quirk right here okay what I want to do is make the" }, { "end": 736.48, "start": 725.8, "text": " GPU use transparent so we'll only ever use one GPU so let's call that CUDA and" }, { "end": 750.24, "start": 736.48, "text": " put that to true and then we'll say da da da da oh come on there is like a lot" }, { "end": 762.44, "start": 750.24, "text": " of stuff going on here let's so and then torch is called torch I'd hate that can" }, { "end": 767.1600000000001, "start": 762.44, "text": " I do this can I import it like twice with different names probably it's" }, { "end": 775.6400000000001, "start": 767.1600000000001, "text": " probably not very good but I'll do it okay so if CUDA is not available we'll" }, { "end": 784.8000000000001, "start": 775.6400000000001, "text": " just set the CUDA to false if th.cuda.is available okay not if it's not" }, { "end": 798.52, "start": 784.8, "text": " available then hparams.cuda equals false and then we'll set the GPUs to zero" }, { "end": 809, "start": 798.52, "text": " comma I guess that's what it expects if else none and that should do it for the" }, { "end": 819.04, "start": 809, "text": " GPUs okay so second thing that we need we're going to need we're calling fit" }, { "end": 824.44, "start": 819.04, "text": " here and there is this logs directory where the checkpoints are going to be" }, { "end": 831.12, "start": 824.44, "text": " saved I'm fine with that I just want to kind of remove the logs directory at the" }, { "end": 838.68, "start": 831.12, "text": " beginning so I'll do that and whenever we start this I'm going to remove the" }, { "end": 848.5999999999999, "start": 838.68, "text": " logs directory this is a controversial move but you know on remove tree" }, { "end": 859.16, "start": 848.5999999999999, "text": " recursively delete the directory tree yes logs good okay our download is done" }, { "end": 867.92, "start": 859.16, "text": " so what do we do next we might want to do just try to test test something and" }, { "end": 874.64, "start": 867.92, "text": " here in the test thing we might want to set the GPUs I don't have a GPU right" }, { "end": 899.24, "start": 874.64, "text": " here so none and the data directory is going to be yeah I'll put it so nope nope nope" }, { "end": 906, "start": 899.24, "text": " okay it doesn't find the it doesn't find the the state dicts and so on now we're" }, { "end": 909.6, "start": 906, "text": " going to have to fix this we're going to have to fix the fact that it doesn't" }, { "end": 920.84, "start": 909.6, "text": " load okay okay and that's probably going to be here in these models so if I look" }, { "end": 926.08, "start": 920.84, "text": " in the dense net for example which we can learn and there's this pre-trained" }, { "end": 933.5600000000001, "start": 926.08, "text": " argument and what's that going to be it's oh that's bad okay it like has a" }, { "end": 940.12, "start": 933.5600000000001, "text": " hard code at the fact as hard code at the fact that there are there is this" }, { "end": 953.08, "start": 940.12, "text": " state dicts directory okay um yeah that's terrible terrible terrible terrible so I" }, { "end": 958.4000000000001, "start": 953.08, "text": 
" guess this is going to be in every single one of these models and that's not" }, { "end": 962.76, "start": 958.4000000000001, "text": " good so what we're going to do is probably always late loaded without the" }, { "end": 969.1600000000001, "start": 962.76, "text": " pre-trained and then kind of loaded ourselves from the from the correct" }, { "end": 974.9200000000001, "start": 969.1600000000001, "text": " directory so what's the correct directory again we're going to set the" }, { "end": 985.9599999999999, "start": 974.92, "text": " model dear we probably can just take that from the download script like that" }, { "end": 998.0799999999999, "start": 987.12, "text": " state dicts okay and then we want the architecture I guess that's a thing we" }, { "end": 1005.1, "start": 998.08, "text": " can actually put the classifier here right here that's something we can so" }, { "end": 1008.5200000000001, "start": 1005.1, "text": " it's going to be the classifier if you look in the state dates directory I'm" }, { "end": 1019.6, "start": 1008.5200000000001, "text": " gonna guess you can models see for ten state dicts we haven't unpacked it where" }, { "end": 1026.9, "start": 1019.6, "text": " have we not where have we unpacked it to help help oh no have we unpacked it to" }, { "end": 1039.52, "start": 1026.9, "text": " here we have not we have not so what is in here ah it's the C for ten models" }, { "end": 1043.8400000000001, "start": 1039.52, "text": " sub thing and then state dicts okay so it's always going to be the architecture" }, { "end": 1051.6000000000001, "start": 1043.8400000000001, "text": " plus PT so we can you know we can deal with that so it's going to be C for ten" }, { "end": 1058, "start": 1051.6, "text": " models state dicts that's fine and then it's always going to be the architecture" }, { "end": 1068.08, "start": 1058, "text": " plus a PT so let's look at one of these models to see how this is loaded we've" }, { "end": 1079.32, "start": 1068.08, "text": " saw we've seen this here so we simply want to load this state dict in and here" }, { "end": 1085.32, "start": 1079.32, "text": " it constructs the thing this is let's do proper string interpolation shall we oh" }, { "end": 1098.12, "start": 1085.32, "text": " device where this device come from we should check that out device is given" }, { "end": 1115.6799999999998, "start": 1098.12, "text": " device device device CPU where is device given okay dense net device CPU oh I" }, { "end": 1123.6799999999998, "start": 1115.6799999999998, "text": " guess device is always CPU and then then we map it to wherever I'm not entirely" }, { "end": 1130.96, "start": 1123.68, "text": " sure so here we say set device I guess we can just get the device from" }, { "end": 1148.04, "start": 1130.96, "text": " somewhere let's try it out okay so we're going to need this right here so we're" }, { "end": 1161.44, "start": 1148.04, "text": " going to OS path join models path and something that's dot PT so and here we're" }, { "end": 1172.8, "start": 1161.44, "text": " going to get the architecture which is the classifier cool so that's how we" }, { "end": 1183.6, "start": 1172.8, "text": " load something and then the device maybe we can just go torch kuda dot get device" }, { "end": 1189.1599999999999, "start": 1183.6, "text": " is that possible let's try" }, { "end": 1203.1200000000001, "start": 1189.16, "text": " nope ah okay" }, { "end": 1224.8799999999999, "start": 1203.12, "text": " nope no get device device maybe nope map location was given okay 
so we have to" }, { "end": 1237.44, "start": 1224.88, "text": " figure out where this device comes from honestly here no module there's this get" }, { "end": 1242, "start": 1237.44, "text": " classifier right here but just just says pre trained" }, { "end": 1257.48, "start": 1242, "text": " this device always CPU I just can't believe that I guess I'll believe it" }, { "end": 1272.84, "start": 1257.48, "text": " we'll always load to the CPU okay cool we can do that I guess pytorch lightning" }, { "end": 1281.24, "start": 1272.84, "text": " will then put it on the GPU for us cool so this is about how far I got when I" }, { "end": 1294.68, "start": 1281.24, "text": " tried to do this by myself and now the problems start missing keys in state" }, { "end": 1305.16, "start": 1294.68, "text": " dick a lot of missing stuff we can't we can't possibly load that yeah no not" }, { "end": 1326.28, "start": 1305.16, "text": " going to you so we can't load stuff what does it do load file name equals and" }, { "end": 1334.2, "start": 1326.28, "text": " then let's paste this and let's put some kind of break point here so we can check" }, { "end": 1336.4, "start": 1334.2, "text": " it out" }, { "end": 1363.8000000000002, "start": 1336.4, "text": " okay that exists now she feels like that should exist" }, { "end": 1379.4, "start": 1367.0800000000002, "text": " yeah that exists what's the what's the deal what's the matter here so we got" }, { "end": 1390.6000000000001, "start": 1379.4, "text": " model which is I guess a resin at 18 and we got this thing that we might want to" }, { "end": 1401.04, "start": 1390.6, "text": " load so why doesn't it work torch load load file name see that works so that's" }, { "end": 1411.8799999999999, "start": 1401.04, "text": " the state dict is that let's look at its keys we got a you know a bunch of stuff" }, { "end": 1426.3200000000002, "start": 1411.88, "text": " okay so why can't we load that well load state dict state dict and now unexpected" }, { "end": 1438.6000000000001, "start": 1426.3200000000002, "text": " keys in state dick missing keys so this is always prepended with model dot and" }, { "end": 1456.08, "start": 1438.6, "text": " here it's not okay what do we do about that I guess this is because we loaded" }, { "end": 1470.6799999999998, "start": 1456.08, "text": " ourselves okay cool so our model is not yes so our model has the sub path model" }, { "end": 1480.96, "start": 1470.6799999999998, "text": " so we need model dot model dot load state dict right look at us we made it so" }, { "end": 1486.52, "start": 1480.96, "text": " this is testing I guess this is this resin at 18 or whatnot so we can leave" }, { "end": 1496, "start": 1486.52, "text": " that to run for itself so we figured out how to load this stuff took us a while" }, { "end": 1503.72, "start": 1496, "text": " now let's go ahead and we know how to load the models we know how to load the" }, { "end": 1508.04, "start": 1503.72, "text": " weights so this is our teacher model right our teacher model is supposed to" }, { "end": 1515.92, "start": 1508.04, "text": " load up the weights and then and then teach the student models so here what" }, { "end": 1523, "start": 1515.92, "text": " does this training thing do we download the thing we make our GPUs to be really" }, { "end": 1529.2, "start": 1523, "text": " good okay and then we instantiate this module right here as you can see so now" }, { "end": 1532.92, "start": 1529.2, "text": " we're going to check out this module by the way the testing is done 
and as you" }, { "end": 1538, "start": 1532.92, "text": " can see there's an accuracy of 93.33 which I'm pretty happy with this is" }, { "end": 1544.2, "start": 1538, "text": " congruent with what we saw right here the resin at 18 do okay and we can I" }, { "end": 1548.04, "start": 1544.2, "text": " guess we can take a resin at 18 or a resin at 50 they're both fairly small" }, { "end": 1553.5600000000002, "start": 1548.04, "text": " right here so a lot of them are going to fit on our GPUs once we use the GPUs so" }, { "end": 1559.26, "start": 1553.5600000000002, "text": " let's change this module around right here to actually do the to actually do" }, { "end": 1565.04, "start": 1559.26, "text": " the let's say the the proper thing that we wanted to do so here we have self dot" }, { "end": 1572.82, "start": 1565.04, "text": " model as you can see and it's get classifier and the question is does it" }, { "end": 1578.08, "start": 1572.82, "text": " load it pre-trained so what we want to do is this is going to be our teacher" }, { "end": 1585.04, "start": 1578.08, "text": " model and this in this get classifier we want pre-trained to be false always" }, { "end": 1591.12, "start": 1585.04, "text": " right here we don't want any sort of we don't want to load the pre-trained" }, { "end": 1597.6399999999999, "start": 1591.12, "text": " instead what we want to do is we actually want to have the we want to" }, { "end": 1603.84, "start": 1597.6399999999999, "text": " load it ourselves right so here pretend false and now we're going from our test" }, { "end": 1609.56, "start": 1603.84, "text": " script we're going to take over the path they think the code that we used to load" }, { "end": 1623.72, "start": 1609.56, "text": " this okay all right so but a beam but a boom OS we don't have OS that common" }, { "end": 1633.32, "start": 1623.72, "text": " along just fine yep yep yep so now here we're going to have our self teacher" }, { "end": 1641.4399999999998, "start": 1633.32, "text": " model to load that state dict all right so this is it for initialization now we" }, { "end": 1646.12, "start": 1641.4399999999998, "text": " also need our student models of course so our student models are going to be a" }, { "end": 1658.9199999999998, "start": 1646.12, "text": " bunch of models models are going to be a bunch of models where what do we say so" }, { "end": 1666.8400000000001, "start": 1658.92, "text": " this is going to be a torch or a like a module list there's this module list" }, { "end": 1688.44, "start": 1678.76, "text": " torch dot n and dot module list right so I initialize that with a list and the" }, { "end": 1692.16, "start": 1688.44, "text": " list is going to be get me the classifier and we're just going to go for" }, { "end": 1696.76, "start": 1692.16, "text": " the same kind of classifiers right now to really boil it down to have the same" }, { "end": 1709.48, "start": 1696.76, "text": " architecture for the students and for the teachers for bar in range in range" }, { "end": 1718.0800000000002, "start": 1709.48, "text": " and here we probably need a flag so h params dot num students okay so these" }, { "end": 1721.96, "start": 1718.08, "text": " are going to be our student models so let's quickly create this num students" }, { "end": 1730.6799999999998, "start": 1721.96, "text": " thing right here I'll probably have to have an integer and we'll go with five" }, { "end": 1738.04, "start": 1730.6799999999998, "text": " students for now okay so we're creating five students all of them are not" 
}, { "end": 1743.1599999999999, "start": 1738.04, "text": " pre trained so we're going to are we going to train them from scratch or do" }, { "end": 1748.24, "start": 1743.16, "text": " we want actually to take over the weights we probably don't want to take" }, { "end": 1754.16, "start": 1748.24, "text": " over the weights let's just train them from scratch in a distillation mode I" }, { "end": 1760.64, "start": 1754.16, "text": " have no clue about this stuff by the way okay I guess this concludes this" }, { "end": 1768.48, "start": 1760.64, "text": " already concludes what we what we wanted to do so because this module list what" }, { "end": 1773.92, "start": 1768.48, "text": " can we do with it does anyone know I don't know by the way I'm sorry for the" }, { "end": 1779.44, "start": 1773.92, "text": " switching between the dark and the bright background I don't know how to" }, { "end": 1787, "start": 1779.44, "text": " fix that so pytorch and an module list it would be nice if we could give them" }, { "end": 1796.3600000000001, "start": 1787, "text": " some names right so I guess that's just an iterable right here so probably" }, { "end": 1800.1599999999999, "start": 1796.36, "text": " there's nothing that we can do to give them proper names or we'd have to hack" }, { "end": 1806.3999999999999, "start": 1800.1599999999999, "text": " around and I don't want to do that so I guess we can just check if that actually" }, { "end": 1817.3999999999999, "start": 1806.3999999999999, "text": " computes until here so let's check it out let's try the ensemble it doesn't" }, { "end": 1823.4799999999998, "start": 1817.3999999999999, "text": " data set not found or corrupted okay so what we'll have to do is we'll have to" }, { "end": 1829.08, "start": 1823.48, "text": " implement have to change this data directory right here so the data deer" }, { "end": 1836.52, "start": 1829.08, "text": " is going to be OS this" }, { "end": 1849.52, "start": 1838.04, "text": " whatever my C for 10 directory is no such file directory logs okay so logs" }, { "end": 1853.92, "start": 1849.52, "text": " doesn't exist so let's actually make it" }, { "end": 1868.84, "start": 1859, "text": " still no such file a directory logs why why doesn't it make it no such file a" }, { "end": 1879.2, "start": 1868.84, "text": " directory logs ah okay we need to ignore errors here and we're good okay so it" }, { "end": 1885.4, "start": 1879.2, "text": " computes until the point you probably you probably can't see that right I guess" }, { "end": 1891.4, "start": 1885.4, "text": " now you can see it let's check yeah now you can see it all right so where are we" }, { "end": 1898.32, "start": 1891.4, "text": " we are at the point right here in our module after we've created the teacher" }, { "end": 1905.18, "start": 1898.32, "text": " and the students so if we look at self technically we should be able to see" }, { "end": 1912.8400000000001, "start": 1905.18, "text": " right here a whole bunch of resin at 18s whole bunch so here you can see the" }, { "end": 1919.6000000000001, "start": 1912.8400000000001, "text": " teacher model right and I'm going to guess you can see layer 4 and here you" }, { "end": 1922.6000000000001, "start": 1919.6000000000001, "text": " can see the student models so the student models are going to be in a" }, { "end": 1927.4, "start": 1922.6000000000001, "text": " whole list of models and now we're going to train them so since they're" }, { "end": 1930.68, "start": 1927.4, "text": " initialized differently our hope 
is going to be that they're sort of going" }, { "end": 1934.8400000000001, "start": 1930.68, "text": " to end up at different places we're going to train them with the same like" }, { "end": 1941.36, "start": 1934.84, "text": " we're going to be really really stupid about this okay all right so let's be" }, { "end": 1948.32, "start": 1941.36, "text": " really stupid about it so what what are we gonna have to change here is our" }, { "end": 1953.6799999999998, "start": 1948.32, "text": " training step and our training step is actually fine we'll simply forward we'll" }, { "end": 1959.32, "start": 1953.6799999999998, "text": " get a loss from that and then we are going to return that and that's going to" }, { "end": 1965.32, "start": 1959.32, "text": " be backpropped so in our optimizer wherever we initialize our optimizer we" }, { "end": 1970.2, "start": 1965.32, "text": " should probably give it the parameters that are only the student model" }, { "end": 1979.6, "start": 1970.2, "text": " parameters right not the teacher model parameters so it should only train the" }, { "end": 1986.9399999999998, "start": 1979.6, "text": " student models okay and even even like that we should probably always set the" }, { "end": 1995.52, "start": 1986.94, "text": " teacher model in eval mode but we'll do that in the forward step right here so" }, { "end": 2001.1000000000001, "start": 1995.52, "text": " in the forward step we get images and labels and here it runs it just forward" }, { "end": 2005.4, "start": 2001.1000000000001, "text": " through the model we want to change that we actually want to have teacher" }, { "end": 2011.8400000000001, "start": 2005.4, "text": " predictions which we're going to have the teacher model we're going to forward" }, { "end": 2016.48, "start": 2011.8400000000001, "text": " this through the teacher models now the criterion I'm going to guess is a cross" }, { "end": 2020.84, "start": 2016.48, "text": " entropy so the predictions here are actually going to be logits right and" }, { "end": 2030.6, "start": 2020.84, "text": " this is this is good except that what we want to do is have a distribution" }, { "end": 2036.6, "start": 2030.6, "text": " over labels so after the teacher here runs through and let's put a break point" }, { "end": 2042.1200000000001, "start": 2036.6, "text": " right here and actually look at it I find it's always easy if you go and just" }, { "end": 2050.16, "start": 2042.12, "text": " run until the point where you are at the code and then you can just look at stuff" }, { "end": 2056.68, "start": 2050.16, "text": " so here there's oh there's a validation sanity check okay probably don't want" }, { "end": 2062, "start": 2056.68, "text": " that and now we have the break right here and now we can look at teacher" }, { "end": 2069.24, "start": 2062, "text": " predictions dot shape so that's a batch size times 10 and if we look at it I'm" }, { "end": 2072.08, "start": 2069.24, "text": " going to guess there's some negative numbers in there so that's not going to" }, { "end": 2078.08, "start": 2072.08, "text": " be that those are going to be logits now we want them that to be a softmax over" }, { "end": 2082.64, "start": 2078.08, "text": " the last dimension and that's going to be of the same shape but of course now" }, { "end": 2086.72, "start": 2082.64, "text": " we're going to have a proper distribution so if we sum over the last" }, { "end": 2091.2, "start": 2086.72, "text": " dimension you should see a bunch of ones all right so the teacher 
predictions are" }, { "end": 2101.8399999999997, "start": 2091.2, "text": " going to be soft max over the last dimension and since we since we don't" }, { "end": 2107.6, "start": 2101.8399999999997, "text": " want to back prop through the teacher we can do this in an environment of no grad" }, { "end": 2116.4399999999996, "start": 2107.6, "text": " right here so we have that with not being stupid and we also set the teacher" }, { "end": 2131.68, "start": 2116.44, "text": " model into eval mode so I guess that does it set train no that should do it" }, { "end": 2144.52, "start": 2131.68, "text": " I have no idea yeah let's let's run it again we could have done that there okay" }, { "end": 2150.28, "start": 2144.52, "text": " so so far so good so we have the teacher predictions now what we need to do is" }, { "end": 2156.68, "start": 2150.28, "text": " run them through the student and use them as labels so we'll go for student" }, { "end": 2167.12, "start": 2156.68, "text": " in student models we'll go student forward or we simply run the images" }, { "end": 2179.8399999999997, "start": 2167.12, "text": " through that and that give us the logits and then we use our loss function on the" }, { "end": 2188.6, "start": 2179.8399999999997, "text": " logits and that not the labels but the teacher predictions right so we never" }, { "end": 2195.68, "start": 2188.6, "text": " actually use the labels here as you can see and that's going to be the student" }, { "end": 2206.44, "start": 2195.68, "text": " loss and now we have a bunch of losses and we're going to append that" }, { "end": 2221.56, "start": 2206.44, "text": " ah nope dot like this and our loss is simply going to be the sum of all the" }, { "end": 2229.56, "start": 2221.56, "text": " student losses not even the average I guess we could" }, { "end": 2237, "start": 2230.7999999999997, "text": " losses I guess we could make it the average just so if we change the number" }, { "end": 2244.32, "start": 2237, "text": " of students we'll get some kind of some sort of a better sense of the" }, { "end": 2253.88, "start": 2244.32, "text": " actual numbers what what what okay I think over here we're good yeah so our" }, { "end": 2262.6800000000003, "start": 2253.88, "text": " teacher model is not in training mode but our student models hopefully are in" }, { "end": 2267.7200000000003, "start": 2262.6800000000003, "text": " training mode no is this the eval pass I guess this is the eval pass this is the" }, { "end": 2273.32, "start": 2267.7200000000003, "text": " validation sanity check pass okay so this is going to be our loss and our" }, { "end": 2281.1600000000003, "start": 2273.32, "text": " accuracy now right so okay what's going to be our accuracy our accuracy is going" }, { "end": 2285.96, "start": 2281.1600000000003, "text": " to be we have these student losses all of them and what we are going to do is" }, { "end": 2293.6800000000003, "start": 2285.96, "text": " we're simply going to take the maximum prediction across the students pretty" }, { "end": 2309.52, "start": 2293.68, "text": " easy pretty easy but we need to collect the log it's so come on so we'll also" }, { "end": 2318.6, "start": 2309.52, "text": " have the log it's append the student log it's okay so we have a whole bunch of" }, { "end": 2325.64, "start": 2318.6, "text": " log it's right here and we'll get some predictions out of that now the question" }, { "end": 2330.56, "start": 2325.64, "text": " is do we want to simply take the mode or do we actually want to run a softmax" 
}, { "end": 2337.8399999999997, "start": 2330.56, "text": " over each and then take the average prediction I'm not duper super sure but" }, { "end": 2344.08, "start": 2337.8399999999997, "text": " we can try to do it in different different ways so right now we might just" }, { "end": 2351.12, "start": 2344.08, "text": " want to take the maybe the average log it and then run a softmax on top of that" }, { "end": 2357.68, "start": 2351.12, "text": " because I'm gonna guess the log it's our outputs of a linear layer so they might" }, { "end": 2362.2, "start": 2357.68, "text": " behave more in a linear fashion than if we were to average the actual" }, { "end": 2374.06, "start": 2362.2, "text": " probabilities that come out right maybe let's let's do this okay so we'll go" }, { "end": 2378.92, "start": 2374.06, "text": " we'll take these log it's they're all and we need to somehow concatenate" }, { "end": 2389.64, "start": 2378.92, "text": " those or stack them so how are we gonna stack them so they're 256 their batch" }, { "end": 2395.7599999999998, "start": 2389.64, "text": " size by number of classes so we'll just stack them at dimension zero I guess" }, { "end": 2402.96, "start": 2395.7599999999998, "text": " that's fine and then we are going to mean also across dimension zero so those" }, { "end": 2409.52, "start": 2402.96, "text": " are going to be our log it's our final log it's and then our predictions are" }, { "end": 2422.36, "start": 2409.52, "text": " going to be the argmax of the log it's in the last dimension yep that should" }, { "end": 2436.88, "start": 2422.36, "text": " be pretty straightforward I guess that's it easy as that yes the rest here should" }, { "end": 2444.1200000000003, "start": 2436.88, "text": " just do by itself and I'm going to go ahead and run give this another run and" }, { "end": 2450.92, "start": 2444.1200000000003, "text": " see where we run into problems can't really see how this could ever go wrong" }, { "end": 2461.16, "start": 2450.92, "text": " we'll just take everything over okay we actually got a problem 1d target tensor" }, { "end": 2466.4, "start": 2461.16, "text": " expected multi-target not supported so the cross entropy loss in pytorch does" }, { "end": 2472.96, "start": 2466.4, "text": " not support that let's let's give it a shot make this a little bigger for you" }, { "end": 2485.68, "start": 2472.96, "text": " and let's go for the cross P loss I can't type today so here we have the" }, { "end": 2490.86, "start": 2485.68, "text": " cross entropy loss and the cross entropy loss is useful when training cross" }, { "end": 2496.36, "start": 2490.86, "text": " because problem with the classes yada yada yada wait should be one that okay" }, { "end": 2506.4, "start": 2496.36, "text": " criterion expects a class index as the target okay so what we need is like a" }, { "end": 2513.1, "start": 2506.4, "text": " soft loss right we don't need this cross entropy loss we actually want we want to" }, { "end": 2520.46, "start": 2513.1, "text": " have soft targets so what do we do we want to do I think the cross entropy" }, { "end": 2527.52, "start": 2520.46, "text": " loss is a combination of the here of the log softmax and the NLL loss can we take" }, { "end": 2540.44, "start": 2527.52, "text": " the NLL loss maybe so the NLL loss right here is going to be the target that this" }, { "end": 2552.36, "start": 2540.44, "text": " loss expect should be a class index no okay that's not good so next let's go do" }, { "end": 2561.32, "start": 2552.36, "text": " 
we have we somehow need a soft cross entropy loss let's search for that" }, { "end": 2577.0800000000004, "start": 2561.32, "text": " pytorch soft cross entropy soft classes I guess people do that kind of stuff so" }, { "end": 2584.7200000000003, "start": 2578.76, "text": " the problem with these kind of losses is that what you do what you have to do is" }, { "end": 2593.9599999999996, "start": 2584.72, "text": " kind of protect yourself against against numerical instabilities right so what we" }, { "end": 2600.7599999999998, "start": 2593.9599999999996, "text": " want to do is find a function that does this for us I guess if we do the" }, { "end": 2606.68, "start": 2600.7599999999998, "text": " log softmax that should take care of it for us okay this is" }, { "end": 2622.16, "start": 2606.68, "text": " TensorFlow okay following thread cross entropy loss I guess people just do really the" }, { "end": 2634.96, "start": 2622.16, "text": " log softmax and then do that and we should be fine with this okay thanks" }, { "end": 2645.96, "start": 2634.96, "text": " okay Frank yeah maybe maybe this has advanced since then so we can give like" }, { "end": 2651.2, "start": 2645.96, "text": " a last look at this and this is a bit too big I'm sorry your eyes are gonna" }, { "end": 2662.16, "start": 2651.2, "text": " have to suffer and we're going to look at loss functions and we're going to" }, { "end": 2673.2799999999997, "start": 2662.16, "text": " just look through them multi label soft margin loss hmm multi label we don't" }, { "end": 2684.2799999999997, "start": 2673.2799999999997, "text": " really want multi label right we want this but not with the targets okay I" }, { "end": 2689.72, "start": 2684.2799999999997, "text": " guess we're just gonna have to write this ourselves so ultimately what is the" }, { "end": 2695.24, "start": 2689.72, "text": " cross entropy that cross entropy is simply the probability of the true label" }, { "end": 2702.9599999999996, "start": 2695.24, "text": " times the log probability of the wrong or of the predicted label yeah as you" }, { "end": 2707.64, "start": 2702.9599999999996, "text": " see right here so we are going to simply multiply target times the log" }, { "end": 2716.66, "start": 2707.64, "text": " probability of the predicted label and then sum and take that mean across" }, { "end": 2727.52, "start": 2716.66, "text": " the batch I guess yeah that should do we can implement this let's do it so this" }, { "end": 2733.2799999999997, "start": 2727.52, "text": " criterion right here is going to be our loss function and that's only used once" }, { "end": 2747.88, "start": 2733.28, "text": " so what we can do is going to be a function so we're going to take student" }, { "end": 2756.5600000000004, "start": 2747.88, "text": " logits and we're going to take teacher probabilities okay so how's that gonna" }, { "end": 2763.68, "start": 2756.56, "text": " work out we're going to do the log softmax from the student logits so and then" }, { "end": 2776.12, "start": 2763.68, "text": " dot does that exist log softmax functional okay we need functional and" }, { "end": 2784.12, "start": 2776.12, "text": " student logits of that dimension so now we have properly normalized student" }, { "end": 2790.7599999999998, "start": 2784.12, "text": " logits so that's going to be student log probs and then what we want to do is" }, { "end": 2798.56, "start": 2790.7599999999998, "text": " simply multiply the teacher probs times the student log probs and the 
negative" }, { "end": 2806.7999999999997, "start": 2798.56, "text": " of that is going to be our loss the question is do we want to sum that I" }, { "end": 2819.84, "start": 2806.8, "text": " guess across this dimension or mean it I guess the sum sum should do all right" }, { "end": 2829.0800000000004, "start": 2819.84, "text": " this is it easy as that why have we searched for so long so the criterion we" }, { "end": 2839.04, "start": 2829.08, "text": " can simply replace that now by our loss function cool so let's run it again" }, { "end": 2850.44, "start": 2841.72, "text": " yada yada yada okay so I need to check maybe we should have taken like a smaller" }, { "end": 2854.7999999999997, "start": 2850.44, "text": " model it sometimes pays off to you know start with a really small model small" }, { "end": 2863.44, "start": 2854.8, "text": " model just so you can you can do these kind of things fast so here we have" }, { "end": 2872.32, "start": 2863.44, "text": " dimension out of range to do okay which is where is that in forward in line 78" }, { "end": 2884.0800000000004, "start": 2872.32, "text": " let's go there line 78 okay here the max is not going to be as much fun let's" }, { "end": 2889.2, "start": 2884.08, "text": " go there I think this is some of these things change over time in pytorch so" }, { "end": 2896.3199999999997, "start": 2889.2, "text": " this code might be written when you know so what we have is we have a max over" }, { "end": 2903.92, "start": 2896.3199999999997, "text": " the predictions and predictions oh it's already an arg max so I guess we can" }, { "end": 2910.52, "start": 2903.92, "text": " remove this all whether or not that agrees with labels dot data we don't" }, { "end": 2920.84, "start": 2910.52, "text": " need any of that dot float and we also don't need that we so the accuracy is" }, { "end": 2929.48, "start": 2920.84, "text": " simply going to be the mean of this no I guess so we're here can't we just do" }, { "end": 2946.56, "start": 2929.48, "text": " predictions equals labels yes and we want the sum of that actually we want" }, { "end": 2953.4, "start": 2946.56, "text": " the float first and then we want the mean yeah that seems reasonable so let's" }, { "end": 2967.2000000000003, "start": 2953.4, "text": " do it like this yeah float mean perfect how could this be any more easy but this" }, { "end": 2976.36, "start": 2967.2000000000003, "text": " right here is all of it so validation step accuracy corrects we'll just look" }, { "end": 2982.36, "start": 2976.36, "text": " at it once we've done it and what I do is usually just run it until it doesn't" }, { "end": 2988.4, "start": 2982.36, "text": " give me any mistakes anymore and then I know I sort of have succeeded" }, { "end": 3031.32, "start": 3012.36, "text": " okay we're pretty close I feel so it says grad can implicitly create it only" }, { "end": 3037.08, "start": 3031.32, "text": " for scalar outputs which probably means our loss function is not a scalar so" }, { "end": 3042.6, "start": 3037.08, "text": " when we return the loss right here here we have the sum of the losses divided by" }, { "end": 3055.36, "start": 3042.6, "text": " the length of the losses let's go here and check out what's up with that so" }, { "end": 3061.72, "start": 3055.36, "text": " what will do I see this loss function here will output basically one loss for" }, { "end": 3071.2, "start": 3061.72, "text": " each data point so what we need to do I guess is call mean on this or some when" }, { "end": 3077.04, "start": 
3071.2, "text": " they created the criterion in the original one now we've thrown it away" }, { "end": 3087.3999999999996, "start": 3077.48, "text": " look at the git diff right here so I guess this reduces how does this reduce" }, { "end": 3097.1600000000003, "start": 3087.4, "text": " the cross entropy loss when we don't do anything cross entropy loss reduces" }, { "end": 3106.4, "start": 3097.1600000000003, "text": " reduction mean okay so let's reduce with the mean so if we call loss then after" }, { "end": 3112.32, "start": 3106.4, "text": " that we should call mean and then here I'm not so sure anymore we should divide" }, { "end": 3117.12, "start": 3112.32, "text": " here because the learning rate is kind of tuned to the original loss size so I" }, { "end": 3124.04, "start": 3117.12, "text": " guess we'll be content for now with summing up these things and over here I" }, { "end": 3132.56, "start": 3124.04, "text": " guess we've we've solved that right no losses yeah see our losses is going to be" }, { "end": 3140.92, "start": 3132.56, "text": " an entire tensor and now we just fixed that right now okay so let's try it" }, { "end": 3149.32, "start": 3140.92, "text": " again in the meantime what can we do we can so we already take care of our GPU" }, { "end": 3155.36, "start": 3149.32, "text": " we take care of the logs one thing to do when with respect to this download" }, { "end": 3163.44, "start": 3155.36, "text": " stuff right here is you know if you have a server or something and you let a lot" }, { "end": 3167.2000000000003, "start": 3163.44, "text": " of things run in parallel what you want to do is make sure they don't all" }, { "end": 3172.7599999999998, "start": 3167.2, "text": " download the same stuff at the same time that's pretty bad so what you want to do" }, { "end": 3177.3599999999997, "start": 3172.7599999999998, "text": " is ideally have some sort of lock such that they coordinate and I usually use a" }, { "end": 3188.3599999999997, "start": 3177.3599999999997, "text": " file lock for this so I'm gonna create that right here from import the file" }, { "end": 3197.76, "start": 3188.36, "text": " lock and then I simply create the lock let's go file lock and here you have to" }, { "end": 3211.88, "start": 3197.76, "text": " input a file so sometimes like yeah data lock I don't know you just pick some" }, { "end": 3218.8, "start": 3211.88, "text": " file and that's the file that these these processes are going to sync on and" }, { "end": 3231.2400000000002, "start": 3218.8, "text": " then once you do this you simply wrap all of it in a with lock so only one at" }, { "end": 3242.8799999999997, "start": 3231.24, "text": " a time can go in in this function all right so that's that that should make us" }, { "end": 3250, "start": 3242.8799999999997, "text": " safe and right here we're now training this is excellent we are training the" }, { "end": 3257.8399999999997, "start": 3250, "text": " students now we need to do that on an actual GPU so I have multiple tools to" }, { "end": 3265.76, "start": 3257.84, "text": " ship this to a GPU so first of all I can try to ship this to and try to ship this" }, { "end": 3274.3, "start": 3265.76, "text": " to a let's say to to one GPU so the way you do that is first of all I want some" }, { "end": 3279.32, "start": 3274.3, "text": " sort of unbuffered version of Python and then I have this do I even have this" }, { "end": 3295.96, "start": 3279.32, "text": " tool I do have the tool okay so I'm gonna call one of our servers and we" }, { 
"end": 3302.96, "start": 3295.96, "text": " don't know what's going on okay so cannot import name seed everything from" }, { "end": 3309.96, "start": 3302.96, "text": " pytorch lightning seed everything is that some kind of new thing in pytorch" }, { "end": 3329.44, "start": 3309.96, "text": " lightning I guess I have it apparently so here seed everything with zero why I" }, { "end": 3337, "start": 3329.44, "text": " don't need that we are running without any seed here we are being really cool" }, { "end": 3348.4, "start": 3337, "text": " okay next mistake cannot okay still same mistake of course since we don't yep" }, { "end": 3358.44, "start": 3352.36, "text": " next mistake we'll just go through the mistake learning rate late learning rate" }, { "end": 3365.2400000000002, "start": 3358.44, "text": " loggers so I guess we need to update pytorch lightning on the servers and I'll" }, { "end": 3371.84, "start": 3365.2400000000002, "text": " do that quickly okay I've updated pytorch lightning let's check out whether" }, { "end": 3377, "start": 3371.84, "text": " or not we can actually run something yeah we can run something so this again" }, { "end": 3381.92, "start": 3377, "text": " is now downloading this on the server while this is happening there's another" }, { "end": 3388.52, "start": 3381.92, "text": " thing we can do namely I have sort of sort of made a system to run stuff on" }, { "end": 3396.96, "start": 3388.52, "text": " servers which I like a lot honestly so I guess we can try this out how do I" }, { "end": 3409.96, "start": 3396.96, "text": " hidden oh with I okay so I want to first of all delete this delete this yes I" }, { "end": 3419, "start": 3409.96, "text": " guess delete this cool and this git folder is a bit annoying let's restructure" }, { "end": 3425.4, "start": 3419, "text": " because otherwise it will always ship the git folder with everything up here" }, { "end": 3433.7200000000003, "start": 3425.4, "text": " does it do that yeah I don't like to have the code in a top level so quickly" }, { "end": 3447.52, "start": 3433.72, "text": " make a sources directory move everything in there so move the c410 models into" }, { "end": 3458.4399999999996, "start": 3447.52, "text": " the source directory move all the Python files into the source directory no clear" }, { "end": 3470.88, "start": 3458.44, "text": " the logs and we're much better much better okay much better so what we what" }, { "end": 3476.52, "start": 3470.88, "text": " what we will do is my system requires like a file and I'm just gonna copy one" }, { "end": 3499, "start": 3476.52, "text": " from another project quickly okay so we're back and I copied that over as you" }, { "end": 3505.08, "start": 3499, "text": " can see you basically give hyper parameters and it blasts ever the hyper" }, { "end": 3508.2599999999998, "start": 3505.08, "text": " parameters through in a kind of a random search fashion it's not too" }, { "end": 3520.92, "start": 3508.2599999999998, "text": " sophisticated but we can work with it so 10 that's the file right for 10 train" }, { "end": 3528.72, "start": 3520.92, "text": " ensemble yes that's the file cool and here we're just going to put all of our" }, { "end": 3537.3199999999997, "start": 3528.72, "text": " hyper parameters and that will remove the logs file I'm okay with that but" }, { "end": 3540.56, "start": 3537.3199999999997, "text": " want this" }, { "end": 3554.9199999999996, "start": 3546.4399999999996, "text": " bang cool so what do we want we want basically we just want 
to try it like a" }, { "end": 3562.4, "start": 3554.92, "text": " bunch of times and then see like average across it right that's all maybe we want" }, { "end": 3575.2400000000002, "start": 3562.4, "text": " the the architecture to change so let's say the classifier is a resnet 18 or a" }, { "end": 3587.16, "start": 3575.24, "text": " resnet 34 or a resnet 50 just so we have a bunch of stuff to do okay and this" }, { "end": 3596.3999999999996, "start": 3587.16, "text": " downloaded and is training on GPU hopefully if this works then we can ship" }, { "end": 3601.64, "start": 3596.3999999999996, "text": " this off and we'll make this other hyper parameter that I like to use called rep" }, { "end": 3606.64, "start": 3601.64, "text": " which is just basically a dummy parameter and so I can just repeat the" }, { "end": 3615, "start": 3606.64, "text": " experiment a bunch of times and let's put that in here so this is really that" }, { "end": 3624, "start": 3615, "text": " this has this has no effect except for randomizing it a bit I guess we can try" }, { "end": 3630.04, "start": 3624, "text": " to seed stuff so whenever it says seed everything we'll just seed it with this" }, { "end": 3646, "start": 3630.04, "text": " we'll call it seed no is it here this seed everything yeah so h params dot rep" }, { "end": 3658.12, "start": 3646, "text": " sorry seed cool what this this is doing something nice you can see it so this is" }, { "end": 3668.6, "start": 3658.12, "text": " unbuffered Python output thanks yeah so what other classifiers do we have we can" }, { "end": 3676, "start": 3668.6, "text": " again we can try a bunch of them we can try all of them but why don't we try all" }, { "end": 3684.52, "start": 3676, "text": " of them like this then let's go into this rat file I don't know why I called" }, { "end": 3696.7599999999998, "start": 3684.52, "text": " it rat I just want it like some three-letter thing so yep like this and" }, { "end": 3704.32, "start": 3696.7599999999998, "text": " then we can just take all of that crap and delete it and delete this and those" }, { "end": 3709.52, "start": 3704.32, "text": " are going to be all our models so our classifier is going to consist of all" }, { "end": 3721.36, "start": 3709.52, "text": " of this stuff let's I know I know I know I suck at them don't tell me actually" }, { "end": 3727.16, "start": 3721.36, "text": " tell me I want them tips trying to learn something new like each week in them but" }, { "end": 3734.7599999999998, "start": 3727.16, "text": " it is hard and tend to make myself actually do it so let's go let's go" }, { "end": 3739.32, "start": 3734.7599999999998, "text": " with just one repetition so far if we if we are not sure we can still up the" }, { "end": 3745.7200000000003, "start": 3739.32, "text": " number of repetitions we don't even have the rep right now this is called seed" }, { "end": 3751.7200000000003, "start": 3745.84, "text": " all right so we have different classifiers and what we're going to" }, { "end": 3760.52, "start": 3751.7200000000003, "text": " need we also have this num students right let's go with one with five and" }, { "end": 3773.28, "start": 3760.52, "text": " with 20 so here we got one epoch done and we get a validation loss do we get a" }, { "end": 3780.7599999999998, "start": 3773.28, "text": " validation accuracy validating validating I have no idea we'll cancel" }, { "end": 3788.92, "start": 3780.7599999999998, "text": " this right now and we'll go ahead and just blast this onto our servers and" 
}, { "end": 3798.48, "start": 3788.92, "text": " hopefully that that's gonna work I have no idea is everything fine everything's" }, { "end": 3807.7200000000003, "start": 3798.48, "text": " fine go no what what" }, { "end": 3816.2799999999997, "start": 3807.72, "text": " cool" }, { "end": 3827.08, "start": 3820.2799999999997, "text": " and let me get back to you once this is finished all right we're back so I've" }, { "end": 3833.56, "start": 3827.08, "text": " just written some code here to extract the results of that run and something" }, { "end": 3837.48, "start": 3833.56, "text": " you know it's pretty interesting what came out so in these plots you'll see on" }, { "end": 3842.6, "start": 3837.48, "text": " the x-axis of the number of students in the ensemble remember these students are" }, { "end": 3846.52, "start": 3842.6, "text": " all trained from the same teacher the teacher you can see in orange that's" }, { "end": 3851.92, "start": 3846.52, "text": " just the single teacher for reference you can see that if you have one student" }, { "end": 3857.6, "start": 3851.92, "text": " model it sometimes under performs or sometimes out performs the single" }, { "end": 3863.68, "start": 3857.6, "text": " teacher model but then if you have more student models you can see that there is" }, { "end": 3869.44, "start": 3863.68, "text": " a pretty monotonic relationship so here it's the reason this fit doesn't finish" }, { "end": 3875.64, "start": 3869.44, "text": " here is because there's not enough space on the GPU for that many student models" }, { "end": 3880.68, "start": 3875.64, "text": " but you can see that the relationship here is fairly monotonic here it's a bit" }, { "end": 3887.04, "start": 3880.68, "text": " of a kink so the first idea like this this is really astounding because these" }, { "end": 3890.6, "start": 3887.04, "text": " students have all been trained from that single teacher and they have been" }, { "end": 3894.64, "start": 3890.6, "text": " trained for as long as the teacher has been trained so they don't have more" }, { "end": 3897.68, "start": 3894.64, "text": " compute than the teacher they've been trained from scratch not from some" }, { "end": 3903.08, "start": 3897.68, "text": " checkpoint or from the teacher weights it's simple distillation from the" }, { "end": 3907.3199999999997, "start": 3903.08, "text": " teacher no labels and the students are all in parallel as well so they don't" }, { "end": 3911.2, "start": 3907.3199999999997, "text": " see different data or even different data augmentations it's the exact same" }, { "end": 3915.52, "start": 3911.2, "text": " order of the exact same data points going through all of the students the" }, { "end": 3922.24, "start": 3915.52, "text": " exact same learning rate schedule there's no noise and so on so the first" }, { "end": 3926.04, "start": 3922.24, "text": " thought that came to my mind like something fishy is going on here right" }, { "end": 3933.24, "start": 3926.04, "text": " like this this is this seems like to like come on there's no new information" }, { "end": 3939.24, "start": 3933.24, "text": " here so I thought hey I the teacher the teacher model I've just grabbed them" }, { "end": 3942.84, "start": 3939.24, "text": " from this from this repo from this pre-trained checkpoints and these" }, { "end": 3947.08, "start": 3942.84, "text": " pre-trained checkpoints they are you know the checkpoints that have performed" }, { "end": 3952.88, "start": 3947.08, "text": " best on the validation set so this is 
sort of a sneaky way of how we could" }, { "end": 3956.92, "start": 3952.88, "text": " train on the validation set right because we annotate each data point in" }, { "end": 3960.6400000000003, "start": 3956.92, "text": " the training data set with this checkpoint and the checkpoint has been" }, { "end": 3965.84, "start": 3960.6400000000003, "text": " selected for performing especially well on the validation data set it could" }, { "end": 3971.44, "start": 3965.84, "text": " explain why we get a gain on the validation data set so what I did is I" }, { "end": 3977.84, "start": 3971.44, "text": " retrained all of the teacher models such that I just retrained them for these 100" }, { "end": 3983.2400000000002, "start": 3977.84, "text": " epochs and I just took the last checkpoint and everything's the same" }, { "end": 3987.4, "start": 3983.2400000000002, "text": " the hyper parameters learning rate schedule and so on this is not tuned for" }, { "end": 3991.8, "start": 3987.4, "text": " any particular model and it's pretty like it's pretty standard it's like it's" }, { "end": 4000.2400000000002, "start": 3991.8, "text": " not like 0.12589 it's like 0.01 and 100 epochs or so fairly standard" }, { "end": 4004.8799999999997, "start": 4000.24, "text": " parameters and I just took the last checkpoint I didn't even look at" }, { "end": 4009.4799999999996, "start": 4004.8799999999997, "text": " its performance to make sure that I didn't you know select something that" }, { "end": 4014.6, "start": 4009.4799999999996, "text": " was especially good on the validation data set and the results here you'll see" }, { "end": 4020.8799999999997, "start": 4014.6, "text": " are actually already the results of that run which the previous run it was almost" }, { "end": 4024.8399999999997, "start": 4020.8799999999997, "text": " the same like I was astounded how well it works and then I thought hey maybe" }, { "end": 4031.44, "start": 4024.84, "text": " I'm kind of you know cheating here so I redid it with the teachers that are not" }, { "end": 4038.48, "start": 4031.44, "text": " specifically selected and this is already the results so that's pretty" }, { "end": 4045.48, "start": 4038.48, "text": " cool right so then I wondered what happens if I now if I increase my" }, { "end": 4049.7200000000003, "start": 4045.48, "text": " training amount so I just let this run for more like what if I let the students" }, { "end": 4056.2799999999997, "start": 4049.72, "text": " run for more than the teacher has run again there's no new information here so" }, { "end": 4061.04, "start": 4056.2799999999997, "text": " you can see that the now the okay the green is now the teacher the blue is a" }, { "end": 4068.04, "start": 4061.04, "text": " hundred epochs and the orange is 250 epochs and you can see with that even" }, { "end": 4075.16, "start": 4068.04, "text": " one student will outperform the teacher but many students will outperform even" }, { "end": 4079.6, "start": 4075.16, "text": " more so if you give more compute there's lots of lots of headroom here to" }, { "end": 4084.08, "start": 4079.6, "text": " improve you'll see this here I think this last one with the blue line is just" }, { "end": 4089.48, "start": 4084.08, "text": " a bit of a weird a weird configuration I guess if you were to rerun that that" }, { "end": 4096.64, "start": 4089.48, "text": " would you know fall in line so this is pretty pretty weird right so I have a" }, { "end": 4102.04, "start": 4096.64, "text": " bunch of questions so first of 
all I've searched the literature a bit more and I" }, { "end": 4106.24, "start": 4102.04, "text": " came up with a number of papers that do things like this now usually when you do" }, { "end": 4111.12, "start": 4106.24, "text": " distillation people stress the importance of like how to introduce" }, { "end": 4117.639999999999, "start": 4111.12, "text": " noise like in the noisy student paper or that you really need these data" }, { "end": 4124.36, "start": 4117.639999999999, "text": " augmentations or you know SimCLR v2 uses the self distillation in order to" }, { "end": 4129.5199999999995, "start": 4124.36, "text": " do in in order to label more data so they say it's important that we bring" }, { "end": 4135.719999999999, "start": 4129.5199999999995, "text": " more unlabeled data into the process and so on so all of this it really doesn't" }, { "end": 4140.400000000001, "start": 4135.72, "text": " match right here and especially this focus on we need noise during the" }, { "end": 4144.8, "start": 4140.400000000001, "text": " distillation process to build these ensembles this is also you know if you" }, { "end": 4149.6, "start": 4144.8, "text": " know mean teacher things like this I also found a paper called Born Again" }, { "end": 4154.64, "start": 4149.6, "text": " Neural Networks that does something quite similar but not very simple not" }, { "end": 4159, "start": 4154.64, "text": " like the same where they distill a teacher to the student with the same" }, { "end": 4165.360000000001, "start": 4159, "text": " architecture and then they distill the student again into another student and" }, { "end": 4169.679999999999, "start": 4165.36, "text": " then that into another student and so on and then at the end they say oh we can" }, { "end": 4174.48, "start": 4169.679999999999, "text": " also build an ensemble but sometimes their ensembles outperform their you" }, { "end": 4179.28, "start": 4174.48, "text": " know chain of distillation sometimes they don't they don't really focus on" }, { "end": 4183.08, "start": 4179.28, "text": " that part a lot and it's way more complicated like you distill one" }, { "end": 4187.4, "start": 4183.08, "text": " student after another and I also think they they have some introduction of" }, { "end": 4193.62, "start": 4187.4, "text": " variability in the students like like noise or different augmentations and so" }, { "end": 4201.44, "start": 4193.62, "text": " on so this here seems you know really really really simple now I want to know" }, { "end": 4207.72, "start": 4201.44, "text": " this ensemble effect it seems pretty pretty weird right so what gives so the" }, { "end": 4214.28, "start": 4207.72, "text": " first thing we could do is we could say what what does how does this compare to" }, { "end": 4218.82, "start": 4214.28, "text": " an ensemble of teacher models like if we actually were to build an ensemble like" }, { "end": 4225.12, "start": 4218.82, "text": " train five teacher models on you know five five different teacher models it's" }, { "end": 4231.639999999999, "start": 4225.12, "text": " still the same data but reasonably they might be able to learn something more" }, { "end": 4235.12, "start": 4231.639999999999, "text": " from the data if we have five teacher models they might learn different things" }, { "end": 4241, "start": 4235.12, "text": " from the data and therefore if we combine them they might kind of overlap" }, { "end": 4246.599999999999, "start": 4241, "text": " their knowledge and sort of catch where if one doesn't 
generalize in one data" }, { "end": 4250.68, "start": 4246.6, "text": " point the other four can overrule it whereas with these student with these" }, { "end": 4254.96, "start": 4250.68, "text": " self ensembles there's not really a way where we can learn more from data" }, { "end": 4259.84, "start": 4254.96, "text": " because we can only learn from the teacher and the teacher is fixed and has" }, { "end": 4264.200000000001, "start": 4259.84, "text": " seen that much data right so how does this compare so I rewrote some" }, { "end": 4269.84, "start": 4264.200000000001, "text": " code it's just plumbing and I release the code it's linked but it's just" }, { "end": 4274.92, "start": 4269.84, "text": " plumbing don't worry don't worry there are no great thoughts in there it's just" }, { "end": 4280, "start": 4274.92, "text": " plumbing such that my students are not all in parallel so the ensembles are" }, { "end": 4284.88, "start": 4280, "text": " not trained in parallel anymore I train each model individually which means that" }, { "end": 4291.04, "start": 4284.88, "text": " at maximum I have to have two models on the same GPU one teacher and one student" }, { "end": 4297.4400000000005, "start": 4291.04, "text": " so I make sure that the teachers they are trained from scratch and the" }, { "end": 4302.12, "start": 4297.4400000000005, "text": " students they're always trained from the same teacher right so the student" }, { "end": 4306.64, "start": 4302.12, "text": " ensembles will be exactly the same as we have them here that means one teacher is" }, { "end": 4311.72, "start": 4306.64, "text": " responsible for all the students but yeah so okay I'll just show you the" }, { "end": 4319.72, "start": 4311.72, "text": " results right here so if we look at those results you can see that and I've" }, { "end": 4324.16, "start": 4319.72, "text": " done it for a bunch of models right here the blue line is the ensemble of" }, { "end": 4328.36, "start": 4324.16, "text": " teachers and here on the x-axis you see the number of models and now since I'm" }, { "end": 4334.92, "start": 4328.36, "text": " not training everything on the same GPU but I recombine later that that" }, { "end": 4339.639999999999, "start": 4334.92, "text": " basically means that I have now the ability to train up to 10 models or" }, { "end": 4346.24, "start": 4339.639999999999, "text": " actually however many I want and the only real trick in the code is that when" }, { "end": 4351.24, "start": 4346.24, "text": " I evaluate one of these ensembles what I do is I load a mini batch and then I" }, { "end": 4356.4, "start": 4351.24, "text": " basically load the first checkpoint run the forward pass load the second" }, { "end": 4359.879999999999, "start": 4356.4, "text": " checkpoint run a forward pass load the third checkpoint run a forward pass I do" }, { "end": 4363.5599999999995, "start": 4359.879999999999, "text": " this for all the checkpoints until I go to the next mini batch but that's just" }, { "end": 4370.08, "start": 4363.5599999999995, "text": " for evaluating right it just seemed easiest with the code that I had so you" }, { "end": 4374.719999999999, "start": 4370.08, "text": " can see right here that the there is a significant like this is almost" }, { "end": 4380.719999999999, "start": 4374.719999999999, "text": " overlapping right here for most models there sometimes the student wins" }, { "end": 4386.12, "start": 4380.719999999999, "text": " sometimes the teacher wins so the teacher ensemble wins now 
remember the" }, { "end": 4391.5199999999995, "start": 4386.12, "text": " teachers are trained on you know ten times as much data right here but it's" }, { "end": 4395.76, "start": 4391.5199999999995, "text": " always the same data but still they have the opportunity to learn ten times as" }, { "end": 4399.68, "start": 4395.76, "text": " much information from the data whereas the students they're all distilled from" }, { "end": 4405.88, "start": 4399.68, "text": " that same teacher without any noise any augmentate any augmentations except for" }, { "end": 4411.36, "start": 4405.88, "text": " the augmentations that you use during training anyway and I've done this for a" }, { "end": 4417.679999999999, "start": 4411.36, "text": " hundred epochs and I've done this for 250 is this already 250 I think that was" }, { "end": 4424.28, "start": 4417.679999999999, "text": " a hundred I just put that there nope okay yeah that was a hundred epochs but" }, { "end": 4430.96, "start": 4424.28, "text": " you'll see the 250 epoch plots they look very much the same okay they are" }, { "end": 4437.46, "start": 4430.96, "text": " just a bit better if you train for 250 epochs now interestingly okay here's the" }, { "end": 4445.4, "start": 4437.46, "text": " interesting part about the 250 epochs the student is still distilled from a" }, { "end": 4452.4800000000005, "start": 4445.4, "text": " teacher model that has been trained for a hundred epochs so all of this all of" }, { "end": 4457.28, "start": 4452.4800000000005, "text": " this makes no sense to me right the student is still distilled from the" }, { "end": 4463.8, "start": 4457.28, "text": " hundred epoch teacher model yet if you train the student for 250 epochs in" }, { "end": 4469, "start": 4463.8, "text": " self distillation and then build an ensemble of these students from that" }, { "end": 4474.08, "start": 4469, "text": " same teacher model and you compare that to an ensemble of teachers that have all" }, { "end": 4482.28, "start": 4474.08, "text": " been trained for longer for 250 epochs which you know should out it the 250" }, { "end": 4487.88, "start": 4482.28, "text": " epochs generally outperforms the hundred epochs models still they are the same" }, { "end": 4495, "start": 4487.88, "text": " this is this is pretty crazy results I I think and sort of my conclusion from" }, { "end": 4502.12, "start": 4495, "text": " this is that the ensemble effect right here is not a function of learning of" }, { "end": 4507.88, "start": 4502.12, "text": " extracting more information from the data the ensemble effect might actually" }, { "end": 4512.4400000000005, "start": 4507.88, "text": " be have something to do with the function landscape itself and kind of" }, { "end": 4517.56, "start": 4512.4400000000005, "text": " exploring different minima of the of the same function not of the same function" }, { "end": 4523.4400000000005, "start": 4517.56, "text": " but exploring different functions to describe the same phenomena and I've" }, { "end": 4528.080000000001, "start": 4523.4400000000005, "text": " also found a paper that explains the lost landscape of deep ensembles and I" }, { "end": 4532.56, "start": 4528.080000000001, "text": " will make a video on that maybe it's out already maybe it will be out after you" }, { "end": 4538.76, "start": 4532.56, "text": " see this one I I haven't decided yet which which order I'm going to release" }, { "end": 4544.280000000001, "start": 4538.76, "text": " things but this here I it's it's pretty interesting and we need 
like a name so" }, { "end": 4551.24, "start": 4544.28, "text": " self self ensembles are already a thing but they are always with noise and stuff" }, { "end": 4556.48, "start": 4551.24, "text": " like this so let's call them something like plain self ensembles but that that" }, { "end": 4562.599999999999, "start": 4556.48, "text": " that sounds like a good name plain self ensembles the act of self distillation a" }, { "end": 4567.44, "start": 4562.599999999999, "text": " single model into multiple models without any noise any augmentations" }, { "end": 4573.16, "start": 4567.44, "text": " anything just you run as if you were to train the model itself and then you" }, { "end": 4579.16, "start": 4573.16, "text": " build an ensemble of these models by simply averaging the log it's plain" }, { "end": 4585.5599999999995, "start": 4579.16, "text": " self ensembles alright so the plan from here is to check on like at least one" }, { "end": 4591.28, "start": 4585.5599999999995, "text": " other data set you know these these models I appreciate that I could get" }, { "end": 4596.92, "start": 4591.28, "text": " them pre trained but they're just the image net models and then kind of let" }, { "end": 4603.24, "start": 4596.92, "text": " run on C for 10 so there's no kind of guarantee that these have been you know" }, { "end": 4609.16, "start": 4603.24, "text": " tuned or anything that the learning rates or whatnot so I want to take like" }, { "end": 4614.4400000000005, "start": 4609.16, "text": " an image net model you still make sure that I don't use any like hidden" }, { "end": 4620, "start": 4614.4400000000005, "text": " information where I could cheat on the validation set but try this on at least" }, { "end": 4625.58, "start": 4620, "text": " one thing and see if that works as well if we can sort of push image net" }, { "end": 4632.96, "start": 4625.58, "text": " performance simply by doing this trick so that's the plan for now and I have" }, { "end": 4638.5599999999995, "start": 4632.96, "text": " some other ideas but I just wanted to let you know and this is sort of how" }, { "end": 4645.88, "start": 4638.5599999999995, "text": " research works I guess you have a dumb idea and it turns out to work and then" }, { "end": 4651.08, "start": 4645.88, "text": " you go on and still probably probably there is not maybe too much interesting" }, { "end": 4655.12, "start": 4651.08, "text": " things here maybe it doesn't work on image net because these models are just" }, { "end": 4660.88, "start": 4655.12, "text": " under train and this somehow made them better somehow or regularize them" }, { "end": 4665.28, "start": 4660.88, "text": " somehow that usually doesn't work there's so much that can go wrong still" }, { "end": 4673.76, "start": 4665.28, "text": " so but yeah that was it and I invite you to like check out other papers in this" }, { "end": 4679.84, "start": 4673.76, "text": " space if you want it's a pretty interesting space and with that I don't" }, { "end": 4686.2, "start": 4679.84, "text": " have much more to say yeah I hope you enjoyed this let me know what you think" }, { "end": 4692.360000000001, "start": 4686.2, "text": " of like research implementation or research process videos like this I'm" }, { "end": 4697.4400000000005, "start": 4692.360000000001, "text": " not sure what people expect like I can't make this into five minute video of like" }, { "end": 4702.04, "start": 4697.4400000000005, "text": " whoo I discovered something because then you know there's no clue of what's" }, { 
"end": 4709.04, "start": 4702.04, "text": " what's happening but maybe like an hour or so is also too long I'm not sure yeah" }, { "end": 4714.32, "start": 4709.04, "text": " let me know what you think and I'll see you next time bye" } ]
qFRfnIRMNlk
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
SpineNet: Learning Scale-Permuted Backbone for Recognition and Localization (Paper Explained)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "vision", "recognition", "localization", "resnet", "resnet50", "fpn", "backbone", "permuation", "upsampling", "stride", "convolution", "convolutional neural network", "google", "spine", "spine net", "imagenet", "coco", "segmentation", "bounding box", "skip connections", "residual", "bottleneck" ]
#machinelearning #ai #google The high-level architecture of CNNs has not really changed over the years. We tend to build high-resolution low-dimensional layers first, followed by ever more coarse, but deep layers. This paper challenges this decades-old heuristic and uses neural architecture search to find an alternative, called SpineNet that employs multiple rounds of re-scaling and long-range skip connections. OUTLINE: 0:00 - Intro & Overview 1:00 - Problem Statement 2:30 - The Problem with Current Architectures 8:20 - Scale-Permuted Networks 11:40 - Neural Architecture Search 14:00 - Up- and Downsampling 19:10 - From ResNet to SpineNet 24:20 - Ablations 27:00 - My Idea: Attention Routing for CNNs 29:55 - More Experiments 34:45 - Conclusion & Comments Papers: https://arxiv.org/abs/1912.05027 Code: https://github.com/tensorflow/tpu/tree/master/models/official/detection Abstract: Convolutional neural networks typically encode an input image into a series of intermediate features with decreasing resolutions. While this structure is suited to classification tasks, it does not perform well for tasks requiring simultaneous recognition and localization (e.g., object detection). The encoder-decoder architectures are proposed to resolve this by applying a decoder network onto a backbone model designed for classification tasks. In this paper, we argue encoder-decoder architecture is ineffective in generating strong multi-scale features because of the scale-decreased backbone. We propose SpineNet, a backbone with scale-permuted intermediate features and cross-scale connections that is learned on an object detection task by Neural Architecture Search. Using similar building blocks, SpineNet models outperform ResNet-FPN models by ~3% AP at various scales while using 10-20% fewer FLOPs. In particular, SpineNet-190 achieves 52.5% AP with a Mask R-CNN detector and achieves 52.1% AP with a RetinaNet detector on COCO for a single model without test-time augmentation, significantly outperforms prior art of detectors. SpineNet can transfer to classification tasks, achieving 5% top-1 accuracy improvement on a challenging iNaturalist fine-grained dataset. Code is at: this https URL. Authors: Xianzhi Du, Tsung-Yi Lin, Pengchong Jin, Golnaz Ghiasi, Mingxing Tan, Yin Cui, Quoc V. Le, Xiaodan Song Thumbnail art by Lucas Ferreira Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://discord.gg/4H8xxDF BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Hi there, today we'll look at SpineNet Learning Scale-Permuted Backbone for Recognition and Localization by Xianzhi Du et al of Google Research. On a high level this paper proposes to take current recognition and localization networks which have a CNN backbone, usually something like a ResNet, and switch up the order of the blocks in the ResNet and cross-connect them in a different way, such that they reach a higher accuracy with the new network that has the same amount of parameters or almost the same amount of parameters. They then further modify this network such that it reaches that higher accuracy with less compute than the original network. So if you want to know how it's done, you know, stick around. You can help me by sharing out this video if you liked it, if you didn't like it, leave a comment and tell me what you didn't like, otherwise I have no chance of improving. So that's the deal, okay? Cool. So the task here is recognition and localization as you can see here, which basically means that you have an image and there's stuff on the image. Maybe there's a cat right here and maybe there is some kind of a house right here. And the tasks, these tasks come in various forms, but some of the tasks are to say what's on the image, so in this case cat and house, and also where is it? Now this could be a point, this could be a bounding box, or this could actually be a pixel segmentation. All of these sorts of tasks exist in various forms. What usually is done in these tasks is you want to go in some way through a neural network and the neural network will output the same image again or the same shape. So it will output an image that is of the same shape, such that if this is your input image, I'm just gonna quickly redraw without the labels, if this is your input image then the output image, let's say we're doing bounding boxes, the output would say something like here are bounding boxes and also the output would be cat and house. So these are the two outputs that the neural network would generate. This is some sort of a convolutional neural network because they deal with images fairly well. Now usually when we do image processing and we know this from for example image classification, so if we just have image classification just to classify here, if we just want the outputs cat or house or even just one single thing like in ImageNet, our convolutional neural networks have a particular architecture. Namely what we do is we have the first convolutional neural net, the first layers will take the image and run these convolutional filters across them which gives you the same shape image back but then with time we scale it down. We have a max pooling or a convolution with stride 2 so that the image is only this big anymore and then we have a bunch of further layers and then we scale it down again and so on. Now as we scale it down the number of channels goes up. Of course at the beginning you have three channels for the three colors but then after the first convolution you might have whatever 32 channels right here. This is no longer the original image, this now is of course for each pixel you have a stack of features right you have a stack of features right here because that's what your convolutional layer does and then when you scale it down you have even more feature maps so we tend to scale down the resolution of our feature maps but we tend to increase the number of feature maps right here. The reasoning behind this is if you look at these bounding boxes they don't really...
sorry if you look at the labels right here the fact that there's a cat on the image shouldn't depend on the exact pixel location of the cat right so even if I scale this down a bit I'll still recognize that there's a cat somewhere it can still aggregate that information in fact I could deal with scaling this down successively up to a single pixel and that's ultimately what an image classifier does you simply have a single vector at the end with the features in it and from that you classify. So the reasoning is that as you go through the network you pick up the low level features first like here you pick up the edges and the kind of low level shapes as you go higher through the network your features become more abstract but less localized which means that it's less important where they are and that's why you look at this image through a coarser and coarser segmentation and at the end your segmentation might be something like this. Okay so we have had a lot of success building image classifiers with this reasoning and this is sort of a human heuristic that just has worked well. Now when we do something like this bounding box classification or even per pixel classification all of a sudden it is very important where the things are right it is very important that it's this pixel and this pixel and this pixel and this pixel forming the bounding box because the more accurate you are the better your bounding box classifier you still have this right here this recognition but the localization part we can't just scale down anymore because we need to output something that's of the same size so what people have done is they've gone from this kind of from architecture that scales down because we know that works well we know the downscaling works well so we take that and then we scale up again and there is some reasoning behind this right so that's what we can do because we know this part works very well for extracting high-level features that are not that localized so our reasoning is going to be something like okay we'll force the network through this kind of bottleneck right here we'll force it to learn some high-level features we because otherwise you can just you know kind of remember the individual pixels and that won't work as well we'll force it to remember the high-level we'll force it to remember what a cat is and then it will help in the pixel segmentation to know what a cat is this is very valid assumption but it doesn't need to be the case and so there is one additional thing that these networks usually do is that they have like some skip connections here from the layers that are of the same size to the layers that are of the same size right here to here in order to kind of recover these high-level features because if you only look at an image through the lens of this right here and you're a you have to segment the ear of the cat you know you can only either color an entire pixel or not so you want to gain back some of that some of those high-level features and that's what you do with skip connections and that's why these networks usually look like this now in this work the authors sort of criticize this they say why why are we doing it like this isn't there a better way to do it specifically we want to look at this part right here which is called the backbone so we assume that we have these these output layers that give you at different scales different features and what we have to do is we have to construct a backbone that somehow feeds features either you know through this direct way or 
Now in this work, the authors sort of criticize this. They ask: why are we doing it like this, isn't there a better way? Specifically, we want to look at this part right here, which is called the backbone. We assume we have these output layers that give you different features at different scales, and the job is to construct a backbone that somehow feeds features, either through this direct way or through these connections right here, to the ultimate classifiers. These classifiers are then used to predict the bounding boxes and the output classes, for recognition and localization.

This is an illustration of it. On the left you see a typical backbone, which they call a scale-decreased network: an example of a scale-decreased network on the left versus a scale-permuted network on the right, where the width of a block indicates feature resolution, the height indicates feature dimension, and dotted arrows represent connections to blocks not plotted. Okay, so on the left you have the typical architecture: the width, that is the resolution, starts very high, and as you go through the layers the resolution gets smaller and smaller, while the number of features, indicated by the height, gets higher and higher. That's your standard design. The paper questions whether that's the only option: in principle you could build any sort of backbone. And here they restrict themselves. They say, okay, in order to make it comparable, to be scientifically a bit more rigorous than just building anything, we restrict ourselves to permutations of the original network. So we only allow ourselves to permute the existing blocks: this one goes here, and this one goes here, as you see. That ensures you still have roughly the same amount of parameters. Now, there is a small parameter difference, because for these cross-connections you need to up- and down-sample the feature maps, and sometimes that introduces parameters, but in essence the parameter count stays roughly the same, and then you can really investigate how much you can improve a network simply by rearranging its blocks, because an improvement would be evidence that this scaling-down architecture isn't really the best one.

Okay, so here you can see an example of what they call a scale-permuted network. In a scale-permuted network, you have these blocks on the left, and you're allowed to put them anywhere you want, in any order. So it goes from here down: if these are blocks one, two, three, four, five, then you place the first block, and any block you place is allowed to connect to any block placed before it. And here you can see there are two incoming connections, so we make use of more than one connection: on the left there's always exactly one connection between consecutive blocks, while on the right, up to two blocks are allowed to feed into a given block. Then you're done with that block and place the next one, which again may have two incoming connections, this one here and this one here, and so on. You can also see that there doesn't need to be a single straight path through the network, because there is no connection right here, as you can see.
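To make the search space concrete, here is one hedged way you could represent such a scale-permuted backbone: a permutation of the blocks plus up to two parent connections per placed block. This sampling function is my own illustration of the constraints, not the paper's actual encoding.

```python
# Sample a random point from a scale-permuted search space:
# an ordering of the blocks, and up to two parents per block.
import random

def sample_scale_permuted_backbone(num_blocks, seed=0):
    rng = random.Random(seed)
    order = list(range(num_blocks))
    rng.shuffle(order)                        # the scale permutation
    connections = {}
    for i, block in enumerate(order):
        if i == 0:
            connections[block] = []           # first block takes the stem input
        else:
            # connect to up to two blocks that were placed earlier
            connections[block] = rng.sample(order[:i], k=min(2, i))
    return order, connections

order, connections = sample_scale_permuted_backbone(5)
print(order)        # e.g. [2, 0, 4, 1, 3]
print(connections)  # block id -> list of (at most two) input blocks
```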
So you might be wondering: how do I decide which block goes where, and how do I decide which connections connect where? That is going to be the idea here: neural architecture search. Now, neural architecture search right now is still a fancy way of saying "let's try stuff out". What you do is initialize a reinforcement learning controller that decides on the ordering and on the connections; it has some action space, and you basically let it run. It proposes a couple of architectures, you train all of these architectures and see how well they fare, and that result goes back to the controller as the reward signal. So we can draw it like this: you have an agent which produces the building plan. The agent emits, as an action, a building plan like "big, small, big, small, big", with connections like this and like this and like this. That goes to the environment; the environment simply takes the architecture and trains it, and then, let's say, the validation loss, or the validation accuracy, becomes your reward signal. So you simply train a reinforcement learning agent to solve this particular problem: train for recognition and localization on this particular data set as well as possible, and thereby come up with the best architecture. Which is fancy, and it's a bit better than trying everything out, but it's not much better right now, and it takes a lot of compute to run these experiments, because it takes a lot of iterations, and every iteration consists of fully training one of these networks once. Now you can do something with early stopping and similar tricks, but you get the idea. This is what they propose, and this is how they get better.
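In pseudocode, the outer loop looks something like the following sketch. RandomController and train_and_evaluate are stand-ins I made up: the real controller is an RL policy, and the real evaluation means fully training a detection network, which is exactly what makes this so expensive.

```python
# Skeleton of the architecture-search loop: propose, train, reward, repeat.
import random

class RandomController:
    """Stand-in for the RL controller; here it just samples at random."""
    def sample(self):
        order = list(range(5))
        random.shuffle(order)
        return order
    def update(self, arch, reward):
        pass  # a real controller would take a policy-gradient step here

def train_and_evaluate(arch):
    # placeholder; in the paper this is a full training run followed
    # by measuring a validation metric such as average precision
    return random.random()

def architecture_search(controller, num_iterations=10):
    best_arch, best_reward = None, float("-inf")
    for _ in range(num_iterations):
        arch = controller.sample()           # propose ordering/connections
        reward = train_and_evaluate(arch)    # reward = validation metric
        controller.update(arch, reward)      # learn from the reward signal
        if reward > best_reward:
            best_arch, best_reward = arch, reward
    return best_arch

print(architecture_search(RandomController()))
```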
So there are a number of challenges in this. We said: when you feed a signal, for example, from this layer to this layer, you can see that you have to shrink the resolution and you have to increase the number of features. This was already sort of solved in the original ResNet paper, but they reiterate how they do it here. Basically, we have this layer, and it is connected to these two layers; we said every layer can receive inputs from two layers, and at the very end those inputs are simply added together. So there are two things to fix. First, the number of features: you can see right over here that the number of channels is different from the number of channels in the output, let's say right here. Those are different, and in fact they're different in both inputs. For this there is the method of one-by-one convolutions, introduced in the original ResNet paper: a one-by-one convolution is basically a learned transformation from some number of input channels to some number of output channels, without any actual spatial convolution; it's simply a linear operation that scales the number of feature maps up or down. You can see these one-by-one convolutions employed here in various ways. Because this is fairly compute-intensive, or so they claim, they always first reduce the features. So if we have a number of channels, let's call it C0 (you can see it written very small here), we first go to alpha times C0, and alpha in the default setting is one half. So we always first halve the number of channels before we do this cross-connection, and only at the end go to the number of target channels. This means that if the target has more channels than you currently have, you first go to fewer channels and then to even more channels; if the current block has more channels than the target, it's probably less harmful, because you go to fewer channels and then to even fewer. This is probably one of the things they did to save computation, but you can imagine that it hurts, because in every step where you connect two layers you basically have to throw away, or at least linearly combine, half the features.

Second, the resolution. There are two situations. First situation: your current resolution is higher than the target resolution. In that case you can simply do a convolution with a stride bigger than one. Usually, when you do a convolution on an image, you have overlapping windows such that the result is the same size as the input; but you can also use a bigger stride (I'm over-drawing it a bit here) such that the resulting resolution is smaller. And you can additionally do max pooling right here, which is another way to reduce the resolution of the image. So if we're currently bigger than the target, we can do that. If we are currently smaller than the target, we can upsample, and upsampling can be done by nearest neighbor or things like this; you could also use a learned upsampling, there are various ways. I believe here they use nearest neighbor, but I'm not sure anymore, let's actually check it out... here, "resampling in cross-scale connections": to keep the computational cost of resampling low, they introduce the scaling factor alpha (we had that), and then they use a nearest-neighbor interpolation for upsampling, or a stride-2 3x3 convolution for downsampling. Okay, so it is nearest-neighbor upsampling. Alright, so that's how they bring the feature maps to the correct shapes: either using nearest-neighbor upsampling, or using a stride-2 convolution followed by max pooling.
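Here is a hedged sketch of that resampling recipe; the paper's exact bookkeeping may differ, and the layers are created inline (with random weights) purely to demonstrate the shapes.

```python
# Cross-scale resampling sketch: squeeze channels to alpha*C with a 1x1
# convolution, fix the resolution (nearest-neighbor upsampling, or a
# stride-2 3x3 convolution plus max pooling), then a 1x1 convolution
# to the target channel count. Shapes only; not trained weights.
import torch
import torch.nn as nn
import torch.nn.functional as F

def resample(x, out_channels, scale, alpha=0.5):
    c = x.shape[1]
    mid = max(1, int(alpha * c))
    x = nn.Conv2d(c, mid, kernel_size=1)(x)        # squeeze to alpha * C
    if scale > 1:                                  # target is higher-res
        x = F.interpolate(x, scale_factor=scale, mode="nearest")
    elif scale < 1:                                # target is lower-res
        x = nn.Conv2d(mid, mid, 3, stride=2, padding=1)(x)
        remaining = int(1 / scale) // 2            # how much is left to shrink
        if remaining > 1:
            x = F.max_pool2d(x, kernel_size=remaining, stride=remaining)
    return nn.Conv2d(mid, out_channels, kernel_size=1)(x)  # match target C

a = torch.randn(1, 64, 32, 32)
print(resample(a, 128, scale=0.25).shape)  # torch.Size([1, 128, 8, 8])
print(resample(a, 128, scale=2).shape)     # torch.Size([1, 128, 64, 64])
```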
So what does that give them? They go through several different setups. The first architecture, ResNet-50, is the original one, and remember, we're only talking about the backbone right here. In the original setup you have this ResNet-50 plus FPN, where the FPN provides the output layers, the things that then go on to classify the bounding boxes and the labels and so on. You can see the ResNet-50 continuously getting smaller in resolution and richer in features. Then, this right here is their final thing, where they let the algorithm go completely wild, and you can see it's pretty fuzzy: the RL controller finds this architecture to be the best, the resolution goes continuously down and up and down and up, there are considerable cross-connections between all of these blocks, and the output layers, the red-bordered ones, are built into the network rather than attached next to it; those are now the features used for classification. As an intermediate step, they also consider an architecture where they basically build a smaller ResNet right here and let the algorithm decide on the rest. It still has roughly the same amount of parameters, but it lets them investigate what happens if the fixed downscaling structure is kept at the beginning and only part of the network is handed to the algorithm. And lastly, they consider an architecture where the algorithm again has control over the whole network, but with an additional freedom: it can also decide to change the number of features and the type of block. Here you can see these are all residual blocks, and there are also these so-called bottleneck blocks, simply a different way of building a residual block, also introduced in the original ResNet paper; the controller can switch to those, which can save some computation.

So what does that give you? You can see below that the ResNet-50 is at 37.8 percent average precision. If you liberate the top part and leave it to the algorithm, it's at 39. If you liberate the entire network, it's at 40.7, and remember, these all have roughly the same amount of parameters. And if you additionally let the controller adjust the feature sizes and block types, you get 40.8, which is about the same as before, but this one, I believe... yeah, here we go, comes with about 10% fewer FLOPs. Okay, so that's pretty cool. Though remember: the left thing is made by humans, it's just our heuristic, and the right things are made by RL for these particular data sets. They do find that the result generally also transfers to ImageNet classification, but still, this is somewhat of a "works well for the kind of data we evaluated" result, so I don't know how much I would trust it, and how far we should go in adopting SpineNet-49 as our new backbone for every image task we have remains to be seen.

Before we actually go to the experiments, I want to state my own idea right here, since you now have the general gist. Another quarrel I have with this is the following: in the baseline you always have these single connections, and in the searched networks you always have these double connections, and I've looked through the experiments, and it seems like nowhere do they ablate what it means to only have single connections, or conversely to let the plain ResNet run with double connections, where their controller could not switch the order but only introduce the connections. They might have done this, they have a lot of experiments with different ablations, but I would be interested in what happens when you keep the ResNet ordering and allow two connections per layer: is that then better, or not? So, the importance of scale permutation: that's where they investigate how important it is that you permute the layers, and that turns out to be fairly important. Then the importance of cross-scale connections: that's what they investigate here. They say the cross-scale connections play a crucial role in fusing features at different resolutions throughout the scale-permuted network, and that's the reasoning behind it: we take features from different resolutions, and we can scale up again and scale down again to gain some additional features from the higher resolutions. They study their importance by graph damage: either they remove the short-range connections, or they remove the long-range connections, or they remove both and connect each block to the previous block via a sequential connection. This is done on the fully learned model, the one where the controller had control over ordering and connections, which sits at 40.7 percent: if they delete the short-range connections, it drops to 35, and if they delete only the long-range connections, it drops even more.
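As a small illustration of this ablation protocol, and this is my own reading of it, with a made-up edge format (source block, destination block), graph damage could be expressed like this:

```python
# Graph-damage sketch: drop short-range or long-range connections,
# then rewire each block to its predecessor so the graph stays connected.
def damage_graph(edges, num_blocks, remove="long"):
    def is_long(edge):
        src, dst = edge
        return dst - src > 1                 # skips at least one block
    kept = [e for e in edges if is_long(e) == (remove == "short")]
    kept += [(i - 1, i) for i in range(1, num_blocks)]  # sequential fallback
    return sorted(set(kept))

edges = [(0, 1), (0, 3), (1, 2), (2, 4), (1, 4)]
print(damage_graph(edges, 5, remove="long"))   # only short-range survive
print(damage_graph(edges, 5, remove="short"))  # only long-range survive
```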
So here you can see that these long-range connections, which I guess are connections that skip multiple blocks, tend to be very important. You can make the case that it might be very important to fuse features from different layers, from different resolutions, because these long-range connections tend to be important. Though note: it's one thing to say that if we take the trained model, damage it by removing connections, and then run it, the accuracy drops. That's not entirely the same as saying those connections are important, because you don't really know what happens if you train without them from the start; maybe you could reach just as good an accuracy. So this graph-damage investigation tells you something, but I wouldn't trust it too much. And again, I think they haven't investigated what happens if you keep the ResNet order but allow the connections to be doubled. But you get the general idea of the paper and of what they do.

Now, they do this with architecture search, but here's an idea; I propose the following. You have an image right here, and we are wondering: should we let it go through a layer that's wide with fewer features, should we let it go through a layer that has very many features but is not as wide, where we have to downscale the image, or should we first let it go through something intermediate? Let's see it like this: we're wondering how we should order these blocks. Why can't we do all of them at the same time? Why can't we do this, this, and this, and then in the next layer again do all of them at the same time? And you can already see where this is going, I hope. So you have a routing problem right here, and how do we do routing in modern times in deep learning? With attention. So I propose you have layers with several parallel blocks; let's say these are now your sequence elements, or you can also make them attention heads. The lower-level features are routed to the higher-level features with an attention mechanism, and you do this layer by layer by layer. Because what's the problem here? The problem is that the same connections serve every data point: the search finds these good connections, but all the data points have to go through them, and it might actually be that you need different routing depending on the data point. The found wiring may be good for the average data point, but it could be much better if, whenever there's a cat, you take one path, and whenever there's a dog, you take a different path. So this would allow for that: you basically have the routing parameterized by an attention mechanism. I have no clue how much compute this would take, but it doesn't seem that outrageous, because what's your sequence length here? It's going to be maybe the number of layers, maybe times the number of feature maps; maybe you want different attention heads, so you might replicate some of those. But ultimately I would guess the attention mechanism itself isn't that much of an overhead; maybe the overhead is that you have so many of them in parallel. Yeah, it remains to be seen. That's the idea, you heard it here first.
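Here is a toy sketch of that routing idea; it is entirely speculative, my own illustration of the proposal, not anything from the paper.

```python
# Toy attention router: treat the parallel candidate blocks at one depth
# as a short sequence and let attention decide, per input image, how much
# each block reads from each other block.
import torch
import torch.nn as nn

class AttentionRouter(nn.Module):
    def __init__(self, dim=64, num_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, block_features):
        # block_features: (batch, num_blocks, dim), one pooled feature
        # vector per parallel block at the current depth
        routed, weights = self.attn(block_features, block_features,
                                    block_features)
        return routed, weights   # weights is a data-dependent routing matrix

feats = torch.randn(2, 3, 64)    # 2 images, 3 parallel blocks, dim 64
routed, weights = AttentionRouter()(feats)
print(weights.shape)             # torch.Size([2, 3, 3]): per-image routing
```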
architecture cool and we want to make it bigger but I guess they didn't have the computational resources to run the neural architecture search for bigger networks this is now as about as big as a resonant 50 right but what if you wanted to go to a resonant 100 or a resonant 150 there you you don't have the computational resources do neural architectures imagine this Google has doesn't have the computational resources to do neural architecture search on this thing so this must be expensive or I'm just I have no idea but what they do is they kind of do a trick so here they take the the spine net 49 and they say we build a spine net 60 96 by simply repeating each block twice so all the incoming connections would go to the first block and all the outgoing connections would come from the second block right here you had two in and maybe there's actually no limit to how many outgoing connections you can have and also you can also do this three times which I think is a bit of a cheap way and it kind of defeats the entire purpose right couldn't you make the exact same argument again here that maybe it's helpful to route from this block right here or maybe it's helpful that these don't have the same scale right after one another it just seems but okay so they say we found this good structure and we simply duplicate each block I'm not that big of a fan in any case so they train this and it of course outperforms everything else if you compare with kind of models of the same size so here you compare this spine net 49 to the resnet 50 and you can see there's about the same number of parameters how about it outperforms the resnet 50 pretty much and as you go up the number of parameters here the performance goes up yet again and I believe these dagger ones here are simply trained with a special schedule with here with applying stochastic depth and swish activation for a longer training schedule so you can see that not only do are these spine nets sorry of the number of parameters is here not only are the spine nets slightly smaller than the resnets they do require less flops and they reach better accuracy so you know every everything is a win here yeah so they apply this to these data sets I don't want to go you know too much into into that but in the last part they also apply this to image net so there's image classification where they basically say okay we can just go to our architecture and we can just add up all the output blocks we scale them appropriately and add up all the output blocks right here because these are good features for localization and so on and we can train it to do image classification so all of these go into a big combination classifier that does the 1000 classes of image net image classification and that also works pretty well with this network so they basically argue what they found is sort of a better image image processing network than the resnet 50 and I guess they would argue that from now on you should take this as your backbone for image classification and recognition and so on which it's entirely possible that this works better there's no particular reason why the resnet 50 should work at all right it's just a heuristic but I guess the I it remains to be to be seen whether that's generally true or just in the things they considered so you can see right here the spine net generally improving over the image net which isn't is not stated right here but it does generally improve and you can see as you go higher and higher spine net the the numbers tend to improve as well and 
So they basically argue that what they found is a better image-processing backbone than the ResNet-50, and I guess they would argue that from now on you should take this as your backbone for image classification, recognition, and so on. It's entirely possible that this works better; there's no particular reason why the ResNet-50 layout should be optimal, it's just a heuristic. But it remains to be seen whether that's generally true or just for the things they considered. You can see right here the SpineNets generally improving over the baselines on ImageNet, and as you go to higher and higher SpineNets, the numbers tend to improve as well; these are already pretty respectable numbers for ImageNet. Alright, so this was it for this particular paper. They do have two different of these object detection and recognition data sets, and I invite you to check out the experiments more closely if you're interested in that sort of thing; I was mainly interested in the method of arranging these layers. It seems like a cool engineering project, a cool investigative project; the experiments are done well, and in the end they do get a better model out of it. And if it turns out that this model is a good model, the entire community will be better off. Unfortunately there's no broader impact statement to tell us that also the terrorists will be able to use this for their purposes, but you can imagine that yourself. Alright, that was it for me. Again, leave a comment if you want me to change anything or have suggestions; leave a like if you liked the video; share it out. Bye bye.
[ { "end": 5.16, "start": 0, "text": " Hi there, today we'll look at SpineNet Learning Scale-Permuted Backbone for" }, { "end": 10.96, "start": 5.16, "text": " Recognition and Localization by Xianze Du at Aal of Google Research. On a high" }, { "end": 16.28, "start": 10.96, "text": " level this paper proposes to take current recognition and localization" }, { "end": 20.34, "start": 16.28, "text": " networks which have a CNN backbone, usually something like a ResNet, and" }, { "end": 26.6, "start": 20.34, "text": " switch up the order of the blocks in the ResNet and cross-connect them in a" }, { "end": 31.560000000000002, "start": 26.6, "text": " different way, such that they reach a higher accuracy with the new network" }, { "end": 35.36, "start": 31.560000000000002, "text": " that has the same amount of parameters or almost the same amount of parameters." }, { "end": 39.400000000000006, "start": 35.36, "text": " They then further modify this network such that it reaches that higher" }, { "end": 45.16, "start": 39.400000000000006, "text": " accuracy with less compute than the original network. So if you want to know" }, { "end": 51.160000000000004, "start": 45.16, "text": " how it's done, you know, stick around. You can help me by sharing out this video if" }, { "end": 55.44, "start": 51.160000000000004, "text": " you liked it, if you didn't like it, leave a comment and tell me what you didn't" }, { "end": 61.519999999999996, "start": 55.44, "text": " like, otherwise I have no chance of improving. So that's the deal, okay? Cool." }, { "end": 67.75999999999999, "start": 61.519999999999996, "text": " So the task here is a recognition and localization as you can see here, which" }, { "end": 72.92, "start": 67.75999999999999, "text": " basically means that you have an image and there's stuff on the image. Maybe" }, { "end": 78.2, "start": 72.92, "text": " there's a cat right here and maybe there is some kind of a house right here. And" }, { "end": 84.66, "start": 78.2, "text": " the tasks, these tasks come in various forms, but some of the tasks are to say" }, { "end": 92.47999999999999, "start": 84.66, "text": " what's on the image, so in this case cat and house, and also where is it? Now this" }, { "end": 96.12, "start": 92.47999999999999, "text": " could be a point, this could be a bounding box, or this could actually be a" }, { "end": 103.32, "start": 96.12, "text": " pixel segmentation. All of this sort of tasks exists in various forms. What" }, { "end": 110.75999999999999, "start": 103.32, "text": " usually is done in these tasks is you want to go in some way through a neural" }, { "end": 116.2, "start": 110.76, "text": " network and the neural network will output the same image again or the same" }, { "end": 121.74000000000001, "start": 116.2, "text": " shape. So it will output an image that is of the same shape, that if this is your" }, { "end": 128.04000000000002, "start": 121.74000000000001, "text": " input image, I'm just gonna quickly redraw without the labels, if this is" }, { "end": 132.04000000000002, "start": 128.04000000000002, "text": " your input image then the output image, let's say we're doing bounding boxes, the" }, { "end": 138.12, "start": 132.04000000000002, "text": " output would say something like here are bounding boxes and also the output would" }, { "end": 145.36, "start": 138.12, "text": " be cat and house. So these are the two outputs that the neural network would" }, { "end": 149.56, "start": 145.36, "text": " generate. 
This is some sort of a convolutional neural network because" }, { "end": 156.24, "start": 149.56, "text": " they deal with images fairly well. Now usually when we do image processing and" }, { "end": 160.12, "start": 156.24, "text": " we know this from for example image classification, so if we just have image" }, { "end": 167.36, "start": 160.12, "text": " classification just to classify here, if we just want the outputs cat or house or" }, { "end": 172.32000000000002, "start": 167.36, "text": " even just one single thing like an image net, our convolutional neural networks" }, { "end": 177.48000000000002, "start": 172.32000000000002, "text": " have a particular architecture. Namely what we do is we have the first" }, { "end": 184, "start": 177.48000000000002, "text": " convolutional neural net, the first layers will take the image and run these" }, { "end": 190.56, "start": 184, "text": " convolutional filters across them which gives you the same shape image back but" }, { "end": 195.32000000000002, "start": 190.56, "text": " then with time we scale it down. We have a max pooling or a convolution with" }, { "end": 201.72, "start": 195.32, "text": " stride 2 so that the image is only this big anymore and then we have a bunch of" }, { "end": 207.64, "start": 201.72, "text": " further layers and then we scale it down again and so on. Now as we scale it down" }, { "end": 211.16, "start": 207.64, "text": " the number of channels goes up. Of course at the beginning you have three channels" }, { "end": 214.4, "start": 211.16, "text": " for the three colors but then after the first convolution you might have" }, { "end": 219.72, "start": 214.4, "text": " whatever 32 channels right here. This is no longer the original image, this now is" }, { "end": 225.32, "start": 219.72, "text": " of course for each pixel you have a stack of features right you have a stack" }, { "end": 231.52, "start": 225.32, "text": " of features right here because that's what your convolutional layer does and" }, { "end": 237.16, "start": 231.52, "text": " then when you scale it down you have even more feature maps so we tend to" }, { "end": 243, "start": 237.16, "text": " scale down the resolution of our feature maps but we tend to increase the number" }, { "end": 247.92, "start": 243, "text": " of feature maps right here. The reasoning behind this is if you look at these" }, { "end": 253.83999999999997, "start": 247.92, "text": " bounding boxes they don't really... sorry if you look at the labels" }, { "end": 259.32, "start": 253.83999999999997, "text": " right here the fact that there's a cat on the image shouldn't depend on the exact" }, { "end": 264.36, "start": 259.32, "text": " pixel location of the cat right so even if I scale this down a bit I'll still" }, { "end": 268.68, "start": 264.36, "text": " recognize that there's a cat somewhere it can still aggregate that information" }, { "end": 273.12, "start": 268.68, "text": " in fact I could deal with scaling this down" }, { "end": 278.4, "start": 273.12, "text": " successively up to a single pixel and that's ultimately what an image classifier" }, { "end": 283.2, "start": 278.4, "text": " does you simply have a single vector at the end with the features in it and from" }, { "end": 288.76, "start": 283.2, "text": " that you classify. 
So the reasoning is that as you go through the network you" }, { "end": 294.36, "start": 288.76, "text": " pick up the low level features first like here you pick up the edges and the" }, { "end": 300.76, "start": 294.36, "text": " kind of low level shapes as you go higher through the network your features" }, { "end": 305.52, "start": 300.76, "text": " become more abstract but less localized which means that it's less important" }, { "end": 310.56, "start": 305.52, "text": " where they are and that's why you look at this image through a coarser and" }, { "end": 314.96, "start": 310.56, "text": " coarser segmentation and at the end your segmentation might be something like" }, { "end": 322.28, "start": 314.96, "text": " this. Okay so we have had a lot of success building image classifiers with" }, { "end": 326.03999999999996, "start": 322.28, "text": " this reasoning and this is sort of a human heuristic that just has worked" }, { "end": 333.6, "start": 326.04, "text": " well. Now when we do something like this bounding box classification or even per" }, { "end": 338.8, "start": 333.6, "text": " pixel classification all of a sudden it is very important where the things are" }, { "end": 342.84000000000003, "start": 338.8, "text": " right it is very important that it's this pixel and this pixel and this pixel" }, { "end": 347.04, "start": 342.84000000000003, "text": " and this pixel forming the bounding box because the more accurate you are the" }, { "end": 350.84000000000003, "start": 347.04, "text": " better your bounding box classifier you still have this right here this" }, { "end": 356.4, "start": 350.84, "text": " recognition but the localization part we can't just scale down anymore because" }, { "end": 360.2, "start": 356.4, "text": " we need to output something that's of the same size so what people have done" }, { "end": 366.76, "start": 360.2, "text": " is they've gone from this kind of from architecture that scales down because we" }, { "end": 371.28, "start": 366.76, "text": " know that works well we know the downscaling works well so we take that" }, { "end": 377.71999999999997, "start": 371.28, "text": " and then we scale up again and there is some reasoning behind this right so" }, { "end": 382.72, "start": 377.72, "text": " that's what we can do because we know this part works very well for" }, { "end": 387.32000000000005, "start": 382.72, "text": " extracting high-level features that are not that localized so our reasoning is" }, { "end": 391.48, "start": 387.32000000000005, "text": " going to be something like okay we'll force the network through this kind of" }, { "end": 396.8, "start": 391.48, "text": " bottleneck right here we'll force it to learn some high-level features we because" }, { "end": 400.56, "start": 396.8, "text": " otherwise you can just you know kind of remember the individual pixels and that" }, { "end": 404.44000000000005, "start": 400.56, "text": " won't work as well we'll force it to remember the high-level we'll force it" }, { "end": 410.84, "start": 404.44, "text": " to remember what a cat is and then it will help in the pixel segmentation to" }, { "end": 417.68, "start": 410.84, "text": " know what a cat is this is very valid assumption but it doesn't need to be the" }, { "end": 422.8, "start": 417.68, "text": " case and so there is one additional thing that these networks usually do is" }, { "end": 425.88, "start": 422.8, "text": " that they have like some skip connections here from the layers that" }, { "end": 430.24, "start": 425.88, "text": " 
are of the same size to the layers that are of the same size right here to here" }, { "end": 434.2, "start": 430.24, "text": " in order to kind of recover these high-level features because if you only" }, { "end": 439.32, "start": 434.2, "text": " look at an image through the lens of this right here and you're a you have to" }, { "end": 443.92, "start": 439.32, "text": " segment the ear of the cat you know you can only either color an entire pixel or" }, { "end": 449.36, "start": 443.92, "text": " not so you want to gain back some of that some of those high-level features" }, { "end": 452.4, "start": 449.36, "text": " and that's what you do with skip connections and that's why these" }, { "end": 458.96, "start": 452.4, "text": " networks usually look like this now in this work the authors sort of criticize" }, { "end": 463.68, "start": 458.96, "text": " this they say why why are we doing it like this isn't there a better way to do" }, { "end": 468.52, "start": 463.68, "text": " it specifically we want to look at this part right here which is called the" }, { "end": 474.08, "start": 468.52, "text": " backbone so we assume that we have these these output layers that give you at" }, { "end": 479.04, "start": 474.08, "text": " different scales different features and what we have to do is we have to" }, { "end": 484.32, "start": 479.04, "text": " construct a backbone that somehow feeds features either you know through this" }, { "end": 489.24, "start": 484.32, "text": " direct way or through these connections right here and feeds features to this" }, { "end": 494.16, "start": 489.24, "text": " these ultimate classifiers so these classifiers will then be used to" }, { "end": 499.76, "start": 494.16, "text": " classify the bounding boxes and classify the output classes for" }, { "end": 506.76, "start": 499.76, "text": " recognition and localization this is an illustration of this on the left you see" }, { "end": 513.76, "start": 506.76, "text": " a typical backbone and they call this a scale decreased network so an example of" }, { "end": 518.28, "start": 513.76, "text": " scale decreased network on the left versus a scale permuted network on the" }, { "end": 522.48, "start": 518.28, "text": " right the width of the block indicates feature resolution and the height" }, { "end": 527.04, "start": 522.48, "text": " indicates feature dimension dotted arrows represent connections from two" }, { "end": 531.0799999999999, "start": 527.04, "text": " blocks not plotted okay so on the left you have the typical architecture you" }, { "end": 538.64, "start": 531.0799999999999, "text": " see that the the width so this is the resolution is very high and as you go" }, { "end": 543.24, "start": 538.64, "text": " through the layers that resolution gets smaller and smaller and smaller but the" }, { "end": 548.32, "start": 543.24, "text": " number of features indicated by the height gets higher and higher and higher" }, { "end": 553.44, "start": 548.32, "text": " as you go through this is your typical architecture we are looking into that" }, { "end": 558.08, "start": 553.44, "text": " say that this is not the only one what we can do is we can build any sort of" }, { "end": 562.8, "start": 558.08, "text": " backbone and here they restrict themselves they say okay in order to" }, { "end": 567.44, "start": 562.8, "text": " make it comparable in order to be you know scientifically a bit more rigorous" }, { "end": 573.2, "start": 567.44, "text": " than just building anything what we restrict ourselves are simply 
permutations" }, { "end": 579.72, "start": 573.2, "text": " of this so we only allow us to permute these things so all the you know this" }, { "end": 585.4000000000001, "start": 579.72, "text": " goes here and this goes here as you see and that ensures that you still have" }, { "end": 589.08, "start": 585.4000000000001, "text": " roughly the same amount of parameters now there is it sort of a parameter" }, { "end": 593.9200000000001, "start": 589.08, "text": " difference because these connections here you need to up and down sample the" }, { "end": 599.68, "start": 593.92, "text": " images and sometimes that introduces parameters but in essence you have the" }, { "end": 606.04, "start": 599.68, "text": " rough same amount of parameters and then you can really research what can we" }, { "end": 610.4799999999999, "start": 606.04, "text": " improve a network simply by rearranging its blocks because that would give" }, { "end": 615.5999999999999, "start": 610.4799999999999, "text": " evidence that this scaling down architecture isn't really the best one" }, { "end": 620.76, "start": 615.5999999999999, "text": " okay so here you can see an example of this this is what they call a scale" }, { "end": 627.64, "start": 620.76, "text": " permuted network sorry this scale permuted network right here so in a" }, { "end": 632.12, "start": 627.64, "text": " scale permuted network what you're allowed to do is you're allowed you have" }, { "end": 636.2, "start": 632.12, "text": " these blocks on the left and you're allowed to put them anywhere you want in" }, { "end": 642.28, "start": 636.2, "text": " in any in any sort of I don't want to say order but yes in any sort of order" }, { "end": 648.8, "start": 642.28, "text": " yes it's an order actually so it goes from here down if this is one two three" }, { "end": 654.76, "start": 648.8, "text": " four five any block for any block you first place you first place this block" }, { "end": 660.9599999999999, "start": 654.76, "text": " you're allowed to connect it to any other block before it now here we don't" }, { "end": 665.76, "start": 660.9599999999999, "text": " see but you can see there's two incoming connections right here so we make use of" }, { "end": 669.1999999999999, "start": 665.76, "text": " more than one connection on the left you see there's always one connection" }, { "end": 675.56, "start": 669.1999999999999, "text": " between the blocks and on the right you see we allow up to two blocks to connect" }, { "end": 683.4, "start": 675.56, "text": " to a given block okay then you're done with this block you place the next block" }, { "end": 688, "start": 683.4, "text": " this one here also you're allowed to have two incoming connections this one" }, { "end": 693.76, "start": 688, "text": " here and this one here and you place the next block and so on now how you make" }, { "end": 698.3599999999999, "start": 693.76, "text": " this you also see that there doesn't need to be like a straight linear path" }, { "end": 704.1199999999999, "start": 698.3599999999999, "text": " because there is no connection right here if you can see that so you might" }, { "end": 710.28, "start": 704.12, "text": " be wondering how do I decide which block goes where and how do I decide on which" }, { "end": 718.68, "start": 710.28, "text": " connections connect where and that is going to be the idea here to use neural" }, { "end": 722.64, "start": 718.68, "text": " architecture search so neural architecture search right now is still a" }, { "end": 729.36, "start": 722.64, 
"text": " fancy way of saying let's try stuff out and so what you'll do is you will" }, { "end": 734.5600000000001, "start": 729.36, "text": " initialize a reinforcement learning controller that decides on the ordering" }, { "end": 739.28, "start": 734.5600000000001, "text": " and on the connections and it has some action space and you basically let it" }, { "end": 746.44, "start": 739.28, "text": " run so it it you know proposes a couple of architectures and and then you measure" }, { "end": 749.6, "start": 746.44, "text": " all of them you train all of these architectures and you see how well they" }, { "end": 753.5600000000001, "start": 749.6, "text": " fare and then you go back to the controller that's the reward signal and" }, { "end": 759.76, "start": 753.56, "text": " so we can draw so you have an agent which builds the building plan so the" }, { "end": 766, "start": 759.76, "text": " agent in the agent will emit as an action a building plan like big small" }, { "end": 772.7199999999999, "start": 766, "text": " big small big with connections like this and like this and like this and then" }, { "end": 776.9, "start": 772.7199999999999, "text": " that will go to the environment the environment here simply takes the" }, { "end": 785.1999999999999, "start": 776.9, "text": " architecture and trains train the architecture and then the let's say the" }, { "end": 792.04, "start": 785.1999999999999, "text": " eval loss or the validation loss the validation accuracy is equal is going to" }, { "end": 797.6999999999999, "start": 792.04, "text": " be your reward signal so you simply train a reinforcement learning agent to" }, { "end": 802.4399999999999, "start": 797.6999999999999, "text": " solve this particular problem which is training this in recognition and" }, { "end": 806.76, "start": 802.4399999999999, "text": " localization on the particular data set as well as possible to basically come up" }, { "end": 811.48, "start": 806.76, "text": " with the best architecture which you know it's it's fancy and it's a bit" }, { "end": 815.28, "start": 811.48, "text": " better than trying everything out but it's not much better right now and it" }, { "end": 819.24, "start": 815.28, "text": " takes a lot of compute to run these experiments because it takes a lot of" }, { "end": 823.24, "start": 819.24, "text": " iterations of this and every iteration consists of training one of these" }, { "end": 827.9399999999999, "start": 823.24, "text": " networks fully once now you can do something with like early stopping and" }, { "end": 834.9, "start": 827.9399999999999, "text": " stuff but so you get the idea this is what they what they propose and this is" }, { "end": 842.3199999999999, "start": 834.9, "text": " you know how they get better so there are a number of challenges in this" }, { "end": 850.48, "start": 842.3199999999999, "text": " namely we we said okay when you input a signal for example when you input a" }, { "end": 857.66, "start": 850.48, "text": " signal from this layer to this layer you can see that you have to shrink the" }, { "end": 864.56, "start": 857.66, "text": " resolution and you have to up the number of features and this was already sort of" }, { "end": 870.04, "start": 864.56, "text": " solved in the resonant original resonant paper but they reiterate how" }, { "end": 876.04, "start": 870.04, "text": " they do this here basically you have we have this layer and it is connected to" }, { "end": 881.68, "start": 876.04, "text": " these two layers we said every layer can receive inputs 
from two layers you see" }, { "end": 889.68, "start": 881.68, "text": " at the very end these are just added together okay so we have two things first" }, { "end": 894.8, "start": 889.68, "text": " of all the number of features is different you can see right over here" }, { "end": 900.12, "start": 894.8, "text": " the number of features the number of channels is different than the number of" }, { "end": 906.9599999999999, "start": 900.12, "text": " channels in the output image let's say right here so those are different and in" }, { "end": 911.8399999999999, "start": 906.9599999999999, "text": " fact they're different in both inputs and we have this method of one by one" }, { "end": 916.12, "start": 911.8399999999999, "text": " convolutions that was introduced in the original resonant paper if we do one by" }, { "end": 922.24, "start": 916.12, "text": " one convolutions it's basically a a learned transformation from a number of" }, { "end": 926.64, "start": 922.24, "text": " input channels to a number of output channels without change without doing" }, { "end": 931.76, "start": 926.64, "text": " any actual convolution operation this is simply linear operation up scaling or up" }, { "end": 937.2, "start": 931.76, "text": " up upping the number of feature maps you can see these one by one convolutions" }, { "end": 945.52, "start": 937.2, "text": " are employed here in various ways so because this is fairly compute intensive" }, { "end": 952.1999999999999, "start": 945.52, "text": " or so they claim what they do first is they always first go to less features so" }, { "end": 957.52, "start": 952.1999999999999, "text": " here we have a number of features which is maybe let's say this is f or sorry" }, { "end": 965.52, "start": 957.52, "text": " this is c0 you can see very small here maybe that there is this first we go to" }, { "end": 972, "start": 965.52, "text": " alpha times c0 and alpha is I think in the default setting it's one half so" }, { "end": 979.8, "start": 972, "text": " first we always go to one half the number of features before we do this" }, { "end": 986.24, "start": 979.8, "text": " switch here and then we have two options either so you go to one half the" }, { "end": 990.6, "start": 986.24, "text": " features and at the end you go to the number of target features so it could be" }, { "end": 995.08, "start": 990.6, "text": " if the target features are more than you currently have it could be that you" }, { "end": 1001.76, "start": 995.08, "text": " first go to less features and then you go to even more features right as if you" }, { "end": 1007, "start": 1001.76, "text": " the the current one has more features than the end it's probably not as bad" }, { "end": 1010.4399999999999, "start": 1007, "text": " because you first go to less features and to even less features this is" }, { "end": 1015.68, "start": 1010.4399999999999, "text": " probably one of the things they did to save computation but which you can" }, { "end": 1019.36, "start": 1015.68, "text": " imagine that it hurts because here you simply have to you have to basically" }, { "end": 1024.48, "start": 1019.36, "text": " throw away half the features or you have to like linearly combine them in every" }, { "end": 1031.32, "start": 1024.48, "text": " step where you connect two layers to each other you know okay so there's two" }, { "end": 1036.56, "start": 1031.32, "text": " situations first situation your current resolution here is higher than the" }, { "end": 1043.56, "start": 1036.56, "text": " target resolution in 
that case we can simply do a convolution with a bigger" }, { "end": 1048.2, "start": 1043.56, "text": " stride than one right if you have an image and you do a convolution usually" }, { "end": 1052.9199999999998, "start": 1048.2, "text": " you have this overlapping convolution such that the result is the same size as" }, { "end": 1058.9199999999998, "start": 1052.9199999999998, "text": " you started with but you can also do a bigger stride and I'm a bit over drawing" }, { "end": 1064.2, "start": 1058.92, "text": " this here but you can do a bigger stride such that the final resolution is" }, { "end": 1069.76, "start": 1064.2, "text": " smaller and you can also do this max pooling right here so the max pooling is" }, { "end": 1076.04, "start": 1069.76, "text": " also a way to reduce the number of of the resolution of the image so if we're" }, { "end": 1081.96, "start": 1076.04, "text": " bigger we can do that if we are currently smaller than the the target" }, { "end": 1087.52, "start": 1081.96, "text": " what we can do is we can up sample and up sample you can do by doing nearest" }, { "end": 1094.36, "start": 1087.52, "text": " neighbor or things like this you can also do a learned up sample there are" }, { "end": 1100.8799999999999, "start": 1094.36, "text": " various ways I believe here they do a nearest neighbor but I'm not sure anymore" }, { "end": 1112.2, "start": 1100.8799999999999, "text": " actually let's check it out that's here somewhere resampling in cross-scale" }, { "end": 1118.44, "start": 1112.2, "text": " connections yada yada yada yada yada yada yada it's important to keep the" }, { "end": 1122.2, "start": 1118.44, "text": " computational resampling low we introduce a scaling factor alpha we had" }, { "end": 1128.04, "start": 1122.2, "text": " that then we use a nearest neighbor interpolation for up sampling or a" }, { "end": 1133.44, "start": 1128.04, "text": " stride to three by three convolution okay so it's nearest neighbor by up" }, { "end": 1141, "start": 1133.44, "text": " sampling alright so that's that's how they up and down sample the feature maps" }, { "end": 1147.08, "start": 1141, "text": " to the correct shapes either using nearest neighbor up sampling or using" }, { "end": 1153.6, "start": 1147.08, "text": " multi stride convolution followed by max pooling so what does that give them now" }, { "end": 1159.72, "start": 1153.6, "text": " they do several different steps in this so the first architecture this resnet" }, { "end": 1164.04, "start": 1159.72, "text": " 50 is the original architecture and remember we're only talking about the" }, { "end": 1169.48, "start": 1164.04, "text": " backbone right here now in the original resnet 50 architecture you have this" }, { "end": 1175.92, "start": 1169.48, "text": " resnet 50 fpn and this fpn is these are called the output layers this is what" }, { "end": 1184.64, "start": 1175.92, "text": " then goes and classifies the bounding boxes and the labels and so on now here" }, { "end": 1189.96, "start": 1184.64, "text": " you can see the resnet 50 is continuously getting smaller and more" }, { "end": 1198.32, "start": 1189.96, "text": " features they do an intermediate step so this this right here is their final" }, { "end": 1202.8799999999999, "start": 1198.32, "text": " thing where they let this let this algorithm go wild and you can see that" }, { "end": 1208.6799999999998, "start": 1202.8799999999999, "text": " it's pretty pretty fuzzy so this RL controller finds this architecture to be" }, { "end": 1213.76, "start": 
1208.6799999999998, "text": " the best architecture and you can see it's continuously down and up and down" }, { "end": 1219.6799999999998, "start": 1213.76, "text": " and sorry and up and down and there is considerable cross connections between" }, { "end": 1224.8, "start": 1219.6799999999998, "text": " all of these things and then here you have the you have the different output" }, { "end": 1229.44, "start": 1224.8, "text": " layers built into the network rather than next to the network right so these" }, { "end": 1234.8, "start": 1229.44, "text": " are the ones that are now the red border ones are now the features that are used" }, { "end": 1240.12, "start": 1234.8, "text": " for going in classifying as an intermediate step they also consider" }, { "end": 1244.76, "start": 1240.12, "text": " this architecture where they basically built a smaller resnet right here and" }, { "end": 1251.34, "start": 1244.76, "text": " then let the algorithm decide on the rest right here so it still has the same" }, { "end": 1257.08, "start": 1251.34, "text": " amount of parameters roughly but they can investigate what happens if we go to" }, { "end": 1262.84, "start": 1257.08, "text": " these to this lower if we have this structure at the beginning but then part" }, { "end": 1269.6, "start": 1262.84, "text": " of it we can do with our algorithm and lastly they also consider this" }, { "end": 1274.56, "start": 1269.6, "text": " architecture now this architecture again their algorithm has control over the" }, { "end": 1278.54, "start": 1274.56, "text": " whole network but there is an additional thing that the algorithm can do the" }, { "end": 1283.92, "start": 1278.54, "text": " algorithm can also decide to change the number of features and to change the" }, { "end": 1289.36, "start": 1283.92, "text": " type of block so here you can see these are all residual blocks and these are" }, { "end": 1295.2, "start": 1289.36, "text": " these called bottleneck blocks they're simply a different way of of doing a" }, { "end": 1302.56, "start": 1295.2, "text": " residual block it was introduced in the original resnet paper but the the" }, { "end": 1307.8, "start": 1302.56, "text": " controller can simply switch to that and that can save some computation if you go" }, { "end": 1312.9199999999998, "start": 1307.8, "text": " through these bottleneck blocks so what does that give you you can see below" }, { "end": 1321.1599999999999, "start": 1312.9199999999998, "text": " that the resnet 50 is at 37.8 percent average precision if you liberate the" }, { "end": 1325.8, "start": 1321.1599999999999, "text": " top part to leave it to the algorithm it's at 39 if you liberate the entire" }, { "end": 1330.28, "start": 1325.8, "text": " network it's at 40.7 and remember these are like roughly the same amount of" }, { "end": 1336.96, "start": 1330.28, "text": " parameters and then if you if you also let the network control a bit of the" }, { "end": 1342.8400000000001, "start": 1336.96, "text": " feature size and the type of block you get a 40.8 which is the same as before" }, { "end": 1350.52, "start": 1342.8400000000001, "text": " but now this one I believe has about oh yeah here we go with 10% fewer flops" }, { "end": 1357.52, "start": 1350.52, "text": " okay so that's that's pretty cool though remember that the left thing this is" }, { "end": 1363.16, "start": 1357.52, "text": " this is made by humans this is just our heuristic and the right things they are" }, { "end": 1368.6000000000001, "start": 1363.16, "text": " made by RL 
and they are you know for these particular data sets though they" }, { "end": 1374.8400000000001, "start": 1368.6000000000001, "text": " do find that generally this also transfers to image net classification but" }, { "end": 1381.2, "start": 1374.8400000000001, "text": " still this is sort of a it works well for the type of data we work with and so" }, { "end": 1386.48, "start": 1381.2, "text": " on so I don't know how much I would trust it how far we should go of building" }, { "end": 1394.04, "start": 1386.48, "text": " spine net 49 as our new backbone for every image task that we have it" }, { "end": 1401.16, "start": 1394.04, "text": " remains to be seen I believe before actually we go to the experiments before" }, { "end": 1405.96, "start": 1401.16, "text": " we go to the experiments I want to state my idea right here so you get the" }, { "end": 1411.48, "start": 1405.96, "text": " general gist here and so another kind of coral I have with this is that you know" }, { "end": 1415.72, "start": 1411.48, "text": " in here you always have these single connections and here you always have" }, { "end": 1420.08, "start": 1415.72, "text": " these these double connections and I've looked through the experiment it seems" }, { "end": 1427.6000000000001, "start": 1420.08, "text": " like nowhere do they ablate or anything what what it means to only have single" }, { "end": 1434.72, "start": 1427.6000000000001, "text": " connections or if they so if they let the resnet run with double connections" }, { "end": 1439.4, "start": 1434.72, "text": " so if their controller could not switch the order but only introduce the" }, { "end": 1444.32, "start": 1439.4, "text": " connections they might have done this they have a lot of experiments where" }, { "end": 1450.08, "start": 1444.32, "text": " they do the different ablations so I would be interested what happens when" }, { "end": 1458.96, "start": 1450.08, "text": " you let it run on the resnet but let it have two connections per per layer is it" }, { "end": 1463.72, "start": 1458.96, "text": " then better or not so here the importance I'll get to my idea later" }, { "end": 1470.48, "start": 1463.72, "text": " the importance of scale permutation that's where they investigate how" }, { "end": 1474.96, "start": 1470.48, "text": " important is it that you permute the layers and that turns out to be fairly" }, { "end": 1483.76, "start": 1474.96, "text": " important then the importance of cross scale connections that's how they" }, { "end": 1487.56, "start": 1483.76, "text": " investigate here so these are these connections they say the cross scale" }, { "end": 1491.32, "start": 1487.56, "text": " connections play a crucial role in fusing features at different resolutions" }, { "end": 1496, "start": 1491.32, "text": " throughout the scale permuted network so that's the reasoning behind it we we" }, { "end": 1501.12, "start": 1496, "text": " take features from different kind of resolutions and we can also scale up" }, { "end": 1506.24, "start": 1501.12, "text": " again and then scale down again to gain some additional features from the from" }, { "end": 1512.28, "start": 1506.24, "text": " the higher resolutions we study it's important by graph damage so either they" }, { "end": 1517.04, "start": 1512.28, "text": " remove the short-term connections or they remove the long-range connections or" }, { "end": 1520.8, "start": 1517.04, "text": " they remove both and then connect one block to the previous block via a" }, { "end": 1525.48, "start": 1520.8, "text": " 
sequential connection so this is only this is only in the things that they" }, { "end": 1530.6, "start": 1525.48, "text": " learned right so this model is where they fully give their model control over" }, { "end": 1534.3600000000001, "start": 1530.6, "text": " the ordering and connections you can see that as this forty point seven percent" }, { "end": 1540.08, "start": 1534.3600000000001, "text": " now if they delete the short-range connections they drop to thirty five if" }, { "end": 1545.3600000000001, "start": 1540.08, "text": " they delete the only the long-range they drop to even more so here you can see" }, { "end": 1550.02, "start": 1545.3600000000001, "text": " that these long-range connections which I guess are connections that are going" }, { "end": 1556.48, "start": 1550.02, "text": " across multiple blocks skipping multiple blocks these tend to be very important" }, { "end": 1564.32, "start": 1556.48, "text": " so you can make the case that it might be very important to fuse these things" }, { "end": 1568.76, "start": 1564.32, "text": " from different layers to fuse the features from different resolutions" }, { "end": 1573.8, "start": 1568.76, "text": " because these long-range connections tend to be important though it's one" }, { "end": 1580.8799999999999, "start": 1573.8, "text": " thing to say that if we just leave them away with our model if we just damage it" }, { "end": 1585.96, "start": 1580.8799999999999, "text": " and then let it run it it it drops in accuracy it's not entirely the same" }, { "end": 1590.3999999999999, "start": 1585.96, "text": " thing as to say that these are important because you don't really know what" }, { "end": 1594.8, "start": 1590.3999999999999, "text": " happens like if you train without them maybe you could if you train without" }, { "end": 1600.44, "start": 1594.8, "text": " them you could reach as good an accuracy so this graph damage investigation it" }, { "end": 1607, "start": 1600.44, "text": " has something but not I wouldn't trust it too much and yeah I think they haven't" }, { "end": 1611.64, "start": 1607, "text": " investigated what happens if they keep the resinette order but let the" }, { "end": 1617.4, "start": 1611.64, "text": " connections be twice but you get the general the general idea of the paper" }, { "end": 1624.16, "start": 1617.4, "text": " right here of of what they do now they do this with architecture search right" }, { "end": 1629.96, "start": 1624.16, "text": " here but here's an idea okay I propose the following you have an image right" }, { "end": 1635.64, "start": 1629.96, "text": " here and we are wondering here should we let it go through a layer that's wide" }, { "end": 1641.4, "start": 1635.64, "text": " and with less features should we let it go through a layer that's you know very" }, { "end": 1647.88, "start": 1641.4, "text": " many features but not as wide but we have to downscale the image or should we" }, { "end": 1653.8400000000001, "start": 1647.88, "text": " let it go first through something intermediate let's see it like this okay" }, { "end": 1659.8, "start": 1653.8400000000001, "text": " so we're wondering how should we order these blocks why can't we do all of" }, { "end": 1666.68, "start": 1659.8, "text": " them at the same time why can't we do this this and this okay and then in the" }, { "end": 1676.12, "start": 1666.68, "text": " next layer again do all of them at the same time and you you you can already" }, { "end": 1681.8, "start": 1676.12, "text": " see where this is going I hope I 
hope you can see where this is going so you" }, { "end": 1687.12, "start": 1681.8, "text": " have a routing right here and how do we do routing in modern times in deep" }, { "end": 1694.32, "start": 1687.12, "text": " learning with attention so I propose you have layers with different attention" }, { "end": 1699.4399999999998, "start": 1694.32, "text": " hey let's say these are these are now your your sequences or you can also make" }, { "end": 1708.2399999999998, "start": 1699.4399999999998, "text": " them as attention heads okay these are you these and the lower level features" }, { "end": 1715.4399999999998, "start": 1708.2399999999998, "text": " are routed to the higher level features with an attention mechanism and you do" }, { "end": 1720.3200000000002, "start": 1715.44, "text": " this layer by layer by layer so you let because what's the problem here the" }, { "end": 1725.3600000000001, "start": 1720.3200000000002, "text": " problem here is that the same data point has to go you know you find these good" }, { "end": 1729.16, "start": 1725.3600000000001, "text": " connections but the all the data points have to go through the same connections" }, { "end": 1736.24, "start": 1729.16, "text": " and it might actually be that you need different routing depending on the data" }, { "end": 1739.76, "start": 1736.24, "text": " point it might be that what this is this is good for the average data point but" }, { "end": 1743.92, "start": 1739.76, "text": " it would be much better if whenever there's a cat you take one path and" }, { "end": 1748.5600000000002, "start": 1743.92, "text": " whenever there's a dog you take a different path so this will allow for" }, { "end": 1754.76, "start": 1748.5600000000002, "text": " that you basically have the routing parameterized by an attention mechanism" }, { "end": 1759.4, "start": 1754.76, "text": " this I have no clue how much compute this would take it doesn't seem that" }, { "end": 1763.3200000000002, "start": 1759.4, "text": " outrageous because what's your sequence length here your sequence length is" }, { "end": 1767.8400000000001, "start": 1763.3200000000002, "text": " going to be the number of layers maybe and maybe times the number of feature" }, { "end": 1772.16, "start": 1767.8400000000001, "text": " maps maybe have different attention head so you maybe want to replicate some of" }, { "end": 1778.5600000000002, "start": 1772.16, "text": " those here but ultimately I would guess the attention mechanism itself isn't" }, { "end": 1781.68, "start": 1778.5600000000002, "text": " that much of an overhead maybe it's an overhead that you have so many in" }, { "end": 1790.72, "start": 1781.68, "text": " parallel yeah but you know it remains to be seen that's that's the idea yeah you" }, { "end": 1797.88, "start": 1790.72, "text": " heard it here first okay so they have more experiments so they also build" }, { "end": 1802.8000000000002, "start": 1797.88, "text": " here is where they say okay we have the spine net 49 now and we found this to" }, { "end": 1807.16, "start": 1802.8000000000002, "text": " work we found this to work really well this is our spine net 49 architecture" }, { "end": 1811.68, "start": 1807.16, "text": " cool and we want to make it bigger but I guess they didn't have the" }, { "end": 1817.0400000000002, "start": 1811.68, "text": " computational resources to run the neural architecture search for bigger" }, { "end": 1822.16, "start": 1817.0400000000002, "text": " networks this is now as about as big as a resonant 50 right but what 
if you" }, { "end": 1828.68, "start": 1822.16, "text": " wanted to go to a resonant 100 or a resonant 150 there you you don't have" }, { "end": 1832.6000000000001, "start": 1828.68, "text": " the computational resources do neural architectures imagine this Google has" }, { "end": 1836.44, "start": 1832.6000000000001, "text": " doesn't have the computational resources to do neural architecture" }, { "end": 1841.8400000000001, "start": 1836.44, "text": " search on this thing so this must be expensive or I'm just I have no idea" }, { "end": 1847.88, "start": 1841.8400000000001, "text": " but what they do is they kind of do a trick so here they take the the spine net" }, { "end": 1855.5200000000002, "start": 1847.88, "text": " 49 and they say we build a spine net 60 96 by simply repeating each block twice" }, { "end": 1860.2800000000002, "start": 1855.5200000000002, "text": " so all the incoming connections would go to the first block and all the outgoing" }, { "end": 1863.8400000000001, "start": 1860.2800000000002, "text": " connections would come from the second block right here you had two in and" }, { "end": 1868, "start": 1863.8400000000001, "text": " maybe there's actually no limit to how many outgoing connections you can have" }, { "end": 1874.48, "start": 1868, "text": " and also you can also do this three times which I think is a bit of a cheap" }, { "end": 1879.4, "start": 1874.48, "text": " way and it kind of defeats the entire purpose right couldn't you make the exact" }, { "end": 1883.64, "start": 1879.4, "text": " same argument again here that maybe it's helpful to route from this block right" }, { "end": 1889.44, "start": 1883.64, "text": " here or maybe it's helpful that these don't have the same scale right after" }, { "end": 1895.44, "start": 1889.44, "text": " one another it just seems but okay so they say we found this good structure" }, { "end": 1902.64, "start": 1895.44, "text": " and we simply duplicate each block I'm not that big of a fan in any case so" }, { "end": 1906.8000000000002, "start": 1902.64, "text": " they train this and it of course outperforms everything else if you" }, { "end": 1911.0400000000002, "start": 1906.8000000000002, "text": " compare with kind of models of the same size so here you compare this spine net" }, { "end": 1917.96, "start": 1911.0400000000002, "text": " 49 to the resnet 50 and you can see there's about the same number of" }, { "end": 1925.1200000000001, "start": 1917.96, "text": " parameters how about it outperforms the resnet 50 pretty much and as you go up" }, { "end": 1930.92, "start": 1925.1200000000001, "text": " the number of parameters here the performance goes up yet again and I" }, { "end": 1936.0800000000002, "start": 1930.92, "text": " believe these dagger ones here are simply trained with a special schedule" }, { "end": 1941.64, "start": 1936.0800000000002, "text": " with here with applying stochastic depth and swish activation for a longer" }, { "end": 1948.5600000000002, "start": 1941.64, "text": " training schedule so you can see that not only do are these spine nets sorry" }, { "end": 1953.6000000000001, "start": 1948.5600000000002, "text": " of the number of parameters is here not only are the spine nets slightly smaller" }, { "end": 1961.9599999999998, "start": 1953.6, "text": " than the resnets they do require less flops and they reach better accuracy so" }, { "end": 1971.9199999999998, "start": 1961.9599999999998, "text": " you know every everything is a win here yeah so they apply this to these data" }, { "end": 
1982.6, "start": 1971.9199999999998, "text": " sets I don't want to go you know too much into into that but in the last" }, { "end": 1987.08, "start": 1982.6, "text": " part they also apply this to image net so there's image classification where" }, { "end": 1992.4399999999998, "start": 1987.08, "text": " they basically say okay we can just go to our architecture and we can just add" }, { "end": 1998.08, "start": 1992.4399999999998, "text": " up all the output blocks we scale them appropriately and add up all the output" }, { "end": 2002.1999999999998, "start": 1998.08, "text": " blocks right here because these are good features for localization and so on and" }, { "end": 2007.36, "start": 2002.1999999999998, "text": " we can train it to do image classification so all of these go into a" }, { "end": 2014.3999999999999, "start": 2007.36, "text": " big combination classifier that does the 1000 classes of image net image" }, { "end": 2019.9199999999998, "start": 2014.3999999999999, "text": " classification and that also works pretty well with this network so they" }, { "end": 2025.56, "start": 2019.9199999999998, "text": " basically argue what they found is sort of a better image image processing" }, { "end": 2030.84, "start": 2025.56, "text": " network than the resnet 50 and I guess they would argue that from now on you" }, { "end": 2036.9599999999998, "start": 2030.84, "text": " should take this as your backbone for image classification and recognition and" }, { "end": 2043.76, "start": 2036.96, "text": " so on which it's entirely possible that this works better there's no particular" }, { "end": 2048.4, "start": 2043.76, "text": " reason why the resnet 50 should work at all right it's just a heuristic but I" }, { "end": 2055.2400000000002, "start": 2048.4, "text": " guess the I it remains to be to be seen whether that's generally true or just in" }, { "end": 2060.88, "start": 2055.2400000000002, "text": " the things they considered so you can see right here the spine net generally" }, { "end": 2067.52, "start": 2060.88, "text": " improving over the image net which isn't is not stated right here but it does" }, { "end": 2073, "start": 2067.52, "text": " generally improve and you can see as you go higher and higher spine net the the" }, { "end": 2080.4, "start": 2073, "text": " numbers tend to improve as well and this is already pretty respectable" }, { "end": 2087.6400000000003, "start": 2080.4, "text": " respectable number for image net right all right so this was it for this paper" }, { "end": 2093.24, "start": 2087.64, "text": " for this particular paper they do have you know two different of these object" }, { "end": 2098.4, "start": 2093.24, "text": " detection recognition datasets and I invite you to check out the experiments" }, { "end": 2101.6, "start": 2098.4, "text": " more closely if you're interested in that sort of thing I was mainly" }, { "end": 2107.08, "start": 2101.6, "text": " interested in the method of doing and arranging these layers and so on it" }, { "end": 2111.4, "start": 2107.08, "text": " seems like it's a cool engineering project cool investigative project the" }, { "end": 2115.92, "start": 2111.4, "text": " experiments are done well and in the end they reach a better you know they" }, { "end": 2121.6, "start": 2115.92, "text": " achieve to get a better model out of that and if it turns out that this model" }, { "end": 2127.44, "start": 2121.6, "text": " is a good model the entire community will be better off unfortunately there's" }, { "end": 2132.92, "start": 
2127.44, "text": " no broader impact statement to tell us that also the terrorists will be able to" }, { "end": 2141.44, "start": 2132.92, "text": " use this for purposes but you can imagine that yourself all right that was" }, { "end": 2146.8, "start": 2141.44, "text": " it for me again leave a comment if you want me to change anything or have" }, { "end": 2173.84, "start": 2146.8, "text": " suggestions leave a like if you like the video share it out bye bye" } ]
hAooAOFRsYc
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Transformers are RNNs: Fast Autoregressive Transformers with Linear Attention (Paper Explained)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "nlp", "natural language processing", "attention", "attention mechanism", "linear", "linear transformer", "linformer", "reformer", "idiap", "epfl", "queries", "keys", "softmax", "kernel", "routing", "inner product", "rnn", "recurrent neural network", "transformer", "bert", "autoregressive", "dimensions", "topic modeling", "language model" ]
#ai #attention #transformer #deeplearning Transformers are famous for two things: Their superior performance and their insane requirements of compute and memory. This paper reformulates the attention mechanism in terms of kernel functions and obtains a linear formulation, which reduces these requirements. Surprisingly, this formulation also surfaces an interesting connection between autoregressive transformers and RNNs. OUTLINE: 0:00 - Intro & Overview 1:35 - Softmax Attention & Transformers 8:40 - Quadratic Complexity of Softmax Attention 9:40 - Generalized Attention Mechanism 13:45 - Kernels 20:40 - Linear Attention 25:20 - Experiments 28:30 - Intuition on Linear Attention 33:55 - Connecting Autoregressive Transformers and RNNs 41:30 - Caveats with the RNN connection 46:00 - More Results & Conclusion Paper: https://arxiv.org/abs/2006.16236 Website: https://linear-transformers.com/ Code: https://github.com/idiap/fast-transformers My Video on Attention: https://youtu.be/iDulhoQ2pro My Video on BERT: https://youtu.be/-9evrZnBorM Abstract: Transformers achieve remarkable performance in several tasks but due to their quadratic complexity, with respect to the input's length, they are prohibitively slow for very long sequences. To address this limitation, we express the self-attention as a linear dot-product of kernel feature maps and make use of the associativity property of matrix products to reduce the complexity from O(N²) to O(N), where N is the sequence length. We show that this formulation permits an iterative implementation that dramatically accelerates autoregressive transformers and reveals their relationship to recurrent neural networks. Our linear transformers achieve similar performance to vanilla transformers and they are up to 4000x faster on autoregressive prediction of very long sequences. Authors: Angelos Katharopoulos, Apoorv Vyas, Nikolaos Pappas, François Fleuret Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://discord.gg/4H8xxDF BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Hi there, today we're looking at Transformers are RNNs: Fast Autoregressive Transformers with Linear Attention by Angelos Katharopoulos, Apoorv Vyas, Nikolaos Pappas and François Fleuret. On a high level, this paper proposes to interpret the attention mechanism in transformers in terms of a kernel function, and the resulting higher-dimensional linear operation can then be used to formulate the linear transformer, which is orders of magnitude faster than a classic transformer. They also show that in the case of autoregressive transformers, this makes the transformer essentially equivalent to a special kind of RNN. So that's what this paper is about, and I think I have some comments to make that I haven't really seen made by others, though I admit I also haven't looked at many comments, so I might just be telling old, boring things. As always, if you like content like this, consider sharing it out, leave a like if you liked it, and leave a comment to let me know what you think. I do read the comments, the comment section is generally very helpful to me, and people respond to each other; it's fairly cool to see that the community is usually very helpful to people asking questions. Just let me know what you think. Alright, so what's the problem with transformers? I've done many videos on transformers and I keep referring back to them for people who don't know what they are. There's the original paper, Attention Is All You Need, about which I made a video, so if you don't know what transformers are, you can go look at that; it should cover everything you need to know. But there are many more transformers by now: there's BERT, GPT-2, GPT-whatever number comes after that. Many sequence processing models are now transformers, and many set processing models are now transformers, so this has made a very big splash in the community. Essentially, transformers come with this attention mechanism. You have an input set, actually, but let's consider it a sequence, a sequence of text, maybe like "I have an ice cream cone", something like this, and you want to classify the text or perform language modeling. In language modeling, the problem is as follows: I give you this piece of text and I ask you to predict the next piece of text. This was kind of the first task these transformers were used on, and this is what's called an autoregressive transformer, because you always have a piece, you predict the next piece, then I give you that entire piece and you predict the next piece yet again, and so on. This autoregressive property is going to come into play later in this paper. Ultimately, what you have in a transformer is called an attention mechanism. An attention mechanism is the following: you can imagine each layer in the transformer as having the same number of nodes, kind of the same number of neurons, as the sequence is long.
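As a side note, the autoregressive loop described above can be sketched in a few lines. This is only a toy illustration, not the paper's code; the `model` here is a hypothetical callable that returns next-token scores for a given prefix, and greedy decoding is just the simplest choice.

```python
import numpy as np

def generate(model, prompt_ids, steps):
    # Hypothetical sketch of autoregressive generation: the model repeatedly
    # predicts the next token and is fed back its own output.
    ids = list(prompt_ids)
    for _ in range(steps):
        logits = model(np.array(ids))          # model scores the whole prefix
        next_id = int(np.argmax(logits[-1]))   # greedy pick of the next token
        ids.append(next_id)                    # feed the prediction back in
    return ids
```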
Now, from this input sequence, for each of these tokens you're going to generate three different things: a key, a query, and a value. The tokens usually don't come in the form of letters; they come in the form of embedding vectors, and from each embedding vector you produce these three things. The key you can imagine as being attached to the lower layer, so each token, each word piece, advertises a key, which is again just a vector. The query you figuratively attach to the top layer, so for each token there is a query, also a vector. The values we'll keep out of it for now. The queries and the keys define how you route the information, and you route the information as follows. Each token needs to aggregate information from all the other tokens. We go through multiple layers of this, and in each layer each token aggregates information from the other tokens, so after multiple rounds each token eventually knows about all the other tokens. How this information aggregation is done is very important. For example, if a token is a pronoun, it would be very interested in information coming from any named entity in the sentence, because it very much wants to know what it refers to; if you are the pronoun in the sentence, it is vital that you understand which of these things you refer to. So you start aggregating information for that, and once you know who or what you refer to, the other parts of the sentence can make use of that information, so they will start requesting information from you. Layer after layer, each token aggregates information from each other token. This works as follows: say we're at a particular token. We form the inner product between that token's query vector and each of the key vectors, and then we put that through a softmax. So we take the query together with all the keys, run each inner product through the exponential function, and normalize by the sum of all the exponentials. That gives us a properly normalized distribution, basically a histogram of where we are going to get our information from, which is highest where the inner product is highest. As I drew by accident, but which is actually fairly standard, a token probably wants to know a lot about itself, because you want to carry forward the information that you already have in that particular token, so your query's inner product may align a lot with your own key. The keys and queries are learned, so each token decides what kind of information it wants to advertise to the others, and also what kind of information it wants to gather from the others, and the routing is then put through a softmax function.
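To make the key/query/value step concrete, here is a minimal sketch of the projections just described. The dimensions here are made up for illustration; in a real model the weight matrices are learned parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d_model, d = 6, 16, 8              # 6 tokens; embedding and head sizes are made up

X = rng.normal(size=(n, d_model))     # token embeddings, one row per token
W_q = rng.normal(size=(d_model, d))   # learned projection matrices (random here)
W_k = rng.normal(size=(d_model, d))
W_v = rng.normal(size=(d_model, d))

Q, K, V = X @ W_q, X @ W_k, X @ W_v   # queries, keys, values: (n, d) each
```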
That gives you the attention distribution, and you do this for every single token. So the problem is that every single token needs to compute the inner product of its query with all the keys, and each of those has to go through the softmax. What's actually aggregated are the values. The values are simply a transformation of the incoming signal; the values are what's really propagated, and you can think of that transformation as just a one-layer neural network. You could even leave the values away, and some people use the same vectors for queries and keys, but the values are just a transformation of your input. The important thing is the routing part, which decides how you aggregate the values. And this has quadratic complexity: if you have n input tokens, the whole process requires on the order of n squared operations, because you need to form the inner product between each pair of queries and keys, and it also requires that much memory. As we'll see, this is in large part due to the softmax operation: the softmax makes the whole thing nonlinear, and it being nonlinear basically means you have to keep everything around and recompute for each query. We're going to see in this paper's formulation that if we make the whole process linear, we will not have to do that. So let's dive in. They start off by noting that each transformer layer is essentially the attention routing mechanism wrapped in a residual connection, together with a simple element-wise, or row-wise, feed-forward layer. But those extra parts are usually not the issue; what really hurts in the transformer, if you go to very long sequences, is the attention routing mechanism. Formally, the attention is softmax(QK^T)V, and notice the inner part is an outer-product-like matrix: if I have n sequence elements, Q transforms each of them into a d-dimensional space, and the keys K likewise, so QK^T, which is X W_Q W_K^T X^T, is going to be an n-by-n matrix. Then we take the row-wise softmax of that matrix, and each row gives us, for each of the upper-layer tokens, the distribution of how to aggregate information from the inputs, and the information we aggregate is the values V. Now they generalize this. First, they say we can write attention in a form where, instead of the softmax, we can use any kind of similarity function between the queries and the keys. There is no longer an entire matrix with a row-wise softmax; instead, we write this out for the individual elements of the output.
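Here is what that quadratic baseline looks like as a short sketch, a plain single-head version; the usual 1/sqrt(d) scaling is left out to stay close to the discussion above. The n-by-n score matrix is exactly where the quadratic time and memory come from.

```python
import numpy as np

def softmax_attention(Q, K, V):
    # scores[i, j] = <q_i, k_j>; this n-by-n matrix is the quadratic cost
    scores = Q @ K.T
    # row-wise softmax (max subtracted for numerical stability)
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ V   # each output row aggregates the values
```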
To obtain one element of the output, element i, we calculate some similarity between the query of that particular output, q_i, and all of the keys of the input, and we normalize by the sum of these similarities; this is the same normalization that happens in the softmax. That again gives us a histogram of how we're going to aggregate the values. If you look at this and you know how the softmax is defined, you'll see that if we plug in the exponential function as the similarity function, we get back the softmax; they say equation 3 is equivalent to equation 2 if we substitute the similarity function with the exponential function. Then they go into kernels. For that you need to understand what a kernel is: for the purposes we're looking at here, a kernel is a special kind of similarity function. It needs to satisfy some conditions, but essentially they say, well, this looks like a kernel, so what if we use a kernel here? The interesting property of kernels is this: if a similarity function k is a kernel, it means there exists a mapping φ such that the kernel between a and b can be expressed as a linear inner product, k(a, b) = φ(a)^T φ(b). What this means is that k itself can be a super nonlinear function. The example often given in machine learning classes is something like this: you have one-dimensional data, with zero in the middle, and two kinds of data points, say crosses near the middle and circles further out. I cannot classify this data linearly. However, I can transform it into a higher-dimensional space: my function φ maps x to the vector (x, x²), which transforms the data into a two-dimensional space where the y-axis is the square of the x-axis, and now I can find a linear classifier. So in this higher-dimensional space, things become linearly classifiable, and very similarly you can define the similarity between points through that map; the similarity function here would involve the square, and this would be an example of a quadratic kernel. So this similarity function can be very nonlinear (it can also be a linear function, but it can be very nonlinear), yet it is equivalent to a linear function in a high-dimensional space. Now, figuring out what this function φ is, that is the big question. For a couple of kernels, we know the function φ explicitly.
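As a tiny numeric illustration of that classic example, with made-up points: lifting 1-D data with φ(x) = (x, x²) makes a class that no single threshold on x could separate become linearly separable in the lifted space.

```python
import numpy as np

x = np.array([-2.0, -1.5, -0.5, 0.5, 1.5, 2.0])  # 1-D points
labels = np.abs(x) > 1                            # True for the outer class

phi = np.stack([x, x ** 2], axis=1)               # map each point to (x, x^2)

# In the lifted space the horizontal line y = 1 separates the classes,
# even though no single threshold on x could.
print((phi[:, 1] > 1) == labels)                  # all True
```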
For the quadratic kernel, for example, we just saw that φ maps a point to the vector of its coordinate and its square. For a couple of other kernels we also know their associated feature maps, but in general, if something is a kernel, we can replace it with a linear function in that feature space. And in reverse, we could simply define a function φ and map a and b into a higher-dimensional space where this super-duper nonlinear similarity function just becomes a linear function. Wouldn't that be much easier? Linear functions are much easier to work with than nonlinear functions, and if we know that, as long as we pick the correct φ, we do exactly the same thing as the nonlinear function, that would be helpful. There is an entire literature, an entire decade of kernelization, of kernelizing everything, kernelized SVMs to start with, and you can go way further; this is just a very sloppy explanation by me right here. But ultimately what they're saying is: instead of doing a complicated nonlinear similarity function like the softmax, can't we just project a and b into the higher-dimensional space and then do the linear inner product, and do the same thing as the softmax? The answer is yes and no. We know that for the softmax, the particular φ we would need maps to an infinite-dimensional space. Usually this machinery is applied in reverse: in machine learning it's usually, we want to map into a high-dimensional space where things are linear, but we can't because these spaces are too high-dimensional, so we find an equivalent kernel function, and people say, we used an RBF kernel, which corresponds to an infinite-dimensional space, and so on; that's pretty cool. Here it's the reverse: here we want the linear function, and the exact equivalent of the softmax is an infinite-dimensional feature map, which we can't feasibly compute explicitly. So it's not possible to do exactly the equivalent thing as in a standard transformer. However, you can still do something else: you can use polynomial kernels, or any kind of kernel whose corresponding feature map is finite-dimensional, and that's what this paper does. So they say: if we have such a function φ, which maps these things into a higher-dimensional space such that the similarity function is an inner product there, then we can write attention explicitly with these inner products, and then, because of associativity, something nice happens. Notice that the index i appears on the query side but not inside the sum over keys and values, so we can pull the query's feature vector out of the sum. These are vectors, so you have to pay attention to the matrix dimensions; in bra-ket-like notation you can check that everything lines up. On the bottom, in the denominator, there is an inner product, so each output is normalized by a scalar; the top is going to be a vector, since each output is a vector, and the top aggregates the value vectors according to this routing. But if we write it like this, you can see that the sums over the keys and values do not depend on i at all.
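This associativity step is easy to verify numerically. A minimal sketch, with random data and the normalization left out for clarity:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 10, 4
phiQ = rng.random((n, d))   # feature-mapped queries, phi(q_i)
phiK = rng.random((n, d))   # feature-mapped keys,    phi(k_j)
V    = rng.random((n, d))   # values

quadratic = (phiQ @ phiK.T) @ V   # n-by-n attention matrix first: O(n^2)
linear    = phiQ @ (phiK.T @ V)   # d-by-d summary first:          O(n)

print(np.allclose(quadratic, linear))   # True: same result, different cost
```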
That means we can technically compute that part once, because there is no i in it. We have these two layers of the attention mechanism, and the K and V refer to the lower layer, so we can compute that thing on the right once, which, as you can see from the dimensions, gives us a matrix. Then, for each output, we simply take the product of the query's feature vector with that matrix, and we're done: it's one operation per output, instead of, for each token, attending to every other token and then doing the softmax. Without the softmax, we can do all of this in a linear fashion, which makes the computation linear: this is now O(n), plus, of course, the work of mapping things into the higher-dimensional space, but that is also not quadratic, since it is done on each element individually. So you can calculate the matrix on the top once, you can also calculate the vector in the denominator once by aggregating over the keys, and then each output is simply an inner product with the query's feature vector, and you're done. In fact, in matrix form you can write the whole thing down as one matrix multiplication. Seems pretty easy, and the computational cost goes way down. As their map to the higher-dimensional space, they use the following function: for experiments that deal with smaller sequences, they employ a feature map that results in a positive similarity function, namely φ(x) = elu(x) + 1, where elu denotes the exponential linear unit activation function. This seems fine. They also say: in our experimental section we show that the feature map of equation 7 performs on par with the full transformer while significantly reducing the computational and memory requirements. It seems like the original transformer's choice of the softmax, even though it's powerful and can't be approximated with this trick, was also somewhat arbitrary; there is a reasoning behind it, but it's also somewhat meh, and it's entirely possible that this alternative is simply way faster. I want to skip the causal masking part for now and look at the results, where they verify that, in terms of time and GPU memory, with sequence length on the x-axis and everything on log plots, the original transformer has a much steeper slope than their linear transformer, the black line. The blue lines are the Reformer, on which I've also made a video if you want to check that out; that is a trick that uses locality-sensitive hashing to get rid of the quadratic attention mechanism. The locality-sensitive hashing also means you lose some accuracy, so that's the trade-off there, but it is also essentially linear, actually n log n in the sequence length, with the log n being negligible. So GPU memory and time go way down, and in terms of experiments the linear transformer does perform on par.
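Putting the pieces together, a sketch of the full linear attention with the paper's φ(x) = elu(x) + 1 feature map might look like this; a single head, no causal masking, and the authors' released code, not this, is the reference implementation.

```python
import numpy as np

def elu(x):
    return np.where(x > 0, x, np.exp(x) - 1)

def phi(x):
    return elu(x) + 1            # the positive feature map of equation 7

def linear_attention(Q, K, V, eps=1e-6):
    pQ, pK = phi(Q), phi(K)      # map queries and keys, shape (n, d)
    S = pK.T @ V                 # d-by-d summary of keys and values, computed once
    z = pK.sum(axis=0)           # normalizer, a single d-vector
    # each output is one inner product plus a scalar normalization
    return (pQ @ S) / (pQ @ z)[:, None].clip(eps)
```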
It does seem to have a different optimization trajectory, but they show that the trade-off the Reformer suffers, where you lose accuracy, is not something they experience with the linear transformer compared to the original transformer, at least in their particular experiments. Now, their experiments also show that they are not fully on par with the original transformer: they are on par in some of the tasks, but not in others. For example, on this speech dataset they do fairly well, they actually beat the bi-LSTM baseline and the Reformer, but they do not beat the softmax transformer. So it is still the case that the softmax transformer is more powerful, and I'll give some intuition on that very shortly. But the linear transformer is way faster: here it's three times faster, elsewhere it is 300 times faster, and on MNIST, and likewise on CIFAR-10, it is 4000 times faster, simply because the longer the sequences you input, the more it matters that the softmax transformer has a quadratic runtime whereas the linear transformer has a linear runtime. I was also surprised that the Reformer wasn't that much faster; that's probably because it already has a big constant overhead in these hashing rounds. I guess if you pushed the sequence length even further, the Reformer would also improve a lot more over the softmax transformer. Okay, so what's happening here with this attention, and what makes it different from the old attention? I want to connect this to the older literature on topic modeling. If you think of the attention mechanism, what you have is dynamic routing of information: each output token gets to look at all the input tokens, and if we select one output, it gets to decide, for each input, how it wants to aggregate the information. That is what makes it quadratic: from each output token you get to look at all the input tokens and decide how to route, and that decision can be very nonlinear when we use the softmax. What this new mechanism does instead is the following. It takes all the keys and maps each of them through the φ function, and each query is also mapped through the φ function, into these higher-dimensional spaces, and then an inner product is performed between the two, and that decides the routing. This is very similar to topic models: you can interpret φ as a mapping from the dimensions of the keys and queries to topics. Essentially, the dimensionality of this map defines how many topics there are. In classic topic modeling you have, say, news articles and words, you define a set of topics, you assign each word to a topic and each news article to a topic, and so on; it's a kind of dimension reduction, and it can be done in many ways. So let's say φ is a mapping to three dimensions, that is, three topics.
What this φ function does is decide how each of the inputs gets mapped into these three topics; you can say, this token goes mostly here and a bit there, this one goes there, and so on. Again, this is a mapping into, in this case, a lower-dimensional space, and then the query side decides how you aggregate these topics across to the outputs. Since this is now a linear multiplication between the two, these two become your matrices: this one is your φ(K) and this one is your φ(Q). So you can see the difference between the old attention mechanism and the new one: in the old attention mechanism, each token was directly able to look at all the input tokens and decide how to aggregate the information; here, we have this in-between representation in the topic space, and we can only distribute into it in a linear fashion and aggregate from it in a linear fashion. That's how I imagine it: you get to distribute each token into these topics, and then the outputs don't see the inputs anymore. You can see that in the formulation: there is a sum over j, which means the outputs don't see the different inputs as different inputs; they only see the inputs through the map of the φ function, that is, through the individual dimensions of φ. Therefore you don't have the big quadratic dependence on n; however, you now of course have a dependence on the dimension of the intermediate representation, which they argue is reasonable. They also derive the gradients by hand to save even more memory, such that you don't have to store all of these intermediate activations, which is pretty cool, and they implemented it in CUDA; there is code available for the linear transformer. All of this is pretty cool. The last thing they do is make the connection to RNNs. This is a bit detached from the linear transformer, but because of how they formulated things, they can make this connection. What they say is valid for all transformers in principle, but keep in mind that for the original transformer it only holds if you could make the mapping φ map to infinite dimensions, which you can't; the analysis is equivalent, though. They say: if we write the attention mechanism like this, we can define these two quantities, s and z, which is what we said before; we can actually precompute, or accumulate, these quantities. Now suppose we are looking at an autoregressive transformer, and we said before what that is: you have a piece of a sequence and you are tasked to predict the next element. Usually, if you wanted to train this with an RNN, you would have to run your RNN forward step by step, passing the hidden state along, do all of this forward propagation in order to arrive at the output, make the prediction, and then backpropagate through time.
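Before moving on, here is a toy numeric sketch of the topic-routing picture from a moment ago, with three made-up "topics". It is only meant to show that the outputs read per-topic summaries rather than individual input tokens; the numbers and normalization-free form are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d_val, topics = 5, 4, 3

phiK = rng.random((n, topics))   # how strongly each input token writes to each "topic"
phiQ = rng.random((n, topics))   # how strongly each output token reads from each topic
V    = rng.random((n, d_val))    # values carried by the input tokens

S = phiK.T @ V   # per-topic summaries of the values: (topics, d_val)
out = phiQ @ S   # outputs only ever see the topic summaries, never the
                 # individual input tokens - hence the linear cost
```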
What you would like to do instead is say: here I have a sentence, and I can actually make several different training examples from it. The first one is where I just block off the last word; but I can also make the training example where I cut off the second-to-last word, and so on. I can make all of these different language modeling examples from a single sentence, and I would like to train them all in parallel: I load my data point once, I already have it, so why can't I train everything at the same time, predicting this word from this prefix, and also that word from those two words, and so on? Transformers are very well conditioned for this, they are very good at it: if you input a sequence like the one at the bottom, you can calculate the training signal for all of these different positions at the same time, using what's called causal masking in the attention. If I have my attention mechanism, and let's consider these two layers again, what I want to do is constrain each token to only attend to the tokens that came before it in the sequence. For example, this token is constrained to attend only to itself and to the past, because it will predict the next token in the sequence, and it would be far too easy if it could attend to the input of that next token: it could simply remember what that token is, aggregate it, and predict it. If for each token we restrict the attention to go only backwards, we can train all of this in parallel. This is called causal masking, and it's usually implemented with a mask that is an upper triangular matrix. It's a bit unclear whether you count attending to yourself; I don't know exactly how each implementation handles it, but it is usually realized with an upper triangular mask, and you apply this mask in each layer. Now, they say that with their formulation, this is actually explicitly an RNN: you have these two states, s and z, and at each sequence element you update s and z with the current key and value; it's like an RNN where s and z are the hidden states that you pass forward, and you can formulate any such transformer as an RNN that simply updates these two states. But notice that you need the explicit mapping φ of the kernel function in order to do this, because otherwise this update is not going to be a linear addition; it would be complicated, and you couldn't do it by simply remembering a past state. So you need that formulation to express it as an RNN, and their analysis shows that an autoregressive transformer of this kind essentially is an RNN, which means you can train it in the transformer fashion, everything at the same time.
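A sketch of that recurrence could look like this, assuming the feature map has already been applied to queries and keys, and assuming a positive feature map such as elu(x)+1 so the normalizer stays nonzero. Per token, only the matrix S and the vector z are carried forward.

```python
import numpy as np

def recurrent_linear_attention(phiQ, phiK, V):
    # Autoregressive linear attention as the RNN described above:
    # two states, a matrix S and a vector z, updated once per token.
    n, d = phiQ.shape
    S = np.zeros((d, V.shape[1]))   # running sum of phi(k_j) v_j^T
    z = np.zeros(d)                 # running sum of phi(k_j)
    outputs = []
    for i in range(n):
        S += np.outer(phiK[i], V[i])                  # state update
        z += phiK[i]
        outputs.append(phiQ[i] @ S / (phiQ[i] @ z))   # output for step i
    return np.stack(outputs)
```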
But what is cool about an RNN is inference time. If you produce autoregressively, you simply say: hey, I have this beginning of my news article, please finish it. The model must output the next word, then from that new sequence it must output the next word, then the next, and so on. An RNN, because of the nature of simply passing hidden states forward, can at inference time just remember what the hidden states were, input the latest output again, and go on; it's very fast at inference, which a standard transformer isn't. With their formulation, if you have the explicit function φ, you can use this at inference time and be so much faster. In fact, on their website, which I'll link in the description, you can play with image generation using one of these transformers in your browser; you can simply run a transformer in your browser, that's how easy this becomes. The linear transformer with causal masking simply updates these states and passes them forward. And for the backward pass, as we said, I don't want to go into the gradient calculation, but they derive the gradients such that you don't have to remember all these hidden states, which saves a lot more memory than before. Now, one last comment from my side: note that these causally masked transformers are a bit of a hack. Say I have this sequence given and I want to predict a particular word, with multiple layers in between, so there is an output node that should predict that word. It's true that this output should only be able to aggregate information from whatever came before it. But technically, in a transformer, it would be completely valid for an intermediate node, say, one sitting on an article that is followed by a noun, to attend to a token ahead of it, and then for the output to attend to that intermediate node. That would not violate the autoregressive property: the output still only uses information from the past, even though, in the intermediate layer, a node attended to a forward token. But if you allow connections like this, you can't use the train-everything-at-once trick anymore, because if this connection exists, then in another training sample, where that forward token is the word to be predicted, the intermediate node could aggregate information from it and basically cheat. The technical autoregressive property is not violated by such a connection, and you only get this RNN formulation if you do not have these connections. So this hack that makes autoregressive transformers trainable in parallel is actually making the transformer formulation much weaker, and it is that weaker formulation that is equivalent to an RNN. It's not that transformers in general are equivalent to RNNs, or at least this paper doesn't show that; it's that these hacked transformers are, and I think that's an important distinction to make, rather than saying transformers are RNNs if we could only approximate the softmax in these infinite dimensions. I don't think that's entirely true in general, but it is true for the autoregressive transformers that we currently train.
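For reference, the usual masking trick described above looks roughly like this; here tokens are allowed to attend to themselves, which, as mentioned, is the common if slightly ambiguous choice, and the 1/sqrt(d) scaling is again omitted.

```python
import numpy as np

def causal_softmax_attention(Q, K, V):
    n = Q.shape[0]
    scores = Q @ K.T
    mask = np.triu(np.ones((n, n), dtype=bool), k=1)  # strictly upper triangle
    scores[mask] = -np.inf           # token i may not attend to tokens j > i
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ V
```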
Now, why is such a forward connection so powerful? It allows a token to attend to tokens ahead of it, and what does that buy you? Let's say it's really important that this token attends to a token ahead of it. Think of it as a program: this token is a function f, and it needs as input the argument a, carried by a token that comes after it, and it needs to do something conditioned on a: if a is one it does something, if a is two it does something else. If you can't look forward to read a, as in an RNN, you can't simply pass on the resulting output value. What you would have to do, conceptually, is store the entire code of the function f in the hidden state: build the whole map, if a is one then this, if a is two then that, if a is three then this, keep all of it in the hidden state, and then, once a comes around in a later time step, resolve it. You can see that this is vastly more complicated than simply looking forward and outputting the value yourself. That's the difference in power between the two formulations. Okay, so two parts to this paper: first, the linear transformer through kernels; second, if you formulate it like this, an autoregressive transformer of this kind becomes equivalent to an RNN. Here are some of the output samples, and they're pretty good, though if you look at more of the samples, for example in this very bottom one, all the other transformers pick up the slant to the right that the original digit has, whereas the linear one draws it simply straight. I don't want to dunk on it, the others make a lot of mistakes too, but here, for instance, all of them get that this is going to be a three, while this one somehow makes an extra circle in there. So it is not perfect, and even where it's on par in the aggregate numbers, you can see, especially in speech recognition, that the original transformer significantly outperforms the linear transformer, the one in black, in fact across all of these tasks. But ultimately it might not matter, because they reach comparable accuracy and the linear transformer is way, way faster. So I can see this becoming a thing that people apply; I guess time will tell. I invite you to read the paper and tell me what you think. I might be totally wrong here with any of my formulations or my intuition about what this new attention mechanism does, so please let me know, and I'll see you next time. Bye bye.
[ { "end": 5.46, "start": 0, "text": " Hi there, today we're looking at transformers or RNNs, fast autoregressive" }, { "end": 12.16, "start": 5.46, "text": " transformers with linear attention by Angelos Kateropoulos, Aporvias, Nikolaos Papas" }, { "end": 17.36, "start": 12.16, "text": " and François Fleuret. So this paper on a high level proposes to interpret the" }, { "end": 22.8, "start": 17.36, "text": " attention mechanism in transformers in terms of a kernel function and" }, { "end": 28.32, "start": 22.8, "text": " therefore the resulting higher dimensional linear operation can be used" }, { "end": 33.6, "start": 28.32, "text": " to formulate the linear transformer which is orders of magnitude faster than" }, { "end": 38.56, "start": 33.6, "text": " a classic transformer. They also show that in the case of autoregressive" }, { "end": 44.68, "start": 38.56, "text": " transformers this makes the transformer essentially equivalent to a special kind" }, { "end": 50.84, "start": 44.68, "text": " of RNN. So yeah, that's what this paper is about and I think I have some" }, { "end": 56.72, "start": 50.84, "text": " comments to make that I haven't really seen made by others though I have to admit" }, { "end": 62.64, "start": 56.72, "text": " I also haven't really looked at many comments so I might just be telling old" }, { "end": 67.16, "start": 62.64, "text": " boring things. As always if you like content like this consider sharing it" }, { "end": 72.44, "start": 67.16, "text": " out, leave a like if you liked it, leave a comment to let me know what you" }, { "end": 77.24, "start": 72.44, "text": " think. I do read the comments and they're generally comment section is very" }, { "end": 83.52, "start": 77.24, "text": " helpful to me and also people respond to each other. It's fairly" }, { "end": 88.44, "start": 83.52, "text": " cool to see that the community is usually very helpful to people asking" }, { "end": 95.88, "start": 88.44, "text": " questions. Just let me know what you think. Alright, so what's the problem" }, { "end": 100.84, "start": 95.88, "text": " with transformers? I've done many videos on transformers and I keep" }, { "end": 105.03999999999999, "start": 100.84, "text": " referring back to them for people who don't know what it is but there's this" }, { "end": 111.03999999999999, "start": 105.03999999999999, "text": " original paper called Attention is all you need where I made a video about" }, { "end": 114.76, "start": 111.04, "text": " that so if you don't know what transformers are you can go look at that" }, { "end": 118.08000000000001, "start": 114.76, "text": " that should basically cover everything you need to know but there's many more" }, { "end": 124.56, "start": 118.08000000000001, "text": " transformers in the meantime there's BERT, GPT, 2GPT whatever the number is" }, { "end": 131.44, "start": 124.56, "text": " after that many sequence processing models are now transformers, many set" }, { "end": 137.64000000000001, "start": 131.44, "text": " processing models are now transformers. So this has reached a very very made a" }, { "end": 142.32, "start": 137.64, "text": " very big splash in the community. 
So essentially transformers come with this" }, { "end": 148.35999999999999, "start": 142.32, "text": " attention mechanism where you have an input set actually but let's consider it" }, { "end": 156.76, "start": 148.35999999999999, "text": " a sequence so a sequence of text maybe like I have an ice cream cone something" }, { "end": 161.67999999999998, "start": 156.76, "text": " like this and you want to classify the text or you want to perform language" }, { "end": 167.51999999999998, "start": 161.67999999999998, "text": " modeling. So in language modeling the problem is as follows I give you this" }, { "end": 174.20000000000002, "start": 167.52, "text": " piece of text and I ask you to predict the next piece of text. This is this was" }, { "end": 179.12, "start": 174.20000000000002, "text": " kind of the first task that these transformers were used on and this is" }, { "end": 183.12, "start": 179.12, "text": " what is called an autoregressive transformer because you always have a" }, { "end": 186.96, "start": 183.12, "text": " piece you predict the next piece and then I give you that next it give you" }, { "end": 191.08, "start": 186.96, "text": " that entire piece and then you predict the next piece yet again and so on and" }, { "end": 195.16000000000003, "start": 191.08, "text": " this autoregressive property is gonna you know come in play in this paper" }, { "end": 199.56, "start": 195.16, "text": " later but ultimately what you have in a transformer is called an attention" }, { "end": 203.6, "start": 199.56, "text": " mechanism. So an attention mechanism is the following each layer in the" }, { "end": 208.68, "start": 203.6, "text": " transformer you can imagine as having the same number of nodes kind of the" }, { "end": 213.16, "start": 208.68, "text": " same number of neurons as the sequence your sequence is long. Now from this" }, { "end": 218, "start": 213.16, "text": " input sequence you're going to generate for each of these tokens you're going to" }, { "end": 222.84, "start": 218, "text": " generate three different things you're going to generate a key query in the" }, { "end": 229.08, "start": 222.84, "text": " value so in these you do from so usually this doesn't come in form of a letter" }, { "end": 232.96, "start": 229.08, "text": " right this comes in form of some kind of embedding vector and from that you're" }, { "end": 236.52, "start": 232.96, "text": " going to generate three different things I should probably use different colors" }, { "end": 242.16, "start": 236.52, "text": " for so this is a function you're going to produce three different things from" }, { "end": 246.68, "start": 242.16, "text": " that you're going to produce a key you're going to produce a query and" }, { "end": 253.8, "start": 246.68, "text": " you're going to produce a value. 
Now the key is you can imagine it being attached" }, { "end": 259.84000000000003, "start": 253.8, "text": " to this lower layer right here so that's the key for this this token right here" }, { "end": 265.04, "start": 259.84000000000003, "text": " that's the key the key here for that token right here it's a word piece right" }, { "end": 270.68, "start": 265.04, "text": " so the keys again are also just you know vectors vector vector the query you" }, { "end": 276.12, "start": 270.68, "text": " figuratively attach to the top layer right here so the queries they go here" }, { "end": 283.84000000000003, "start": 276.12, "text": " for each token and they are also vectors and the values will keep out of it for" }, { "end": 287.64, "start": 283.84000000000003, "text": " now so the queries and the keys define basically how you route the information" }, { "end": 295.76, "start": 287.64, "text": " and you route the information by going over each so each each you have to" }, { "end": 303.68, "start": 295.76, "text": " imagine each token right here this this half or half it needs to aggregate" }, { "end": 308.6, "start": 303.68, "text": " information from all the other tokens right so we're going through multiple" }, { "end": 314, "start": 308.6, "text": " layers of this and in each layer each of these tokens is aggregating" }, { "end": 319, "start": 314, "text": " information from the other tokens if we do this in multiple rounds is eventually" }, { "end": 324.84000000000003, "start": 319, "text": " you know the each token is aggregating information eventually each token knows" }, { "end": 329.72, "start": 324.84000000000003, "text": " about all the other tokens but how this information aggregation is done is very" }, { "end": 335.40000000000003, "start": 329.72, "text": " important for example if the token is a pronoun it would be very interested in" }, { "end": 340.36, "start": 335.40000000000003, "text": " information coming from any sort of named entity in the sentence because it" }, { "end": 345.72, "start": 340.36, "text": " very much wants to know what it is referring to right if you are a if you" }, { "end": 351.76000000000005, "start": 345.72, "text": " are the the pronoun in the sentence it is very vital that you understand which" }, { "end": 355.92, "start": 351.76000000000005, "text": " of these things you refer to so you'll start aggregating information for that" }, { "end": 362.32, "start": 355.92, "text": " and then once you know who or what you refer to then the other parts of the" }, { "end": 366, "start": 362.32, "text": " sentence can make use of that information so they will start requesting" }, { "end": 373.36, "start": 366, "text": " information from you and so layer after layer each token aggregates information" }, { "end": 379.04, "start": 373.36, "text": " from each other token so this works by let's say we're at this token right here" }, { "end": 383.24, "start": 379.04, "text": " what we're going to do is we're going to form the inner product between that" }, { "end": 389, "start": 383.24, "text": " vector and each of these vectors and then we're going to transfer that into a" }, { "end": 398.72, "start": 389, "text": " softmax which makes this into a first of all there's so we do the query together" }, { "end": 404.6, "start": 398.72, "text": " with all the keys key and then we run it through the exponential function and" }, { "end": 409.36, "start": 404.6, "text": " after that we're going to normalize it by the sum of all the exponential" }, { "end": 415.36, "start": 
409.36, "text": " functions that will give us a properly normalized distribution so a histogram" }, { "end": 420.84000000000003, "start": 415.36, "text": " basically of where we are going to get our information from this is going to" }, { "end": 425.76, "start": 420.84000000000003, "text": " be the highest where the inner product right here is the highest so from this" }, { "end": 432.72, "start": 425.76, "text": " token right here and you know this is fairly fairly standard what I drew by" }, { "end": 438.36, "start": 432.72, "text": " accident is fairly standard that a token probably wants to know a lot about" }, { "end": 442.88, "start": 438.36, "text": " itself so you want to carry forward the information that you already have in" }, { "end": 446.56, "start": 442.88, "text": " this particular token that's why your inner products going to maybe align a" }, { "end": 451.04, "start": 446.56, "text": " lot with your own key so the keys and queries are learned so each token" }, { "end": 456.56, "start": 451.04, "text": " decides what kind of information it wants to advertise to the others and" }, { "end": 462.36, "start": 456.56, "text": " then also each token decides what kind of information it wants to gather from" }, { "end": 470.12, "start": 462.36, "text": " the others and the the routing then is put through a softmax function and that" }, { "end": 474.36, "start": 470.12, "text": " gives you this right here you do this for every single token so the problem" }, { "end": 480.40000000000003, "start": 474.36, "text": " with this is that every single token needs to do the inner product of its" }, { "end": 484.52000000000004, "start": 480.40000000000003, "text": " query with all the different keys and each of that has to go through the" }, { "end": 490.12, "start": 484.52000000000004, "text": " softmax and then the value that's actually aggregated are these values" }, { "end": 496.4, "start": 490.12, "text": " right here now the values are simply a transformation of the incoming values" }, { "end": 502.36, "start": 496.4, "text": " values are what's really propagated you can think of it as just like a one layer" }, { "end": 507.64, "start": 502.36, "text": " neural network ultimately you could also leave away the values people don't do" }, { "end": 512.8, "start": 507.64, "text": " this some people do the same queries and keys but the values are just a" }, { "end": 518, "start": 512.8, "text": " transformation of your input so the important thing is this right here this" }, { "end": 524.76, "start": 518, "text": " decides how you're going to aggregate the values all right so this is has a" }, { "end": 533.04, "start": 524.76, "text": " quadratic complexity so if you if you have n input tokens then this entire" }, { "end": 537.2, "start": 533.04, "text": " process will require n squared operations because you need to form the" }, { "end": 542.64, "start": 537.2, "text": " inner products between each pair of queries and keys right and it also is" }, { "end": 547.5, "start": 542.64, "text": " going to require that much memory and this we're going to see this is in large" }, { "end": 552.96, "start": 547.5, "text": " part due to this softmax operation because because we have a softmax it" }, { "end": 558.08, "start": 552.96, "text": " makes the whole thing nonlinear and it being nonlinear basically means we'll" }, { "end": 562.32, "start": 558.08, "text": " have to you know store everything keep everything around and we have to" }, { "end": 567.6, "start": 562.32, "text": " recompute for each 
query we're going to see in this paper formulation where if" }, { "end": 574.84, "start": 567.6, "text": " we make the whole process linear then we will not have to do that so let's dive" }, { "end": 584.8000000000001, "start": 574.84, "text": " into it so here they go linear transformers they start off with saying" }, { "end": 589.6, "start": 584.8000000000001, "text": " each transformer layer is essentially this right here so this is a this is" }, { "end": 593.24, "start": 589.6, "text": " kind of a higher level of view what we viewed so far is just this part right" }, { "end": 597.88, "start": 593.24, "text": " here this is the attention routing mechanism each layer is actually wrapped" }, { "end": 604.08, "start": 597.88, "text": " in a residual connection and also a simple element wise or row wise feed" }, { "end": 610, "start": 604.08, "text": " forward layer but these things are usually not that much into consideration" }, { "end": 616.6, "start": 610, "text": " what's really hurting in the transformer if you go into very long sequences is" }, { "end": 622.24, "start": 616.6, "text": " this attention routing mechanism so the attention routing mechanism is as" }, { "end": 626.48, "start": 622.24, "text": " follows you can see right here this is the formal expression of what I" }, { "end": 632.5600000000001, "start": 626.48, "text": " described right here here you have the and notice this is an outer product so" }, { "end": 642.1999999999999, "start": 632.56, "text": " if if I have if I have n sequence elements the Q right here are the" }, { "end": 650, "start": 642.1999999999999, "text": " queries so this transforms each of the n into a into a d-dimensional space right" }, { "end": 655.64, "start": 650, "text": " and also the keys will transform each of these into a d-dimensional space so this" }, { "end": 665.56, "start": 655.64, "text": " here is going this here is going to be a n by n matrix right this is this Q KT is" }, { "end": 674.4399999999999, "start": 665.56, "text": " going to be an n by n matrix this is X W Q W K X and this transposed right here" }, { "end": 682.56, "start": 674.4399999999999, "text": " nope yeah like this okay so this is sort of an outer product and then we're going" }, { "end": 687.8399999999999, "start": 682.56, "text": " to take the row wise softmax and that will give us for each row in this" }, { "end": 693.3199999999999, "start": 687.8399999999999, "text": " matrix so for each row in this matrix we're going to have these this" }, { "end": 698.9599999999999, "start": 693.3199999999999, "text": " distribution of how to aggregate information each row will give us" }, { "end": 704.64, "start": 698.9599999999999, "text": " basically for each of the upper level tokens for each of the outputs how we" }, { "end": 709.3599999999999, "start": 704.64, "text": " need to aggregate information from the inputs and the information that we're" }, { "end": 717.48, "start": 709.36, "text": " aggregating are these values right here now they generalize this first of all" }, { "end": 723.96, "start": 717.48, "text": " they say we can also we can write it in this form right here instead of having a" }, { "end": 729.92, "start": 723.96, "text": " softmax we can actually think of any kind of similarity function it between" }, { "end": 735.84, "start": 729.92, "text": " the queries and the keys so here you see what we want to do if we want to" }, { "end": 740.5600000000001, "start": 735.84, "text": " calculate output I here the important thing is there is no longer this is 
an" }, { "end": 746.36, "start": 740.5600000000001, "text": " entire matrix and we considered a row wise softmax and now we write this out" }, { "end": 754.08, "start": 746.36, "text": " into the individual elements of the output and we can we can do so we can" }, { "end": 761, "start": 754.08, "text": " say okay how do we obtain one element of the output we're going to calculate" }, { "end": 768.44, "start": 761, "text": " some sort of similarity of that particular crazy I here I here we're" }, { "end": 771.98, "start": 768.44, "text": " going to calculate some sort of similarity between the query of that" }, { "end": 778.28, "start": 771.98, "text": " particular output with all of the keys so here you can see all of the keys of" }, { "end": 783.26, "start": 778.28, "text": " the input and we're going to act and we're going to normalize right this is" }, { "end": 788.24, "start": 783.26, "text": " the normalization that happens also in the softmax and that will give us like a" }, { "end": 794.4, "start": 788.24, "text": " histogram of how we aggregate the values right here so all of this of this red" }, { "end": 798.8, "start": 794.4, "text": " stuff will give us again some sort of a histogram of how we're going to" }, { "end": 806.28, "start": 798.8, "text": " aggregate information if you look a bit like this and you know how the softmax" }, { "end": 812.32, "start": 806.28, "text": " is defined you'll see that if we plug in the exponential function for as the" }, { "end": 817.82, "start": 812.32, "text": " similarity function then you'll get back to the softmax okay they say equation" }, { "end": 822.12, "start": 817.82, "text": " three is equivalent to equation two if we substitute the similarity function" }, { "end": 830.48, "start": 822.12, "text": " with the exponential function now they go they go ahead and they go into" }, { "end": 837.5200000000001, "start": 830.48, "text": " kernels so for that you certainly to understand what a kernel is a kernel is" }, { "end": 843.44, "start": 837.5200000000001, "text": " a special kind for the purposes that we are you know looking at here a kernel is" }, { "end": 848.8000000000001, "start": 843.44, "text": " a special kind of a similarity function it needs to have some properties right" }, { "end": 854.36, "start": 848.8000000000001, "text": " here but essentially they say well this this kind of looks like a kernel and we" }, { "end": 861.0400000000001, "start": 854.36, "text": " will simply say okay here this similarity what if we use a kernel here" }, { "end": 868, "start": 861.0400000000001, "text": " so a kernel simply is a similarity function of two vectors if you" }, { "end": 871.48, "start": 868, "text": " interpret it like they have some more condition I know I know don't freak on" }, { "end": 879.4, "start": 871.48, "text": " me but the interesting properties about kernels is that if a similarity function" }, { "end": 887.6, "start": 879.4, "text": " is a kernel it means that there exists a mapping and where do we do so if K" }, { "end": 898.76, "start": 887.6, "text": " between A and B is a kernel if K is a kernel that means that there exists a" }, { "end": 907.96, "start": 898.76, "text": " similar a function Phi such that Phi such that the kernel between A and B" }, { "end": 916.72, "start": 907.96, "text": " can be expressed as a linear product between Phi of A and Phi of B transpose" }, { "end": 925.84, "start": 916.72, "text": " okay this is like this is an inner product so what it means is that this" }, { "end": 
932.6800000000001, "start": 925.84, "text": " can be like a super nonlinear function a kernel for example it can be and the" }, { "end": 938.08, "start": 932.6800000000001, "text": " example often given in like machine learning classes is maybe something like" }, { "end": 943.6, "start": 938.08, "text": " this you have one dimensional data right and here is the here is zero and you" }, { "end": 949.12, "start": 943.6, "text": " have two kinds of data points you have the X's right here and you have the" }, { "end": 957.5600000000001, "start": 949.12, "text": " circles right here now I cannot classify this data linearly however however I can" }, { "end": 965.16, "start": 957.5600000000001, "text": " transform this into a higher dimensional space so my function Phi is of my" }, { "end": 973.6800000000001, "start": 965.16, "text": " function Phi of X is going to map to the vector X X squared and that will" }, { "end": 979.3599999999999, "start": 973.68, "text": " transform the data into a two-dimensional space right and the data" }, { "end": 984.4799999999999, "start": 979.3599999999999, "text": " will look something like this so it's going to the y-axis is going to be the" }, { "end": 992.12, "start": 984.4799999999999, "text": " square of the x-axis okay and like this and now I can find a linear classifier" }, { "end": 1002.8399999999999, "start": 992.12, "text": " okay so in this case right here you can see that in this higher space things" }, { "end": 1008.96, "start": 1002.84, "text": " become linear things become linearly classifiable and very similarly like" }, { "end": 1015.88, "start": 1008.96, "text": " this is you can define the similarity between things right here so the" }, { "end": 1022.44, "start": 1015.88, "text": " similarity function would be the square function right here and this would be a" }, { "end": 1028, "start": 1022.44, "text": " quadratic an example of a quadratic kernel so this function right here can" }, { "end": 1033.92, "start": 1028, "text": " be very nonlinear I mean it can be a linear function but it can be very" }, { "end": 1038.96, "start": 1033.92, "text": " nonlinear but it is equivalent it is equivalent this means it is equivalent" }, { "end": 1046.12, "start": 1038.96, "text": " to a linear function in a high dimensional space now to figure out" }, { "end": 1055.32, "start": 1046.12, "text": " linear to figure out what this function Phi is is the big the big question of" }, { "end": 1060.84, "start": 1055.32, "text": " course for a couple of kernels we know the function Phi right for the quadratic" }, { "end": 1067.06, "start": 1060.84, "text": " kernel for example we know we just saw that Phi maps this to the vector of the" }, { "end": 1073.4399999999998, "start": 1067.06, "text": " coordinate and it's quadratic to its square we know for a couple of other" }, { "end": 1077.6399999999999, "start": 1073.4399999999998, "text": " kernels what their associated functions are but in general if it's a kernel then" }, { "end": 1083.46, "start": 1077.6399999999999, "text": " we can just replace it with a linear function and with with with this thing" }, { "end": 1091.28, "start": 1083.46, "text": " and in reverse we can just say well what we could do is we could just simply" }, { "end": 1099.76, "start": 1091.28, "text": " define a function Phi and basically map this into map these a and b into a" }, { "end": 1104.68, "start": 1099.76, "text": " higher dimensional space where this super duper nonlinear function would" }, { "end": 1108.48, "start": 1104.68, 
"text": " just become a linear function wouldn't that be much easier linear functions are" }, { "end": 1115.48, "start": 1108.48, "text": " much easier to work with than nonlinear functions and if we know that as long as" }, { "end": 1120.64, "start": 1115.48, "text": " we get the correct Phi we do exactly the same thing as the nonlinear function you" }, { "end": 1124.08, "start": 1120.64, "text": " know that would be helpful so there is an entire litters in the entire like" }, { "end": 1129.84, "start": 1124.08, "text": " decade of kernel ization and kernel ize everything kernel ized SVM s to start" }, { "end": 1135.6, "start": 1129.84, "text": " but then you can go way further in this and this is just the beginning and this" }, { "end": 1140.6799999999998, "start": 1135.6, "text": " is just a very sloppy explanation by me right here but ultimately they're saying" }, { "end": 1146.76, "start": 1140.6799999999998, "text": " hey instead of doing complicated nonlinear similarity function like the" }, { "end": 1153.04, "start": 1146.76, "text": " softmax can't we just project a and b into the higher dimensional space and" }, { "end": 1159.7199999999998, "start": 1153.04, "text": " then just do the linear inner product and do the same thing as the softmax and" }, { "end": 1167.2, "start": 1159.72, "text": " the answer is yes and no we know that for the softmax the particular Phi" }, { "end": 1171.84, "start": 1167.2, "text": " function that we would need would map to an infinite dimensional space usually" }, { "end": 1178.2, "start": 1171.84, "text": " usually this is applied in reverse it's like oh here instead usually in machine" }, { "end": 1182.46, "start": 1178.2, "text": " learning if they say you know we want to do this we want to map it into a high" }, { "end": 1185.84, "start": 1182.46, "text": " dimensional space that is linear but we we can't because these spaces are too" }, { "end": 1191.12, "start": 1185.84, "text": " high dimensional and therefore we find an equivalent kernel function and it's" }, { "end": 1194.6399999999999, "start": 1191.12, "text": " usually said well we've used an RBF kernel that corresponds to an infinite" }, { "end": 1198.8799999999999, "start": 1194.6399999999999, "text": " dimensional space and so on that's pretty cool here it's the reverse here" }, { "end": 1205.4399999999998, "start": 1198.8799999999999, "text": " it's we want to do we want the linear function we and the equivalent of the" }, { "end": 1210.9199999999998, "start": 1205.4399999999998, "text": " softmax function is an infinite dimensional function which we can't do" }, { "end": 1217.64, "start": 1210.92, "text": " right we can't feasibly compute an infinite dimensional space explicitly so" }, { "end": 1224.0800000000002, "start": 1217.64, "text": " it's not possible to just do the equivalent thing than in a transformer" }, { "end": 1230.1200000000001, "start": 1224.0800000000002, "text": " however you can still do something else you can still use polynomial kernels you" }, { "end": 1234.68, "start": 1230.1200000000001, "text": " can still use any kind of kernels that have corresponding functions that map to" }, { "end": 1240.8000000000002, "start": 1234.68, "text": " a finite dimensional space and that's what this paper does so here they say" }, { "end": 1246.48, "start": 1240.8, "text": " if we have such a function that maps these things into a higher dimensional" }, { "end": 1253.04, "start": 1246.48, "text": " space such that their inner product such that the similarity function in 
this" }, { "end": 1257.6, "start": 1253.04, "text": " higher dimensional space is an inner product then we can write it as just" }, { "end": 1261.36, "start": 1257.6, "text": " this inner product right here explicitly and then because of the" }, { "end": 1266.32, "start": 1261.36, "text": " associativity you can see that here is an eye and here there is no eye so we" }, { "end": 1272.12, "start": 1266.32, "text": " can just sort of pull this out of this sum and as well right here it doesn't" }, { "end": 1277.96, "start": 1272.12, "text": " don't don't cross this away these are vectors right but you can see especially" }, { "end": 1284.2, "start": 1277.96, "text": " here you can see pretty clear why is there a cursor stop you can see that" }, { "end": 1290.3799999999999, "start": 1284.2, "text": " this here you have to pay attention to the matrix dimension so if we use like" }, { "end": 1301.48, "start": 1290.38, "text": " bra-cat notation this is like this like this like this and like this okay so" }, { "end": 1308.24, "start": 1301.48, "text": " here on the bottom you see that there is an inner product so each output will be" }, { "end": 1314.2, "start": 1308.24, "text": " normalized by this inner product right here however the top is going to be a" }, { "end": 1320.6000000000001, "start": 1314.2, "text": " vector we know that you know each output is a vector so the top will aggregate" }, { "end": 1326.94, "start": 1320.6000000000001, "text": " these vectors right here according to this routing okay but if we write it" }, { "end": 1331, "start": 1326.94, "text": " like this you can see that what we could technically do is we could technically" }, { "end": 1335.92, "start": 1331, "text": " compute this part here once because it doesn't contain any eye so there is no eye" }, { "end": 1342.46, "start": 1335.92, "text": " in that part so we could just compute it once because we have these two these two" }, { "end": 1350.6000000000001, "start": 1342.46, "text": " layers of the attention mechanism and these K and V they just refer to this" }, { "end": 1355.76, "start": 1350.6000000000001, "text": " lower layer right here we could just compute that thing on the right ones" }, { "end": 1359.48, "start": 1355.76, "text": " that's going to give us a matrix as you can see right here from the dimensions" }, { "end": 1365.24, "start": 1359.48, "text": " and then we can simply take the product of that vector right here of the vector" }, { "end": 1369.44, "start": 1365.24, "text": " on the left with the matrix on the right and we'd be done it's one operation" }, { "end": 1376.04, "start": 1369.44, "text": " right instead of for each thing you know going and attending to each other and" }, { "end": 1381.6000000000001, "start": 1376.04, "text": " then do the softmax without the softmax we can all do this in a linear fashion" }, { "end": 1389.3600000000001, "start": 1381.6000000000001, "text": " so that makes it a lot easier in fact it makes the computation linear in so this" }, { "end": 1398, "start": 1389.3600000000001, "text": " is now O of n okay plus of course the the work that you have to do for mapping" }, { "end": 1402.08, "start": 1398, "text": " this into the higher dimensional space but this is also not quadratic this is" }, { "end": 1411, "start": 1402.08, "text": " done to each of these elements individually okay so this this is now as" }, { "end": 1415.6, "start": 1411, "text": " we said it's pretty easy you can calculate the matrix on the top you can" }, { "end": 1420.08, "start": 1415.6, 
"text": " actually also calculate this part right here this vector you can aggregate over" }, { "end": 1425.52, "start": 1420.08, "text": " the bottom and then if you go through the top it's simply a inner product with" }, { "end": 1431.76, "start": 1425.52, "text": " the vector of the queries and you're done and this is it in fact in matrix" }, { "end": 1439.32, "start": 1431.76, "text": " form you can simply write it down as one matrix multiplication seems pretty easy" }, { "end": 1447.68, "start": 1439.32, "text": " so the computational cost goes way down and they use the following function right" }, { "end": 1451.4, "start": 1447.68, "text": " here okay this is their map to the higher" }, { "end": 1457.68, "start": 1451.4, "text": " dimensional to the higher dimensional space so they say for our experiments" }, { "end": 1460.88, "start": 1457.68, "text": " that deal with smaller sequences we employ feature map that results in a" }, { "end": 1466.88, "start": 1460.88, "text": " positive similarity function as defined below so right here you have to pay" }, { "end": 1471.92, "start": 1466.88, "text": " attention you can't just pick any function but you can you can pick a lot" }, { "end": 1477.48, "start": 1471.92, "text": " of different functions where LU denotes the exponential linear unit activation" }, { "end": 1485.04, "start": 1477.48, "text": " function okay like this seems this seems fine they also say in our experimental" }, { "end": 1489.04, "start": 1485.04, "text": " section we show that the feature map of equation 7 performs on par with the full" }, { "end": 1493.72, "start": 1489.04, "text": " transformer while significantly reducing the computational memory requirements" }, { "end": 1499.44, "start": 1493.72, "text": " this you know it seems it seems like the the original transformer this choice of" }, { "end": 1503.24, "start": 1499.44, "text": " the softmax function even though it's you know powerful and can't be" }, { "end": 1507.84, "start": 1503.24, "text": " approximated with this trick right here it was also somewhat arbitrary I mean" }, { "end": 1513.48, "start": 1507.84, "text": " there is a reasoning behind it but it's also somewhat like meh and it's entirely" }, { "end": 1521.6, "start": 1513.48, "text": " possible right that that this here is way faster so I want to jump this causal" }, { "end": 1527.08, "start": 1521.6, "text": " masking thing for now and look at the results where you can see they verify" }, { "end": 1534.24, "start": 1527.08, "text": " the fact that in terms of time in terms of GPU memory if they apply their" }, { "end": 1539.12, "start": 1534.24, "text": " transformer and here on the x-axis you see sequence length and you can see" }, { "end": 1544.76, "start": 1539.12, "text": " that the this is log plot right these are log plots you can see that the" }, { "end": 1551.24, "start": 1544.76, "text": " original transformer right here has a way steeper slope than their transformer" }, { "end": 1557.84, "start": 1551.24, "text": " which is the black line right here the blue lines are the reformers which we've" }, { "end": 1562.08, "start": 1557.84, "text": " also I've also done a video on reformer if you want to check that out that is" }, { "end": 1567.52, "start": 1562.08, "text": " also a trick that uses locality sensitive hashing to get rid of the" }, { "end": 1574.32, "start": 1567.52, "text": " quadratic attention mechanism now the locality sensitive hashing also means" }, { "end": 1580.52, "start": 1574.32, "text": " that you kind of lose 
some accuracy so that's the trade-off right here but you" }, { "end": 1584.9, "start": 1580.52, "text": " can see that is also linear actually it's n log n depending on the sequence" }, { "end": 1591.6399999999999, "start": 1584.9, "text": " length but the log n is negligible so you see GPU memory and time way down and" }, { "end": 1596.56, "start": 1591.6399999999999, "text": " in terms of experiments it does perform on par it seems like it has different" }, { "end": 1600.8, "start": 1596.56, "text": " optimization trajectory but they show that you know there is this trade-off" }, { "end": 1605.72, "start": 1600.8, "text": " for the reformer where you lose inaccuracy they do not experience that" }, { "end": 1610.6000000000001, "start": 1605.72, "text": " trade-off in the linear transformer compared to the original transformer in" }, { "end": 1618.8, "start": 1610.6000000000001, "text": " their particular experiments now they do their experiments sort of show that they" }, { "end": 1624.2, "start": 1618.8, "text": " are not on par with the original transfer like they are on par in some of" }, { "end": 1629.72, "start": 1624.2, "text": " the tasks but also in some of the tasks they are not on par for example this" }, { "end": 1635.72, "start": 1629.72, "text": " speech data set right here where they do fairly well they actually beat the" }, { "end": 1641.44, "start": 1635.72, "text": " bi-lstm baseline and the reformer but they do not beat the softmax transformer" }, { "end": 1646.32, "start": 1641.44, "text": " so there there's it is still the case that the softmax transformer is more" }, { "end": 1652.16, "start": 1646.32, "text": " powerful than the thing here and will give some intuition very shortly on that" }, { "end": 1659.68, "start": 1652.16, "text": " but the linear transformer is way faster here it's three times faster and up" }, { "end": 1667.48, "start": 1659.68, "text": " here it is 300 times faster and on mnist and if you go and see for 10 is 4000" }, { "end": 1672.48, "start": 1667.48, "text": " times faster simply by property of the longer either sequences are that you" }, { "end": 1677.88, "start": 1672.48, "text": " input the much more matters the fact that the softmax transformer has a" }, { "end": 1684.28, "start": 1677.88, "text": " quadratic runtime whereas the linear transformer has a linear runtime and I" }, { "end": 1690.3999999999999, "start": 1684.28, "text": " was also surprised here to see that the reformer wasn't that much faster that's" }, { "end": 1694.72, "start": 1690.3999999999999, "text": " probably due to the fact that it already has like a big overhead in these hashing" }, { "end": 1700.92, "start": 1694.72, "text": " rounds and so on that probably is is hurting it at sort of a constant level I" }, { "end": 1705.08, "start": 1700.92, "text": " guess if you were to up the sequence length even more than the reformer would" }, { "end": 1712.84, "start": 1705.08, "text": " also improve a lot more over the softmax transformer ok so what's what's" }, { "end": 1718.08, "start": 1712.84, "text": " happening here what's happening with these with this attention and why is it" }, { "end": 1722.6799999999998, "start": 1718.08, "text": " different what does it makes it different from the old attention now I" }, { "end": 1729.1599999999999, "start": 1722.6799999999998, "text": " wanna I wanna sort of connect this to the kind of old and old the olden" }, { "end": 1735.6799999999998, "start": 1729.1599999999999, "text": " literature of topic modeling so if you think 
of the of this transformer of this" }, { "end": 1740.8799999999999, "start": 1735.6799999999998, "text": " attention mechanism what you'll have is a dynamic routing of information right" }, { "end": 1748.1200000000001, "start": 1740.88, "text": " so for each from each output token you get to look at all the input tokens if" }, { "end": 1752.7600000000002, "start": 1748.1200000000001, "text": " we for example select this one you get to look and you get to decide for each" }, { "end": 1757.64, "start": 1752.7600000000002, "text": " one how do I want to aggregate my information ok and this is what makes" }, { "end": 1761.0400000000002, "start": 1757.64, "text": " this quadratic from each of the output tokens you get to look at all of the" }, { "end": 1767.24, "start": 1761.0400000000002, "text": " input tokens and decide how you want to do that and that is can be very long" }, { "end": 1773.76, "start": 1767.24, "text": " nonlinear in terms of when we use the softmax and so on so that what makes it" }, { "end": 1778.28, "start": 1773.76, "text": " expensive what this thing is doing is the following it takes all the keys" }, { "end": 1783.56, "start": 1778.28, "text": " right here so here we have all the keys and it's going to map them through this" }, { "end": 1788.6, "start": 1783.56, "text": " five function right each key is going to map through the five function and each" }, { "end": 1794.24, "start": 1788.6, "text": " query is also going to be mapped through the five function into these higher" }, { "end": 1798.92, "start": 1794.24, "text": " dimensional spaces and then an inner product is performed between the two and" }, { "end": 1804.84, "start": 1798.92, "text": " that decides the routing this is very similar to like topic models where if" }, { "end": 1813.04, "start": 1804.84, "text": " you interpret this this right here can be a mapping of my dimension of these" }, { "end": 1817.16, "start": 1813.04, "text": " keys and queries to the topics so essentially what's happening right here" }, { "end": 1823.32, "start": 1817.16, "text": " is for each of the input tokens sorry input tokens here output tokens here" }, { "end": 1830.32, "start": 1823.32, "text": " the dimension of this map defines is how many topics there are so in you know in" }, { "end": 1835.56, "start": 1830.32, "text": " these topics modeling you would have things like I want to I have news" }, { "end": 1841.56, "start": 1835.56, "text": " articles or words and then I define like a set of topics and I'm going to assign" }, { "end": 1849.12, "start": 1841.56, "text": " each word to a topic and then I'm going to assign each news article to a topic" }, { "end": 1854.6799999999998, "start": 1849.12, "text": " and so on and then you kind of do this dimension reduction but this can be done" }, { "end": 1859.32, "start": 1854.6799999999998, "text": " in many ways so let's say this is a mapping to three dimensions what this" }, { "end": 1865.9599999999998, "start": 1859.32, "text": " does is essentially this five function decides how you're going to map each of" }, { "end": 1872.2399999999998, "start": 1865.9599999999998, "text": " these inputs into these three topics so you can say all this token goes here and" }, { "end": 1879.24, "start": 1872.24, "text": " here this one goes here and that bit here this one goes here and so on so" }, { "end": 1885.6, "start": 1879.24, "text": " again this is a this is a mapping into a well in this case a lower dimensional" }, { "end": 1891.48, "start": 1885.6, "text": " space and then this 
function decides how you're going to aggregate these topics" }, { "end": 1898.8, "start": 1891.48, "text": " over across here and since this is you know this is now a linear multiplication" }, { "end": 1902.76, "start": 1898.8, "text": " between the two things so these two are going to be your matrices this here is" }, { "end": 1910, "start": 1902.76, "text": " going to be your phi of K and this here is going to be your phi of Q so you can" }, { "end": 1914.72, "start": 1910, "text": " see the difference here between the old attention mechanism and the new attention" }, { "end": 1919.28, "start": 1914.72, "text": " mechanism right the old attention mechanism each token was directly able" }, { "end": 1924.2, "start": 1919.28, "text": " to look at all the input tokens and decide how to aggregate the information" }, { "end": 1930, "start": 1924.2, "text": " and here it's sort of we have this in between is in between representation in" }, { "end": 1935.6000000000001, "start": 1930, "text": " this higher dimensional space and we can aggregate in only a we can distribute in" }, { "end": 1940.52, "start": 1935.6000000000001, "text": " a linear fashion and we can aggregate in a linear fashion in and from this" }, { "end": 1949, "start": 1940.52, "text": " higher dimensional space that's sort of how how I sort of how I imagine that" }, { "end": 1954.56, "start": 1949, "text": " that okay so you get to distribute each token right here into these topics and" }, { "end": 1960.08, "start": 1954.56, "text": " then the the outputs they they don't see the inputs anymore right you see that in" }, { "end": 1968.6, "start": 1960.08, "text": " the formulation there is a sum over j so right here there is this sum over j and" }, { "end": 1974.28, "start": 1968.6, "text": " that means that the outputs here they don't see the different inputs as" }, { "end": 1979.48, "start": 1974.28, "text": " different inputs they only see the inputs through the map of the phi function so" }, { "end": 1983.72, "start": 1979.48, "text": " they can only see the individual dimensions of that phi function they" }, { "end": 1990.08, "start": 1983.72, "text": " cannot see the outputs anymore and therefore yeah therefore you don't have" }, { "end": 1998, "start": 1990.08, "text": " the dependence on the big quadratic dependence on this on this n okay however" }, { "end": 2003.36, "start": 1998, "text": " you do have a co of course now a dependence on this the dimension of the" }, { "end": 2007.4399999999998, "start": 2003.36, "text": " intermediate representation and they also they say this right this is you" }, { "end": 2016.62, "start": 2007.4399999999998, "text": " know reasonable yeah they do derive the gradients here to save even more memory" }, { "end": 2023.32, "start": 2016.62, "text": " so you don't have to such that you don't have to let's say store of all of these" }, { "end": 2026.9199999999998, "start": 2023.32, "text": " activations that's pretty cool as well and they implemented in CUDA there is" }, { "end": 2031.1999999999998, "start": 2026.9199999999998, "text": " code available for the linear transformer all of this pretty pretty" }, { "end": 2039.8, "start": 2031.2, "text": " cool okay so the last thing they say they make the connections to RNNs now" }, { "end": 2046.4, "start": 2039.8, "text": " this is a bit detached from the linear transformer but because they formulated" }, { "end": 2051.52, "start": 2046.4, "text": " how they do they can make this connection so this now this now is valid" }, { "end": 2057.48, 
"start": 2051.52, "text": " for all transformers what they say right here but keep in mind it is valid for" }, { "end": 2063.2, "start": 2057.48, "text": " the original transformers in practice if you can make this mapping Phi to map to" }, { "end": 2069.68, "start": 2063.2, "text": " infinite dimensions which you can't but the analysis is equivalent so they say" }, { "end": 2077.04, "start": 2069.68, "text": " look if we write the attention mechanism like this and therefore like this what" }, { "end": 2081.84, "start": 2077.04, "text": " we can do is we can define these two quantities right s and z this is what we" }, { "end": 2087.88, "start": 2081.84, "text": " said before we can actually pre compute these quantities right here okay so that" }, { "end": 2095.92, "start": 2087.88, "text": " reduces to this right here if now we are looking at a autoregressive transformer" }, { "end": 2099.04, "start": 2095.92, "text": " and we said before what an autoregressive transformer was an" }, { "end": 2102.8, "start": 2099.04, "text": " autoregressive transformers you have a piece of sequence and you are tasked to" }, { "end": 2109.8, "start": 2102.8, "text": " predict this next thing right here now usually if you want to train this using" }, { "end": 2116.88, "start": 2109.8, "text": " an RNN you have to you know run your RNN input this hidden state and input that" }, { "end": 2122.28, "start": 2116.88, "text": " map forward the hidden state so you have to do all of this forward propagation in" }, { "end": 2126.5600000000004, "start": 2122.28, "text": " order to derive at this hidden at this output right here make the output and" }, { "end": 2133.1200000000003, "start": 2126.5600000000004, "text": " then you need to back prop through time right here there is no way to what you" }, { "end": 2137.1200000000003, "start": 2133.1200000000003, "text": " would like to do is you would like to say here I have a sentence I can" }, { "end": 2141.8399999999997, "start": 2137.12, "text": " actually make like five different training examples from that sentence so" }, { "end": 2147.08, "start": 2141.8399999999997, "text": " the first one is the one you just saw I just block off the last word but I can" }, { "end": 2153.88, "start": 2147.08, "text": " also make that training example right here right to when I just cut a second" }, { "end": 2157.3199999999997, "start": 2153.88, "text": " to last word and so on I can actually make all of these different training" }, { "end": 2161.72, "start": 2157.3199999999997, "text": " examples for language modeling from a single sentence and what I would like to" }, { "end": 2166.7999999999997, "start": 2161.72, "text": " do is I would like to train them all in parallel right I load my data point once" }, { "end": 2171.46, "start": 2166.8, "text": " I already have it why can't I just train everything at the same time like" }, { "end": 2176.92, "start": 2171.46, "text": " predict this from this word now predict also this from these two words and the" }, { "end": 2185.1600000000003, "start": 2176.92, "text": " transformers are you know very well conditioned they are very good at this" }, { "end": 2192.76, "start": 2185.1600000000003, "text": " basically so what a transformer can do is if you input a sequence like sorry" }, { "end": 2199.2000000000003, "start": 2192.76, "text": " like the thing at the bottom you can calculate the training signal for all of" }, { "end": 2205, "start": 2199.2000000000003, "text": " these different things at the same time and okay this was maybe a 
mistake you" }, { "end": 2209.88, "start": 2205, "text": " can calculate the training signal for all of this at the same time by using" }, { "end": 2215.5200000000004, "start": 2209.88, "text": " what's called causal masking in attention so if I have my attention" }, { "end": 2220.48, "start": 2215.5200000000004, "text": " mechanism right here let's consider it again and let's consider these two" }, { "end": 2224.64, "start": 2220.48, "text": " layers if I have my attention mechanism what I want to do is I want to" }, { "end": 2229.48, "start": 2224.64, "text": " constrain each token to only attend to tokens that came before it in the" }, { "end": 2234.08, "start": 2229.48, "text": " sequence so for example this token right here I'm going to constrain it to only" }, { "end": 2244.2, "start": 2234.08, "text": " attend to itself and the past because it will it will predict the next token in" }, { "end": 2248.28, "start": 2244.2, "text": " the sequence and it would be it would be really easy if we could attend to the" }, { "end": 2253.32, "start": 2248.28, "text": " input of that token right it could simply remember what that token is" }, { "end": 2259.28, "start": 2253.32, "text": " and then aggregate that here and then predict that so if for each token we" }, { "end": 2265, "start": 2259.28, "text": " restrict the attention to the tokens that came before it like also for this" }, { "end": 2270.6400000000003, "start": 2265, "text": " right here we restrict the attention only to go backwards then we can train" }, { "end": 2274.5600000000004, "start": 2270.6400000000003, "text": " all of this in parallel this is called causal masking it's usually implemented" }, { "end": 2280.44, "start": 2274.56, "text": " with like a mask that is like an upper diagonal and it's a bit unclear if you" }, { "end": 2285.04, "start": 2280.44, "text": " can attend to yours to yourself because then I guess this will become the output" }, { "end": 2289.56, "start": 2285.04, "text": " and you can only attend to this I don't know exactly how it is implemented but" }, { "end": 2296.2799999999997, "start": 2289.56, "text": " there it is usually realized with an upper triangular matrix as a mask and" }, { "end": 2305.6800000000003, "start": 2296.28, "text": " you apply this mask to each layer now they say that this is actually like an" }, { "end": 2310.44, "start": 2305.6800000000003, "text": " or an N and with their formulation you can make this pretty explicit namely" }, { "end": 2317.7200000000003, "start": 2310.44, "text": " you have these two states s and a Z and in each sequence element it's actually" }, { "end": 2324.4, "start": 2317.7200000000003, "text": " like an or an N where you update the s and the Z with these quantities right" }, { "end": 2330.6800000000003, "start": 2324.4, "text": " here and so it's like an or an N where these are the hidden states that you" }, { "end": 2336.7200000000003, "start": 2330.6800000000003, "text": " pass forward right and then you can formulate any transformer as an or an N" }, { "end": 2342.84, "start": 2336.7200000000003, "text": " that simply updates these two states but you see you need the explicit mapping of" }, { "end": 2349.4, "start": 2342.84, "text": " these of this kernel function you need this explicit mapping in order to be" }, { "end": 2353.4, "start": 2349.4, "text": " able to do this because otherwise this is here this is not going to be a" }, { "end": 2359.1600000000003, "start": 2353.4, "text": " linear addition it is going to be complicated you can't do it 
by simply" }, { "end": 2364, "start": 2359.1600000000003, "text": " remembering the past state so you need that formulation in order to be able to" }, { "end": 2369.3, "start": 2364, "text": " express it as an RNN but their analysis shows that this a transformer" }, { "end": 2375.12, "start": 2369.3, "text": " autoregressive is essentially an RNN and you can you can so you can make a" }, { "end": 2381.7200000000003, "start": 2375.12, "text": " connection in that and you can actually formulate this as an RNN which means" }, { "end": 2387.12, "start": 2381.72, "text": " that you can train in the transformer fashion everything at the same time but" }, { "end": 2392.9599999999996, "start": 2387.12, "text": " what is cool about an RNN an RNN at inference time an RNN once it has" }, { "end": 2399.3599999999997, "start": 2392.9599999999996, "text": " produced you know this word it can then because if you produce autoregressively" }, { "end": 2404.64, "start": 2399.3599999999997, "text": " you simply say hey I have this beginning of my news article please finish it so" }, { "end": 2409.72, "start": 2404.64, "text": " the model must output the next word and then from that sequence it must output" }, { "end": 2413.52, "start": 2409.72, "text": " the next word the next word and then from that the next word and so on and" }, { "end": 2418.12, "start": 2413.52, "text": " RNN because of the nature of simply passing forward hidden states at" }, { "end": 2422.12, "start": 2418.12, "text": " inference time can simply you know remember what the hidden states were" }, { "end": 2427.72, "start": 2422.12, "text": " input those again input the output here and go on so it's pretty fast at" }, { "end": 2434.16, "start": 2427.72, "text": " inference time which a transformer isn't with their formulation now if they have" }, { "end": 2439.8799999999997, "start": 2434.16, "text": " the explicit function Phi they can use this at inference time to be so much" }, { "end": 2444.24, "start": 2439.8799999999997, "text": " faster in fact on their website which I'll link of course in the in the" }, { "end": 2449.08, "start": 2444.24, "text": " description you can play with image generation using one of these" }, { "end": 2454.8399999999997, "start": 2449.08, "text": " transformers in your browser so you can simply start a transformer run in your" }, { "end": 2462.12, "start": 2454.8399999999997, "text": " browser that's how easy this becomes so you can see the linear transformer with" }, { "end": 2467.7599999999998, "start": 2462.12, "text": " causal masking you'll simply update these states right here and then pass" }, { "end": 2474.52, "start": 2467.7599999999998, "text": " those forward so easy and the backward pass as we said I don't want to go into" }, { "end": 2479, "start": 2474.52, "text": " the gradient calculation but they derive the gradient such that you don't have to" }, { "end": 2486.08, "start": 2479, "text": " remember these hidden states and it becomes or it is linear in or it saves" }, { "end": 2493.48, "start": 2486.08, "text": " a lot of more memory than before okay note so this is the last comment from my" }, { "end": 2501.36, "start": 2493.48, "text": " side note that this this causal masking transformers they are they are a bit of" }, { "end": 2509.92, "start": 2501.36, "text": " a hack in transformers and because so ultimately let's say let's say I have" }, { "end": 2517.36, "start": 2509.92, "text": " this sequence right here this is given and I want to predict this word right" }, { "end": 2524.44, 
"start": 2517.36, "text": " here what and okay let's make it here so I need multiple layers for this so I" }, { "end": 2530.44, "start": 2524.44, "text": " want to predict that next word and I have multiple layers right so I want to" }, { "end": 2535.4, "start": 2530.44, "text": " predict this from from the outputs right here let's say there is an output node" }, { "end": 2541.88, "start": 2535.4, "text": " right here I want to predict that particular word it's true that I should" }, { "end": 2547, "start": 2541.88, "text": " only be able to aggregate information from whatever was you know on the back" }, { "end": 2552.7200000000003, "start": 2547, "text": " right here but technically in a transformer it would be completely valid" }, { "end": 2558.96, "start": 2552.7200000000003, "text": " to say that this node right here which is let's say that's an article and it" }, { "end": 2564.6, "start": 2558.96, "text": " followed by a noun right would be able to attend to that one and then that one" }, { "end": 2570.04, "start": 2564.6, "text": " would be able to attend to that one and or sorry the output right here would be" }, { "end": 2574.64, "start": 2570.04, "text": " able to attend to that one this would not violate the autoregressive property" }, { "end": 2580.4, "start": 2574.64, "text": " right you can but you can see that in the intermediate layer this node right" }, { "end": 2586.6, "start": 2580.4, "text": " here is attending to a forward node now if you do things like this you can't do" }, { "end": 2592.8399999999997, "start": 2586.6, "text": " this trick anymore where you train everything at once because if if this" }, { "end": 2598.76, "start": 2592.84, "text": " connection exists that also means that if in this other training sample where" }, { "end": 2603.7200000000003, "start": 2598.76, "text": " this is the word to be predicted then this node could aggregate information" }, { "end": 2609.6400000000003, "start": 2603.7200000000003, "text": " from that node and basically cheat but the the technical autoregressive" }, { "end": 2615.76, "start": 2609.6400000000003, "text": " property is not violated by this connection right here and you only get" }, { "end": 2621.1600000000003, "start": 2615.76, "text": " this RNN formulation if you do not have these connections right so the this this" }, { "end": 2625.12, "start": 2621.16, "text": " hack to make the autoregressive transformers train in parallel is" }, { "end": 2630.72, "start": 2625.12, "text": " actually making the transformer formulation much weaker and therefore" }, { "end": 2637.08, "start": 2630.72, "text": " that's then equivalent to an RNN okay I it's not that transformers in general" }, { "end": 2642.04, "start": 2637.08, "text": " are equivalent to an RNN or at least this paper doesn't show that it's just" }, { "end": 2647.44, "start": 2642.04, "text": " that this hacked transformers are and I think that's an important distinction to" }, { "end": 2652.7200000000003, "start": 2647.44, "text": " make here rather than saying transformers are RNNs if we could only" }, { "end": 2656.8, "start": 2652.7200000000003, "text": " approximate the softmax in these infinite dimensions I don't think that's" }, { "end": 2661.44, "start": 2656.8, "text": " entirely true but it is true for the transformers the autoregressive" }, { "end": 2669.08, "start": 2661.44, "text": " transformers that we currently train now why is this connection so powerful it" }, { "end": 2676.1, "start": 2669.08, "text": " allows a token to attend to you know 
tokens forward of it and what does it" }, { "end": 2680.7999999999997, "start": 2676.1, "text": " mean to be able to attend like let's say it's really important that this token" }, { "end": 2687.48, "start": 2680.7999999999997, "text": " right here attends to that token right here what would you need to do if you" }, { "end": 2693.3199999999997, "start": 2687.48, "text": " couldn't do that if you let's let's let's say this is a program right so this" }, { "end": 2699.38, "start": 2693.3199999999997, "text": " token is the function F and it needs the input this argument a of whatever token" }, { "end": 2705.7599999999998, "start": 2699.38, "text": " comes in front of it and it needs to do something conditioned on a so if a" }, { "end": 2713.0400000000004, "start": 2705.76, "text": " if a is one it does something if a is two it does something else right if you" }, { "end": 2719.0800000000004, "start": 2713.0400000000004, "text": " if you don't have if you can't input a then you can't simply pass on the output" }, { "end": 2723.88, "start": 2719.0800000000004, "text": " value what you'll have to do is conceptually is basically you'll have to" }, { "end": 2729, "start": 2723.88, "text": " store the entire code of the function into hidden state if this is an RNN" }, { "end": 2734.2200000000003, "start": 2729, "text": " right you can't look forward it needs to store the entire code of this function" }, { "end": 2739.72, "start": 2734.22, "text": " F so all it needs to basically build this map if a is one then this if a is" }, { "end": 2743.7999999999997, "start": 2739.72, "text": " two then this if a is three then this store that in the hidden state and then" }, { "end": 2748.04, "start": 2743.7999999999997, "text": " once a comes around in the next time step this can be resolved you can see" }, { "end": 2752.2, "start": 2748.04, "text": " that this is infinitely more complicated than simply looking forward and outputting" }, { "end": 2759.6, "start": 2752.2, "text": " the value yourself so that's sort of the difference in power that these two" }, { "end": 2766.44, "start": 2759.6, "text": " formulations are talking about okay so yeah two parts to this paper first part" }, { "end": 2771.8399999999997, "start": 2766.44, "text": " linear transformer through kernels second part if you formulate it like" }, { "end": 2776.44, "start": 2771.8399999999997, "text": " this it is equivalent and so a autoregressive transformer in this way" }, { "end": 2781.64, "start": 2776.44, "text": " becomes equivalent to an RNN and here is some of the output samples you know" }, { "end": 2786.14, "start": 2781.64, "text": " they're they're pretty pretty good though if you look at the more output" }, { "end": 2791.7599999999998, "start": 2786.14, "text": " samples they have here it so here this this is the linear one and you can see" }, { "end": 2798.14, "start": 2791.7599999999998, "text": " for example as already in this very bottom one this one right here it's the" }, { "end": 2804.14, "start": 2798.14, "text": " kind of all the other transformers get the slant to the right and that the the" }, { "end": 2808.96, "start": 2804.14, "text": " original has whereas this one is simply straight I mean I don't want it I don't" }, { "end": 2812.6, "start": 2808.96, "text": " want to dunk on this like these others make a lot of mistake mistakes right" }, { "end": 2816.88, "start": 2812.6, "text": " here but here I also saw you know all of them get that this is going to be the" }, { "end": 2823.68, "start": 2816.88, "text": " 
number three while this one is somehow making this circle in here so it is not" }, { "end": 2830.5, "start": 2823.68, "text": " perfect and even though it's on par where in the tasks they see you can see" }, { "end": 2834.72, "start": 2830.5, "text": " right here that especially in this speech recognition the original" }, { "end": 2842.2599999999998, "start": 2834.72, "text": " transformer right here is significantly outperforming the linear" }, { "end": 2847, "start": 2842.26, "text": " transformer which is the one in black right here in fact in all of the tasks" }, { "end": 2851.2400000000002, "start": 2847, "text": " but ultimately it might not matter because they reach you know the same" }, { "end": 2857.88, "start": 2851.2400000000002, "text": " they reach the same they reach the same accuracy or whatnot and the linear" }, { "end": 2864.6800000000003, "start": 2857.88, "text": " transformer is way way faster so I can see that this is going to be a thing" }, { "end": 2869.0400000000004, "start": 2864.6800000000003, "text": " that people apply I guess time will tell right I invite you to read the paper" }, { "end": 2873.72, "start": 2869.04, "text": " tell me what you think I might be totally wrong here with any of my" }, { "end": 2880.04, "start": 2873.72, "text": " formulations or my intuition about what this new attention mechanism does yeah" }, { "end": 2899.84, "start": 2880.04, "text": " please let me know and I'll see you next time bye bye" } ]
O9kFX33nUcU
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
On the Measure of Intelligence by François Chollet - Part 4: The ARC Challenge (Paper Explained)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "chollet", "keras", "google", "francois", "intelligence", "iq", "iq test", "deep neural networks", "prior", "skill", "performance", "measurement", "measure", "test", "number", "intelligent", "smart", "learning", "generalization", "ability", "experience", "humans", "evolution", "nature", "nurture", "psychometrics", "range", "adaptability", "arc", "kaggle", "difficulty", "entropy", "core knowledge", "objectness", "navigation", "contact", "agent", "goal" ]
In this part, we look at the ARC challenge as a proposed test of machine intelligence. The dataset features 1000 tasks that test rapid generalization based on human core knowledge priors, such as object-ness, symmetry, and navigation. OUTLINE: 0:00 - Intro 0:55 - What is ARC? 6:30 - The Goals of ARC 10:40 - Assumed Priors & Examples 21:50 - An Imagined Solution 28:15 - Consequences of a Solution 31:00 - Weaknesses 31:25 - My Comments & Ideas Paper: https://arxiv.org/abs/1911.01547 ARC: https://github.com/fchollet/ARC Abstract: To make deliberate progress towards more intelligent and more human-like artificial systems, we need to be following an appropriate feedback signal: we need to be able to define and evaluate intelligence in a way that enables comparisons between two systems, as well as comparisons with humans. Over the past hundred years, there has been an abundance of attempts to define and measure intelligence, across both the fields of psychology and AI. We summarize and critically assess these definitions and evaluation approaches, while making apparent the two historical conceptions of intelligence that have implicitly guided them. We note that in practice, the contemporary AI community still gravitates towards benchmarking intelligence by comparing the skill exhibited by AIs and humans at specific tasks such as board games and video games. We argue that solely measuring skill at any given task falls short of measuring intelligence, because skill is heavily modulated by prior knowledge and experience: unlimited priors or unlimited training data allow experimenters to "buy" arbitrary levels of skills for a system, in a way that masks the system's own generalization power. We then articulate a new formal definition of intelligence based on Algorithmic Information Theory, describing intelligence as skill-acquisition efficiency and highlighting the concepts of scope, generalization difficulty, priors, and experience. Using this definition, we propose a set of guidelines for what a general AI benchmark should look like. Finally, we present a benchmark closely following these guidelines, the Abstraction and Reasoning Corpus (ARC), built upon an explicit set of priors designed to be as close as possible to innate human priors. We argue that ARC can be used to measure a human-like form of general fluid intelligence and that it enables fair general intelligence comparisons between AI systems and humans. Authors: François Chollet Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://discord.gg/4H8xxDF BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Hi there and welcome to the last part of On the Measure of Intelligence by François Chollet. This last part concerns the ARC challenge that Chollet has proposed, or the ARC dataset, which stands for the Abstraction and Reasoning Corpus. We're just quickly going over the dataset, looking at how it's built, and discussing what kinds of solutions might be relevant here. So if you haven't seen the previous videos in this series (this is the last one of a series), you might not know exactly what's going on, but I think you can keep up pretty well, because this part is fairly independent of the other parts. It's just cool to think about even if you haven't seen the other ones. I encourage you to go see them, but it's not necessary. Okay, let's jump in. So ARC is a challenge, currently running as a Kaggle challenge, but in essence it is a dataset. Let me just jump into one of the tasks of the dataset. In this dataset, you always get a task in the following form: you have multiple input examples like this, called the training examples, and then you have a test example. In this case, there are three training examples and one test example. If you think of this in a machine learning way, this entire thing here is your x and this thing here is your y. So the label is the output of the last example, which you don't know; in the training dataset you do know it, but in the test set you don't. Each of these, as I said, is demonstrated; these are the demonstration examples. You're supposed to learn the regularity from the demonstration examples, and then on the test example you are supposed to apply the regularity you learned. So here, a human can fairly accurately see that there are these black squares in each image, and that in the training samples the output always exactly matches into the place of these black squares. As you can see, this is a tall rectangle: it goes here, it has the same number of tiles, and so on. You can also see that whatever colors are in here are the continuation of a symmetric pattern. So this here is exactly the same as up here, but flipped, turned by 180 degrees; there is a notion of symmetry right here. So technically one could compute this: one would say, oh, that's probably going to be these three rows and this bunch of things, and it's probably going to be the same as this part down here, just flipped on its head. As a human, you get this even without a description. You realize: oh, this is a regular pattern, it's symmetric, there's a hole in it, and apparently the thing here always fills the hole. Three examples are enough for me to confirm that that's what's going on; I see the hole here, so I'm going to do the same thing. So you can already see how these things are constructed. This is not the only task, by the way, this is just one task; there are 1000 tasks of this sort of nature in the dataset. Now, there aren't always three demonstration examples; I believe there can be more or fewer. But what's always the case is that each task consists of these demonstration examples and a test example, and each of the demonstration examples consists of an input grid and an output grid. The grids can be anywhere from one by one to 30 by 30, anywhere in between that.
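For concreteness: in the public ARC repository, each task ships as a JSON file with "train" and "test" lists of input/output pairs, where a grid is simply a list of rows of integers from 0 to 9. Here is a minimal Python sketch of loading and inspecting one task; the file path is a placeholder, and the field names reflect my reading of the public repo rather than anything spelled out in this video.

```python
import json

def load_task(path):
    """Load one ARC task; grids are lists of lists of ints in 0..9."""
    with open(path) as f:
        task = json.load(f)
    return task["train"], task["test"]  # demonstration pairs, test pairs

# hypothetical usage with a placeholder task file
demos, tests = load_task("data/training/some_task.json")
for pair in demos:
    print(f"demo input {len(pair['input'])}x{len(pair['input'][0])}"
          f" -> output {len(pair['output'])}x{len(pair['output'][0])}")
```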
And the colors: there are ten different colors that these cells can have, encoded by the numbers zero through nine. You can see black, blue, orange, red, dark blue, and so on. The output grid is exactly the same kind of thing. Now, in the test example you can only see the input grid, not the output grid, and that means you don't even know how large the output should be. You can see right here that the output grids are not all the same size; in fact, not even the input grids always have the same size. But you now have to come up with an output grid. First you have to decide how big it is. Here we've determined that since the hole has three rows, we're probably going to make three rows, and since it has, like, seven columns, we're probably going to make seven columns. That's the sort of thing you have to do. And then, not only do you have to decide how big it is, you also have to decide, for each cell, what color to put in. Only if this thing exactly matches the test label do you get a point; otherwise you get no point. So in the training split there are, I believe, 400 of these tasks, and then there are 400 more as a test split, which are still public, and then there are 200 that are secret and are, I guess, part of this Kaggle challenge. Yes: the training set features 400 tasks, while the evaluation set features 600 tasks. The evaluation set is further split into a public evaluation set of 400 tasks and a private evaluation set of 200 tasks. All tasks are unique, and the set of test tasks and the set of training tasks are disjoint. The task data is available, as you can see right here. So I really hope that Chollet will keep these 200 tasks secret even after the Kaggle challenge, because it's going to be fun for people who might want to get into this later. So here are the goals of this dataset. They want to stay close to these psychometric intelligence tests. They say, in particular, it should be solvable by humans without any specific practice or training, and probably also without any language instructions. So you should just be able to sit a human in front of it, and the human, or at least a large portion of humans, should be able to solve it. Ideally this test would also differentiate humans from each other, but at this point they simply want to assess machines. Next: focus on measuring developer-aware generalization, rather than task-specific skill, by only featuring novel tasks in the evaluation set, where the novel tasks are unknown to the developer of a test taker. So if I develop a system, I don't know what these 200 tasks are that Chollet keeps hidden; I simply submit my code and find out whether it does well on them. They also want to feature highly abstract tasks that must be understood by a test taker using very few examples. That's what you saw: you don't have a big training set to learn that this task is about symmetry and hole filling, you only have three examples, and from three you need to recognize what's going on and produce the output of the test sample. Then: control for the amount of experience by only providing a fixed amount of training data for each task (that's what we saw), and only feature tasks that do not lend themselves well to artificially generating new data.
So it's not like ImageNet, where you can go on the internet and find a whole bunch of images, or some NLP task where people pre-train on all of Wikipedia and all of the books in the world because they want to understand language better. These tasks are supposed to be such that it makes no sense for you to go out and try to find more data, or find similar data, or pre-train your model on something. And then lastly, and this refers to the last few chapters we looked at: explicitly describe the complete set of priors that the test assumes, and enable a fair general intelligence comparison between humans and machines by only requiring priors close to innate human prior knowledge. That means that whatever humans have as priors built into them, whether by evolution or picked up through life by most humans, those are the things that have to be explicitly pointed out. They have to be explicitly described, such that I, as the developer of a system, can build them into my system, which makes it a fair comparison. In the last chapters we saw that an intelligence comparison is only fair if the two systems being compared have the same amount of experience, which here is controlled by providing a fixed amount of training data, and the same prior knowledge, which here is handled by listing the human priors that the tasks are thought to require, so that developers can explicitly build those into machines. So I could maybe build a little calculator module into my AI to help solve these tasks. Okay, so they say each task consists of a small number of demonstration examples, 3.3 on average, and a small number of test examples, generally one, although it might be two or three in rare cases. Each example consists of an input grid and an output grid. Each grid is a literal grid of symbols, and each symbol is visualized by a color. There are 10 unique symbols, and a grid can be any height or width between one by one and 30 by 30, so it doesn't even need to be square. And as I said, as an AI taking this test, you need to provide your own output grid. So here are the priors that this test assumes, and we're going to look at some tasks in the training set where you can see these priors in action. There's an object-ness prior, where the task tests that the AI understands something about objects. These are tasks that you can only reasonably solve if you know something about objects; a human would recognize that these things represent different objects. The black background helps, I think, but you would recognize this even with another background. Or here, the different colors indicate that those are two different things: even though those two pixels touch and both differ from black, you recognize them as two different things because they have different colors. You generally recognize each of these things as an individual object. Without being told anything, you see, for example, a denoising task, and as a human you can pretty quickly see what the task is about. There appear to be these green things, all rectangles, and there appear to be these blue things. And on the right side, there are no more blue things.
Now, it's not that wherever there was a blue pixel there is now a green one. Only where a blue pixel was inside a green thing is there now a green pixel; wherever there was a blue pixel outside, in the black area, there is now black. So it's as if the blue things were noise, and you're able to remove it. This already tests a lot of assumptions, a lot of these priors, a lot of understanding of the world. There are objects, right? A human understands that the objects here are squares, or rectangles; the human understands that we need to remove the blue things laid over them; and the human understands this inside/outside relation, because whether something is inside or outside one of these rectangles determines whether we turn the pixel green or black. Think about how you would train a machine to do something like this. It's not easy, especially if you don't know that this task is coming. And imagine that for all of these things you don't know the task is coming: this is just one of the 400 tasks that you know of, and there are 600 tasks that you don't know of, which are similar but also, in a way, completely different. Here's another task: object influence via contact. This is your first demonstration example. A human pretty quickly recognizes that there appears to be a red thing and a blue thing, and then they appear to be together. In the next example you see a blue thing and a red thing, and then they appear together again. And if you look here, it always appears to be the red thing going to the blue thing in the most direct way, along the grid. That's all a human needs to see: two examples, and most humans will already make that inference and can now solve it. If there is a test example where the blue thing is down here and the red thing is up here, and it asks you what comes next, you know that the red thing is going down to the blue thing. But it's very hard to train a machine to do this. So I like this test, because it's a different kind of test. And I believe these tests weren't procedurally generated; they were actually created by Chollet, by an actual human. That's pretty cool. And 1000 tasks like this are going to be very hard to solve. There are even more abstract priors, like goal-directedness. You can already see this a little bit here, in that you can say: well, the red thing wants to go to the blue thing, so there is perhaps a notion of time involved. There's also a counting-and-numbers prior. And here you see something like a time process: in this demonstration example you see blue things here and a big red thing, and the output grid is this green thing. As a human you immediately recognize: okay, the green thing shoots out from the blue thing, hits the red wall, and goes here. Try to make a machine understand this. This is insane, right? If you look at more examples, it appears that the blue thing always comes from somewhere like the side of the image, the green thing comes out of whatever is not at the border of the image, and it bounces off the red thing if it hits it. Now here you can already see what's going to happen; below is a small sketch of what hand-coding even this single rule would involve.
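To appreciate how much machinery hides in this one transformation, here is a rough Python sketch of a hand-coded "ray that reflects off red cells" rule. This is my own simplification, not code from the paper or the challenge: the color codes, the single starting direction, and the vertical-only reflection are all assumptions, and the real task is fuzzier than this.

```python
import numpy as np

BLACK, BLUE, RED, GREEN = 0, 1, 2, 3   # assumed color codes

def shoot_ray(grid, start, direction):
    """Paint a green ray from `start` moving in `direction` (dr, dc),
    reflecting off red cells; the step cap guards against loops."""
    g = np.array(grid)
    (r, c), (dr, dc) = start, direction
    for _ in range(500):
        nr, nc = r + dr, c + dc
        if not (0 <= nr < g.shape[0] and 0 <= nc < g.shape[1]):
            break                      # ray leaves the grid
        if g[nr, nc] == RED:
            dr = -dr                   # simplified: reflect vertically only
            continue
        g[nr, nc] = GREEN              # paint the path
        r, c = nr, nc
    return g.tolist()
```

Even this toy version bakes in the grid copy, the notion of an obstacle, and the reflection rule by hand; the AI would have to infer all of that from three demonstration pairs.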
Remember, your AI would need to first determine: aha, all of these output grids seem to be the same size as the input grid. So it would need to explicitly construct the output grid in the same manner as the input grid, because it understands this, and this is not the same in every task. Then it needs to recognize the red thing that stays the same in every example, so it needs to put the red thing here. Then it needs to recognize that the blue thing stays as well. And then, most strikingly, it needs to recognize: okay, I will draw a line in pixels, and lines in pixels are hard, and as soon as the line hits the red thing, it bounces off in the other direction. From just these three examples, the machine has to understand all of that and correctly output the exact solution; not an approximate solution, the exact solution. Okay, so there are these basic geometry and topology priors: lines, rectangular shapes, symmetries, rotations, translations, shape upscaling, containing and being contained, drawing lines, connecting points, and so on. Now let's look at some more examples. These are fun, right? Check out this one here. You see green, red, and then somehow the green connected to the red. This is an example that has many of these priors, many of these concepts, in it. There is goal-directedness: you can already form the hypothesis that the green wants to go to the red. But you also see that the blue things seem to be obstacles, and the line appears to change direction when it encounters an obstacle, like here. Then you see the next example, and you can probably confirm your hypothesis: the line always goes until it hits something, and then it changes direction towards the red thing. Always towards the red thing; it's not always towards the right, because here it turns toward the left. So it goes somehow towards the red thing. It's pretty ambiguous in this situation, but you can also make the assumption that if it's ambiguous, it goes towards the middle. Maybe, maybe. Here, again, we're actually confirming that: we go towards the red thing, which would be this direction, then we hit an object, then we go towards the red thing until we hit an object, and then we go here. Also notice that these grids are not the same size; it's not always the case that the grids within the same task are even the same size. So here, again, your AI would need to recognize what size of grid it needs to draw and what the result is. It would need to copy this entire grid and also change these pixels right here to green pixels. That's hard; I find this to be pretty hard. This is the line-extrapolation, turning-on-obstacles, and efficiently-reaching-a-goal prior. That's crazy. And is there more? Yes, there are two more, I believe. Yeah, those are the last examples. In this one you can see that there appear to be objects: these blue objects appear to be the same, there are these red ones, and the output grid is one of the blue objects. So here we again see different objects, and the output grid is one of them. As a human, you can already guess that the output grid is probably always going to be one of these objects, and now we need to decide which one. So we can formulate the hypothesis that it's probably going to be the one that occurs most often: here there are three of the blue ones, here there are four of the yellow ones, which is more than any other.
And this here confirms our hypothesis: it's the object that appears most often. Now, again, see that there is this notion of object-ness. You need to... no, this is not upscaling, because the grid is the same size; it's simply the displayed image that's upscaled. But you need to somehow be able to focus in on one of these objects, you need to count them, and you need to compare the counts with each other. And here, as a human, you can pretty easily see that the output grid is going to contain one of those blue things. And this one here is a symmetry-filling task. As a human you need one demonstration to get this, maybe more, but many tasks involve some sort of symmetry. Drawing the symmetrized version of a shape around a marker is going to be fairly hard for a machine to learn without the developer knowing that this task is coming. Okay, they highlight some differences from standard psychometric tests, but what I find interesting here is this part: what a solution to ARC may look like, and what it would imply for AI applications. They say: we have found ARC to be fully solvable by humans. So they sat humans in front of every one of these tasks, and it's solvable. While many ARC tasks are intellectually challenging, human test takers appear to be able to solve the majority of tasks on their first try, without any practice or verbal explanations. In effect, in this challenge you get three tries at each of the problems, and humans can already solve it in one. That just shows you how cool humans are. So here Chollet suggests a solution approach: start by developing a domain-specific language capable of expressing all possible solution programs for any ARC task. Since the exact set of ARC tasks is purposely not formally definable, this may be challenging; the space of tasks is defined as anything expressible in terms of ARC pairs that would only involve core knowledge. Core knowledge is this set of human priors that we discussed last time: object-ness, symmetries, geometric shapes, navigation, and so on. So he asks you to basically develop a DSL that can capture all the different tasks, to define a formalism for these tasks. But that's hard, because you don't know what the tasks are going to be, so your best bet is probably to make a formalism that over-represents what the tasks can be. It would require hard-coding the core knowledge priors from section 3.1.2 in a sufficiently abstract and combinable program form, to serve as basis functions for a kind of human-like reasoning DSL. We believe that solving this specific subproblem is critical to general AI progress. This basically says that AI progress will make a big step once we can formally describe human priors. And while that's true, I feel the hardness of this problem is as hard as actually building general artificial intelligence, or very close to it. So it's a bit like: how to build AGI, step one, build AGI. That's sort of, not exactly, but kind of what this says. Right.
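To make the flavor of this proposal concrete, here is a deliberately tiny sketch of what "a DSL plus program search" could mean: a handful of grid-to-grid primitives and a brute-force search for the shortest composition consistent with every demonstration pair. This is my toy illustration, not the paper's method; real core-knowledge primitives (object extraction, symmetry completion, goal-directed movement) are exactly the part this version leaves out.

```python
import itertools
import numpy as np

# A toy "DSL": each primitive maps a grid to a grid.
PRIMITIVES = {
    "identity":  lambda g: g,
    "flip_ud":   np.flipud,
    "flip_lr":   np.fliplr,
    "rot90":     np.rot90,
    "transpose": np.transpose,
}

def run(names, grid):
    """Apply a sequence of primitives to a grid."""
    g = np.array(grid)
    for n in names:
        g = PRIMITIVES[n](g)
    return g

def search(demo_pairs, max_depth=3):
    """Return the shortest primitive composition consistent with every demo,
    mirroring the 'prefer simple programs' selection criterion."""
    for depth in range(1, max_depth + 1):
        for names in itertools.product(PRIMITIVES, repeat=depth):
            if all(np.array_equal(run(names, x), np.array(y))
                   for x, y in demo_pairs):
                return names
    return None

# hypothetical usage: a made-up task whose rule is "flip the grid upside down"
demos = [([[1, 0], [0, 0]], [[0, 0], [1, 0]]),
         ([[2, 2], [0, 2]], [[0, 2], [2, 2]])]
print(search(demos))  # -> ('flip_ud',)
```

The hard part, which this sketch dodges entirely, is exactly Chollet's step one: choosing primitives rich enough to express symmetry filling, ray bouncing, and object counting without the search space exploding.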
So if I could actually have this DSL to describe every single task, and I could do it such that it doesn't massively over-capture the tasks, then I would have described human core knowledge to a sufficiently accurate degree that I could just, you know, build AGI. He goes on: given a task, use the DSL to generate a set of candidate programs that turn the input grids into the corresponding output grids. This step would reuse and recombine subprograms that previously proved useful in other ARC tasks. So he's saying: once you have captured the problem space in a formal language, you can use that formal language to express whatever your input is, programs that turn the input grids into the corresponding output grids. You would take the demonstration examples and describe them with your formal language; basically, you're asked to come up with source code that would generate these demonstration examples in the language of your DSL. Then he says: select top candidates among these programs, so you would generate multiple versions of source code that produce these mappings, based on a criterion such as program simplicity or program likelihood. Note that we do not expect that merely selecting the simplest possible program that works on the training pairs will generalize well to the test pairs. And then: use the top three candidates to generate output grids for the test examples. So the approach makes sense to me, but it is sort of over-hopeful in my mind, and that's mainly because of step one. Step one asks you to come up with a programming language that can capture all the tasks in this dataset, even though you don't know what the tasks are, and that has this human core knowledge inside of it in a formally describable way. Once you have that programming language, then given a task, where you have a bunch of these demonstration pairs and then the test input, you would generate all the programs that produce the demonstration examples, i.e., that, given the input grid, produce the output grid. You would generate all those programs, then select among them the one you think generalizes best, and use that program on the test input to get out the solution. They say it's probably not always the simplest program, not always the shortest program; maybe, who knows. I feel step one is the crucial issue here. Okay, so they make some claims about what this would bring the community: we posit that the existence of a human-level ARC solver would represent the ability to program an AI from demonstrations alone, only requiring a handful of demonstrations to specify complex tasks, to do a wide range of human-relatable tasks of a kind that would normally require human-like fluid intelligence. As supporting evidence, we note that human performance on psychometric intelligence tests, which are similar to ARC, is predictive of success across all human cognitive tasks.
Further, we posit that since an ARC solver and human intelligence would both be founded on the same knowledge priors, the scope of application of an ARC solver would be closer to that of human cognition, making such a solver both practically valuable and easy to interact with, and it would produce behavior that is in line with human expectations. Okay, so they're making the same argument that everyone before them has made, but they condition it on some things, and this, I think, is the conclusion of the entire article On the Measure of Intelligence, because people have had this hope before. They say: these claims are highly speculative and may prove incorrect, much like Newell's 1973 hopes that progress on chess playing would translate into meaningful progress on achieving a broad range of cognitive abilities, especially if ARC turns out to feature unforeseen vulnerabilities to unintelligent shortcuts. This is the AI effect, and it basically means that whenever you think solving some task represents AI, and then you actually see the solution, the solution turns out not to be AI in the eyes of humans. At first people would say, oh, this task really requires intelligence, and then someone solves the task, and they'll say, oh, that's not intelligence, you kind of hacked your way to it. The expectation is that in this ARC challenge there might also be a hacky way in. But the good question is: is there even a possibility of a task like this ARC challenge where you wouldn't say that? I'm not so sure; they seem to be more hopeful than I am. But at least, they say, the ARC challenge is founded on the same priors as a human has, and it gives you the same amount of experience as a human has, and therefore it is much more comparable to human intelligence.
They go over some weaknesses right here, criticizing their own work: generalization is not quantified (they have a measure of generalization from the previous chapter, but they don't use it here), test validity is not established, dataset size and diversity may be limited, and so on. In my mind, I would not consider this an AGI task or anything like that; I'm pretty sure the solution will again come in a form where people don't really think it exhibits intelligence. But I do like the task as such, and as a machine learner I am very excited to think about how machine learning can go about solving it. Especially with what we've seen from something like GPT-3, which has exactly this kind of structure: you pre-train your language model on a giant dataset, and then at inference time you input a bunch of demonstration examples and ask it for the next output. So I feel that might be a good starting point. The question, of course, is what you would pre-train this model on, this GPT-3 for ARC. What's the pre-training dataset? I guess that's going to be the challenge, and it will probably require people to specifically program all of these priors into a dataset generator for pre-training. So that would be my approach: write a dataset generator for pre-training, and train a GPT-3-style model on it to do these kinds of tasks. In order to write the dataset generator, you'd have to basically program in all of these priors, and that's not going to be easy. Your best bet is to put yourself into the shoes of Chollet and ask: if I were to design a task, what kinds of things would I do? And then try to capture that. Your most honest bet, with respect to the challenge, is to implement as faithfully as possible something like an object-ness prior where cohesion and persistence are captured; a toy sketch of such a generator follows below. That would be the most scientifically sound approach, in my view. Alright, that was my take on the ARC dataset. If you have any comments, I'm very excited to hear them, and if you have already tried the ARC challenge and have some insight, I welcome comments on that too. And with that, I'll see you next time. Bye bye.
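As promised, here is a toy sketch of what one family in such a prior-encoding generator might look like: a single hard-coded rule ("the largest object changes color") applied to randomly sampled grids. All specifics here (grid sizes, color codes, the rule itself) are invented for illustration; a real generator would need many such families, each encoding one core-knowledge prior, and this one only gestures at object-ness via solid rectangles.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_pair():
    """One (input, output) pair for the rule: 'the largest object turns grey'."""
    g = np.zeros((rng.integers(8, 15), rng.integers(8, 15)), dtype=int)
    for color in rng.choice([1, 2, 3, 4, 6, 7, 8, 9], size=3, replace=False):
        h, w = rng.integers(1, 4, size=2)
        r = rng.integers(0, g.shape[0] - h + 1)
        c = rng.integers(0, g.shape[1] - w + 1)
        g[r:r + h, c:c + w] = color              # paint a rectangular "object"
    sizes = {col: int((g == col).sum()) for col in np.unique(g) if col != 0}
    out = g.copy()
    out[g == max(sizes, key=sizes.get)] = 5      # assumed code 5 = grey
    return g.tolist(), out.tolist()

def sample_task(n_demos=3):
    """Bundle pairs in the same train/test shape as a real ARC task."""
    pairs = [sample_pair() for _ in range(n_demos + 1)]
    return {"train": [{"input": i, "output": o} for i, o in pairs[:-1]],
            "test":  [{"input": pairs[-1][0], "output": pairs[-1][1]}]}
```

Whether pre-training on millions of such synthetic tasks actually transfers to the hidden evaluation set is exactly the open question the video raises.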
[ { "end": 9.040000000000001, "start": 0, "text": " Hi there and welcome to the last part of On the Measure of Intelligence by François Chollet." }, { "end": 16, "start": 9.040000000000001, "text": " This last part concerns the ARC challenge that Chollet has proposed, or the ARC dataset," }, { "end": 20.6, "start": 16, "text": " which stands for the Abstraction and Reasoning Corpus." }, { "end": 26.84, "start": 20.6, "text": " And we're just quickly going over the dataset, look how it's built and discuss what kind" }, { "end": 30.12, "start": 26.84, "text": " of solutions might be relevant right here." }, { "end": 36.08, "start": 30.12, "text": " So if you haven't seen the last videos in this series, this is the last one of a series," }, { "end": 38.56, "start": 36.08, "text": " you might not exactly know what's going on." }, { "end": 43.760000000000005, "start": 38.56, "text": " But I think you can you can keep up pretty well because this part is fairly independent" }, { "end": 45.519999999999996, "start": 43.760000000000005, "text": " of the other parts." }, { "end": 49.44, "start": 45.519999999999996, "text": " And it's just cool to think about even if you haven't seen the other ones, I encourage" }, { "end": 53.24, "start": 49.44, "text": " you to go see the other ones, but it's not necessary." }, { "end": 55.92, "start": 53.24, "text": " Okay, let's jump in." }, { "end": 61.400000000000006, "start": 55.92, "text": " So the ARC is a challenge currently running a Kaggle challenge." }, { "end": 64.36, "start": 61.400000000000006, "text": " But in essence, it is a dataset." }, { "end": 68.84, "start": 64.36, "text": " And let me just jump into one of the tasks of the dataset." }, { "end": 74.24000000000001, "start": 68.84, "text": " So in this dataset, you always have the task in the following form." }, { "end": 82.24000000000001, "start": 74.24000000000001, "text": " So you always have multiple input examples like this or say these are called the training" }, { "end": 83.48, "start": 82.24000000000001, "text": " examples." }, { "end": 85.32000000000001, "start": 83.48, "text": " And then you have a test example." }, { "end": 87.88, "start": 85.32, "text": " In this case, you have three training example one test example." }, { "end": 95.28, "start": 87.88, "text": " So an entire if you think of this in a machine learning way, this entire thing here is your" }, { "end": 98.96, "start": 95.28, "text": " x and this thing here is your y." }, { "end": 105.56, "start": 98.96, "text": " Okay, so the label is going to be the output of the last example that you you don't know" }, { "end": 113.8, "start": 105.56, "text": " that now in the course in the training data set you do, but in the test you don't." }, { "end": 118.16, "start": 113.8, "text": " So each one of these, as I said is is is demonstrated." }, { "end": 119.82, "start": 118.16, "text": " These are the demonstration examples." }, { "end": 124.72, "start": 119.82, "text": " And then you're supposed to sort of learn the regularity out of the demonstration examples." }, { "end": 130.54, "start": 124.72, "text": " And then on this test example, you are supposed to apply this regularity that you learned." 
}, { "end": 136.96, "start": 130.54, "text": " So in here, a human can fairly accurately see that there are these black squares in" }, { "end": 144.18, "start": 136.96, "text": " each image, and that in the training samples, the output will always sort of exactly match" }, { "end": 146.20000000000002, "start": 144.18, "text": " into the place of these black squares." }, { "end": 148.44, "start": 146.20000000000002, "text": " As you can see, this is like a high rectangle." }, { "end": 151.68, "start": 148.44, "text": " It goes here, it has the same amount of tiles and so on." }, { "end": 159.60000000000002, "start": 151.68, "text": " And you can also see that whatever colors are in here sort of are the continuation of" }, { "end": 160.96, "start": 159.60000000000002, "text": " a symmetric pattern." }, { "end": 170.52, "start": 160.96, "text": " So here, this is exactly the same as up here, but you know, flipped or turned by 180 degrees." }, { "end": 173.4, "start": 170.52, "text": " So there is a notion of symmetry right here." }, { "end": 179.72, "start": 173.4, "text": " So technically, one could compute this one would say, oh, that's probably going to be" }, { "end": 183.76000000000002, "start": 179.72, "text": " the three rows and this bunch of things." }, { "end": 187.88, "start": 183.76000000000002, "text": " And it's probably going to be the same as this one down here, but just flipped on its" }, { "end": 189.20000000000002, "start": 187.88, "text": " head." }, { "end": 194.67999999999998, "start": 189.2, "text": " So as a human, you get this even without a description, you realize like, oh, this is" }, { "end": 198.67999999999998, "start": 194.67999999999998, "text": " like a regular pattern, it's symmetric, there's a hole in it." }, { "end": 203.17999999999998, "start": 198.67999999999998, "text": " And apparently, the thing here always fills the hole, I can see that, you know, three" }, { "end": 206.56, "start": 203.17999999999998, "text": " examples are enough for me to confirm that that's what's going on." }, { "end": 207.79999999999998, "start": 206.56, "text": " And I see the hole here." }, { "end": 211.2, "start": 207.79999999999998, "text": " So I'm going to do the same thing." }, { "end": 215.56, "start": 211.2, "text": " So you can already see how these things are constructed in every, this is not the only" }, { "end": 218.35999999999999, "start": 215.56, "text": " task, by the way, this is just one task." }, { "end": 224.84, "start": 218.36, "text": " Okay, there are 1000 tasks in this data set of this sort of nature." }, { "end": 230.68, "start": 224.84, "text": " Now they're not always three demonstration examples, I believe there can be more or less." }, { "end": 235.08, "start": 230.68, "text": " But what's always the case is they always each of these training examples consist of" }, { "end": 238.76000000000002, "start": 235.08, "text": " these demonstration examples and these test example." }, { "end": 244.84, "start": 238.76000000000002, "text": " Each of the demonstration examples consists of an input grid and an output grid, the input" }, { "end": 251.24, "start": 244.84, "text": " grid and output grid, they can be anywhere from one by one to 30 by 30." }, { "end": 255.48000000000002, "start": 251.24, "text": " Okay, anywhere in between that." 
}, { "end": 260.88, "start": 255.48000000000002, "text": " And the colors here, I believe there are nine different colors that can go, they're just" }, { "end": 263.04, "start": 260.88, "text": " encoded by nine different numbers." }, { "end": 267.52, "start": 263.04, "text": " But there are nine different colors that these things can have, you can see black, blue," }, { "end": 271.88, "start": 267.52, "text": " orange, red, dark blue, and so on." }, { "end": 275.3, "start": 271.88, "text": " And the output grid exactly the same." }, { "end": 281.94, "start": 275.3, "text": " Now in this test example, you can only see the input grid, you cannot see the output" }, { "end": 282.94, "start": 281.94, "text": " grid." }, { "end": 285.52, "start": 282.94, "text": " And that means you don't even know how large it should be." }, { "end": 288.84, "start": 285.52, "text": " You can see right here, they're not all the same size, the output grids." }, { "end": 292.36, "start": 288.84, "text": " In fact, not even the input grids have to be always the same size." }, { "end": 295.88, "start": 292.36, "text": " But you have to now come up with an output grid." }, { "end": 297.64, "start": 295.88, "text": " You have to first decide how big it is." }, { "end": 301.44, "start": 297.64, "text": " And we've here we've determined since the whole has three rows, we're probably going" }, { "end": 307.36, "start": 301.44, "text": " to make three rows and it has like seven columns, we're probably going to make seven columns." }, { "end": 309.56, "start": 307.36, "text": " And that's the sort of thing you have to do." }, { "end": 314.12, "start": 309.56, "text": " And then not only do you have to decide how big it is, you now have to decide in each" }, { "end": 317.16, "start": 314.12, "text": " cell what color you put in." }, { "end": 326.92, "start": 317.16, "text": " And only if this thing exactly matches the train or the test label, you get a you get" }, { "end": 328.15999999999997, "start": 326.92, "text": " a point." }, { "end": 331.44, "start": 328.16, "text": " Otherwise you get no point." }, { "end": 338.52000000000004, "start": 331.44, "text": " So in the training task, there are I believe 400 of these tasks and then there are 400" }, { "end": 345.48, "start": 338.52000000000004, "text": " more as test split, but these are still public and then there are 200 that are secret." }, { "end": 349.6, "start": 345.48, "text": " There are I guess, part of this Kaggle challenge." }, { "end": 357.56, "start": 349.6, "text": " Yes, the training set features 400 tasks, while the evaluation set features 600 tasks." }, { "end": 362.32, "start": 357.56, "text": " The evaluation set is further split into a public evaluation set of 400 tasks and a private" }, { "end": 364.64, "start": 362.32, "text": " evaluation set of 200 tasks." }, { "end": 366.4, "start": 364.64, "text": " All tasks are unique." }, { "end": 370.16, "start": 366.4, "text": " And the set of tasks and the set of training tasks are disjoint." }, { "end": 373.36, "start": 370.16, "text": " Sorry, of test tasks and training tasks." }, { "end": 379.26, "start": 373.36, "text": " The task data is available at this as you can see right here." }, { "end": 386.4, "start": 379.26, "text": " So I really hope that Cholay will keep these 200 tasks as a secret, even after the Kaggle" }, { "end": 393.15999999999997, "start": 386.4, "text": " challenge, because it's going to be fun for people that might want to get into this later." 
}, { "end": 395.71999999999997, "start": 393.15999999999997, "text": " So here are the goals of this data set." }, { "end": 400.67999999999995, "start": 395.71999999999997, "text": " They want to stay close to these psychometric intelligence tests." }, { "end": 405.4, "start": 400.67999999999995, "text": " They say in particular, it should be solvable by humans without any specific practice or" }, { "end": 409.12, "start": 405.4, "text": " training and probably also without any language instructions." }, { "end": 413.47999999999996, "start": 409.12, "text": " So you just be able to set a human in front of it and the human should be able to solve" }, { "end": 418.36, "start": 413.48, "text": " it or a large portion of humans should be able to solve it." }, { "end": 422.12, "start": 418.36, "text": " Ideally, this test would also differentiate humans from each other." }, { "end": 426.84000000000003, "start": 422.12, "text": " But at this point, we want to simply assess machines." }, { "end": 433.08000000000004, "start": 426.84000000000003, "text": " So they say focus on measuring developer aware generalization rather than task specific skill" }, { "end": 437.28000000000003, "start": 433.08000000000004, "text": " by only featuring novel tasks in the evaluation set." }, { "end": 440.88, "start": 437.28000000000003, "text": " And the novel tasks are unknown to the developer of a test taker." }, { "end": 447, "start": 440.88, "text": " So if I develop a system, I don't know what are these 200 tasks that Cholay keeps hidden." }, { "end": 455.04, "start": 447, "text": " I simply submit my code and I'll figure out if my code does well on them." }, { "end": 462.44, "start": 455.04, "text": " So they say they want to feature highly abstract tasks must be understood by a test taker using" }, { "end": 464.71999999999997, "start": 462.44, "text": " very few examples." }, { "end": 465.71999999999997, "start": 464.71999999999997, "text": " That's what you saw." }, { "end": 470, "start": 465.71999999999997, "text": " You don't have a big training example to learn that this task is about symmetry and hole" }, { "end": 471, "start": 470, "text": " filling." }, { "end": 472.28, "start": 471, "text": " You only have three." }, { "end": 477.08, "start": 472.28, "text": " And from three, you need to recognize what's going on and produce the output of the test" }, { "end": 481.12, "start": 477.08, "text": " sample." }, { "end": 484.28, "start": 481.12, "text": " Quality of control for experience by only providing a fixed amount of training data" }, { "end": 485.28, "start": 484.28, "text": " for each task." }, { "end": 486.28, "start": 485.28, "text": " That's what we saw." }, { "end": 491.54, "start": 486.28, "text": " And only featuring tasks that do not lend themselves well to artificially generating new data." }, { "end": 496.22, "start": 491.54, "text": " So it's not like ImageNet where you can go on the internet and find a whole bunch of" }, { "end": 501.98, "start": 496.22, "text": " images or some NLP tasks where people pre train on all of Wikipedia and all of the books" }, { "end": 505.62, "start": 501.98, "text": " in the world because they want to understand language better." }, { "end": 511.94000000000005, "start": 505.62, "text": " These tasks are supposed to be such that it makes no sense for you to go out and try to" }, { "end": 517.52, "start": 511.94000000000005, "text": " find more data or find similar data or pre train your model on something." 
}, { "end": 523, "start": 517.52, "text": " And then lastly, and this refers to the last few chapters we looked at explicitly describe" }, { "end": 530.2, "start": 523, "text": " the complete set of priors that it assumes and enable a fair general intelligence comparison" }, { "end": 536.32, "start": 530.2, "text": " between human and machines by only requiring priors to those innate human close to innate" }, { "end": 537.84, "start": 536.32, "text": " human prior knowledge." }, { "end": 545.36, "start": 537.84, "text": " So that means that whatever human have whatever humans have as a prior built into them by" }, { "end": 550.46, "start": 545.36, "text": " let's say evolution or that most humans have picked up through life." }, { "end": 555.5400000000001, "start": 550.46, "text": " Those are the things that you have to explicitly point out." }, { "end": 560.9200000000001, "start": 555.5400000000001, "text": " So and you require that and you have to point them out." }, { "end": 566.64, "start": 560.9200000000001, "text": " Sorry, explicitly describe them such that I as a developer of a system can build them" }, { "end": 569.88, "start": 566.64, "text": " into my system such that it's a fair comparison." }, { "end": 574.52, "start": 569.88, "text": " In the last chapters, we looked at the fact that a fair intelligence comparison is only" }, { "end": 579.6600000000001, "start": 574.52, "text": " fair if two systems that are compared to each other have the same amount of experience." }, { "end": 585.52, "start": 579.66, "text": " And here we control that by only providing a fixed amount of training data and also have" }, { "end": 587.9599999999999, "start": 585.52, "text": " the same prior knowledge." }, { "end": 593.0799999999999, "start": 587.9599999999999, "text": " And here we simply do that by listing the human priors that are required for the tasks" }, { "end": 595.24, "start": 593.0799999999999, "text": " that we think that humans have." }, { "end": 599.92, "start": 595.24, "text": " And then we enable the developers to explicitly build those into machines." }, { "end": 607.36, "start": 599.92, "text": " So I would maybe build a little calculator module into my AI that solves this tasks." }, { "end": 613.88, "start": 607.36, "text": " Okay, so they say he each task consists of a small number of demonstration examples," }, { "end": 619.16, "start": 613.88, "text": " 3.3 on average, and a small number of test examples, generally one, although it might" }, { "end": 622, "start": 619.16, "text": " be two or three in rare cases." }, { "end": 624.32, "start": 622, "text": " Each example consists of an input grid and an output grid." }, { "end": 627.4, "start": 624.32, "text": " Each grid is a literal grid of symbols." }, { "end": 630.08, "start": 627.4, "text": " Each symbol is visualized by color." }, { "end": 633.72, "start": 630.08, "text": " There are 10 unique symbols, a grid can be any height or width between one by one and" }, { "end": 638, "start": 633.72, "text": " 30 by 30, so it doesn't even need to be square, right?" }, { "end": 644.6600000000001, "start": 638, "text": " And as I said, you need to provide your own output grid as an AI taking this test." }, { "end": 646.96, "start": 644.6600000000001, "text": " So here are the priors that this test assumes." 
}, { "end": 652.72, "start": 646.96, "text": " And we're going to look at some examples that make it explicit like some tasks in the training" }, { "end": 656.4, "start": 652.72, "text": " set that where you can see these priors in action." }, { "end": 664.3199999999999, "start": 656.4, "text": " There's an object nest prior where the task assumes that the AI or the task tests that" }, { "end": 667.4599999999999, "start": 664.3199999999999, "text": " the AI understands something about objects." }, { "end": 672.4399999999999, "start": 667.4599999999999, "text": " So these are tasks that you can only reasonably solve if you know something about objects," }, { "end": 679.1999999999999, "start": 672.4399999999999, "text": " like you would, a human would recognize or would, you know, would recognize that these" }, { "end": 682.6, "start": 679.1999999999999, "text": " things might represent different objects, right?" }, { "end": 689.0600000000001, "start": 682.6, "text": " Now that's mainly, I think also due to the black background helps, but you would even" }, { "end": 694.44, "start": 689.0600000000001, "text": " recognize this with another background or here, the different colors indicate that those" }, { "end": 700.2, "start": 694.44, "text": " are two different things, even though those two pixels here touch and are different from" }, { "end": 704.48, "start": 700.2, "text": " black, you would recognize that those are two different things because they have different" }, { "end": 705.48, "start": 704.48, "text": " color." }, { "end": 711.36, "start": 705.48, "text": " But you would generally recognize one of these things as an individual object." }, { "end": 716.82, "start": 711.36, "text": " If you're not given anything here, you see, for example, a denoising task as a human," }, { "end": 720.02, "start": 716.82, "text": " you can pretty quickly see what the task is about, right?" }, { "end": 722.62, "start": 720.02, "text": " There appear to be these green things." }, { "end": 726.9, "start": 722.62, "text": " They're all rectangles and there appear to be these blue things." }, { "end": 729.9, "start": 726.9, "text": " And on the right side, there are no more blue things." }, { "end": 735.6800000000001, "start": 729.9, "text": " But the now it's not always that when there was a blue thing, there is now a green thing" }, { "end": 742.7199999999999, "start": 735.68, "text": " only here where it was sort of inside a green thing is now a green pixel." }, { "end": 748.8, "start": 742.7199999999999, "text": " Whenever there was a blue pixel outside in this black area, then there is now black." }, { "end": 752.88, "start": 748.8, "text": " So this is sort of like the blue things were noise and you're able to remove it." }, { "end": 756.4399999999999, "start": 752.88, "text": " This already tests a lot of assumptions." }, { "end": 760.12, "start": 756.4399999999999, "text": " A lot of these priors, a lot of understanding of the world." }, { "end": 762.9599999999999, "start": 760.12, "text": " So there are objects, right?" }, { "end": 770.52, "start": 762.96, "text": " As human understands that objects are square in this case, or rectangles, the human understands" }, { "end": 775.72, "start": 770.52, "text": " that it that we need to remove the blue things going over." 
}, { "end": 784.08, "start": 775.72, "text": " And the human understands that somehow this inside relation, right, if something is inside" }, { "end": 788.2800000000001, "start": 784.08, "text": " or outside of one of these rectangles, and that determines whether we have to turn the" }, { "end": 790.84, "start": 788.2800000000001, "text": " pixel green or black." }, { "end": 794.9200000000001, "start": 790.84, "text": " You can, I mean, think about how you would train a machine to do something like this." }, { "end": 800.0400000000001, "start": 794.9200000000001, "text": " It's not easy, especially if you don't know that this task is coming." }, { "end": 803.32, "start": 800.0400000000001, "text": " Imagine for all of these things, you don't know that the task is coming." }, { "end": 806.48, "start": 803.32, "text": " This is just one of 400 tasks that you know of." }, { "end": 814.0400000000001, "start": 806.48, "text": " There are 600 tasks that you don't know of that are similar, but also in a way completely" }, { "end": 816.8000000000001, "start": 814.0400000000001, "text": " different." }, { "end": 820.2800000000001, "start": 816.8000000000001, "text": " Here's another tasks that object influence via contact." }, { "end": 823.36, "start": 820.28, "text": " So this is your first demonstration example." }, { "end": 828.4399999999999, "start": 823.36, "text": " A human pretty quickly recognizes there appears to be red thing and a blue thing, and then" }, { "end": 831.1, "start": 828.4399999999999, "text": " they appear to be together." }, { "end": 835.24, "start": 831.1, "text": " And then in the next thing, you see, oh, there appears to be a blue thing and the red thing" }, { "end": 837.3399999999999, "start": 835.24, "text": " in the next thing, they appear to be together." }, { "end": 843.0799999999999, "start": 837.3399999999999, "text": " And if you look here, it always appears to be the red thing going to the blue thing in" }, { "end": 844.4, "start": 843.0799999999999, "text": " the most direct way." }, { "end": 847.64, "start": 844.4, "text": " So in the in the along the grid." }, { "end": 855.1999999999999, "start": 847.64, "text": " That's all that the human needs to see two examples and the human most humans will already" }, { "end": 862.96, "start": 855.1999999999999, "text": " make that inference and can now solve if there is like, if there now is a test example, where" }, { "end": 866.84, "start": 862.96, "text": " the blue thing is like, the blue thing is down here." }, { "end": 869.92, "start": 866.84, "text": " And the red thing is here like this." }, { "end": 874.24, "start": 869.92, "text": " And it asks you what comes next, you know, you know that the red thing is going down" }, { "end": 877.12, "start": 874.24, "text": " to the blue thing." }, { "end": 879.6, "start": 877.12, "text": " But it's very hard to train a machine to do this." }, { "end": 884.32, "start": 879.6, "text": " So I like this test, because it's sort of a different test." }, { "end": 887.72, "start": 884.32, "text": " And I believe the test these tests weren't procedurally generated." }, { "end": 894.64, "start": 887.72, "text": " These tests were actually generated by sholey or, you know, by by actual humans." }, { "end": 896.72, "start": 894.64, "text": " That's pretty cool." }, { "end": 900.84, "start": 896.72, "text": " And 1000 tasks like this is going to be very hard to solve." }, { "end": 905.58, "start": 900.84, "text": " There are even more abstract priors like goal directedness." 
}, { "end": 911.48, "start": 905.58, "text": " So now you here you can already see this a little bit in that you can say, well, the" }, { "end": 914.7, "start": 911.48, "text": " red thing wants to go to the blue thing." }, { "end": 919.2800000000001, "start": 914.7, "text": " So there is a notion of time involved, maybe." }, { "end": 922.64, "start": 919.2800000000001, "text": " There's also counting and numbers and numbers prior." }, { "end": 925.58, "start": 922.64, "text": " So here you see like a time process." }, { "end": 931.36, "start": 925.58, "text": " So in this demonstration example, you see blue things here, red, big thing." }, { "end": 936.12, "start": 931.36, "text": " And then the next the output grid is this green thing." }, { "end": 941, "start": 936.12, "text": " And as a human immediately recognize, okay, so it shoots out from the blue thing, the" }, { "end": 946.5600000000001, "start": 941, "text": " green thing shoots from the blue thing, hits the red wall and goes here." }, { "end": 949.3000000000001, "start": 946.5600000000001, "text": " Try to make a machine understand this." }, { "end": 951, "start": 949.3000000000001, "text": " This is insane, right?" }, { "end": 956.98, "start": 951, "text": " So if you look at the more examples, it all it appears that the blue thing always comes" }, { "end": 962.44, "start": 956.98, "text": " from somewhere like the side of the image, and the green thing comes out obviously from" }, { "end": 968.74, "start": 962.44, "text": " whatever is not at the at the border of the image and then bounces off the red thing if" }, { "end": 971.82, "start": 968.74, "text": " it hits the red thing." }, { "end": 976.1, "start": 971.82, "text": " Now here you can you can already see what's going to happen." }, { "end": 982.32, "start": 976.1, "text": " Remember your AI would need to first determine aha, okay, all of these output grids, they" }, { "end": 985.16, "start": 982.32, "text": " seem to be the same as the input grid." }, { "end": 990.52, "start": 985.16, "text": " So it would need to explicitly construct the output grid in the same manner as the input" }, { "end": 992.64, "start": 990.52, "text": " grid because it understands this right?" }, { "end": 997.36, "start": 992.64, "text": " This is not the same in every task, then it needs to recognize the red thing that stays" }, { "end": 998.36, "start": 997.36, "text": " in every one." }, { "end": 1003.26, "start": 998.36, "text": " So it needs to put the red thing here right from from here." }, { "end": 1007.26, "start": 1003.26, "text": " And then it needs to recognize the blue thing stays as well." }, { "end": 1015.92, "start": 1007.26, "text": " And then most most shockingly needs to recognize, okay, I will draw a line in pixels and lines" }, { "end": 1019, "start": 1015.92, "text": " in pixels are hard here." }, { "end": 1024.94, "start": 1019, "text": " And then as soon as it would hit the red thing, it bounces off into the other direction." }, { "end": 1030.56, "start": 1024.94, "text": " So from just these three examples, the machine has to understand that and correctly output" }, { "end": 1035.84, "start": 1030.56, "text": " the exact solution, not an approximate solution, the exact solution." 
}, { "end": 1043.1999999999998, "start": 1035.84, "text": " Okay, so yeah, there are these basic geometry and topology priors like lines, rectangular" }, { "end": 1051.36, "start": 1043.1999999999998, "text": " shapes, symmetries, rotations, translations, shape upscaling, containing being contained," }, { "end": 1054.72, "start": 1051.36, "text": " drawing lines, connecting points, and so on." }, { "end": 1057.1799999999998, "start": 1054.72, "text": " Now, let's look at some more examples." }, { "end": 1059.8799999999999, "start": 1057.1799999999998, "text": " These are fun, right?" }, { "end": 1062.12, "start": 1059.8799999999999, "text": " Check out this one here." }, { "end": 1068.6399999999999, "start": 1062.12, "text": " So you see green, red, and then somehow the green connected to the red." }, { "end": 1073.4799999999998, "start": 1068.6399999999999, "text": " So this is an example of that has many of these priors in many of these concepts in" }, { "end": 1078.32, "start": 1073.4799999999998, "text": " there is goal directedness, you can already sort of form the hypothesis that the green" }, { "end": 1080.4599999999998, "start": 1078.32, "text": " wants to go to the red." }, { "end": 1088.84, "start": 1080.4599999999998, "text": " But also you see that somehow it sort of appears to the blue things seem to be maybe obstacles" }, { "end": 1095.28, "start": 1088.84, "text": " and it appears to change direction when it encounters an obstacle like here." }, { "end": 1101.9599999999998, "start": 1095.28, "text": " So here, you see the example, and you probably confirm so your hypothesis could be it always" }, { "end": 1107.72, "start": 1101.9599999999998, "text": " goes until it hits and then it changes direction towards the red thing, right?" }, { "end": 1110.84, "start": 1107.72, "text": " Always towards red thing, because it's not always towards the right because you return" }, { "end": 1112.8, "start": 1110.84, "text": " toward the left." }, { "end": 1120.44, "start": 1112.8, "text": " So it goes somehow towards the red thing and so it's pretty ambiguous in this situation," }, { "end": 1125.12, "start": 1120.44, "text": " but you can also make the assumption that if it's ambiguous, it goes towards the middle," }, { "end": 1127.72, "start": 1125.12, "text": " maybe, maybe." }, { "end": 1135.36, "start": 1127.72, "text": " So here, again, now we're actually confirming probably so we go towards the red thing, which" }, { "end": 1139.8, "start": 1135.36, "text": " would be towards this direction, then we hit an object, then we go towards the red thing" }, { "end": 1145.44, "start": 1139.8, "text": " until we hit an object, and then we go here." }, { "end": 1148.9199999999998, "start": 1145.44, "text": " Also see that these grids here are not the same size." }, { "end": 1154.12, "start": 1148.9199999999998, "text": " So it's not always the case that the grids of within the same tasks are even the same" }, { "end": 1155.12, "start": 1154.12, "text": " size." }, { "end": 1160.36, "start": 1155.12, "text": " So now here, again, your AI would need to recognize what size of grid it needs to draw" }, { "end": 1161.72, "start": 1160.36, "text": " and what the result is." }, { "end": 1168.72, "start": 1161.72, "text": " So it would need to copy this entire grid and also change these pixels right here to" }, { "end": 1172.24, "start": 1168.72, "text": " be green pixels." }, { "end": 1173.24, "start": 1172.24, "text": " That's hard." 
}, { "end": 1176.84, "start": 1173.24, "text": " I mean, that's I find I find this to be pretty hard." }, { "end": 1183.1200000000001, "start": 1176.84, "text": " This is the line extrapolation and turning on obstacle and efficiently reaching a goal" }, { "end": 1185.24, "start": 1183.1200000000001, "text": " prior." }, { "end": 1188, "start": 1185.24, "text": " That's crazy." }, { "end": 1189, "start": 1188, "text": " And is there more?" }, { "end": 1190.76, "start": 1189, "text": " Yes, there is two more, I believe." }, { "end": 1193.56, "start": 1190.76, "text": " Yeah, those are the last examples." }, { "end": 1201.04, "start": 1193.56, "text": " So in this one, you can see right here, there appear to be objects, which there's this blue" }, { "end": 1206.22, "start": 1201.04, "text": " objects appear to be the same and there are these red, and then the output grid is one" }, { "end": 1207.76, "start": 1206.22, "text": " of these blue objects." }, { "end": 1212.52, "start": 1207.76, "text": " Okay, so here we again see different objects, the output grid is one of them." }, { "end": 1216.56, "start": 1212.52, "text": " So as a human, you can already recognize the output grid is probably always going to be" }, { "end": 1218.9199999999998, "start": 1216.56, "text": " one of these objects." }, { "end": 1220.6799999999998, "start": 1218.9199999999998, "text": " And now we need to decide on which one." }, { "end": 1227.44, "start": 1220.68, "text": " So we can formulate the hypothesis that it's probably going to be the one that's the most" }, { "end": 1231.1200000000001, "start": 1227.44, "text": " like here, there's three of the blue ones here, there's four of the yellow ones, that's" }, { "end": 1232.96, "start": 1231.1200000000001, "text": " more than any other." }, { "end": 1240.24, "start": 1232.96, "text": " And this here confirms our hypothesis that the it's the object that appears most often." }, { "end": 1245.6000000000001, "start": 1240.24, "text": " Now again, see that there is this notion of object ness." }, { "end": 1247.4, "start": 1245.6000000000001, "text": " You need to upscale somehow." }, { "end": 1250.44, "start": 1247.4, "text": " No, this is not upscale because the grid is the same size." }, { "end": 1252.8, "start": 1250.44, "text": " It's simply the image that's upscale." }, { "end": 1258, "start": 1252.8, "text": " But you need to somehow focus be able to focus in on one of these objects, I need to count" }, { "end": 1262.96, "start": 1258, "text": " them, you need to compare the counts via each other." }, { "end": 1266.64, "start": 1262.96, "text": " And now here you can pretty easily see that the output grid is going to contain one of" }, { "end": 1271.1200000000001, "start": 1266.64, "text": " those blue things as a human." }, { "end": 1275.56, "start": 1271.1200000000001, "text": " And here, it's it's sort of a symmetry filling task." }, { "end": 1281.84, "start": 1275.56, "text": " Now as a human, you need one demonstration to get this." }, { "end": 1287.48, "start": 1281.84, "text": " Maybe you need more, but many tasks involve some sort of symmetry." }, { "end": 1293.72, "start": 1287.48, "text": " Okay, drawing the symmetrized version around the version of a shape around a marker, that's" }, { "end": 1299.12, "start": 1293.72, "text": " going to be fairly hard for a machine to learn without without the developer knowing that" }, { "end": 1302.1599999999999, "start": 1299.12, "text": " this task is coming." 
}, { "end": 1308.78, "start": 1302.16, "text": " Okay, they highlight some differentiations to standard psychometric tests." }, { "end": 1314.28, "start": 1308.78, "text": " But what I find interesting here is that this thing, what a solution to arc may look like" }, { "end": 1318.0800000000002, "start": 1314.28, "text": " and what it would imply for AI applications." }, { "end": 1320.9, "start": 1318.0800000000002, "text": " They say we have found art to be fully solvable by humans." }, { "end": 1326.8400000000001, "start": 1320.9, "text": " So they set a human in front of every, every one of these tasks, and it's solvable." }, { "end": 1331.3600000000001, "start": 1326.8400000000001, "text": " While many arc tasks are intellectually challenging human test takes us appear to be able to solve" }, { "end": 1336.9599999999998, "start": 1331.36, "text": " the majority of tasks on their first try without any practice or verbal explanations." }, { "end": 1344.7199999999998, "start": 1336.9599999999998, "text": " In effect, in this task, you get three tries at each at each of the at each of the problems," }, { "end": 1350.3999999999999, "start": 1344.7199999999998, "text": " you get three, three tries and the humans can already solve it in one." }, { "end": 1356.8, "start": 1350.3999999999999, "text": " So that just shows you shows you how cool humans are." }, { "end": 1365.28, "start": 1356.8, "text": " So here is a shawley suggests a solution approach says by start by developing a domain specific" }, { "end": 1372.24, "start": 1365.28, "text": " language capable of expressing all possible situations, all possible solution programs" }, { "end": 1375.28, "start": 1372.24, "text": " for any arc task." }, { "end": 1382.44, "start": 1375.28, "text": " Since the exact set of arc tax is purposely not formally definable, this may be challenging." }, { "end": 1387.3200000000002, "start": 1382.44, "text": " The space of tasks is defined as anything expressible in terms of arc pairs that would" }, { "end": 1389.96, "start": 1387.3200000000002, "text": " only involve core knowledge." }, { "end": 1395.88, "start": 1389.96, "text": " So core knowledge is this set of human priors that we discussed last time like objectness" }, { "end": 1402.0800000000002, "start": 1395.88, "text": " and symmetries and geometric shapes and navigation and so on." }, { "end": 1409.22, "start": 1402.0800000000002, "text": " So he asks you to basically develop a DSL that can capture all the different tasks." }, { "end": 1414.68, "start": 1409.22, "text": " So so kept basically define a formalism of these tasks." }, { "end": 1418.18, "start": 1414.68, "text": " But it's hard because you don't know what the tasks are going to be." }, { "end": 1423.72, "start": 1418.18, "text": " So your best bet is probably to make a formalism that completely over represents what the tasks" }, { "end": 1426.48, "start": 1423.72, "text": " can be." }, { "end": 1433.64, "start": 1426.48, "text": " It would require hard coding the core knowledge priors from 3.1.2 in a sufficiently abstract" }, { "end": 1440.4, "start": 1433.64, "text": " and combinable program form to serve as a basis functions for a kind of human like reasoning" }, { "end": 1441.5600000000002, "start": 1440.4, "text": " DSL." }, { "end": 1447.64, "start": 1441.5600000000002, "text": " We believe that solving this specific subproblem is critical to a to general AI progress." 
}, { "end": 1456.3600000000001, "start": 1447.64, "text": " Basically says whenever we can describe this is like saying that this AI progress will" }, { "end": 1461.0400000000002, "start": 1456.3600000000001, "text": " make a big step once we can formally describe human priors." }, { "end": 1468.92, "start": 1461.04, "text": " And while true this I feel the hardness of this problem is as hard as actually building" }, { "end": 1472.24, "start": 1468.92, "text": " general artificial intelligence or very close to it." }, { "end": 1482.52, "start": 1472.24, "text": " So it is a bit of a like how to how to go how to build a GI step one build a GI that's" }, { "end": 1487.76, "start": 1482.52, "text": " sort of and not exactly but it's kind of what this says." }, { "end": 1488.76, "start": 1487.76, "text": " Right." }, { "end": 1494.96, "start": 1488.76, "text": " So if I could actually have this DSL to describe every single task and I could do it you know" }, { "end": 1502.8, "start": 1494.96, "text": " such that it is not not super over capturing all the tasks then I would be able and I would" }, { "end": 1512.08, "start": 1502.8, "text": " have described human core knowledge in a sufficiently accurate degree that I could just you know" }, { "end": 1515.46, "start": 1512.08, "text": " build a GI." }, { "end": 1520.92, "start": 1515.46, "text": " So he goes on says given a task use the DSL to generate a set of candidate programs that" }, { "end": 1524.72, "start": 1520.92, "text": " turn the input grids into the corresponding output grids." }, { "end": 1529.8, "start": 1524.72, "text": " This step would reuse and recombine sub programs that previously proved useful in other arc" }, { "end": 1531.16, "start": 1529.8, "text": " tasks." }, { "end": 1535.04, "start": 1531.16, "text": " So says whenever you have captured the core knowledge or whenever you have captured the" }, { "end": 1540.72, "start": 1535.04, "text": " problem space in a formal language you can simply use that formal language to express" }, { "end": 1545.76, "start": 1540.72, "text": " whatever your input is so the that turn the input grids into the corresponding output" }, { "end": 1546.76, "start": 1545.76, "text": " grids." }, { "end": 1551.92, "start": 1546.76, "text": " So you would put in these demonstration examples and describe this with your formal language" }, { "end": 1557.3600000000001, "start": 1551.92, "text": " that you have and you can somehow reuse and recombine sub programs that previously proved" }, { "end": 1565.46, "start": 1557.3600000000001, "text": " useful for basically asking you to write to come up with source code that would generate" }, { "end": 1572.96, "start": 1565.46, "text": " these demonstration examples in the language of your DSL." }, { "end": 1578.44, "start": 1572.96, "text": " And then he says select top candidates among these programs so you would generate multiple" }, { "end": 1587.28, "start": 1578.44, "text": " versions of source code that generate these things based on a criterion such as a program" }, { "end": 1590.16, "start": 1587.28, "text": " simplicity or program likelihood." }, { "end": 1594.46, "start": 1590.16, "text": " Note that we do not expect that merely selecting the simplest possible program that works on" }, { "end": 1599.68, "start": 1594.46, "text": " training pairs will generalize well to test pairs." }, { "end": 1605.52, "start": 1599.68, "text": " And then use the top three candidates to generate output grids for the test examples." 
}, { "end": 1613.28, "start": 1605.52, "text": " So I hope the approach here I feel it makes sense but it is sort of over hopeful in my" }, { "end": 1617.32, "start": 1613.28, "text": " mind and that's mainly because of step one." }, { "end": 1622.04, "start": 1617.32, "text": " So step one asks you to come up with like a programming language that can capture all" }, { "end": 1628.52, "start": 1622.04, "text": " the tasks in this all the tasks in the data set even though you don't know what the tasks" }, { "end": 1635.2, "start": 1628.52, "text": " are and that has this human core knowledge in inside of it in a in a formally describable" }, { "end": 1636.42, "start": 1635.2, "text": " way." }, { "end": 1641.48, "start": 1636.42, "text": " And then once you have that programming language you would if you're given this task where" }, { "end": 1647.36, "start": 1641.48, "text": " you have you know a bunch of these demonstration you have a bunch of these demonstration things" }, { "end": 1655.4799999999998, "start": 1647.36, "text": " and then you have the test thing you would generate all the programs that would produce" }, { "end": 1661.6799999999998, "start": 1655.4799999999998, "text": " these demonstration examples or that would given the demo given the input grade would" }, { "end": 1666.56, "start": 1661.6799999999998, "text": " produce the output grid right you would generate all the programs and then you would select" }, { "end": 1672.04, "start": 1666.56, "text": " somehow among all these programs the one that you think generalizes the most and you would" }, { "end": 1678.6, "start": 1672.04, "text": " use that program to put this in and get out the solution." }, { "end": 1683.86, "start": 1678.6, "text": " They say it's probably it's not always the simplest program not always the shortest program" }, { "end": 1690.68, "start": 1683.86, "text": " maybe who knows like I feel step one is the kind of the crucial issue here." }, { "end": 1700.18, "start": 1690.68, "text": " Okay so they say they make some claims here and about what this what this would bring" }, { "end": 1704.64, "start": 1700.18, "text": " the community we posit that the existence of human level arc solver would represent" }, { "end": 1710.8, "start": 1704.64, "text": " the ability to program and AI from demonstrations alone only requiring a handful of demonstrations" }, { "end": 1717.8400000000001, "start": 1710.8, "text": " to specify complex tasks to do a wide range of human relatable tasks of a kind that would" }, { "end": 1724.48, "start": 1717.8400000000001, "text": " normally require human level human like fluid intelligence." }, { "end": 1728.64, "start": 1724.48, "text": " As supporting evidence we note that human performance on psychometric intelligent test" }, { "end": 1734.16, "start": 1728.64, "text": " which are similar to our is predictive of success across all human cognitive tasks." }, { "end": 1738.44, "start": 1734.16, "text": " Further we posit that since an arc solver and human intelligence would be both founded" }, { "end": 1743.24, "start": 1738.44, "text": " on the same knowledge priors the scope of application of an arc solver would be closer" }, { "end": 1748.5600000000002, "start": 1743.24, "text": " to that of human cognition making it such a solver both practically valuable and easy" }, { "end": 1755.44, "start": 1748.5600000000002, "text": " to interact with and would produce behavior that is in line with human expectations." 
}, { "end": 1762.72, "start": 1755.44, "text": " Okay so they're making the same argument that anyone before has made but they condition" }, { "end": 1767.88, "start": 1762.72, "text": " it on some things and this is I think the conclusion of the entire article here of on" }, { "end": 1772.6000000000001, "start": 1767.88, "text": " the measure of intelligence because people had this hope and they say that here claims" }, { "end": 1779.04, "start": 1772.6000000000001, "text": " are highly speculative and may prove incorrect much like Newell's 1973 hopes that progress" }, { "end": 1783.92, "start": 1779.04, "text": " on chess playing would translate into meaningful progress and achieving a broad range of cognitive" }, { "end": 1793, "start": 1783.92, "text": " abilities especially if arc turns out to feature unforeseen vulnerabilities to unintelligent" }, { "end": 1795.4, "start": 1793, "text": " shortcuts." }, { "end": 1802.44, "start": 1795.4, "text": " This is the AI effect and basically means that whenever you think a task the solving" }, { "end": 1808.6000000000001, "start": 1802.44, "text": " of a task represents AI and then you actually see the solution then the solution turns out" }, { "end": 1812.46, "start": 1808.6000000000001, "text": " to be not AI in the eyes of the human." }, { "end": 1817, "start": 1812.46, "text": " So the human at first they would say oh this task really requires intelligence and then" }, { "end": 1820.8, "start": 1817, "text": " someone solves the task and they'll say oh that's not intelligence you kind of hacked" }, { "end": 1826, "start": 1820.8, "text": " your way to that and the expectation is that in this arc challenge there might be a hacky" }, { "end": 1834, "start": 1826, "text": " way to that but I mean the good question is when at what is there even a task like this" }, { "end": 1840.42, "start": 1834, "text": " arc challenge here could that is there even a possibility of a task where you wouldn't" }, { "end": 1848.3600000000001, "start": 1840.42, "text": " say that and I'm not so sure about this they seem to be more hopeful than I am but at least" }, { "end": 1853.64, "start": 1848.3600000000001, "text": " they say the arc challenge is founded on the same priors as a human has it gives you the" }, { "end": 1859.88, "start": 1853.64, "text": " same amount of experience as a human has right and therefore it is much more comparable to" }, { "end": 1862.88, "start": 1859.88, "text": " human intelligence." 
}, { "end": 1870.92, "start": 1862.88, "text": " They go over some weaknesses right here of criticizing their own thing generalization" }, { "end": 1876.8000000000002, "start": 1870.92, "text": " is not quantified so they have a measure of generalization in the previous chapter but" }, { "end": 1882.6200000000001, "start": 1876.8000000000002, "text": " they don't use it right here test validity is not established data set size and diversity" }, { "end": 1892.3600000000001, "start": 1882.6200000000001, "text": " may be limited and so on but I in my mind this I would not consider this as like an" }, { "end": 1898.8799999999999, "start": 1892.36, "text": " AGI task or anything like this I'm pretty sure the solution to this will come in a form" }, { "end": 1904.8799999999999, "start": 1898.8799999999999, "text": " again where people don't really think it's it exhibits intelligence but I do like the" }, { "end": 1912.32, "start": 1904.8799999999999, "text": " task as such and as a machine learner I am very excited to think about how machine learning" }, { "end": 1920.84, "start": 1912.32, "text": " can go about solving this task and especially with what we've seen from something like GPT-3" }, { "end": 1926.3999999999999, "start": 1920.84, "text": " it has exactly this kind of structure where you train on a giant data set blah blah blah" }, { "end": 1932, "start": 1926.3999999999999, "text": " you pre train your language model but then at inference time you input a bunch of these" }, { "end": 1940.4399999999998, "start": 1932, "text": " demonstration examples and you ask it for the next output so I feel that might be a" }, { "end": 1948.84, "start": 1940.4399999999998, "text": " good start for for doing it the question of course is what what then do you pre train" }, { "end": 1955.56, "start": 1948.84, "text": " this model on this GPT-3 for ARC what's the pre training data set for it and I guess that's" }, { "end": 1962.32, "start": 1955.56, "text": " going to be the challenge and probably going to require people to specifically program" }, { "end": 1969.1, "start": 1962.32, "text": " all of these priors into a data set generator for pre training so that would be my approach" }, { "end": 1974.56, "start": 1969.1, "text": " my approach would be write a data set generator for pre training and GPT-3 model to do these" }, { "end": 1982.1599999999999, "start": 1974.56, "text": " kind of tasks and in order to write the data set generator you'd have to basically program" }, { "end": 1988.54, "start": 1982.1599999999999, "text": " in all of these priors and that's not going to be easy because your best bet is to sort" }, { "end": 1993.36, "start": 1988.54, "text": " of put yourself into the shoes of Chalet and be like oh if I were to design a task what" }, { "end": 1997.52, "start": 1993.36, "text": " kind of things would I do and then try to capture that that's going to be your best" }, { "end": 2004.48, "start": 1997.52, "text": " bet your most honest bet with respect to the challenge is to try to as faithfully as possible" }, { "end": 2011.04, "start": 2004.48, "text": " implement something like an object Ness prior where cohesion and persistence are captured" }, { "end": 2018.1, "start": 2011.04, "text": " that would be the most scientifically sound approach to my approach alright so that was" }, { "end": 2025.08, "start": 2018.1, "text": " my take on the ARC data set if you have any comments I'm very excited to hear comments" }, { "end": 2030.8, "start": 2025.08, "text": " on this if you have 
already tried the ARC challenge and have some insight, I also welcome" }, { "end": 2035.36, "start": 2030.8, "text": " comments on that. And with that, I'll see you next time. Bye bye." } ]
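To make the generate-filter-rank pipeline discussed in this video concrete, here is a minimal Python sketch under heavy assumptions: the three grid primitives and the toy demonstration task are invented purely for illustration, since Chollet's proposal only says the DSL should encode the core-knowledge priors, not which primitives it contains.

```python
from itertools import product

# Toy "DSL": each primitive maps an input grid to an output grid. These three
# primitives are made up for this sketch; a serious attempt would need
# primitives for objects, counting, symmetry, goal-directedness, and so on.
def identity(grid):
    return [row[:] for row in grid]

def flip_horizontal(grid):
    return [row[::-1] for row in grid]

def flip_vertical(grid):
    return [row[:] for row in grid[::-1]]

PRIMITIVES = [identity, flip_horizontal, flip_vertical]

def compose(program):
    """Turn a sequence of primitives into one grid -> grid function."""
    def run(grid):
        for step in program:
            grid = step(grid)
        return grid
    return run

def solve(demo_pairs, test_input, max_depth=3, top_k=3):
    """Enumerate DSL programs, keep those consistent with every demonstration
    pair, rank survivors by simplicity (program length), and apply the top
    candidates to the test input -- mirroring ARC's three allowed guesses."""
    candidates = []
    for depth in range(1, max_depth + 1):
        for program in product(PRIMITIVES, repeat=depth):
            run = compose(program)
            if all(run(x) == y for x, y in demo_pairs):
                candidates.append(program)
    # Simplicity as the ranking criterion; as noted above, this alone is not
    # expected to generalize perfectly to the test pairs.
    candidates.sort(key=len)
    return [compose(p)(test_input) for p in candidates[:top_k]]

# The demonstrations show a horizontal mirror; the solver should recover it.
demos = [([[1, 0], [0, 0]], [[0, 1], [0, 0]])]
print(solve(demos, [[0, 2], [0, 0]])[0])  # -> [[2, 0], [0, 0]]
```

In a real attempt, brute-force enumeration would have to give way to a smarter search over a far richer primitive set, but the demonstration-consistency check and the simplicity ranking are exactly the selection criteria sketched in the discussion above.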
q6Kyvy1zLwQ
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
BERTology Meets Biology: Interpreting Attention in Protein Language Models (Paper Explained)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "bert", "transformer", "mlm", "language model", "masked language modeling", "proteins", "protein", "amino acid", "primary", "secondary", "tertiary", "structure", "helix", "strand", "band", "sheet", "turn", "binding site", "contact map", "dna", "rna", "amino acids", "proline", "phenylalanine" ]
Proteins are the workhorses of almost all cellular functions and a core component of life. But despite their versatility, all proteins are built as sequences of the same 20 amino acids. These sequences can be analyzed with tools from NLP. This paper investigates the attention mechanism of a BERT model that has been trained on protein sequence data and discovers that the language model has implicitly learned non-trivial higher-order biological properties of proteins. OUTLINE: 0:00 - Intro & Overview 1:40 - From DNA to Proteins 5:20 - BERT for Amino Acid Sequences 8:50 - The Structure of Proteins 12:40 - Investigating Biological Properties by Inspecting BERT 17:45 - Amino Acid Substitution 24:55 - Contact Maps 30:15 - Binding Sites 33:45 - Linear Probes 35:25 - Conclusion & Comments Paper: https://arxiv.org/abs/2006.15222 Code: https://github.com/salesforce/provis My Video on BERT: https://youtu.be/-9evrZnBorM My Video on Attention: https://youtu.be/iDulhoQ2pro Abstract: Transformer architectures have proven to learn useful representations for protein classification and generation tasks. However, these representations present challenges in interpretability. Through the lens of attention, we analyze the inner workings of the Transformer and explore how the model discerns structural and functional properties of proteins. We show that attention (1) captures the folding structure of proteins, connecting amino acids that are far apart in the underlying sequence, but spatially close in the three-dimensional structure, (2) targets binding sites, a key functional component of proteins, and (3) focuses on progressively more complex biophysical properties with increasing layer depth. We also present a three-dimensional visualization of the interaction between attention and protein structure. Our findings align with known biological processes and provide a tool to aid discovery in protein engineering and synthetic biology. The code for visualization and analysis is available at this https URL. Authors: Jesse Vig, Ali Madani, Lav R. Varshney, Caiming Xiong, Richard Socher, Nazneen Fatema Rajani Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://discord.gg/4H8xxDF BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Hi there! Today we'll look at BERTology Meets Biology: Interpreting Attention in Protein Language Models by Jesse Vig, Ali Madani, Lav R. Varshney, Caiming Xiong, Richard Socher and Nazneen Fatema Rajani. This paper is an investigative paper into models that are trained on biological data, specifically into BERT models. Actually into one specific BERT model that is trained on protein sequences. Now it is trained to simply perform language modeling on these protein sequences, but out of this language model you can then inspect this BERT model and read important biological data of these proteins, higher order data, from the attention heads of the BERT model, which is pretty interesting. It basically means that the information of these higher order functions is at some point encoded in the structure of the language of the protein sequence. So we're going to go through what this means and how this comes about and what they did in order to investigate. I think this is a pretty cool investigative work and probably very promising for future research. Yeah, as always if you like content like this, consider sharing it out and leaving a like. Also tell me what you think in the comments. So, biology. Really quick, for people who maybe have never heard this: in every cell you have this thing called DNA, which basically is an encoding of all of your biological functions. Now usually biological functions are realized through proteins. So DNA is basically a building plan for all of your proteins. This happens in the following two steps. First there is this transcription step where RNA is built. This is basically a copy of your DNA, but it's only a single strand, as you can see right here. And then there is a translation step that finally translates the RNA into the protein. What you will end up with is just a sequence of these beads right here. Now these beads are what are called amino acids. So a protein is simply a chain of these amino acids. There are 20 different amino acids and the order of these amino acids in the chain makes the function of the protein. Now specifically we know about these proteins that it seems to be very important what their three-dimensional shape is. So a lot of these different amino acids have different chemical properties. Some are sort of, I think, negatively charged, some are neutral, some are acids and so on. So they have very different chemical properties. So once you build this protein and you kind of release it into the cell, it will curl up into a three-dimensional structure. So this one might be doing something like this and sort of form a circle or something like this, just because these proteins here kind of attract each other, maybe electrically, and thus the protein forms a circle, and the function of the protein is very much related to its shape. So if it is a circle it could maybe trap something else in here. So you really have to think of these things like kind of tools. There are proteins that cut other proteins, and they are really shaped sort of like a scissor that exactly fits these other proteins such that you can effectively cut them. So sometimes you can substitute an amino acid for a different amino acid, like this here. If it doesn't change the shape, very often you're fine, the protein function isn't changed. But if you change a different amino acid that is sort of vital to the shape, and the shape changes, then your protein very often loses function. So mutations in DNA sometimes lead to mutations in protein. 
Not always because there is some redundancy in this translation step from RNA. But if they do lead to a different amino acid it doesn't actually mean that the function changes. So there is sort of value in analyzing the sequence of the structure of proteins rather than the structure of DNA. Of course it's also important to analyze the structure of DNA but it is equally important to analyze the structure of proteins because not all the information is in the sequence. Not all the obvious information is in the sequence. So what does this paper do? This paper goes and takes a model that has been trained on protein data. So if you look at this protein it is simply a sequence of amino acids and these amino acids they all have names. I think I have a table somewhere here. Yes so these are the different amino acids that exist and you can see a protein is simply a sequence of these names. So usually they're abbreviated by like a three-letter abbreviation or just a one-letter abbreviation. So a protein might be AVMMVAG and so on. And this is just a string of text. So what I can do is I can train a language model on this. A language model is simply a model that takes a piece of text and tells you what's the next piece of text. So what's the next letter, what's the next word, in this case what's the next amino acid. And we can use tools from NLP for that. Specifically we can train a BERT model. Now BERT works a bit differently than a standard language model. BERT does what is called masked language modeling. So you would take this string, you would feed it into a BERT model right here. And I've made an entire video on BERT if you want to check that out. And what you'll do by inputting that you'll mask out some of the tokens. So you'll maybe mask out this one, mask out this one, and then you ask the model to reconstruct those. We say that here is an M and here is an A without seeing them. So the model somehow has to learn from the surrounding amino acids what this amino acid could be. So it has to reconstruct this sequence. So the hope here is, in natural language, is that BERT somehow learns something about language itself. By being able to reconstruct these things it has learned something about language, about which words appear together and when. It might even learn very long distance relationships between words just because it has to predict those. And the idea carries over to biology. So we might hope that a BERT trained on an amino acid sequence will learn something about the language of proteins, about the amino acid sequence. And our goal here is to ask, can we somehow infer the 3D shape of a protein, which is the important part, from its sequence right here? So given its sequence, can we infer the 3D shape? Now as I understand it, usually this has to be done in like a simulation. So you would build this in a simulator and then you do like some sort of a molecule simulation to see how this ends up in a 3D shape. You could train a model to just predict the 3D shape, but in this case we're just interested in what does the BERT model learn about the 3D shape while only ever having been trained on predicting the sequence of amino acids. So it's never been trained to look at the 3D shape. And that's our goal here. So specifically we'll look at two different things. So here you can see examples of proteins and their high-level structures. So in these proteins what you call the primary structure is this sequence of amino acid. This is simply which amino acids are in which order. 
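To make the masked-language-modeling setup just described concrete, here is a minimal sketch, assuming a plain 20-letter amino acid vocabulary and uniform 15% masking. Real BERT training also sometimes keeps or randomly replaces the selected tokens instead of always masking them, and this is of course not the actual training code of the pre-trained protein model the paper inspects.

```python
import random

# The 20 standard amino acids, one-letter codes, plus a mask token.
AMINO_ACIDS = list("ACDEFGHIKLMNPQRSTVWY")
MASK = "<mask>"

def mask_sequence(sequence, mask_prob=0.15, seed=0):
    """BERT-style masking for a protein: hide ~15% of residues and keep the
    originals as labels; the model is then trained to reconstruct them from
    the unmasked surroundings."""
    rng = random.Random(seed)
    tokens, labels = [], []
    for residue in sequence:
        if rng.random() < mask_prob:
            tokens.append(MASK)
            labels.append(residue)   # target the model must recover
        else:
            tokens.append(residue)
            labels.append(None)      # no loss on unmasked positions
    return tokens, labels

# Toy usage on a made-up sequence.
tokens, labels = mask_sequence("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ")
print(tokens)
print([l for l in labels if l is not None])  # the residues to reconstruct
```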
There is a thing called the secondary structure, and we often observe that spans of these amino acids, like substrings, form what are called these helixes, as you can see here, or these sheets, I don't know how they're called in English, strands or sheets. I think these are the alpha helixes and these are the beta sheets, and there is also a turn. I think this here might be a turn. So there are these kinds of secondary structures, and then the tertiary structure is how these arrange, this is still one protein. This is one unbroken chain of amino acids. And you can see this here kind of forms this double ring, which would be its tertiary structure. Very important for predicting the tertiary structure is to predict when two amino acids are close to each other. So if we have a chain right here, and the chain, as we saw before, kind of turns and bends on itself, then these two amino acids here are in very close contact. And to predict which amino acids are in close contact to each other helps you determine the tertiary structure. So that's a consequence of it. So we wonder, does BERT know intrinsically which of these amino acids are going to end up being in contact with each other, without ever having been trained to do it? The second thing we're interested in are binding sites. So here, what you might not be able to see, but we made this example before, where this sort of forms a loop and then, as I said, can trap something here, like another molecule. And this is what we would call a binding site. A binding site is an amino acid that, maybe through the structure of the surrounding amino acids as well, but also through its properties and how it is exposed in the 3D shape, acts as sort of a receptor for other molecules. It binds to other things. So think of your hemoglobin that traps the oxygen in your blood, or something like this. It is where a chemical reaction or a reaction with something else will happen. That's a binding site. And we are interested: does BERT, the BERT that is only trained on a language modeling objective, know which ones are the binding sites? Because, you know, that would be very interesting and not something BERT was trained on. By the way, I particularly liked Richard Socher's tweet on this. I think he tweeted out, oh, a BERT trained only on language modeling can predict binding sites and biological properties, and formulated it like, you know, like GPT-3 was formulated: if we train on Wikipedia, our model can do math. I thought it was kind of a satire headline. If we train on Wikipedia, our model can predict biology, and also it can tie your shoes and cook your dinner. So it's trained on language modeling on biological data, and now that makes sense. So they're going to look at two different things, or actually more than two different things, but they formulate this in an abstract way right here. So what they'll look at are these so-called properties: a property F, and this property F can be, for example, that an amino acid is a binding site. The property F can also be that two amino acids are in contact with each other. So F always takes I and J. In the case, for example, where this is the contact property, then it simply is the indicator function for when I and J are in contact. And if it is just a binding site, then I think we are looking at J: the token-level property we define to be an indicator that returns one if the property is present in token J. So whenever J is a binding site, then that holds. So what we're looking at are these attention heads in BERT. 
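Since contacts will come up repeatedly, here is a minimal sketch of how a ground-truth contact map can be derived from a protein's 3D coordinates. The 8 Angstrom cutoff is a common convention in the literature; whether the paper's data uses exactly this definition is an assumption here, and the coordinates below are made up.

```python
import numpy as np

def contact_map(coords, threshold=8.0):
    """Binary contact map from per-residue 3D coordinates (n x 3 array):
    residues i and j count as in contact when they are spatially closer
    than the threshold, no matter how far apart they sit in the chain."""
    dists = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    cmap = dists < threshold
    np.fill_diagonal(cmap, False)  # a residue is not its own contact
    return cmap

# Toy chain that bends back on itself: residues 0 and 4 are far apart in the
# sequence but spatially close, which is exactly the interesting case here.
coords = np.array([[0, 0, 0], [10, 0, 0], [20, 0, 0],
                   [10, 6, 0], [2, 6, 0]], dtype=float)
print(contact_map(coords).astype(int))
```

The binding-site property is simpler still: it is a per-token flag, so its indicator f(i, j) depends only on whether position j is annotated as a binding site.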
If you don't know BERT has an attention mechanism which basically means from layer to layer each token can attend to all other tokens. So here the amino acid sequence I've drawn it twice and the next layer representation of this amino acid will be able to gather information from all of the other amino acid through an attention mechanism through a dynamic routing algorithm. I've made a video on attention is all you need if you want to find out more how this works. Now what we're interested in is the strength of these connections. So the hypothesis is if molecule 1 and 3 are contact sites then maybe we will find a layer where this connection between 1 and 3 is very strong. That would indicate that there is a connection site or that would indicate that BERT has learned something about the connection sites. If we find this repeatedly so if we look at many many proteins and whenever we know that there is a contact between two things and then we observe that the corresponding attention is very high then we can be pretty sure that BERT has learned something about contact between amino acids. The same goes for binding sites so if 4 here is a binding site and then all the connections all the attention that the higher layer gets from 4 so all the information routed away from 4 is very strong that means all these other tokens are paying special attention to the token number 4 to this amino acid and if we find that there's a big correlation with this being a binding site then we can reasonably conclude that BERT has learned something about binding sites. So we're going to do a correlative analysis for proteins where we know the binding sites where we know the contacts. We can analyze them we can run simulations therefore we can know them. So we're going to look at this quantity right here which is simply a normalized quantity. So we're going to look at the attention in a given attention head so as you know BERT has many layers with many attention heads and we're going to look at whether or not this property is active and just normalize it by the total attention in that head so that we get some kind of a percentage number. That's the first task we're basically going to look at how does the attention correlate with these properties and the second task we're going to do is this probing task. So a probing task is like a linear probe in like a classifier so what we're going to do is we're going to take a layer right here and even though it's an intermediate layer we're simply going to run it through a linear classifier and then decide is this a binding site for example or not. Is a given amino acid a binding site or not? Is a given pair a contact or not? So this is kind of a linear probe but this sort of takes a backseat in this paper the analysis is really on the attention heads and what the attention heads learn. And that's already it they don't they take a pre-trained BERT model so there are these BERT models that are already trained on these protein databases and first they look simply can we find attention heads that correlate with a given amino acid. So here you see the attention to the amino acid this is proline I believe and this is phenylalanine is that the same in English? Yes phenylalanine and proline right here. 
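The normalized quantity described above can be written down in a few lines. This is a sketch of the idea, not the paper's exact evaluation code; the toy attention and property maps are random, and details such as thresholding away very small attention weights are omitted.

```python
import numpy as np

def property_attention_share(attn_maps, prop_maps):
    """Share of one head's attention mass that lands where a property holds,
    aggregated over a dataset:
        p = sum_x sum_ij f(i, j) * alpha_ij(x) / sum_x sum_ij alpha_ij(x).
    attn_maps: one (n x n) attention matrix per protein.
    prop_maps: one (n x n) boolean indicator matrix f(i, j) per protein."""
    weighted = sum(float((a * f).sum()) for a, f in zip(attn_maps, prop_maps))
    total = sum(float(a.sum()) for a in attn_maps)
    return weighted / total

# Toy check: a head whose attention concentrates on contact pairs scores high.
rng = np.random.default_rng(0)
contacts = rng.random((6, 6)) < 0.2        # hypothetical f(i, j) indicators
focused = np.where(contacts, 1.0, 0.01)    # head attending mostly to contacts
uniform = np.ones((6, 6))                  # head spreading attention evenly
print(property_attention_share([focused], [contacts]))  # close to 1
print(property_attention_share([uniform], [contacts]))  # ~ fraction of contact pairs
```

Computing this for every head in every layer, over many annotated proteins, is what lets the paper single out individual heads, and the linear probes mentioned above are the complementary check: train a small classifier on intermediate representations instead of reading attention weights directly.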
So you can see in the plots here that there's almost no attention pretty much throughout the network that pays special attention to the amino acid proline, except this head right here seems to have, if you look at the scale, over like 70% of attention always going to proline in this particular head. So this is layer 1, head number 11, which focuses 78% of its attention on proline. Now this is not that special if you think about it, because in language models as well, in natural language models, you might want to think that you have some mechanism in your neural network that's especially specialized on like a very particular word in the language, because that might just be an often occurring, very particular word. For example, in English maybe 'the' is very important, or the word 'what', these are like very indicative, very often occurring words, so it is reasonable to expect to find an attention head that pays a lot of attention to these things, especially here where our vocabulary size is 20 instead of like 30,000 in natural language. And the same goes for this phenylalanine, where you can see that in the last layer and in the first layer you have attention, and also for the proline you have it in the last layer. So why does this make sense? Because this is what we would expect from like single tokens, these are not interactions yet, these are not biological functions yet. So we know that in the lower layers of a neural network we have these kinds of basic feature extractors, and here these basic feature extractors appear to be simply paying attention to one specific token in the vocabulary a lot. Okay, so these heads sort of specialize for single amino acids, and the same in the last layer. So in the very last layer, the task of the very last layer is to prepare for the classification tasks. So if you remember the BERT model, you have layer, layer, layer, layer, and at the end you'll have to predict which ones are masked down here. So at the end you'll have to predict single amino acids again. So if there's a proline masked here, you'll have to predict the proline. So it also makes sense that the last layers would very much specialize to single tokens. So this does make sense. Now our question is going to be, do we find the biological functions, and where would you expect them? We would expect the, let's say the tertiary, sorry, the secondary structures, which are sort of one level higher than the primary structure, we would expect to find them maybe here, and then we would expect to find the tertiary structures maybe somewhere here, okay, because these are the most high-level. And then it goes back again, or maybe it's like we find the tertiary structures rather here and here again, and then in the middle we'll find the most high-level, the tertiary structure, sorry, the secondary, this drawing is getting too weird. There could be multiple scenarios that could fit here, but until now it sort of makes sense. So they do an additional investigation where, as I told you, sometimes you can substitute an amino acid and nothing really happens, right? And in fact this probably happens in you right now, you probably might have some mutation that changed some amino acid, and you don't even realize, because it's just fine, you don't notice. So the biologists can build these matrices of how much you can substitute amino acids with each other. So here you see these BLOSUM62 substitution scores, which are, I guess, very high if you can substitute two amino acids with
each other and the effect is negligible, and very low if it's the other way around. Now this is interesting so far, but you compare this to this matrix right here. This is the attention similarity. So what we'll do is, for each two amino acids, we take those two attention matrices and we'll calculate the correlation between the attention matrices. And our hypothesis is that the more correlated the attention patterns are between the two amino acids, the more likely we are to substitute them, because as a direct result of our language model, our language model is reconstructing, right, these things. So our language model is going to treat, if in natural language something is like a synonym, right, our language model is going to treat synonyms very similarly to each other, because they're synonyms, they can be exchanged. So a good language model should learn that they are almost the same, and therefore the attention pattern is going to be almost the same. So a high correlation, we hypothesize, means that the function of the amino acid is similar, and therefore we can substitute it easily. So this here is the matrix of the correlations between each two attention patterns of these amino acids, and if you compare the two right here, they look extremely similar. Just have a look for a little while, and you'll see that the patterns do not match perfectly, but they are very, very similar. The dark spots are in the same places, the light spots are in the same places. So this already makes a good case that the language model here has learned something about biology. Now what we want to do is, we want to investigate higher order functions. So here we are interested in these contact maps, right? So how likely is it that two amino acids are in contact? And we'll look at it through the lens of attention, as we did before. So here you'll see the percentage of each head's attention that is aligned with contact maps, averaged over the data set, suggesting that head 12-4 is uniquely specialized for contact prediction. So look at this, this head here is just spiking. So remember, before we said our analysis is, whenever we're basically measuring the correlation of two things being in contact, because we know it from our simulator or from our data set, the correlation of that with an attention connection being particularly strong, and will we find it in this attention head right here? So this layer 12, head number four, will always peak out whenever two things are in contact. Now you can see that it's not like always, it's like 25% of its attention, but significantly more than anything else right here. In fact, if you group the things by this attention, you can build the following plot. So you can see right here the probability that two amino acids are in contact as a function of attention between the amino acids in head 12-4, showing attention approximates a perfectly calibrated estimator, which would be the green line. So here, for each pair of amino acids, we make a histogram right here of, sorry, not a histogram, we plot the probability: if they have the attention weight 0.9, we plot how likely it is that they are in contact. So if we just look at the data, and we simply take this attention weight as a predictor of being in contact, we get the blue curve, and the green curve would be if we could perfectly predict from this attention head what the probability of contact would be. And you can see that the
fit is fairly good you can't predict with super high accuracy but the fit is fairly good and you can see that general trend that as the attention in this head rises the probability of the two amino acids being in contact with each other also rises so we can sort of confidently say that BERT has learned something about a higher level funk a higher level biological structure just from the language modeling objective how can we interpret this this must somehow mean that it is it is possible it is vital to it is vital for reconstructing the sequence from its surroundings so if we delete this right here if if this if these two are in contact in the 3d structure that makes probably probably means that this thing right here is a very good predictor of what was here right if we mask this out and we're asked to reconstruct which amino acid was there then it probably helps to look at its neighbors right it probably always helps to look at one's neighbors especially also in natural language but if these two are in contact they they have very special they have very special connection to each other it's very you can basically read out from this one which one this was this is sort of like if you have a sentence and you say does I don't know I can't come up with with one right now but if it's like da da da da da and then there is a name like mark and then da da da da da da and then there is him right and you would expect if I drop out okay let's do it the other way around if I drop out him then from the text right here you can probably determine that it is some sort of pronoun but then you go back and you see ah it's mark okay so it's not like it's not like it or some or or or she it's probably he or him this is sort of the analogous structure right here in biology the second thing we're looking at is these these binding sites now these are single properties of of of different amino acids and we're simply looking at all the incoming or sorry all the F all the other tokens that focuses their attention why is this important because these binding sites are central to the structure of the or to the function of the protein right if this here is a binding site then that's a very central important point of the protein so a lot of these other things are going to be determined by what this binding site is this binding site needs to have a very spurted particular function and therefore probably needs to be a very particular amino acid and the other things here are sort of supporting this binding site because they form the 3d structure around it and so on so you would expect a lot of attention to be put on this binding site and what do we find the we find that it's a bit more murky than before so you can see that the attention is kind of spread out percentage of each head's attention that focuses on binding sites especially in the deeper layers binding sites are targeted at much higher frequency than would occur by chance head 7 1 has the highest percentage with 34% so also here you can see that it is spread out but this is because multiple heads are now focusing on these binding sites because probably binding sites come in different variations so you'll have lots of heads specializing on attending to binding sites and they say it is much higher frequency than would occur by chance and you can see here this head is the highest with 34% of its attention focused on binding sites you can also see the general trend of the attention being rather in the later layers which we would expect from a tertiary structure now 
yeah it would be interesting here here you also see that actually most of the things are in the in the last layer which points to points to rather maybe lower lower level information because we reasoned before about the last layer or I was just wrong but also in a general trend you can see that the attention is rather shifted towards the later layers because this is sort of a higher order function okay if you look at the same calibration experiment you can see that the picture is not as clear there is the general trend at the beginning but then it sort of flattens out so you can sort of differentiate the very probably not a binding site from the somewhat probably a binding site but it's not a perfectly calibrated classifier and that might just be because there are many things specializing in different types of binding sites so you can't just go to this one head so this is just for this one head you can't just go to that one and expect that to classify all the binding sites because you might want to be you might want to combine all of the high ranking ones here to form a classifier the last experiment they do is these linear probes which where they just go and they just build classifiers from different parts of the network and you can see right here that what is predicted and how well they work so each bar here is going to be the difference of performance so this is differential performance of diagnostic classifier by layer sorted by task order in figure 8 each plot shows the change in performance between the given layer and the previous layer okay so a bar up shows it's performing better than the previous layer bar down shows it's performing worse than the previous layer so you see right here that the these are the secondary structures right here and you can see that there is a lot of performance in the earlier layers right here and sort of not that high performance in the later layers whereas for the tertiary structures the binding site and the contact you can see that there is a bit of performance on in places but it sort of tends to be more towards the middle certainly more at towards the middle of the end of the network than the the secondary structures which sort of makes sense with our hypothesis you can also see this here where they show the percent of attention focused as a function of layer and the red is the center of mass and you can see that as the the secondary structures this their center of mass is at a lower layer in general than the tertiary functions all of this is not perfect of course but it's still an open question I guess whether or not it's not perfect because we haven't built a strong enough language model yet do I want to say GPT-4 is now for biology and not for language or is it because there is really you need you really can't very well predict the these things just from a language model I mean you should technically all the information is there but maybe the language model objective as such isn't able to capture that information so yeah this this was the paper it's pretty simple they have the in the appendix they have a lot of a lot of these additional experiments or full experiments I believe for all the amino acids and so on and I invite you to check that out in general I like this kind of work because it's very applied it's and it can you know tell us something about the nature of both these language models and the biological things that we that we care about in biology okay I'm just talking crap right now thanks for being here I hope you enjoyed it and bye 
bye
[ { "end": 5.84, "start": 0, "text": " Hi there! Today we'll look at Bertology meets Biology interpreting attention in" }, { "end": 11.92, "start": 5.84, "text": " protein language models by Jesse Vig, Ali Madani, Lav R Varshini, Kaiming Xiong," }, { "end": 19.92, "start": 11.92, "text": " Richard Sokker and Nazneen Fatima Rajani. This paper is a investigative paper into" }, { "end": 25.560000000000002, "start": 19.92, "text": " models that are trained on biological data, specifically into BERT models." }, { "end": 31.88, "start": 25.56, "text": " Actually into one specific BERT model that is trained on protein sequences." }, { "end": 38.12, "start": 31.88, "text": " Now it is trained to simply perform language modeling on these protein" }, { "end": 45, "start": 38.12, "text": " sequences, but out of this language model you can then inspect this BERT model and" }, { "end": 51.72, "start": 45, "text": " read important biological data of these proteins, higher order data from the" }, { "end": 56.04, "start": 51.72, "text": " attention heads of the BERT model, which is pretty interesting. Basically means" }, { "end": 61.92, "start": 56.04, "text": " that the information of these higher order functions is at some point encoded" }, { "end": 68.16, "start": 61.92, "text": " in the structure of the language of the protein sequence. So we're going to go" }, { "end": 74.28, "start": 68.16, "text": " through what this means and how this comes about and what they did in order" }, { "end": 79.48, "start": 74.28, "text": " to investigate. I think this is a pretty cool investigative work and probably" }, { "end": 87.24000000000001, "start": 79.48, "text": " very promising for future research. Yeah, as always if you like content" }, { "end": 92.76, "start": 87.24000000000001, "text": " like this, consider sharing it out and leaving a like. Also tell me what you" }, { "end": 101.48, "start": 92.76, "text": " think in the comments. So biology. Really quick for people who maybe have never" }, { "end": 108.56, "start": 101.48, "text": " heard this. In your every cell you have this thing called DNA, which basically is" }, { "end": 114.8, "start": 108.56, "text": " an encoding of all of your biological functions. Now usually biological" }, { "end": 120.76, "start": 114.8, "text": " functions are realized through proteins. So DNA is basically a building plan for" }, { "end": 126.08, "start": 120.76, "text": " all of your proteins. This happens in the following two steps. First there is this" }, { "end": 132.72, "start": 126.08, "text": " transcription step where RNA is built. This is basically a copy of your DNA, but" }, { "end": 137.04, "start": 132.72, "text": " it's only single strand as you can see right here. And then there is a" }, { "end": 143.72, "start": 137.04, "text": " translation step that finally translates the RNA into the protein. What will end" }, { "end": 149.44, "start": 143.72, "text": " up is just a sequence of these beads right here. Now these beads are what are" }, { "end": 155.04, "start": 149.44, "text": " called amino acids. So a protein is simply a chain of these amino acids." }, { "end": 161.51999999999998, "start": 155.04, "text": " There are 20 different amino acids and the order of these amino acids in the" }, { "end": 166.88, "start": 161.51999999999998, "text": " chain makes the function of the protein. 
Now specifically we know about these" }, { "end": 172.6, "start": 166.88, "text": " proteins that it seems to be very important how their three-dimensional" }, { "end": 177.44, "start": 172.6, "text": " shape is. So a lot of these different amino acids have different chemical" }, { "end": 184.6, "start": 177.44, "text": " properties. Some are sort of I think negatively charged, some are neutral, some" }, { "end": 189.07999999999998, "start": 184.6, "text": " are acids and so on. So they have very different chemical properties. So once" }, { "end": 194.68, "start": 189.07999999999998, "text": " you build this protein and you kind of release it into the cell it will curl up" }, { "end": 198.72, "start": 194.68, "text": " into a three-dimensional structure. So this one might be doing" }, { "end": 205.48000000000002, "start": 198.72, "text": " something like this and sort of form a circle or something like this." }, { "end": 210.8, "start": 205.48000000000002, "text": " Just because these proteins here they kind of attract each other maybe" }, { "end": 216.52, "start": 210.8, "text": " electrically and thus the protein forms a circle and the function of the protein" }, { "end": 222.24, "start": 216.52, "text": " is very much related to its shape. So if it is a circle it could maybe trap" }, { "end": 226.48000000000002, "start": 222.24, "text": " something else in here. So you really have to think of these things like kind" }, { "end": 231.24, "start": 226.48000000000002, "text": " of tools. There are proteins that cut other proteins and they are really" }, { "end": 238.4, "start": 231.24, "text": " shaped sort of like a scissor that exactly fits these other proteins such" }, { "end": 245.12, "start": 238.4, "text": " that you can effectively cut them. So sometimes you can substitute an" }, { "end": 250.56, "start": 245.12, "text": " amino acid for a different amino acid like this here. If it doesn't change the" }, { "end": 257.4, "start": 250.56, "text": " shape very often you're fine. The protein function isn't changed." }, { "end": 262.56, "start": 257.4, "text": " But if you change a different amino acid that is sort of vital to the shape and" }, { "end": 269.68, "start": 262.56, "text": " the shape changes then your protein very often loses function. So mutations in" }, { "end": 277.96, "start": 269.68, "text": " DNA sometimes lead to mutations in protein. Not always because there is some" }, { "end": 282.64, "start": 277.96, "text": " redundancy in this translation step from RNA. But if they do lead to a" }, { "end": 288, "start": 282.64, "text": " different amino acid it doesn't actually mean that the function changes. So there" }, { "end": 294.64, "start": 288, "text": " is sort of value in analyzing the sequence of the structure of proteins" }, { "end": 298.56, "start": 294.64, "text": " rather than the structure of DNA. Of course it's also important to analyze" }, { "end": 304.47999999999996, "start": 298.56, "text": " the structure of DNA but it is equally important to analyze the" }, { "end": 310.88, "start": 304.48, "text": " structure of proteins because not all the information is in the" }, { "end": 317.6, "start": 310.88, "text": " sequence. Not all the obvious information is in the sequence. So what does this" }, { "end": 323.44, "start": 317.6, "text": " paper do? This paper goes and takes a model that has been trained on protein" }, { "end": 329.36, "start": 323.44, "text": " data. 
So if you look at this protein it is simply a sequence of amino acids and" }, { "end": 333.36, "start": 329.36, "text": " these amino acids they all have names. I think I have a table somewhere here." }, { "end": 340.24, "start": 333.36, "text": " Yes so these are the different amino acids that exist and you can see a" }, { "end": 347.68, "start": 340.24, "text": " protein is simply a sequence of these names. So usually they're abbreviated by" }, { "end": 352.6, "start": 347.68, "text": " like a three-letter abbreviation or just a one-letter abbreviation. So a protein" }, { "end": 363.52000000000004, "start": 352.6, "text": " might be AVMMVAG and so on. And this is just a string of text. So what I" }, { "end": 368.36, "start": 363.52000000000004, "text": " can do is I can train a language model on this. A language model is simply a" }, { "end": 374.56, "start": 368.36, "text": " model that takes a piece of text and tells you what's the next piece of text." }, { "end": 378.12, "start": 374.56, "text": " So what's the next letter, what's the next word, in this case what's the next" }, { "end": 385.16, "start": 378.12, "text": " amino acid. And we can use tools from NLP for that. Specifically we can train a" }, { "end": 389.96, "start": 385.16, "text": " BERT model. Now BERT works a bit differently than a standard language" }, { "end": 394.04, "start": 389.96, "text": " model. BERT does what is called masked language modeling. So you would take this" }, { "end": 399.44, "start": 394.04, "text": " string, you would feed it into a BERT model right here. And I've made an entire" }, { "end": 405.04, "start": 399.44, "text": " video on BERT if you want to check that out. And what you'll do by inputting" }, { "end": 409.36, "start": 405.04, "text": " that you'll mask out some of the tokens. So you'll maybe mask out this one, mask" }, { "end": 414.44, "start": 409.36, "text": " out this one, and then you ask the model to reconstruct those. We say that here is" }, { "end": 419, "start": 414.44, "text": " an M and here is an A without seeing them. So the model somehow has to learn" }, { "end": 427.48, "start": 419, "text": " from the surrounding amino acids what this amino acid could be. So it has" }, { "end": 433.96000000000004, "start": 427.48, "text": " to reconstruct this sequence. So the hope here is, in natural language, is that" }, { "end": 440.12, "start": 433.96, "text": " BERT somehow learns something about language itself. By being able to" }, { "end": 444.03999999999996, "start": 440.12, "text": " reconstruct these things it has learned something about language, about which" }, { "end": 448.56, "start": 444.03999999999996, "text": " words appear together and when. It might even learn very long distance" }, { "end": 455.79999999999995, "start": 448.56, "text": " relationships between words just because it has to predict those. And the idea" }, { "end": 463.47999999999996, "start": 455.79999999999995, "text": " carries over to biology. So we might hope that a BERT trained on an amino" }, { "end": 469.72, "start": 463.48, "text": " acid sequence will learn something about the language of proteins," }, { "end": 477, "start": 469.72, "text": " about the amino acid sequence. And our goal here is to ask, can we somehow" }, { "end": 483.96000000000004, "start": 477, "text": " infer the 3D shape of a protein, which is the important part, from its sequence" }, { "end": 491.16, "start": 483.96000000000004, "text": " right here? So given its sequence, can we infer the 3D shape? 
Now as I understand" }, { "end": 496.08000000000004, "start": 491.16, "text": " it, usually this has to be done in like a simulation. So you would build" }, { "end": 502.32000000000005, "start": 496.08000000000004, "text": " this in a simulator and then you do like some sort of a molecule simulation to" }, { "end": 507.72, "start": 502.32000000000005, "text": " see how this ends up in a 3D shape. You could train a model to just predict the" }, { "end": 511.76000000000005, "start": 507.72, "text": " 3D shape, but in this case we're just interested in what does the BERT model" }, { "end": 518.6800000000001, "start": 511.76000000000005, "text": " learn about the 3D shape while only ever having been trained on predicting the" }, { "end": 524.9599999999999, "start": 518.68, "text": " sequence of amino acids. So it's never been trained to look at the" }, { "end": 530.5999999999999, "start": 524.9599999999999, "text": " 3D shape. And that's our goal here. So specifically we'll look at two different" }, { "end": 535.3599999999999, "start": 530.5999999999999, "text": " things. So here you can see examples of proteins and their high-level structures." }, { "end": 541.76, "start": 535.3599999999999, "text": " So in these proteins what you call the primary structure is this sequence of" }, { "end": 547.76, "start": 541.76, "text": " amino acid. This is simply which amino acids are in which order. There is a" }, { "end": 554.08, "start": 547.76, "text": " thing called the secondary structures and we often observe that spans of these" }, { "end": 559.92, "start": 554.08, "text": " amino acids like substrings form these what are called these helixes as you can" }, { "end": 567.72, "start": 559.92, "text": " see here or these sheets. I don't know how they're strands in English. We call" }, { "end": 572.04, "start": 567.72, "text": " them sheets or I think these are the alpha helixes and these are the beta" }, { "end": 578, "start": 572.04, "text": " sheets and there is also a turn. I think this here might be a turn. So there are" }, { "end": 585.04, "start": 578, "text": " these kind of secondary structures and then the tertiary structure is how these" }, { "end": 589.64, "start": 585.04, "text": " this is still one protein. This is one unbroken chain of amino acid." }, { "end": 594.12, "start": 589.64, "text": " And you can see this here kind of forms this double ring which would be its" }, { "end": 600.9599999999999, "start": 594.12, "text": " tertiary structure. Very important for predicting the tertiary structure is to" }, { "end": 606.84, "start": 600.96, "text": " predict when two amino acids are close to each other. So if we have a chain" }, { "end": 612.8000000000001, "start": 606.84, "text": " right here and the chain as we saw before kind of turns and bends on itself" }, { "end": 619.48, "start": 612.8000000000001, "text": " then these two amino acids here are very close in close contact. And to predict" }, { "end": 626.2800000000001, "start": 619.48, "text": " which amino acids are in close contact to each other helps you determine the" }, { "end": 632.3199999999999, "start": 626.28, "text": " the tertiary structure. So that's a consequence of it. So we wonder does" }, { "end": 638.76, "start": 632.3199999999999, "text": " BERT know intrinsically which of these amino acids are going to end up being in" }, { "end": 644.04, "start": 638.76, "text": " contact with each other without ever having been trained to do it? 
The second" }, { "end": 649.76, "start": 644.04, "text": " thing we're interested in are binding sites. So here what you might not be able" }, { "end": 654.76, "start": 649.76, "text": " to see but we made this example before where this sort of forms a loop and then" }, { "end": 661.08, "start": 654.76, "text": " I say can trap something here right like another molecule. And this is what" }, { "end": 668.8, "start": 661.08, "text": " we would call a binding site. A binding site is a one amino acid that maybe" }, { "end": 673.72, "start": 668.8, "text": " through the structure of the surrounding amino acid as well but also through its" }, { "end": 680.64, "start": 673.72, "text": " properties and how it is exposed in 3d shape acts as sort of a receptor for" }, { "end": 687.92, "start": 680.64, "text": " other molecules. It binds to other things. So think of your hemoglobin" }, { "end": 695.04, "start": 687.92, "text": " that traps the oxygen in your blood or something like this. It is where a" }, { "end": 700.5, "start": 695.04, "text": " chemical reaction or a reaction with something else will happen. That's a" }, { "end": 706.6, "start": 700.5, "text": " binding site. And we are interested does BERT, the BERT that is only trained on a" }, { "end": 714.2, "start": 706.6, "text": " language modeling objective, know which ones are the binding sites? Because you" }, { "end": 719.28, "start": 714.2, "text": " know that would be very interesting and not something BERT was trained on. By the" }, { "end": 724, "start": 719.28, "text": " way I particularly liked Richard Sockers tweet on this. I think he tweeted" }, { "end": 729.72, "start": 724, "text": " out, oh BERT trained only on language model can predict binding sites and" }, { "end": 735.1800000000001, "start": 729.72, "text": " biological properties and formulated it like it was you know like GPT-3 was" }, { "end": 741, "start": 735.18, "text": " formulated like if we train on Wikipedia our model can do math. I thought it was" }, { "end": 746.56, "start": 741, "text": " kind of a satire headline. If we train on Wikipedia our model can predict biology" }, { "end": 752.7199999999999, "start": 746.56, "text": " and also it can tie your shoes and cook your dinner. So it's trained on" }, { "end": 758.3599999999999, "start": 752.7199999999999, "text": " language modeling on biological data and now that makes sense. So they're going to" }, { "end": 764.68, "start": 758.3599999999999, "text": " look at two different things or actually more than two different things but" }, { "end": 771, "start": 764.68, "text": " they formulate this in an abstract way right here. So what they'll look at is" }, { "end": 778.1999999999999, "start": 771, "text": " the so-called properties. A property F and this property F can be for example" }, { "end": 785.7199999999999, "start": 778.1999999999999, "text": " that a amino acid is a binding site. The property F can also be that two amino" }, { "end": 793.12, "start": 785.7199999999999, "text": " acids are in contact with each other. So F always takes I and J. If in the case" }, { "end": 798.76, "start": 793.12, "text": " for example where this is the contact property then it simply is the indicator" }, { "end": 810.08, "start": 798.76, "text": " function for when I and J are in contact. And if it is a just a binding site then" }, { "end": 817.52, "start": 810.08, "text": " I think we are looking at J. 
At the token level property we define to be an" }, { "end": 823.48, "start": 817.52, "text": " indicator that returns one if the property is present in token J. So whenever" }, { "end": 828.52, "start": 823.48, "text": " J is a binding site then that holds. So what we're looking at are these" }, { "end": 833.84, "start": 828.52, "text": " attention heads in BERT. If you don't know BERT has an attention mechanism" }, { "end": 840.56, "start": 833.84, "text": " which basically means from layer to layer each token can attend to all other" }, { "end": 846.0799999999999, "start": 840.56, "text": " tokens. So here the amino acid sequence I've drawn it twice and the next layer" }, { "end": 851.2, "start": 846.08, "text": " representation of this amino acid will be able to gather information from all" }, { "end": 855.6, "start": 851.2, "text": " of the other amino acid through an attention mechanism through a dynamic" }, { "end": 860.32, "start": 855.6, "text": " routing algorithm. I've made a video on attention is all you need if you want to" }, { "end": 867.5600000000001, "start": 860.32, "text": " find out more how this works. Now what we're interested in is the strength of" }, { "end": 879.88, "start": 867.56, "text": " these connections. So the hypothesis is if molecule 1 and 3 are" }, { "end": 888.68, "start": 879.88, "text": " contact sites then maybe we will find a layer where this connection between 1" }, { "end": 893.7199999999999, "start": 888.68, "text": " and 3 is very strong. That would indicate that there is a connection" }, { "end": 899.36, "start": 893.72, "text": " site or that would indicate that BERT has learned something about the" }, { "end": 903.6800000000001, "start": 899.36, "text": " connection sites. If we find this repeatedly so if we look at many" }, { "end": 909.88, "start": 903.6800000000001, "text": " many proteins and whenever we know that there is a contact between two things" }, { "end": 914.76, "start": 909.88, "text": " and then we observe that the corresponding attention is very high" }, { "end": 920.84, "start": 914.76, "text": " then we can be pretty sure that BERT has learned something about contact between" }, { "end": 930.4, "start": 920.84, "text": " amino acids. The same goes for binding sites so if 4 here is a binding" }, { "end": 936.24, "start": 930.4, "text": " site and then all the connections all the attention that the higher layer" }, { "end": 941.52, "start": 936.24, "text": " gets from 4 so all the information routed away from 4 is very strong that" }, { "end": 946.84, "start": 941.52, "text": " means all these other tokens are paying special attention to the token number 4" }, { "end": 952.8000000000001, "start": 946.84, "text": " to this amino acid and if we find that there's a big correlation with this" }, { "end": 957.0400000000001, "start": 952.8000000000001, "text": " being a binding site then we can reasonably conclude that BERT has" }, { "end": 962.84, "start": 957.0400000000001, "text": " learned something about binding sites. So we're going to do a correlative" }, { "end": 967.5600000000001, "start": 962.84, "text": " analysis for proteins where we know the binding sites where we know the" }, { "end": 973.96, "start": 967.5600000000001, "text": " contacts. We can analyze them we can run simulations therefore we can know" }, { "end": 979.44, "start": 973.96, "text": " them. So we're going to look at this quantity right here which is simply a" }, { "end": 983.6, "start": 979.44, "text": " normalized quantity. 
So we're going to look at the attention in a given" }, { "end": 988.9200000000001, "start": 983.6, "text": " attention head so as you know BERT has many layers with many attention heads" }, { "end": 995.08, "start": 988.9200000000001, "text": " and we're going to look at whether or not this property is active and just" }, { "end": 1000.0400000000001, "start": 995.08, "text": " normalize it by the total attention in that head so that we get some kind of a" }, { "end": 1005.4399999999999, "start": 1000.04, "text": " percentage number. That's the first task we're basically going to look at how" }, { "end": 1009.92, "start": 1005.4399999999999, "text": " does the attention correlate with these properties and the second task we're" }, { "end": 1016.8, "start": 1009.92, "text": " going to do is this probing task. So a probing task is like a linear probe in" }, { "end": 1023.36, "start": 1016.8, "text": " like a classifier so what we're going to do is we're going to take a layer right" }, { "end": 1028.2, "start": 1023.36, "text": " here and even though it's an intermediate layer we're simply going" }, { "end": 1034.4, "start": 1028.2, "text": " to run it through a linear classifier and then decide is this a binding site" }, { "end": 1040.4, "start": 1034.4, "text": " for example or not. Is a given amino acid a binding site or not? Is a" }, { "end": 1046.88, "start": 1040.4, "text": " given pair a contact or not? So this is kind of a linear probe but this sort of" }, { "end": 1051.68, "start": 1046.88, "text": " takes a backseat in this paper the analysis is really on the attention" }, { "end": 1056.8, "start": 1051.68, "text": " heads and what the attention heads learn. And that's already it they don't" }, { "end": 1062.04, "start": 1056.8, "text": " they take a pre-trained BERT model so there are these BERT models that are" }, { "end": 1068.48, "start": 1062.04, "text": " already trained on these protein databases and first they look simply can" }, { "end": 1075.24, "start": 1068.48, "text": " we find attention heads that correlate with a given amino acid. So here you see" }, { "end": 1082.3999999999999, "start": 1075.24, "text": " the attention to the amino acid this is proline I believe and this is phenylalanine" }, { "end": 1090.2800000000002, "start": 1082.4, "text": " is that the same in English? Yes phenylalanine and proline right here." }, { "end": 1100.3200000000002, "start": 1090.2800000000002, "text": " So you can see that the the plots here are there's almost no attention pretty" }, { "end": 1105.2800000000002, "start": 1100.3200000000002, "text": " much throughout the network that pays special attention to the amino acid" }, { "end": 1113.16, "start": 1105.28, "text": " proline except this head right here seems to have if you look at the scale" }, { "end": 1120.16, "start": 1113.16, "text": " over like a 70% of attention always goes to proline in this particular head so" }, { "end": 1131.8, "start": 1120.16, "text": " this is layer 1 head number 11 focuses 78% of its attention on proline. 
Now this" }, { "end": 1137.2, "start": 1131.8, "text": " is not that special if you think about it because in language models as well in" }, { "end": 1142.18, "start": 1137.2, "text": " natural language models you might want to think that you have some mechanism in" }, { "end": 1146.2, "start": 1142.18, "text": " your neural network that's especially specialized on like a very particular" }, { "end": 1151.1599999999999, "start": 1146.2, "text": " word in the language because that might just be a often occurring very" }, { "end": 1157.9199999999998, "start": 1151.1599999999999, "text": " particular word for example in English maybe the is very important or the" }, { "end": 1164.24, "start": 1157.92, "text": " word what these these are like very indicative very often occurring words so" }, { "end": 1168.44, "start": 1164.24, "text": " it is reasonable to expect to find a an attention head that pays a lot of" }, { "end": 1173.1200000000001, "start": 1168.44, "text": " attention to these things especially here where our vocabulary size is 20" }, { "end": 1179.8400000000001, "start": 1173.1200000000001, "text": " instead of like 30,000 in natural language. And the same goes for this" }, { "end": 1186.42, "start": 1179.8400000000001, "text": " phenylalanine where you can see that in the layer in the last layer and in the" }, { "end": 1190, "start": 1186.42, "text": " first layer you have attention and also in the proline you have in the last" }, { "end": 1194.3600000000001, "start": 1190, "text": " layer so why does this make sense because what we would expect from like" }, { "end": 1198.64, "start": 1194.3600000000001, "text": " single tokens these are not interactions yet these are not biological functions" }, { "end": 1204.5600000000002, "start": 1198.64, "text": " yet so we know that in the lower layers of a neural network we have these kind" }, { "end": 1209.48, "start": 1204.5600000000002, "text": " of basic features basic feature extractors and here these basic feature" }, { "end": 1216.4, "start": 1209.48, "text": " extractors appear to be simply paying attention to one specific token in the" }, { "end": 1221.3200000000002, "start": 1216.4, "text": " vocabulary a lot okay so they kind of these heads sort of specialize for" }, { "end": 1227.3600000000001, "start": 1221.3200000000002, "text": " single for single amino acids and the same in the last layer so in the very" }, { "end": 1234.3600000000001, "start": 1227.3600000000001, "text": " last layer the the task of the very last layer is to prepare for the" }, { "end": 1239.52, "start": 1234.3600000000001, "text": " classification tasks so if you remember the BERT model you have layer layer" }, { "end": 1244.52, "start": 1239.52, "text": " layer layer and at the end you'll have to predict which ones are masked down" }, { "end": 1248.84, "start": 1244.52, "text": " here so at the end you'll have to predict single amino acids again so if" }, { "end": 1254.96, "start": 1248.84, "text": " there's a proline masked here you'll have to predict the proline so it also" }, { "end": 1262.08, "start": 1254.96, "text": " makes sense that the last layers would very much specialize to single tokens so" }, { "end": 1270.4, "start": 1262.08, "text": " this does make sense now our question is going to be do we find the biological" }, { "end": 1274.68, "start": 1270.4, "text": " function where where would you expect them we would expect the let's say the" }, { "end": 1279.8400000000001, "start": 1274.68, "text": " tertiary sorry the secondary structures 
which are sort of one level higher than" }, { "end": 1284.64, "start": 1279.8400000000001, "text": " the primary structure we would expect to find them maybe here and then we would" }, { "end": 1289.96, "start": 1284.64, "text": " expect to find the tertiary structures maybe somewhere here okay because these" }, { "end": 1296.3200000000002, "start": 1289.96, "text": " are most highest level and then it goes it goes back again or maybe it's like" }, { "end": 1303.8799999999999, "start": 1296.32, "text": " we find the tertiary structures rather here and here again and then in the" }, { "end": 1307.84, "start": 1303.8799999999999, "text": " middle we'll find the the most high-level the tertiary structure sorry" }, { "end": 1315.08, "start": 1307.84, "text": " yeah blue secondary this drawing is getting too too too weird but there" }, { "end": 1320.3999999999999, "start": 1315.08, "text": " there could be multiple scenarios but that could fit here but until now it" }, { "end": 1326, "start": 1320.3999999999999, "text": " sort of makes sense so they do it as an additional investigation where as I" }, { "end": 1331.56, "start": 1326, "text": " told you sometimes you can substitute an amino acid and nothing really happens" }, { "end": 1337.52, "start": 1331.56, "text": " right and in fact that this probably happens in you right now you probably" }, { "end": 1343.08, "start": 1337.52, "text": " might have some mutation that changed some amino acid and you don't even" }, { "end": 1349.96, "start": 1343.08, "text": " realize because it's just it's fine no notice so the biologists can build" }, { "end": 1356.3600000000001, "start": 1349.96, "text": " these matrices of how much you can substitute proteins with each other so" }, { "end": 1362, "start": 1356.3600000000001, "text": " here you see this blossom 62 substitution scores which are very I" }, { "end": 1368.76, "start": 1362, "text": " guess very high if you can substitute two protein two amino acids with each" }, { "end": 1376.8, "start": 1368.76, "text": " other and the effect is negligible and it's very low if it's the other way" }, { "end": 1382.68, "start": 1376.8, "text": " around now this is interesting so far but you compare this to this matrix" }, { "end": 1388.24, "start": 1382.68, "text": " right here this is the attention similarity so what we'll do is for each" }, { "end": 1393.6599999999999, "start": 1388.24, "text": " two amino acids we take those two attention things those two attention" }, { "end": 1397.76, "start": 1393.6599999999999, "text": " matrices and we'll calculate the correlation between the attention" }, { "end": 1404.2, "start": 1397.76, "text": " matrices and our hypothesis is that the more correlated the attention patterns" }, { "end": 1409.68, "start": 1404.2, "text": " are between the two amino acids the more likely we are to substitute them" }, { "end": 1416.04, "start": 1409.68, "text": " because as a direct result of our language model our language model is" }, { "end": 1424.72, "start": 1416.04, "text": " it's reconstructing right these things so our language model is going to treat" }, { "end": 1431.3600000000001, "start": 1424.72, "text": " if in natural language is like a synonym right is our language model is going to" }, { "end": 1435.8, "start": 1431.36, "text": " treat synonyms very similar to each other because they're synonyms they can" }, { "end": 1440.8799999999999, "start": 1435.8, "text": " be exchanged so a good language model should learn that they are almost the" }, { "end": 1446.28, "start": 
1440.8799999999999, "text": " same and therefore the attention pattern is going to be almost the same so a high" }, { "end": 1454.1599999999999, "start": 1446.28, "text": " correlation we hypothesize is a means that the function of the amino acid is" }, { "end": 1460.3999999999999, "start": 1454.1599999999999, "text": " similar and therefore we can substitute it easily so this here is the matrix of" }, { "end": 1465.72, "start": 1460.4, "text": " the correlations between each two attention patterns of these amino acid" }, { "end": 1473.5600000000002, "start": 1465.72, "text": " and if you compare the two right here they look extremely similar just have a" }, { "end": 1479.0800000000002, "start": 1473.5600000000002, "text": " have a look for a little while and you'll see that the patterns they do not" }, { "end": 1485.22, "start": 1479.0800000000002, "text": " match perfectly but they are very very similar the dark spots are in the same" }, { "end": 1491.24, "start": 1485.22, "text": " places the light spots are in the same places so this already makes a good case" }, { "end": 1498, "start": 1491.24, "text": " that the language model here has learned something about biology now what we want" }, { "end": 1507.16, "start": 1498, "text": " to do is we want to investigate higher order higher order functions so here we" }, { "end": 1514.08, "start": 1507.16, "text": " are interested in these contact maps right so how likely is it that two amino" }, { "end": 1518.96, "start": 1514.08, "text": " acids are in contact and we'll look at it through the lens of attention as we" }, { "end": 1523.8799999999999, "start": 1518.96, "text": " did before so here you'll see percentage of each each head of each head's" }, { "end": 1529.9199999999998, "start": 1523.8799999999999, "text": " attention that is aligned with contact maps averaged over data set suggesting" }, { "end": 1535.12, "start": 1529.9199999999998, "text": " that had 12 for is uniquely specialized for contact prediction so look at this" }, { "end": 1544.06, "start": 1535.12, "text": " this this head here is just spiking so remember before we said our analysis is" }, { "end": 1551.3799999999999, "start": 1544.06, "text": " whenever whenever we're basically measuring the correlation of two things" }, { "end": 1556.8999999999999, "start": 1551.3799999999999, "text": " being in contact because we know it from our simulator or from our data set the" }, { "end": 1562.72, "start": 1556.8999999999999, "text": " correlation of that with an attention connection being particularly strong and" }, { "end": 1570.76, "start": 1562.72, "text": " will we find it in this attention head right here so this layer 12 head number" }, { "end": 1576.4, "start": 1570.76, "text": " four will always peek out whenever two things are in contact now you can see" }, { "end": 1582.2, "start": 1576.4, "text": " that it's it's not like always it's like 25% of its attention but significantly" }, { "end": 1589.04, "start": 1582.2, "text": " more than anything else right here in fact if you group the things by this" }, { "end": 1593.52, "start": 1589.04, "text": " attention you can build the following plot so you can see right here" }, { "end": 1599.32, "start": 1593.52, "text": " probability two amino acids are in contact as a function of attention" }, { "end": 1603.96, "start": 1599.32, "text": " between the amino acids in head 12 for showing attention approximates perfectly" }, { "end": 1610.36, "start": 1603.96, "text": " calibrated estimator which would be the green line so 
here we simply for each" }, { "end": 1617.52, "start": 1610.36, "text": " for each pairs to amino acids for each pair of amino acids we plot we make a" }, { "end": 1624.6399999999999, "start": 1617.52, "text": " histogram right here of what they're sorry not a histogram we plot the" }, { "end": 1634.3200000000002, "start": 1624.64, "text": " probability if they have the attention weight point nine we plot how likely is" }, { "end": 1640.8000000000002, "start": 1634.3200000000002, "text": " it that they are in contact so this is this if we just look at the data and we" }, { "end": 1645.5200000000002, "start": 1640.8000000000002, "text": " simply take this attention weight as a measure as a predictor of being in" }, { "end": 1650.6000000000001, "start": 1645.5200000000002, "text": " contact we get the blue curve and the green curve would be if we could" }, { "end": 1656.6399999999999, "start": 1650.6, "text": " perfectly predict from this attention head what the probability of contact" }, { "end": 1662.04, "start": 1656.6399999999999, "text": " would be and you can see that the fit is fairly good you can't predict with" }, { "end": 1666.7199999999998, "start": 1662.04, "text": " super high accuracy but the fit is fairly good and you can see that general" }, { "end": 1674.56, "start": 1666.7199999999998, "text": " trend that as the attention in this head rises the probability of the two amino" }, { "end": 1682.48, "start": 1674.56, "text": " acids being in contact with each other also rises so we can sort of confidently" }, { "end": 1689.28, "start": 1682.48, "text": " say that BERT has learned something about a higher level funk a higher level" }, { "end": 1693.6, "start": 1689.28, "text": " biological structure just from the language modeling objective how can we" }, { "end": 1702.28, "start": 1693.6, "text": " interpret this this must somehow mean that it is it is possible it is vital to" }, { "end": 1709.16, "start": 1702.28, "text": " it is vital for reconstructing the sequence from its surroundings so if we" }, { "end": 1717.44, "start": 1709.16, "text": " delete this right here if if this if these two are in contact in the 3d" }, { "end": 1724.2, "start": 1717.44, "text": " structure that makes probably probably means that this thing right here is a" }, { "end": 1729.16, "start": 1724.2, "text": " very good predictor of what was here right if we mask this out and we're asked" }, { "end": 1733.4, "start": 1729.16, "text": " to reconstruct which amino acid was there then it probably helps to look at" }, { "end": 1736.8400000000001, "start": 1733.4, "text": " its neighbors right it probably always helps to look at one's neighbors" }, { "end": 1744.5600000000002, "start": 1736.8400000000001, "text": " especially also in natural language but if these two are in contact they they" }, { "end": 1749.76, "start": 1744.5600000000002, "text": " have very special they have very special connection to each other it's very you" }, { "end": 1756, "start": 1749.76, "text": " can basically read out from this one which one this was this is sort of like" }, { "end": 1770, "start": 1756, "text": " if you have a sentence and you say does I don't know I can't come up with with" }, { "end": 1776.68, "start": 1770, "text": " one right now but if it's like da da da da da and then there is a name like mark" }, { "end": 1783.28, "start": 1776.68, "text": " and then da da da da da da and then there is him right and you would expect" }, { "end": 1790.6, "start": 1783.28, "text": " if I drop out okay let's do it the 
other way around if I drop out him then from" }, { "end": 1794.6399999999999, "start": 1790.6, "text": " the text right here you can probably determine that it is some sort of pronoun" }, { "end": 1799.16, "start": 1794.6399999999999, "text": " but then you go back and you see ah it's mark okay so it's not like it's" }, { "end": 1809.2, "start": 1799.16, "text": " not like it or some or or or she it's probably he or him this is sort of the" }, { "end": 1816.64, "start": 1809.2, "text": " analogous structure right here in biology the second thing we're looking" }, { "end": 1823.68, "start": 1816.64, "text": " at is these these binding sites now these are single properties of of of" }, { "end": 1828.48, "start": 1823.68, "text": " different amino acids and we're simply looking at all the incoming or sorry all" }, { "end": 1833.52, "start": 1828.48, "text": " the F all the other tokens that focuses their attention why is this important" }, { "end": 1839.56, "start": 1833.52, "text": " because these binding sites are central to the structure of the or to the" }, { "end": 1843.68, "start": 1839.56, "text": " function of the protein right if this here is a binding site then that's a" }, { "end": 1851.12, "start": 1843.68, "text": " very central important point of the protein so a lot of these other things" }, { "end": 1856.6, "start": 1851.12, "text": " are going to be determined by what this binding site is this binding site needs" }, { "end": 1859.76, "start": 1856.6, "text": " to have a very spurted particular function and therefore probably needs" }, { "end": 1865.04, "start": 1859.76, "text": " to be a very particular amino acid and the other things here are sort of" }, { "end": 1868.68, "start": 1865.04, "text": " supporting this binding site because they form the 3d structure around it" }, { "end": 1875.44, "start": 1868.68, "text": " and so on so you would expect a lot of attention to be put on this binding site" }, { "end": 1884.32, "start": 1875.44, "text": " and what do we find the we find that it's a bit more murky than before so you" }, { "end": 1888.36, "start": 1884.32, "text": " can see that the attention is kind of spread out percentage of each head's" }, { "end": 1892.76, "start": 1888.36, "text": " attention that focuses on binding sites especially in the deeper layers binding" }, { "end": 1896.8, "start": 1892.76, "text": " sites are targeted at much higher frequency than would occur by chance" }, { "end": 1905.1999999999998, "start": 1896.8, "text": " head 7 1 has the highest percentage with 34% so also here you can see that it is" }, { "end": 1910.8799999999999, "start": 1905.1999999999998, "text": " spread out but this is because multiple heads are now focusing on these binding" }, { "end": 1915.52, "start": 1910.8799999999999, "text": " sites because probably binding sites come in different variations so you'll" }, { "end": 1920.56, "start": 1915.52, "text": " have lots of heads specializing on attending to binding sites and they say" }, { "end": 1924.72, "start": 1920.56, "text": " it is much higher frequency than would occur by chance and you can see here" }, { "end": 1931.72, "start": 1924.72, "text": " this head is the highest with 34% of its attention focused on binding sites you" }, { "end": 1936.04, "start": 1931.72, "text": " can also see the general trend of the attention being rather in the later" }, { "end": 1943.76, "start": 1936.04, "text": " layers which we would expect from a tertiary structure now yeah it would be" }, { "end": 1948.8799999999999, "start": 
1943.76, "text": " interesting here here you also see that actually most of the things are in the" }, { "end": 1956, "start": 1948.8799999999999, "text": " in the last layer which points to points to rather maybe lower lower level" }, { "end": 1959.92, "start": 1956, "text": " information because we reasoned before about the last layer or I was just wrong" }, { "end": 1964.84, "start": 1959.92, "text": " but also in a general trend you can see that the attention is rather shifted" }, { "end": 1973.32, "start": 1964.84, "text": " towards the later layers because this is sort of a higher order function okay if" }, { "end": 1979.84, "start": 1973.32, "text": " you look at the same calibration experiment you can see that the picture" }, { "end": 1983.2, "start": 1979.84, "text": " is not as clear there is the general trend at the beginning but then it sort" }, { "end": 1990.04, "start": 1983.2, "text": " of flattens out so you can sort of differentiate the very probably not a" }, { "end": 1995.24, "start": 1990.04, "text": " binding site from the somewhat probably a binding site but it's not a perfectly" }, { "end": 2000.2, "start": 1995.24, "text": " calibrated classifier and that might just be because there are many things" }, { "end": 2004.76, "start": 2000.2, "text": " specializing in different types of binding sites so you can't just go to" }, { "end": 2010.2, "start": 2004.76, "text": " this one head so this is just for this one head you can't just go to that one" }, { "end": 2015.6000000000001, "start": 2010.2, "text": " and expect that to classify all the binding sites because you might want to" }, { "end": 2023.4, "start": 2015.6000000000001, "text": " be you might want to combine all of the high ranking ones here to form a" }, { "end": 2029.32, "start": 2023.4, "text": " classifier the last experiment they do is these linear probes which where they" }, { "end": 2034.1599999999999, "start": 2029.32, "text": " just go and they just build classifiers from different parts of the network and" }, { "end": 2040.2, "start": 2034.1599999999999, "text": " you can see right here that what is predicted and how well they work so each" }, { "end": 2045.04, "start": 2040.2, "text": " bar here is going to be the difference of performance so this is differential" }, { "end": 2050.4, "start": 2045.04, "text": " performance of diagnostic classifier by layer sorted by task order in figure 8" }, { "end": 2054.7999999999997, "start": 2050.4, "text": " each plot shows the change in performance between the given layer and" }, { "end": 2061.48, "start": 2054.8, "text": " the previous layer okay so a bar up shows it's performing better than the" }, { "end": 2065, "start": 2061.48, "text": " previous layer bar down shows it's performing worse than the previous layer" }, { "end": 2070.96, "start": 2065, "text": " so you see right here that the these are the secondary structures right here and" }, { "end": 2075.6400000000003, "start": 2070.96, "text": " you can see that there is a lot of performance in the earlier layers right" }, { "end": 2080.92, "start": 2075.6400000000003, "text": " here and sort of not that high performance in the later layers whereas" }, { "end": 2085.4, "start": 2080.92, "text": " for the tertiary structures the binding site and the contact you can see that" }, { "end": 2092.52, "start": 2085.4, "text": " there is a bit of performance on in places but it sort of tends to be more" }, { "end": 2097.08, "start": 2092.52, "text": " towards the middle certainly more at towards the middle of 
the end of the" }, { "end": 2103.08, "start": 2097.08, "text": " network than the the secondary structures which sort of makes sense with" }, { "end": 2108.12, "start": 2103.08, "text": " our hypothesis you can also see this here where they show the percent of" }, { "end": 2114.64, "start": 2108.12, "text": " attention focused as a function of layer and the red is the center of mass and" }, { "end": 2120.48, "start": 2114.64, "text": " you can see that as the the secondary structures this their center of mass is" }, { "end": 2128.3599999999997, "start": 2120.48, "text": " at a lower layer in general than the tertiary functions all of this is not" }, { "end": 2134.4, "start": 2128.3599999999997, "text": " perfect of course but it's still an open question I guess whether or not it's not" }, { "end": 2140.64, "start": 2134.4, "text": " perfect because we haven't built a strong enough language model yet do I" }, { "end": 2147.28, "start": 2140.64, "text": " want to say GPT-4 is now for biology and not for language or is it because there" }, { "end": 2155.56, "start": 2147.28, "text": " is really you need you really can't very well predict the these things just from" }, { "end": 2159.1600000000003, "start": 2155.56, "text": " a language model I mean you should technically all the information is there" }, { "end": 2165.96, "start": 2159.16, "text": " but maybe the language model objective as such isn't able to capture that" }, { "end": 2171.8399999999997, "start": 2165.96, "text": " information so yeah this this was the paper it's pretty simple they have the" }, { "end": 2176.12, "start": 2171.8399999999997, "text": " in the appendix they have a lot of a lot of these additional experiments or full" }, { "end": 2180.8799999999997, "start": 2176.12, "text": " experiments I believe for all the amino acids and so on and I invite you to" }, { "end": 2187, "start": 2180.8799999999997, "text": " check that out in general I like this kind of work because it's very applied" }, { "end": 2192.56, "start": 2187, "text": " it's and it can you know tell us something about the nature of both these" }, { "end": 2200.08, "start": 2192.56, "text": " language models and the biological things that we that we care about in" }, { "end": 2207.08, "start": 2200.08, "text": " biology okay I'm just talking crap right now thanks for being here I hope you" }, { "end": 2217.7999999999997, "start": 2207.08, "text": " enjoyed it and bye bye" } ]
1VdEw_mGjFk
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
GShard: Scaling Giant Models with Conditional Computation and Automatic Sharding (Paper Explained)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "nlp", "billion", "parameters", "float32", "attention mechanism", "transformer", "scale", "gpt-3", "google", "gshard", "xla", "sharding", "parallelism", "mixture of experts", "trillion", "tpus", "distributed", "m4", "multilingual translation", "natural language processing" ]
Google builds a 600 billion parameter transformer to do massively multilingual, massive machine translation. Interestingly, the larger model scale does not come from increasing depth of the transformer, but from increasing width in the feedforward layers, combined with a hard routing to parallelize computations on up to 2048 TPUs. A very detailed engineering paper! OUTLINE: 0:00 - Intro & Overview 4:10 - Main Results 5:10 - Mixture-of-Experts 16:00 - Difference to Scaling Classic Transformers 18:50 - Backpropagation in Mixture-of-Experts 20:05 - MoE Routing Algorithm in GShard 38:20 - GShard Einsum Examples 47:40 - Massively Multilingual Translation 56:00 - Results 1:11:30 - Conclusion & Comments ERRATA: I said the computation of MoE scales linearly, but actually, it's sub(!)-linear. Paper: https://arxiv.org/abs/2006.16668 Abstract: Neural network scaling has been critical for improving the model quality in many real-world machine learning applications with vast amounts of training data and compute. Although this trend of scaling is affirmed to be a sure-fire approach for better model quality, there are challenges on the path such as the computation cost, ease of programming, and efficient implementation on parallel devices. GShard is a module composed of a set of lightweight annotation APIs and an extension to the XLA compiler. It provides an elegant way to express a wide range of parallel computation patterns with minimal changes to the existing model code. GShard enabled us to scale up multilingual neural machine translation Transformer model with Sparsely-Gated Mixture-of-Experts beyond 600 billion parameters using automatic sharding. We demonstrate that such a giant model can efficiently be trained on 2048 TPU v3 accelerators in 4 days to achieve far superior quality for translation from 100 languages to English compared to the prior art. Authors: Dmitry Lepikhin, HyoukJoong Lee, Yuanzhong Xu, Dehao Chen, Orhan Firat, Yanping Huang, Maxim Krikun, Noam Shazeer, Zhifeng Chen Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://discord.gg/4H8xxDF BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
OpenAI has a 175 billion parameter model. You thought that was large? That's cute. Check out Google's 600 billion parameter model — 600 billion floating point numbers doing things at the same time. This has absolutely become a body part measuring competition between companies. Google be like, oh, GPT-3? I spit on you. I spit on you and your little tiny 175 billion. OK, let's stop kidding. This is a giant model that Google has trained right here. The paper we're going to look at today is called GShard: Scaling Giant Models with Conditional Computation and Automatic Sharding, by Dmitry Lepikhin et al. of Google. And this paper basically tells the story of how they built this 600 billion parameter model — how they actually attempted to build a model that had a trillion parameters but just didn't quite manage to train it. And this is all using this system called GShard. So I haven't actually seen the code for GShard yet, but I'm going to assume that this is something they're going to release at some point. Who knows? Or maybe I just haven't seen it yet. So this is basically describing a system for how to train these giant models. If you have watched my video on GPT-3 — which, of course, was this 175 billion parameter model of OpenAI, which was already record breaking — that paper was very much like, oh, we built a model, and look at what things it can do. So that was the OpenAI paper. This paper here is the complete opposite. It basically says, oh yeah, we do language modeling, but here is how we built the model — which is equally cool. So OpenAI basically just made everything bigger, and here they say, to make everything even bigger, you need some tricks in how you build models. And then they've basically developed this entire framework to build these giant models, and this paper mainly describes that framework. The actual task here, which is machine translation, is almost sort of a side thing in the paper; it's just a task to showcase what this system can do. So this is very much an engineering paper rather than a machine learning paper, and that's how you have to look at it right here. That being said, the machine learning results are of course quite impressive. If you look at this graph here, you have a quality gain — it's a difference in BLEU score, which is a quality score for machine translation — over the previous state of the art, so over their baseline. As you can see here, you have 37 billion weights, 150 billion weights, and 600 billion weights, which they train on 2048 TPUs for just four days. They stress that this is very efficient, because they only have to train for four days on 2048 TPUs. Absolutely crazy. So let's have a look at what this paper does. If you enjoyed this at the end, consider, you know, sharing the video out if you like it, and tell me what you think about this stuff in the comments. Alright, so we'll go through the abstract and then we'll go through highlighted sections of the paper, because the paper is 23 pages long, so I won't be able to cover everything — I'll just kind of give you the high-level ideas and highlight a few things. Actually, let's not go into the abstract; let's go into these results first. So as you can see, they managed to continue the trend. The trend in NLP has always been, at least since, you know, transformers were invented: the bigger the better. Larger model, larger data, more compute means better performance.
And this is sort of unbroken here. As you can see, if you increase the number of parameters in these models, you do get a very, very big gain in this BLEU score, though it seems to be kind of a logarithmic scaling: you have to keep doubling and doubling and doubling the number of weights, sort of like Moore's law in computation. You can see that at the same time, the training wall time is going down, and the computational cost of these models doesn't scale quadratically like you would expect — it scales linearly. And that's the big difference here in how these authors scale their model, rather than how the OpenAI authors scaled their model. So a traditional transformer looks like this. It has these blocks of attention. If you don't know what this is, I have a video called Attention Is All You Need where I explain how the attention blocks in transformers work. So this is nothing different — these are just standard transformers. There is an encoder and a decoder; everything works as you know. So you have these blocks, you have N blocks — these are the number of layers that you have. And in these blocks, you always have an attention layer and then a feed-forward layer that acts on the tokens. So without repeating too much of what an attention mechanism does: basically, you have input tokens. So this is a sequence — it's technically a set processing unit, but we use it for sequences of text. So here you have six tokens, a sentence of maybe six words, and then you transform it with the attention layer, by having this attention mechanism that routes information from positions to other positions — maybe route this here, route this here. And then you have a feed-forward network that is applied on a per-token basis. So each of these tokens now goes through this feed-forward network and is kind of transformed; the embedding of that token is transformed by that feed-forward network. Now, every token does this, and it's always the same feed-forward network. So this network here is the same as this network. Now usually, when we talk about scaling transformers, we talk about this part right here, the attention mechanism, and also we talk about this part, the number of layers. So, you know, we talk about scaling the number of transformer layers: more layers, more layers, more layers. And if we want to scale the attention mechanism, what that basically means is we increase the context size of the text we can input. So transformers are very limited by the size of this context right here that they can take. The original transformer started with something like 512 tokens that it was able to take, because this attention mechanism has quadratic complexity. This went up, and the OpenAI GPT-3, I believe, had a context size of 2048 tokens, which, if it scales quadratically, is quite an achievement. And it also stacked the layers very, very deep. Now in this paper, they scale the transformers differently. They basically leave the context size alone — and I believe their context size is 1024, so significantly smaller than the OpenAI context size — and they don't scale the layers. So their largest transformer is 36 layers, whereas GPT-3, I believe — correct me, but I think it was like 90 or 100 layers or something like this — was at least significantly larger than this. Instead, what they scale is this part right here: the feed-forward layers. Now that might seem counterintuitive.
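Before we get to the trick, here is a rough numpy sketch of the baseline just described: one shared per-token feed-forward MLP, applied independently to every token, in contrast to attention, which mixes information across positions. The sizes and weights below are made up for illustration, not the paper's actual dimensions.

```python
# Sketch of the shared per-token feed-forward layer; sizes are illustrative.
import numpy as np

rng = np.random.default_rng(0)
seq_len, d_model, d_ff = 6, 16, 64

tokens = rng.normal(size=(seq_len, d_model))   # one embedding per token
W1 = rng.normal(size=(d_model, d_ff)) * 0.1
W2 = rng.normal(size=(d_ff, d_model)) * 0.1

def feed_forward(x):
    # The SAME W1, W2 transform every token; this shared layer is exactly
    # the part that the paper later replaces with many expert copies.
    return np.maximum(x @ W1, 0.0) @ W2        # two-layer ReLU MLP

out = np.stack([feed_forward(t) for t in tokens])  # applied token by token
print(out.shape)                               # (6, 16)
```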
But they basically say: what if we didn't only have one feed-forward network right here, but we had many, right? We don't always use the same one — we have many, many feed-forward networks, different ones that can do different things. That's what they call experts. Each one of these feed-forward layers is an expert. And then you have yet another routing mechanism, kind of like in attention: you have a routing mechanism that decides which tokens go where. Okay, so this token here, this token here, this token here — the implication being that different tokens, different parts of the input you want to transform, require different kinds of transformations here, and these different experts can sort of specialize in how they transform the input. Now, their task here is going to be machine translation as a multitask setup. So what you'll have is all kinds of languages, like French and German and, what else, maybe a lot of languages — I don't know any other languages — and you want to translate all of them to English, and you want to do it using the same model. So these experts here might specialize in the individual languages. Like, maybe you have to handle a pronoun differently if it comes from German than if it comes from French, and you want to do it with the same model at the same time. That means you maybe want one expert to specialize in German pronouns and one expert to specialize in French pronouns. Also, you can think of the experts as maybe one specializing in question words, no matter which language they're from, and another one specializing in some other kind of linguistic feature. In any case, this number of experts here, if you want to scale that up, becomes the bottleneck of the transformer. They go up to 2048 experts in parallel. So that doesn't fit into a single accelerator anymore, and that's why the entire system has to be sharded — and that's what they call GShard. So the main application of GShard here is going to be: how can we build this giant model on many, many distributed computers? The attention mechanism isn't the problem — the attention mechanism we just distribute like we do data parallelism; the attention lives on all of the accelerators, it synchronizes, and so on. But with the experts, this expert lives on machine A, this expert lives on machine B, this expert lives on machine C, and then we do a hard routing. So we don't do a soft routing like in attention; we do a hard routing where one token goes to one, or at maximum two, experts. So it is sent to these machines, and then after the machines you gather all the results back right here. So GShard is the system that enables this sharding of these experts and everything in between, everything that is necessary — but it can also be applied to shard any computation, and that's why it's so cool. So here you see what they do. They take these transformers, and they always consider a block of two transformer layers. So this is a block of two transformer layers; you can see there is twice the attention and twice this feed-forward. At one point, this feed-forward is just a regular one — all the tokens go through the same network, so that's like a classic transformer. But here, you have a lot of these different experts, and the tokens are routed to these experts. It's important that the tokens are hard routed, right?
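Here is a toy sketch of that expert idea before we continue, using a simple hard argmax routing to a single expert (the actual paper routes each token to its top two experts, as discussed below). All the sizes, the weights, and the gating here are illustrative stand-ins, not the real model.

```python
# Toy mixture-of-experts feed-forward with hard routing; everything is a stand-in.
import numpy as np

rng = np.random.default_rng(0)
seq_len, d_model, d_ff, n_experts = 6, 16, 64, 4

tokens = rng.normal(size=(seq_len, d_model))
experts = [(rng.normal(size=(d_model, d_ff)) * 0.1,
            rng.normal(size=(d_ff, d_model)) * 0.1) for _ in range(n_experts)]
W_gate = rng.normal(size=(d_model, n_experts)) * 0.1  # learned routing weights

choice = (tokens @ W_gate).argmax(axis=-1)  # hard routing: one expert per token

out = np.zeros_like(tokens)
for e, (W1, W2) in enumerate(experts):
    routed = choice == e                    # only the tokens sent to expert e
    out[routed] = np.maximum(tokens[routed] @ W1, 0.0) @ W2
print(choice)      # which expert each of the 6 tokens was routed to
print(out.shape)   # (6, 16)
```

The point of the hard mask is that each expert only ever sees its own slice of the tokens, which is what lets the computation stay roughly linear in the number of tokens instead of every token paying for every expert.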
If the tokens were soft routed, you wouldn't gain anything, because every token would have to go through every expert. But here, the tokens are hard routed to the experts, which means that if I have an input size of 1024 tokens, maybe only 10 go to this one expert and maybe only 10 go to that one. Now you also have a batch size, of course. I haven't actually looked at what the batch size here is, but you usually have quite a large batch size in these things, maybe a batch size of 1000 as well. So ultimately, what you end up with is something like 1000 times 10 tokens going to the first expert, and so on. But still, you can significantly parallelize this computation. Okay. So if you use G-shard, this is going to result in the thing on the right, where you have two machines, machine one and machine two. You can see that the machines will... well, someone made a PowerPoint mistake here. You can see that the attention and everything around it is shared between the machines. So this here and this here are synchronized, the weights are synchronized; you simply do data parallelism. But here, you have model parallelism, a model-parallel mixture of experts, where on the first device you have the first expert, and so on across E devices, and on the last one you have the last expert. It's all routed out and routed in again, and then you can continue your transformer, layer after layer. So what's the problem? The problem is that an operation like this is going to incur significant overhead in terms of communication and so on if you do it naively, and it's going to be a real pain to program. That's why G-shard is made to do all of this automatically, and you don't incur much of a cost, because you distribute. So what's the difference to the old way of scaling? Why don't they just make transformers larger, which is, I guess, what OpenAI did? Well, if you make the transformer larger in the attention mechanism, it just won't fit into memory at some point, and you'll have to shard that somehow, which you can also do with G-shard. And if you scale it in the number of layers, that incurs a significant cost where you have to wait, because you have to forward propagate and then backward propagate in your training step. If you have too many layers, a lot of the frameworks hit their limit, where at some point they say: well, I still have to wait for the signal to come back in order to continue. And they explore this in this benchmark right here. They say the largest model, the 600 billion parameter model that achieved the best translation quality, was trained with 2048 TPU v3 cores for four days, a total cost of 22 TPU core years. In contrast, training all 100 bilingual baseline models would have required 29 core years. So this model is faster than training them all individually. But if you want to train a single transformer that is just very deep and achieves comparable performance, you have to invest a lot more. Their best quality dense single transformer model has 2.3 billion parameters, so it's also significantly smaller, and achieving this required training with GPipe, which is a previous framework.
So GPipe is kind of a task runner that also distributes computation; the dense model was trained with GPipe on 2048 TPU cores for six weeks, a total of 235 TPU core years. By the way, if you pay $1 per TPU core hour, that'll only set you back about 2 million dollars or so. Easy peasy, just a tiny, tiny bit of money. But you can see that this dense transformer, which is a classic transformer where you stack the transformer layers, you stack them and stack them, and in fact it has 96 layers, their baseline 96-layer transformer model, which is sort of what OpenAI did, they just kept stacking transformer layers... you get a model that has fewer parameters, trains for much longer, and its performance is only about this good. Whereas here, if you scale not into depth but into the width of these experts, and it's not dense but sharded, which means it calculates in a kind of sparsified way because of the hard routing, you can scale up to a lot more parameters. So 600 billion parameters, over 200 times more parameters than the deep model, and you get much better performance. Okay, so this is what is different here: it scales into these experts rather than scaling into depth or into the size of the attention mechanism itself. Alright, the question, I guess, that you come up with if you're a machine learner is: how do you backpropagate if you route to these different experts and you do a hard routing like here? It seems like you'd need a soft routing. But this has been handled; in fact, this mixture of experts has been introduced previously, in a paper called, I believe, Outrageously Large Neural Networks. And it still works. Basically you have a backprop path through here, and because you put a little bit of noise into this routing, every path gets explored a few times, and therefore you have enough backprop signal to make it work. It could technically fail, but they generally observe that it does work if you do this kind of hard routing with a bit of noise. Alright, so where do we go from here? As I said, this is an engineering paper, and it's a long engineering paper. They set out a lot of the engineering details directly in the paper, which we're not used to in the machine learning world. They really detail how they shard things and so on, which is pretty cool, but I invite you to look at the paper yourself if you really want to know what's going on. Suffice to say, as you can see right here, this is the input, and then they have this weight matrix of learned routing weights. Okay, so you have trainable weights that decide how to route the input, and that's dependent on the input. A bunch of inputs come from the lower layer, and this matrix right here determines where to route them. It probably says: okay, the input is a vector like this, that must probably go to expert number three. And you have a softmax across that, so it's a soft assignment to the experts. Once you've done the soft assignment to the experts, you do a hard assignment by collecting the top two.
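A minimal sketch of that gating step might look like this, again just illustrative NumPy, with the noise and the capacity limits from the paper left out:

```python
import numpy as np

def top2_gating(x, gate_w):
    # x: (num_tokens, d_model); gate_w: (d_model, num_experts)
    logits = x @ gate_w
    probs = np.exp(logits - logits.max(axis=-1, keepdims=True))
    probs /= probs.sum(axis=-1, keepdims=True)      # softmax: the soft assignment
    top2 = np.argsort(probs, axis=-1)[:, -2:]       # hard assignment: indices of the
                                                    # two highest-scoring experts
    weights = np.take_along_axis(probs, top2, axis=-1)
    weights /= weights.sum(axis=-1, keepdims=True)  # renormalize the two gate values
    return top2, weights   # where each token goes, and how to weight the two outputs
```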
For each token, you collect the top two experts, you send it only to those two, and you ignore all the others, which, when there are 2048 experts in the system, really isn't a lot. And you have some noise: with some random probability, you actually don't even send a token to the second expert, you just leave it at the first one, and with some probability you send it to the second one as well. I think that noise is part of what makes the system work. And then you also have this auxiliary loss right here that you add on top, which just makes sure that you distribute the tokens evenly. What it penalizes is the mean assignment to each expert being out of line. So if one expert gets a lot of tokens, because, I don't know, it tends to be really good at something and all the tokens get routed to it, and the other experts don't get a lot, that's penalized. So you encourage the system to distribute tokens evenly between those experts. And then there are also upper limits where you drop tokens and so on. They really build a system that is out for performance rather than machine learning correctness. So they demonstrate how to do this in code with their system, and the cool thing about their system is that you don't have to do much. What you have to do is just specify which tensors are sharded along which dimensions, and the system does the rest. So this is pretty cool. This here is the mixture of experts as you would write it in code, and they make a lot of use of the Einstein sum notation. If you don't know what the Einstein sum notation is, it's a general notation to describe matrix or tensor multiplications. You describe the operation as a string, and it comes from how Einstein wrote up tensor contractions in his work. So if you want to multiply two matrices, you could put the string 'ab,bc->ac' and then provide the two matrices. This will tell it: okay, I have one matrix, and I'm going to call its axes a and b; I have another matrix or tensor whose axes I call b and c. Now in the resulting tensor, I want the first axis to be a, and the a is this one, and I want the last axis to be c, and the c is this one. And b is nowhere in the output, which means it should be contracted over b: multiply along b and then sum. So this string describes a regular matrix-matrix multiplication. Now you could do something else, for example an element-wise product, which would be 'ab,ab->ab'. Here I have a in the first input, and here I have a again. So you already see that even though these are different tensors, you can call the axes the same, which means they're going to be multiplied together. If you leave an axis away in the output, it means it's going to be contracted and the axis no longer exists. But here we don't leave it away, which simply means these axes are going to be multiplied together element-wise, and the same for b. So 'ab,ab->ab' describes an element-wise product.
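You can check these strings directly with NumPy's einsum; the third string below is the row-wise dot product that comes up next:

```python
import numpy as np

a = np.arange(6.0).reshape(2, 3)
b = np.arange(12.0).reshape(3, 4)
c = np.arange(6.0).reshape(2, 3)

matmul = np.einsum("ab,bc->ac", a, b)    # b contracted: regular matrix multiply
elemwise = np.einsum("ab,ab->ab", a, c)  # both axes kept: element-wise product
rowdot = np.einsum("ab,ab->a", a, c)     # b dropped from the output: element-wise
                                         # over a, contracted (summed) over b

assert np.allclose(matmul, a @ b)
assert np.allclose(elemwise, a * c)
assert np.allclose(rowdot, (a * c).sum(axis=1))
```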
That third string is a row-wise dot product: over a it's element-wise, but over b it's contracted. So you can go really funky with the Einstein sum notation; you can describe a lot of things with it. So here is the algorithm to distribute the computation among these different experts. You have the inputs and the weight matrix for what they call the gating function; that's the routing function to these experts. So what do we do? First of all, these tensors have this grouping dimension G right here, which in our case we could say is the batch dimension. So they come along groups, then there is the sequence length S, and there is this M right here, which is the feature dimension. And you can see the M is contracted, so the M is no longer there in the output. So the gating function is going to route each input token right here to one of the experts, for each thing in the group, and you can express this with an Einstein sum notation. Then you have a top-2 gating, which selects the top two from each of the entries, and that gives you this dispatch mask and the weights that you have to use at the end to combine. You can use the dispatch mask in order to distribute the inputs; so you have reshaped inputs, and so on. I'm not going to go through all of this right here, but you can express all of it in terms of the Einstein sum notation, and you can express pretty much any computation along these lines: the attention mechanism, the feed forward layers, and so on. And the underlined dimensions here are the dimensions along which we want to shard the computation. So here, because we have this G underlined, that means we are interested in sharding the computation along this axis. And this, as I said, is the batch dimension: this is your classic data parallelism, which means that the first machine gets the first couple of data points, the second machine gets the second couple of data points, and so on. And you can see that in the weight matrix there is no sharding, which means the weight matrix lives on every machine as a copy of one another. This is different from here, where you can see that it's still sharded according to the batch, but we are now going to shard this according to the different experts. So we route whatever the inputs are to these experts, and then we execute the computations on the experts; this part is sharded according to the experts. And at the end, right here, you can see it's still sharded according to the experts, and we put it back together, so now it's sharded according to the groups again. That's what we said: we have the inputs right here, and the inputs are distributed across machines. These go through the first machine, these through the second, these through the third: classic data parallelism. But then we have all of these experts, and now all of a sudden we route these things to the individual experts and execute the computation in parallel on the experts.
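Putting the pieces together, here is a single-machine sketch of that dispatch-compute-combine logic in einsum form, roughly following the structure of the paper's algorithm. The dimension sizes are made up, and I use top-1 routing without noise or capacity limits to keep it short, where the paper uses top-2:

```python
import numpy as np

# Dimension names follow the paper's algorithm: G groups, S sequence positions,
# M model width, E experts, H expert hidden width. All sizes are illustrative.
G, S, M, E, H = 2, 8, 16, 4, 32
rng = np.random.default_rng(0)
inputs = rng.normal(size=(G, S, M))
wg = rng.normal(size=(M, E))                   # gating weights
wi = rng.normal(size=(E, M, H))                # per-expert input projection
wo = rng.normal(size=(E, H, M))                # per-expert output projection

logits = np.einsum("GSM,ME->GSE", inputs, wg)  # routing scores; M is contracted
gates = np.exp(logits - logits.max(-1, keepdims=True))
gates /= gates.sum(-1, keepdims=True)          # softmax: soft assignment to experts

top1 = gates.argmax(-1)                        # hard top-1 routing, for brevity
dispatch = np.eye(E)[top1]                     # one-hot dispatch mask, shape (G,S,E)

# Dispatch: each expert sees all token slots, zeroed out where it wasn't selected.
x_e = np.einsum("GSE,GSM->EGSM", dispatch, inputs)
h_e = np.maximum(np.einsum("EGSM,EMH->EGSH", x_e, wi), 0.0)   # per-expert FFN, ReLU
y_e = np.einsum("EGSH,EHM->EGSM", h_e, wo)

# Combine: weight each expert's output by its gate value and sum back per token.
combine = dispatch * np.take_along_axis(gates, top1[..., None], -1)
outputs = np.einsum("GSE,EGSM->GSM", combine, y_e)            # back to (G,S,M)
```

In the sharded version, the E axis of these tensors is exactly what gets split across machines, while G remains the data-parallel axis.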
And then after that, we put everything back together from wherever the tokens went; this is just the reverse of the dispatch. So you get all of the outputs again; I hope you can imagine how this happens. So the first difference is that it's sharded according to a different dimension. And the second difference is that when we shard in data parallelism, we execute the same computation on all the machines, which means that we have the same weight matrix. If we compute x times W in a feed forward layer and we shard this in data parallelism, what we do is split the x and send the parts to different machines, x1, x2, x3, x4, but we always multiply with the same weight matrix. That weight matrix lives on all of the machines and is regularly synchronized; it's kept in sync in some way. Whereas if we shard x to the experts, then the experts have individual functions. So expert one is different from expert two, which is different from expert three, and so on. Which means that before, it wasn't important where x was routed, because we would execute the same computation; we could just shard it as 'the first 10 go there, the next 10 go there'. But here, it's crucially important which expert a token is routed to, and that's why we learn the function that routes them. So this is learned; these first lines here are the weights that we learn to route. Then we route right here, and we calculate the feed forward layers on the experts. You see these wi and wo: they are the weight matrices of the feed forward layer. You take your input, multiply it by wi, apply a ReLU, and then multiply it by wo, so it's a two-layer feed forward network. This feed forward network, as you can see, is sharded according to the experts, and the important part is, of course, that the weights are also sharded according to the experts. That's what makes each expert different. And then it's combined again down here. So I hope you get the idea of what this algorithm does; the fact that we shard according to the experts is in fact different from your regular sharding, where you shard the data, the batches, but keep the model synchronized across machines. Now, with their system, this is how easy it is. Before, we simply stated our algorithm in Einstein sum notation; the underlining there was simply for us to visualize, there is no way to underline code and have something magically happen. Now we want to apply their system in order to make this actually sharded. And with the G-shard system, and as I said, I don't know if the code is out or will be out, this is basically all that you have to do. You have these functions called split and replicate. What replicate does is take that weight tensor and replicate it on all the machines, and that keeps it synchronized. This is for a computation where we simply want to shard the data out to the different machines but keep the weights in sync. And you can see, if you do this, then the system knows: ah, this here is replicated across the machines.
So that means: I'm going to distribute the data points according to this G dimension, the batch dimension, multiply with this matrix according to this Einstein sum notation string on all of the machines, and keep this tensor in sync. Okay, so the system knows. As opposed to that, you have the split tensor right here. What split does is split a computation, here the dispatched expert inputs, according to an axis index onto D different machines, or into D different parts. So you see, you calculate how the routing should be done, and the resulting tensor's first dimension is this E dimension. Then you say: that should be split along this first dimension onto D different places. And these D different places are now separate; they don't have to be kept in sync, everyone has their own weights. And now when you compute along this dimension, you can see, because we know Einstein sum notation now, that this E appears here, here, and here. So this operation is going to be applied element-wise, that means independently of each other, in the direction of this dimension. The system understands that since this tensor is sharded according to that dimension, it has to execute this separately on each of these entries, with each expert having their own weight matrix right here. I hope it's a bit clear that their system makes this super easy. You can basically do two things: you can say 'this thing here is my classic data parallelism, where I want to keep it in sync', and 'this thing here is where I want to split up and do different computation on the different parts'. And then they also have a general function that is more powerful, and you can auto-partition and whatnot. They implemented the partitioner in the XLA compiler, which means that anything that can translate to XLA is a target for the system. And TensorFlow and PyTorch can do this, so technically this can come to any of those systems. But of course, who has their 2000 TPUs lying around to make use of this? No, I'm kidding. They use it here for transformers, and I am very excited to see what people can come up with for a system like this, where it's super easy to shard. They also talk about the single-machine compiler, so the compiler itself is fast and so on. I don't even want to go into this, but it is very well engineered, it seems, and they basically implement this for all of the operators. So I'm very excited to see what people come up with outside of the traditional applications. I think there can be new types of models developed simply because we have a system like this that makes it easier. So yeah, I'm excited.
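To make the programming model concrete: this is my paraphrase of the two annotations the paper describes, not the actual G-shard API, whose exact signatures I don't know. The stand-in functions are no-ops so the sketch stays runnable on a single machine; in the real system they attach sharding metadata that the XLA partitioner then acts on.

```python
import numpy as np

# Stand-ins for the two annotations described in the paper.
def replicate(tensor):
    # "Keep a synchronized copy of this tensor on every device" (data parallelism).
    return tensor

def split(tensor, axis, num_devices):
    # "Partition this tensor along `axis` across `num_devices` devices"; the
    # pieces are independent and are NOT kept in sync.
    return tensor

D = 4                                             # number of devices, illustrative
rng = np.random.default_rng(0)
gate_w = replicate(rng.normal(size=(16, 8)))      # routing weights: copy everywhere
expert_w = split(rng.normal(size=(8, 16, 32)),    # per-expert weights: the expert
                 axis=0, num_devices=D)           # axis (E=8) is split across devices
```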
So they show a bit how this works on the example of the Einstein sum notation. We want to do this thing here, which, if you remember, is the operation where we route the input to the experts. We start with something that is sharded according to the batch dimension, meaning different parts of the batch are on different machines, and we want to finally end up with something that is sharded on the different experts. So first you have these different shards, and you want to multiply. As you can see, this and this right here mean that this routing table is also sharded according to the same machines: the zeros are all on the same machine, the ones are all on the same machine, and so on. So what you want to do is contract according to this S dimension, which we have omitted right here. And if you multiply that... okay, we omit the S, so this is not much of a graphic right here. But then they have this reshard operation, which you don't have to worry about: from here to here, there is a reshard that just reshards it according to E. I find this next one a bit more insightful. If you have something like this, which is a regular matrix multiplication, and you want to contract along b, this is exactly the example we had before. So here is a situation where our tensor is sharded according to the b dimension, and this tensor is also sharded according to the b dimension, and you want to do a matrix multiplication of the whole tensors. So what can you do? You're supposed to multiply these two matrices, but they are sharded on different machines. If you consider what you actually have to do: you have to multiply each row here with each column here, element-wise and summed. And that distributes: you have to multiply this by this, plus this by this, plus this by this, plus the red by the red. So you can simply multiply the zero shards together, the one shards together, the two shards together, and the three shards together; each one will give you a full-size matrix, and then you can simply add all of them in order to get your full result. This is illustrated down here. Machine one simply multiplies its shard of the first matrix by its own shard of the second matrix, which gives it this thing here, and by the nature of how matrix multiplication is constructed, you can then do an all-reduce, which means you sum across all of the machines, and that gives you the full result. So this is an example of how this works; it's pretty simple, and you may have seen something like this already when looking at parallelizing matrix multiplication. The system handles this transparently: if you're sharded like this, this is what the system will do.
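Here is that partial-sum trick simulated with NumPy, pretending each list entry lives on a different machine:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 8))         # sharded along b (columns) across 4 machines
B = rng.normal(size=(8, 5))         # sharded along b (rows) across the same machines

a_shards = np.split(A, 4, axis=1)   # machine k holds A[:, 2k:2k+2]
b_shards = np.split(B, 4, axis=0)   # machine k holds B[2k:2k+2, :]

# Each machine multiplies only its own two shards -> a full-size partial result.
partials = [a_k @ b_k for a_k, b_k in zip(a_shards, b_shards)]

# The all-reduce is a sum over machines; it recovers the full product exactly.
result = sum(partials)
assert np.allclose(result, A @ B)
```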
However, if you are sharded differently, the system will act differently. Say you want to do the same matrix multiplication, but the first tensor happens to be sharded according to the a dimension, the second tensor according to the c dimension, and you want to end up with something that's sharded along c. In the previous example we kind of assumed that the full result fits into memory, mainly because we wanted to obtain the full result: a and c were not sharded. But here we want the final result to be sharded according to c, which imposes the additional constraint that the full matrix might never fit into memory. So how are we going to calculate all of that? We can't do the same trick anymore. Now, this G-shard system apparently realizes by itself when something is out of memory, and it can do a smart move around being out of memory using a loop, which basically means that it will compute entry by entry, or block by block. So these are the matrices we have to multiply, and you can see that if I want to multiply this by this, that's fine, I can do it on one machine, and that will give me the block up here. But if I want the next block, I have to multiply this by this, which is spread across two different machines. So the system goes into a while loop, because it realizes there's not enough memory, and it sends these different slices around to the different parts, each time computing a little piece. First we do this by this, that's fine; then we grab ourselves this one here and calculate the next little piece up here; then we grab ourselves number two and calculate the piece here; the one we already had; then we grab piece three and multiply, until we have the final slice that we want. Okay, so this goes through a while loop in multiple rounds, and the system knows by itself when it has to do this and when it can calculate the full thing at once because it fits into memory. It's even smarter than that, in that it can do these halo exchanges. Think of a convolution on an image, where the image happens to be sharded; let's say the image is so large that it's sharded across nine different machines like this. If you want to do a convolution, that's fine here, here, and here, but at the boundaries, all of a sudden, your convolution window spans two different machines. So G-shard will adapt automatically and do these halo exchanges, where it sends the needed border data from this machine to that machine, so that the convolution can be computed in that step, and vice versa, and then everything can be padded accordingly, as you can see. I think this was super ugly to implement. Just imagine that for each of these operations, you have to think about how to express it with these MPI-style primitives like dynamic-slice and collective-permute and so on; it's just an absolute nightmare. And I'm very happy that other people have done this, and I will probably just get to use it. So there is a lot more to this system than I've just explained; I just tried to give you a flavor of what building a system like this means and how easy it is to use. In order to implement all of these mixture of experts things, you simply go from the single-machine implementation, how you would write it normally, to this, which is almost the same code, but now you can run it on however many machines, and if you compile it with the system, it will do what you expect in this sharded way. Completely crazy. Okay, so they apply this to massively multilingual, massive machine translation. So two things: it's massively multilingual, and it's massive machine translation, which means, I guess, a lot of machines. And the reason here is twofold. Why don't they just look at single-language-pair translation? There is a very specific reason.
Namely, if you have massively multilingual translation, which means that you have a lot of different languages and you have to translate all of them, ideally to all the other languages, every language pair... but in this case, they only look at all the languages to English. I don't exactly know why, but I guess there must be some kind of reason. If you do this, then you can make use of the fact that there are languages you just don't have much data on, like, I don't know, Basque or Swiss German. There are not that many people speaking Basque, and for Swiss German there isn't even a standard written form. So you just don't have as many resources, while for other languages you have giant amounts of resources. And what you can make use of is this phenomenon called positive language transfer, where it happens that, for example, Swiss German is very close to German. Now, the Germans can't understand us, which is a giant advantage for us, but Swiss German still shares a lot of similarities with German. So if you learn a lot about German, you can sort of transfer-learn to Swiss German pretty easily. So if you have a system that does German and Swiss German at the same time, you can perform better on both languages, because the part of your model that handles Swiss German profits from the German inputs as well. Now don't understand me wrong: there is not an individual part of your model for each language, it's all done at the same time. But still, you can imagine that some parts of these models will specialize in some of the languages. And the hope is that if you have German and Swiss German in the same training set, and the model figures out what a question construct is in German, it will be able to apply that to Swiss German with some minor modification. So there is a benefit to having these many languages, especially for the low-resource languages. Okay. So, as the paper puts it: as the number of language pairs to be modeled within a single translation model increases, positive language transfer starts to deliver large gains for low-resource languages. Given the number of languages considered, which I believe is a hundred here, M4 has a clear advantage on improving the low-resource tasks. On the contrary, for high-resource languages, the increased number of tasks limits the per-task capacity within the model, resulting in lower translation quality compared to models trained on a single language pair. This capacity bottleneck for high-resource languages can be relaxed by increasing the model size to massive scale in order to satisfy the need for additional capacity. So basically they're saying: if we train all of these languages together, that will help a lot for the low-resource languages, but it might hurt the high-resource languages, because we would technically have enough data to train a French-to-English model on this giant model alone. We could train that, and now that we have all these other languages in there, it just hurts us because we don't have enough parameters. And we can solve this, of course, by simply adding more parameters. So that's the solution: add more parameters, you increase the capacity of the model, and you still get the benefits of the positive language transfer. So their investigation is going to be into how far we can scale this.
And is there a sweet spot? Because if you increase the parameters too much, you counteract this positive language transfer again. Swiss German and German can sort of benefit from each other; however, if we have too many parameters, we end up having all of these experts, and the tokens are always routed to the experts, and it can happen that all the Swiss German tokens are always routed to this expert and all the German tokens are always routed to that expert. Then there will be no sharing of weights; the positive language transfer will not happen, because we have too much capacity. So the goal is to find a sweet spot between positive language transfer and this capacity bottleneck. They use an in-house data set, which we don't have access to, but they say the training corpus, mined from the web, contains parallel documents for a hundred languages, to and from English, adding up to a total of 25 billion training examples. However, they only use the direction from the 100 languages to English, which results in approximately 13 billion training examples to be used for model training. So that's a lot of data, especially for translation; it's kind of noisy data because it's mined from the web, but still, it's a lot. They have baselines. First of all, in order to form the baselines, they trained separate bilingual neural machine translation models for each language pair, so a single model for each language to English, depending on the available training data per language. And then they also have a baseline where they try, OpenAI style, to build as deep a single transformer as possible: they include a variant of a dense 96-layer transformer encoder-decoder network, trained with GPipe pipeline parallelism on the same data set, as another baseline. The difference here, again, is that this 96-layer model is a dense transformer, which means that all of the tokens go through the same computation; we don't shard the computation out to these experts. We do shard according to the batch, but all tokens go through the same parameters. And that means we can only scale up the number of layers, which severely limits the computational efficiency, even with pipeline parallelism and so on. They say training to convergence took over six weeks on 2048 TPU cores. That's crazy. But I guess, yeah, I was saying earlier that I always thought we were happy in machine learning: of the hip science fields, biology and genetics versus machine learning, I thought, oh, these biology people always need million-dollar grants from the government to run their experiments, and we can just sit down with a laptop. That time is over. If you start a PhD now, start applying for money to get TPUs. Yeah. Okay. In any case, here you can see what this does. They compare a bunch of models right here. This T is the big dense transformer, which is going to be one of our baselines, and the other baseline is the zero axis: the zero axis means the single model for that language pair, trained only on data from that language. And that's going to be the worst thing here, because this multilingual translation in one model will generally help you if you have enough parameters.
You can see all the models here have enough parameters such that the difference, the difference in BLEU, is positive, including this baseline model right here. So the baseline model, as you can see, has 2.3 billion parameters, even though it takes that much longer to train, and that, as we said, is a function of the fact that it's dense and deep; that hurts training efficiency. And then you have these mixture of expert models. They always vary two things: the number of experts, which you can see goes from 128 up to 2048, and the number of layers, from 12 to 36, with 36 layers still being way smaller than the 96-layer transformer. And that's the reason why it trains faster: the reason it trains faster is that it has fewer layers, and the reason it has more parameters is that it has a lot of these experts. The art here is to constrain how much the additional experts hurt you. You could run into the same problem where, if you scale up the experts, it doesn't fit into memory anymore and it hurts you a lot in training efficiency, kind of like when you increase the number of layers. But the G-shard system prevents that: it lets you up the number of experts without incurring the cost. That being said, it does not let you up the number of layers; there you'll incur the same cost as with the dense transformers. So does this help? It helps a lot, as you can see right here: there's a general trend upwards. And what's the x axis? The x axis goes towards low-resource languages. You can see that as we go to lower and lower resource languages, this multilingual training improves significantly over the baseline where we only train a system for that language specifically. And these languages still have on the order of 10k examples, which is quite a bit, but not that much, especially since it's noisy data. So this is especially good for low-resource languages, but you can see the high-resource languages benefit from the multilingual translation as well, and that's a function of having large enough models. In fact, you can see the larger the models, the bigger the BLEU difference, and there's not really an end in sight; they also say that they haven't seen convergence in training, so you could technically train this forever. You can also see that the lowest mixture of experts model right here is almost on par with their big dense transformer that took so much longer to train. This lowest model, I believe, took hours or a few hours to train, whereas the 96-layer dense transformer took these six weeks. It has to be said that the number of TPUs is not to be neglected, but if you're Google, you just have them lying around. What's also interesting here, and you can start seeing two things: first, the difference between the dense transformer and this baseline model is very low for high-resource languages but gets larger for low-resource languages. This is an indication that the dense transformer does more to share parameters between the languages, because it shares parameters between everything: all the tokens go through the same computation.
So it is going to be a bit better on low-resource languages, but the general trend upwards holds even for the mixture of experts. The second thing is that you see a crossover here among the biggest models. What are the big models? The blue one is the one with 2048 experts and the green one is the one with 512 experts, both being the deep, 36-layer models. All of a sudden, over here, for the high-resource languages, it's still true that if you up the number of parameters, so if you up the number of experts as well, you get a benefit. But over here, for the low-resource languages, it actually hurts you to up the number of experts. And that's exactly the phenomenon we talked about before: if you have too many of these experts and you do a hard routing, the tokens all go different ways, and you don't get any sharing benefit from the multilingual translation. They investigate this a lot, and they basically claim that the sweet spot of experts for their particular task appears to be somewhere in between this 2048 and this 512 expert number; you can see it doesn't always help to scale up the model. So I have to say, maybe the transformers need a ResNet moment. In computer vision it was sort of the same problem, that we tried to build deeper models, and... okay, this here is more width, but I think there might be some breakthrough on the horizon where someone just figures out how to train these giant transformer models with even deeper layers, and then there's a new era of transformers. However, this figure is not about that effect; sorry, I said this at the wrong place. This figure shows that for the high-resource languages we do benefit because we increase capacity, but for the low-resource languages we suffer if we up the number of experts too much, because then they don't share any parameters anymore between the languages, or between the different parts. It's not a necessity that the different languages are going to be routed to different experts, but it's probably going to happen: there's no hard-coded rule that says if it's this language, it needs to go there. It just probably happens this way, because the different languages need to be treated differently, and therefore the system learns to route them to different experts. Here you can see the model sizes, including this 60-layer model with 2048 experts that they didn't manage to train. They said they had numerical instability, but that one had one trillion parameters, and I'm pretty sure they must be quite mad about this. Like, you have the trillion parameters; even though it's not that much bigger than the 600 billion, it would be cool to write a paper about a trillion-parameter model. But for now they are at the 600 billion mark, and they simply want to tell you that they actually compiled a model that big, they just didn't manage to train it. And here is where I wanted to make the ResNet remark: maybe we're waiting for the moment where all of a sudden someone figures something out that makes the training of basically infinitely deep transformers possible, like ResNet made the training of almost infinitely deep CNNs possible.
Okay, so that concludes the investigation of what the number of experts and so on gives you, and here is a bit of a different investigation, where they care more about training efficiency. They ask themselves: how many billion tokens of input do we need to reach a given cross entropy? Here, the more tokens you need, the lower your efficiency. The general trend is the following: if you up the number of layers, you get more efficient. You can see this pretty clearly if you just look at this 0.7 column: going from 12 layers to 36, you gain efficiency, here you gain, here you gain, pretty predictably. If you up the number of layers, you need fewer tokens to get to the same cross entropy, and in fact you can get to a lower cross entropy altogether at the end. We've known this for language models already. The other effect is, of course, what happens if we go not deeper but wider, if we increase the number of experts, this sparse computation. Let's just look at the rows with 12 layers for now: you get a significant advantage by upping the number of experts from 128 to 512, but then you hurt your efficiency by upping the number of experts to 2048. So you hurt efficiency by upping the number of experts too much. And the same for the 36 layers: you gain massive efficiency by upping the number of experts, but you lose part of that efficiency again by increasing it even more. Now, we saw that this model is still the best model, but it's not as efficient as that model, and that gives you another indication that there is a sweet spot between the positive transfer and the bottleneck capacity, somewhere in between right here. That's pretty interesting, because for depth we know you can basically keep going up and get more efficient, though not by that much. Yes, the largest model can be trained in under four days, achieving the best quality, yes, yes. And here, oh, you can see the batch size in tokens is quite a bit: with a context window of 1024, that works out to a batch of about 4000 sequences, as expected. Yeah, this is just easy peasy 22 TPU core years. I've seen someone on Twitter saying this is the new measure for compute: it's no longer flops, it's TPU core years. Just mad, mad. And 42 days to train that dense thing right here. Crazy, crazy. Alright. They also have a number of investigations into other parts of efficiency, like per-device memory consumption. You can see here that as you up the number of experts, your per-device weight memory doesn't go up, because as you up the number of experts, you can just up the number of machines, and the per-machine weight usage stays the same. The experts are independent of each other, each one has its own weight matrix, so you can just add machines and keep the weight requirement per machine the same. However, if you go deeper, your weights increase, because you are now deeper, you have more layers, so your transformer weights will also be higher, and so on. So as you go deeper right here, from 36 to 60 layers, your memory consumption for the weights increases.
And then there's the other big memory part in transformers: the activations that you have to save, because, as we said, if you have a transformer with layer, layer, layer, layer, I basically have to keep each of these intermediate signals around in order to do backpropagation. That's why the activation memory also increases as I go deeper. Now, you can see it percentually decreases again here. So what's happening? Technically, you don't have to keep these things around: once the signal comes back, you can also recompute them from the beginning, or from an intermediate point. This increases computation but saves the need to store the activations. And apparently G-shard, yet another thing it does, will recompute the activations as necessary if it realizes that you don't have enough memory to store them. So all of this is pretty crazy, honestly. They also look at where the different computations go, and I don't want to go into this, and they have these micro benchmarks, where they show that the increase in cost really goes according to the square root of n, because that's how long it takes to distribute along these experts. There's a lot to this paper, and there's no time to go through all of it; I think this video is already way too long. I hope I have given you an impression of what's possible with this system, and as I said, I'm excited about what people can come up with. Just to say that in the appendix, they detail that they have done this for all the operations in XLA. For example, convolution: it is so ugly how you have to implement the convolution, because the padding must be correct across the sharded machines; there are no experts anymore here, this is just G-shard, and the padding has to be correct, the strides have to be correct, data needs to be exchanged according to the machines, the window size needs to be correct, blah, blah, blah. So just: thank you for doing this, so I don't have to do it myself. Yeah, I'm excited; as soon as the code is out, if I get a hold of it, I'll link it, or you'll find it once it's out. If it's already out, I'm just too dumb to see it. I enjoyed reading this. It's different from a machine learning paper; it kind of shows you what goes into engineering a system like this, and how easy it can be to apply it if it's engineered well. I think this is going to be extremely helpful to the community. And with that said, 23 pages later, see you next time. Bye bye.
[ { "end": 5.48, "start": 0, "text": " OpenAI has a 175 billion parameter model." }, { "end": 7.04, "start": 5.48, "text": " You thought that was large?" }, { "end": 8.48, "start": 7.04, "text": " That's cute." }, { "end": 16.28, "start": 8.48, "text": " Check out Google's 600 billion parameter model, 600 billion floating point numbers doing things" }, { "end": 17.400000000000002, "start": 16.28, "text": " at the same time." }, { "end": 24.68, "start": 17.400000000000002, "text": " This has absolutely become a body part measuring competitions between companies." }, { "end": 28.12, "start": 24.68, "text": " Google be like, oh, GPT-3." }, { "end": 29.32, "start": 28.12, "text": " I spit on you." }, { "end": 33.28, "start": 29.32, "text": " I spit on you and your little tiny 175 billion." }, { "end": 34.28, "start": 33.28, "text": " OK." }, { "end": 35.28, "start": 34.28, "text": " Let's stop kidding." }, { "end": 40, "start": 35.28, "text": " This is a giant model that Google has trained right here." }, { "end": 46.760000000000005, "start": 40, "text": " The paper we're going to look at today is called G-shard Scaling Giant Models with Conditional" }, { "end": 54.08, "start": 46.760000000000005, "text": " Computation and Automatic Sharding by Dmitri Lepikhin et al. of Google." }, { "end": 60.72, "start": 54.08, "text": " And this paper basically tells the story of how they built this 600 billion parameter" }, { "end": 65.96, "start": 60.72, "text": " model, how actually they attempted to build a model that had a trillion parameters but" }, { "end": 70.06, "start": 65.96, "text": " just didn't manage to quite train it." }, { "end": 73.75999999999999, "start": 70.06, "text": " And this is all using this system called G-shard." }, { "end": 79.6, "start": 73.75999999999999, "text": " So I haven't actually seen the code out for G-shard yet, but I'm going to maybe assume" }, { "end": 83.08, "start": 79.6, "text": " that this is something that they're going to release at some point." }, { "end": 85.36, "start": 83.08, "text": " Who knows?" }, { "end": 87.67999999999999, "start": 85.36, "text": " Or maybe I just haven't seen it yet." }, { "end": 96.46, "start": 87.67999999999999, "text": " So this is basically describing a system on how to train these giant models." }, { "end": 102.7, "start": 96.46, "text": " So if you have watched my video on GPT-3, which, of course, was this 175 billion parameter" }, { "end": 112.28, "start": 102.7, "text": " model of OpenAI, which already was record breaking, the paper was very much like, oh," }, { "end": 115.96000000000001, "start": 112.28, "text": " we built a model and look at what things it can do." }, { "end": 117.64, "start": 115.96000000000001, "text": " So that was the OpenAI paper." }, { "end": 122.52, "start": 117.64, "text": " This paper here is like the complete opposite." }, { "end": 125.84, "start": 122.52, "text": " It basically says, oh, yeah, we do language model." }, { "end": 130.04, "start": 125.84, "text": " But here is how we built the model, which is equally cool." }, { "end": 132.92000000000002, "start": 130.04, "text": " So OpenAI basically just made everything bigger." }, { "end": 137.36, "start": 132.92000000000002, "text": " And here they say to make everything even bigger, you need some tricks in how to build" }, { "end": 138.36, "start": 137.36, "text": " models." }, { "end": 144.84, "start": 138.36, "text": " And then they've basically developed this entire framework to build these giant models." 
}, { "end": 147.96, "start": 144.84, "text": " And this paper mainly describes that framework." }, { "end": 153.28, "start": 147.96, "text": " And the actual task here, which is machine translation, is almost sort of a side thing" }, { "end": 155.24, "start": 153.28, "text": " in the paper." }, { "end": 160.28000000000003, "start": 155.24, "text": " It's just a task to showcase what this system can do." }, { "end": 165.4, "start": 160.28000000000003, "text": " So this is very much an engineering paper rather than that much than a machine learning" }, { "end": 166.4, "start": 165.4, "text": " paper." }, { "end": 167.76000000000002, "start": 166.4, "text": " And that's how you have to look at it right here." }, { "end": 171.23999999999998, "start": 167.76, "text": " That being said, the machine learning results are of course, quite impressive." }, { "end": 177.12, "start": 171.23999999999998, "text": " If you look at this graph here, you have a quality gain." }, { "end": 178.88, "start": 177.12, "text": " It's a difference in blur score." }, { "end": 185.51999999999998, "start": 178.88, "text": " And this is a quality score for machine translation over the previous state of the art." }, { "end": 194.2, "start": 185.51999999999998, "text": " So over their baseline, which, as you can see here, you have 37 billion weights, 150" }, { "end": 199.51999999999998, "start": 194.2, "text": " billion weights, and 600 billion weights, which they only train." }, { "end": 205.92, "start": 199.51999999999998, "text": " They train for, you know, 2000 and on 2048 TPUs for just four days." }, { "end": 210.44, "start": 205.92, "text": " They stress this is very efficient because they just have to train it for four days on" }, { "end": 213.28, "start": 210.44, "text": " 2000 TPUs." }, { "end": 214.28, "start": 213.28, "text": " Absolutely crazy." }, { "end": 217.6, "start": 214.28, "text": " So let's have a look at what this paper does." }, { "end": 222.95999999999998, "start": 217.6, "text": " If you enjoy this, if you enjoyed this at the end, consider, you know, sharing the video" }, { "end": 230.88, "start": 222.96, "text": " out if you like it and tell me what you think about this stuff in the comments." }, { "end": 236.56, "start": 230.88, "text": " Alright, so we'll go through the abstract and then we'll go through highlighted sections" }, { "end": 239.24, "start": 236.56, "text": " of the paper because the paper is 23 pages long." }, { "end": 243.84, "start": 239.24, "text": " So I won't be able to cover everything, just kind of give you the high level ideas and" }, { "end": 248.04000000000002, "start": 243.84, "text": " highlight a few things." }, { "end": 250.44, "start": 248.04000000000002, "text": " Actually let's not go into the abstract." }, { "end": 252.94, "start": 250.44, "text": " Let's go into these results first." }, { "end": 256.44, "start": 252.94, "text": " So as you can see, they managed to continue the trend." }, { "end": 261.92, "start": 256.44, "text": " The trend in NLP has always been, at least since, you know, transformers were invented," }, { "end": 267.28, "start": 261.92, "text": " the bigger the better, like larger model, larger data, more compute means better performance." 
}, { "end": 272.92, "start": 267.28, "text": " And this is sort of unbroken here, as you can see, if you increase the number of parameters" }, { "end": 280.6, "start": 272.92, "text": " in these models, you do get a very, very big gain in these blur score, though it sort of" }, { "end": 285.88, "start": 280.6, "text": " seems to be kind of a logarithmic scaling, like you have to keep doubling and doubling" }, { "end": 292.48, "start": 285.88, "text": " and doubling the number of weights, sort of like Moore's law in computation." }, { "end": 298.68, "start": 292.48, "text": " You can see that at the same time, the training wall time is going down and the computational" }, { "end": 304.52000000000004, "start": 298.68, "text": " cost, the computational cost of these models, it doesn't scale quadratically like you would" }, { "end": 306.8, "start": 304.52000000000004, "text": " expect, it scales linearly." }, { "end": 313.12, "start": 306.8, "text": " And that's the big difference here in how these authors scale their model, rather than" }, { "end": 317.12, "start": 313.12, "text": " how the open AI authors scale their model." }, { "end": 323.40000000000003, "start": 317.12, "text": " So in a traditional transformer, it looks like this." }, { "end": 326.32, "start": 323.40000000000003, "text": " So it has these blocks of attention." }, { "end": 330.24, "start": 326.32, "text": " If you don't know what this is, I have a video called Attention is All You Need." }, { "end": 334.74, "start": 330.24, "text": " I explain how the attention blocks in transformers work." }, { "end": 336.28000000000003, "start": 334.74, "text": " So this is nothing different." }, { "end": 338.67999999999995, "start": 336.28, "text": " These are just standard transformers." }, { "end": 341.44, "start": 338.67999999999995, "text": " There is an encoder and a decoder." }, { "end": 342.7, "start": 341.44, "text": " Everything works as you know." }, { "end": 345.08, "start": 342.7, "text": " So you have these blocks, you have n blocks." }, { "end": 348.14, "start": 345.08, "text": " These are the number of layers that you have." }, { "end": 353.78, "start": 348.14, "text": " And in these blocks, you always have an attention layer, and then a feed forward layer that" }, { "end": 356.03999999999996, "start": 353.78, "text": " acts on the tokens." }, { "end": 362.67999999999995, "start": 356.03999999999996, "text": " So without repeating too much what an attention mechanism does, basically, you have input" }, { "end": 364.21999999999997, "start": 362.67999999999995, "text": " tokens." }, { "end": 369.24, "start": 364.22, "text": " So this is a sequence, it's technically a set processing unit, but we use it for sequences" }, { "end": 370.24, "start": 369.24, "text": " of text." }, { "end": 374.40000000000003, "start": 370.24, "text": " So here you have six tokens, a sentence of maybe six words." }, { "end": 380.20000000000005, "start": 374.40000000000003, "text": " And then you transform it with the attention layer by having this attention mechanism that" }, { "end": 387.28000000000003, "start": 380.20000000000005, "text": " routes information from tokens to from positions to other positions, maybe like this route" }, { "end": 389.32000000000005, "start": 387.28000000000003, "text": " is here, route this here." }, { "end": 394.92, "start": 389.32, "text": " And then you have a feed forward network that is applied on a per token basis." 
}, { "end": 402.68, "start": 394.92, "text": " So each of these tokens now goes through this feed forward network and is kind of transformed." }, { "end": 406.28, "start": 402.68, "text": " So the embedding of that token is transformed by that feed forward network." }, { "end": 408.64, "start": 406.28, "text": " Now every token does this." }, { "end": 410.94, "start": 408.64, "text": " And it's always the same feed forward network." }, { "end": 414.44, "start": 410.94, "text": " So this network here is the same as this network." }, { "end": 419.88, "start": 414.44, "text": " Now usually, when we talk about scaling transformers, we talk about this part right here, we talk" }, { "end": 422, "start": 419.88, "text": " about the attention mechanism." }, { "end": 425.86, "start": 422, "text": " And also we talk about this part, the number of layers." }, { "end": 432.12, "start": 425.86, "text": " So you know, we talk about scaling the number of transformer layers, more layers, more layers," }, { "end": 433.32, "start": 432.12, "text": " more layers." }, { "end": 438.44, "start": 433.32, "text": " And if we want to scale the attention mechanism, what that basically means is we have we increase" }, { "end": 442.2, "start": 438.44, "text": " the context size of the text we can input." }, { "end": 449.71999999999997, "start": 442.2, "text": " So transformers are very limited by the size of this context right here that they can take." }, { "end": 454.4, "start": 449.71999999999997, "text": " Like the original transformer started with something like 512 tokens that they were able" }, { "end": 459.4, "start": 454.4, "text": " to take because this attention mechanism has quadratic complexity." }, { "end": 467.56, "start": 459.4, "text": " This went up and the open AI GPT-3, I believe, had a context size of 2048 tokens, which if" }, { "end": 470.9, "start": 467.56, "text": " it scales quadratically, that's quite an achievement." }, { "end": 476.23999999999995, "start": 470.9, "text": " And also it stacked the layers very, very deep." }, { "end": 479.32, "start": 476.23999999999995, "text": " Now in this paper, they scale the transformers differently." }, { "end": 486.41999999999996, "start": 479.32, "text": " They basically leave the context size and I believe that their context size is 1024." }, { "end": 489.64, "start": 486.41999999999996, "text": " So significantly smaller than the open AI context size." }, { "end": 491.94, "start": 489.64, "text": " And they don't scale the layers." }, { "end": 498.67999999999995, "start": 491.94, "text": " So their largest transformer is 36 layers, whereas I believe GPT-3 was maybe correct" }, { "end": 503.44, "start": 498.68, "text": " me but I think it was like 90 or 100 layers or something like this, at least significantly" }, { "end": 505.32, "start": 503.44, "text": " larger than this." }, { "end": 510.6, "start": 505.32, "text": " Instead, what they scale is this part right here, the feet forward layers." }, { "end": 513.08, "start": 510.6, "text": " Now that might seem counterintuitive." }, { "end": 520.6, "start": 513.08, "text": " But they basically, they basically say what if we didn't only have one feed forward network" }, { "end": 523.16, "start": 520.6, "text": " right here, but we had many, right?" }, { "end": 524.84, "start": 523.16, "text": " We don't always have the same." }, { "end": 531.6, "start": 524.84, "text": " We have many, many feed forward networks, different ones that can do different things." 
}, { "end": 534.62, "start": 531.6, "text": " So that's what they call experts." }, { "end": 536.96, "start": 534.62, "text": " Each one of these feed forward layers is an expert." }, { "end": 541.88, "start": 536.96, "text": " And then you have yet another routing mechanism, kind of like in attention, you have a routing" }, { "end": 545.8000000000001, "start": 541.88, "text": " mechanism that decides which tokens go where." }, { "end": 552.9200000000001, "start": 545.8000000000001, "text": " Okay, so this token here, this token here, this token here, and the sort of the implication" }, { "end": 559.36, "start": 552.92, "text": " being that different tokens, different parts of the input you want to transform require" }, { "end": 562.4799999999999, "start": 559.36, "text": " a different kind of transformations here." }, { "end": 568.06, "start": 562.4799999999999, "text": " And these different experts can sort of specialize in how they transform the input." }, { "end": 573.7199999999999, "start": 568.06, "text": " Now their task here is going to be machine translation as a multitask setup." }, { "end": 579.8, "start": 573.7199999999999, "text": " So what you'll have is you'll have all kinds of languages like French and German, what's" }, { "end": 585.88, "start": 579.8, "text": " the E, and maybe a lot of languages." }, { "end": 588.5999999999999, "start": 585.88, "text": " I don't know any other languages." }, { "end": 595.12, "start": 588.5999999999999, "text": " And you want to translate all of them to English and you want to do it using the same model." }, { "end": 601, "start": 595.12, "text": " So these experts here, they might specialize in the individual languages." }, { "end": 606.8, "start": 601, "text": " Like maybe you will have to handle a pronoun differently if it comes from German than if" }, { "end": 608.16, "start": 606.8, "text": " it comes from French." }, { "end": 611.28, "start": 608.16, "text": " You want to do it with the same model at the same time." }, { "end": 617.52, "start": 611.28, "text": " That means you maybe want to have the one expert specialize in German pronouns and one" }, { "end": 620.3199999999999, "start": 617.52, "text": " expert specialize in French pronouns." }, { "end": 626.56, "start": 620.3199999999999, "text": " Also you can think of the experts as maybe one specializes in question words, it doesn't" }, { "end": 628.48, "start": 626.56, "text": " matter which language they're from." }, { "end": 633.76, "start": 628.48, "text": " And the other one specializes in some sort of other kind of linguistic feature." }, { "end": 640.56, "start": 633.76, "text": " In any case, this number of experts here is if you want to scale that up, then that becomes" }, { "end": 642.84, "start": 640.56, "text": " the bottleneck of the transformer." }, { "end": 648.8199999999999, "start": 642.84, "text": " They go up to 2000, 2048 experts in parallel." }, { "end": 653.5, "start": 648.8199999999999, "text": " So that doesn't fit into a single accelerator anymore." }, { "end": 656.54, "start": 653.5, "text": " And that's why the entire system has to be sharded." }, { "end": 658.36, "start": 656.54, "text": " And that's what they call G shard." }, { "end": 667.16, "start": 658.36, "text": " So G shard, the main application here is going to be how can we build this giant model on" }, { "end": 671.88, "start": 667.16, "text": " many, many distributed computers where the attention mechanism isn't the problem." 
}, { "end": 676.16, "start": 671.88, "text": " The attention mechanism we just distribute like we do data parallelism." }, { "end": 681.44, "start": 676.16, "text": " The attention, it lives on all of the accelerators, it synchronizes and so on." }, { "end": 687.5600000000001, "start": 681.44, "text": " But the experts here, there's only so this expert lives on machine A, this expert lives" }, { "end": 694.04, "start": 687.56, "text": " on machine B, this expert lives on machine C. And then we do a hard routing." }, { "end": 698.8199999999999, "start": 694.04, "text": " So we don't do a soft routing like an attention, we do a hard routing where one token goes" }, { "end": 702.9599999999999, "start": 698.8199999999999, "text": " to one or at maximum two experts." }, { "end": 705.1199999999999, "start": 702.9599999999999, "text": " So this is sent to these machines." }, { "end": 710.4, "start": 705.1199999999999, "text": " And then after the machines, you kind of gather all the results back right here." }, { "end": 715.4399999999999, "start": 710.4, "text": " So G shard is the system that enables this sharding of these experts." }, { "end": 720.48, "start": 715.44, "text": " And the everything in between everything that is necessary, but it can also be applied to" }, { "end": 722.48, "start": 720.48, "text": " shard any computation." }, { "end": 724.2800000000001, "start": 722.48, "text": " And that's why it's so cool." }, { "end": 732.8000000000001, "start": 724.2800000000001, "text": " So here you see what what they do, they always they take these transformers." }, { "end": 736.6800000000001, "start": 732.8000000000001, "text": " And they always consider a block of two transformer layers." }, { "end": 742.5400000000001, "start": 736.6800000000001, "text": " So this is a block of two transformer layers, you can see there is twice the attention," }, { "end": 744.9000000000001, "start": 742.5400000000001, "text": " and there's twice this feed forward." }, { "end": 750.12, "start": 744.9, "text": " So in one point, this feed forward is just a regular everything, all the tokens go through" }, { "end": 751.48, "start": 750.12, "text": " the same network." }, { "end": 753.4, "start": 751.48, "text": " So that's like a classic transformer." }, { "end": 759.24, "start": 753.4, "text": " But here, you have a lot of these different experts and the tokens are routed to these" }, { "end": 761.34, "start": 759.24, "text": " experts." }, { "end": 764.48, "start": 761.34, "text": " It's important that the tokens are hard routed, right?" }, { "end": 768.96, "start": 764.48, "text": " If the tokens were soft routed, you don't you don't gain anything, because every token" }, { "end": 771.0799999999999, "start": 768.96, "text": " has to go through every expert." }, { "end": 776.4200000000001, "start": 771.08, "text": " But here, the tokens are hard routed to the expert, which means that you can if you if" }, { "end": 786.44, "start": 776.4200000000001, "text": " I have an input size of 1024 tokens, maybe only 10 go to this one, and maybe only 10" }, { "end": 787.76, "start": 786.44, "text": " of that those go to this one." }, { "end": 791.8000000000001, "start": 787.76, "text": " Now you also have a batch size, of course, I haven't actually looked at what the batch" }, { "end": 797.32, "start": 791.8000000000001, "text": " size here is, but you usually have quite a large batch size in these things like maybe" }, { "end": 799.58, "start": 797.32, "text": " a batch size of 1000 as well." 
}, { "end": 804.12, "start": 799.58, "text": " So ultimately, what you'll end up is, you know, 1000 times 10 tokens going to the first" }, { "end": 805.1800000000001, "start": 804.12, "text": " expert and so on." }, { "end": 810.76, "start": 805.1800000000001, "text": " But still, you can significantly parallelize this computation." }, { "end": 812.4000000000001, "start": 810.76, "text": " Okay." }, { "end": 818.74, "start": 812.4000000000001, "text": " So this this, if you use G chart, this is going to result in the following in the thing" }, { "end": 823.76, "start": 818.74, "text": " on the right, where you have two machines, this is machine one, and this is machine two," }, { "end": 834.4399999999999, "start": 823.76, "text": " you can see that the machines will what happened here, or someone made the PowerPoint mistake." }, { "end": 839.48, "start": 834.4399999999999, "text": " So you can see that the the attention, everything is shared between the machines." }, { "end": 844.76, "start": 839.48, "text": " So this here and this here, these are synchronized, the weights are synchronized, right, you simply" }, { "end": 848.04, "start": 844.76, "text": " do a data sharing." }, { "end": 857.0799999999999, "start": 848.04, "text": " But here, you can see that you have model parallelism, model parallel mixture of experts," }, { "end": 863.8, "start": 857.0799999999999, "text": " where on the first machine, you have the first expert, and then you have e devices." }, { "end": 867.48, "start": 863.8, "text": " And on the last one, you have the last expert." }, { "end": 871.04, "start": 867.48, "text": " And then it's all routed out and routed in again." }, { "end": 874.04, "start": 871.04, "text": " And then you can continue your transformer." }, { "end": 877.06, "start": 874.04, "text": " And this is layer after layer." }, { "end": 880.8399999999999, "start": 877.06, "text": " So what's the problem here, the problem is that an operation like this is going to come" }, { "end": 887.4, "start": 880.8399999999999, "text": " to incur significant sort of overhead in terms of communication, and so on, if you were to" }, { "end": 891.28, "start": 887.4, "text": " do it naively, and it's going to be a real pain to program this." }, { "end": 896.7399999999999, "start": 891.28, "text": " And that's why G chart is made to do all of this automatically." }, { "end": 902.8399999999999, "start": 896.7399999999999, "text": " And you don't, you don't incur much of a cost, because you distribute." }, { "end": 905.5999999999999, "start": 902.8399999999999, "text": " So what's the difference to the old scaling?" }, { "end": 909.52, "start": 905.6, "text": " Why don't they just make transformers larger in number of layers?" }, { "end": 915.38, "start": 909.52, "text": " And that's because this this is, I guess, what opening into as well, if you make transformers" }, { "end": 920.5, "start": 915.38, "text": " simply larger in number of layers, sorry, if you make it transformers larger in the" }, { "end": 924.2, "start": 920.5, "text": " attention mechanism, it just won't fit into memory at some point." }, { "end": 926.24, "start": 924.2, "text": " And you'll you'll have to share that somehow." }, { "end": 928.34, "start": 926.24, "text": " And you can do this with G shard." 
}, { "end": 934.12, "start": 928.34, "text": " If you scale it in number of layers, that incurs significant cost where you have to" }, { "end": 938.96, "start": 934.12, "text": " wait, because you have to forward propagate, and then you have to backward propagate in" }, { "end": 940.7, "start": 938.96, "text": " your training sequence." }, { "end": 946.62, "start": 940.7, "text": " And if you have just too many layers, then a lot of the a lot of the frameworks get at" }, { "end": 952.7, "start": 946.62, "text": " their limit, where at some point they say, well, I still have to wait for the signal" }, { "end": 956.92, "start": 952.7, "text": " to come back in order to continue." }, { "end": 962.86, "start": 956.92, "text": " And they explore this in this benchmark right here." }, { "end": 968.86, "start": 962.86, "text": " You can see they say the largest model, the 600 billion parameter model that achieved" }, { "end": 975.22, "start": 968.86, "text": " the best translation quality was trained with 2000 TPU v3 cores for three days, a total" }, { "end": 979.66, "start": 975.22, "text": " cost of 22 TPU core years." }, { "end": 986.0600000000001, "start": 979.66, "text": " In contrast, training all 100 bilingual baseline models would have required 29 core years." }, { "end": 990.0600000000001, "start": 986.0600000000001, "text": " So the model here is faster than if you train them individually." }, { "end": 997.9399999999999, "start": 990.06, "text": " But if you want to train a single transformer that is just very deep, and achieves reasonable" }, { "end": 1001.06, "start": 997.9399999999999, "text": " performance, you have to invest a lot more." }, { "end": 1006.14, "start": 1001.06, "text": " Our best quality dense single transformer model 2.3 billion parameters." }, { "end": 1008.8599999999999, "start": 1006.14, "text": " So it's also significantly smaller." }, { "end": 1013.76, "start": 1008.8599999999999, "text": " Achieving this was trained with G pipe, which is a previous framework." }, { "end": 1021.5, "start": 1013.76, "text": " So G pipe is kind of a task runner that also distributes computation was trained with G" }, { "end": 1031.66, "start": 1021.5, "text": " pipe on 2048 TPU cores for six weeks or a total of 235 TPU core years." }, { "end": 1037.58, "start": 1031.66, "text": " By the way, for if you if you have $1 per TPU hour, that'll only cause that'll only," }, { "end": 1042.46, "start": 1037.58, "text": " I guess set you back about 2 million or so." }, { "end": 1044.3, "start": 1042.46, "text": " It's easy peasy." }, { "end": 1053.06, "start": 1044.3, "text": " Or even 200,000 just, you know, a tiny, tiny bit of of money." }, { "end": 1060.8600000000001, "start": 1053.06, "text": " But you can see that this transformer model that is dense, which means that is a classic" }, { "end": 1066.26, "start": 1060.8600000000001, "text": " transformer where you stack the transformer layers, you stack them, you stack them, you" }, { "end": 1067.26, "start": 1066.26, "text": " stack them." }, { "end": 1073.62, "start": 1067.26, "text": " It, in fact, it has 96 layers, their baseline 96 layer transformer model, that's sort of" }, { "end": 1078.58, "start": 1073.62, "text": " what opening I did, they just kept stacking the transformer layers." }, { "end": 1083.02, "start": 1078.58, "text": " You get a model that has less parameters and trains for much longer." }, { "end": 1087.02, "start": 1083.02, "text": " And its performance is only about this good." 
}, { "end": 1093.58, "start": 1087.02, "text": " Whereas here, if you scale not into depth, but into width of these experts, and it's" }, { "end": 1098.3799999999999, "start": 1093.58, "text": " not dense, but it's shorted, which means it calculates this in a in a kind of sparsified" }, { "end": 1103.62, "start": 1098.3799999999999, "text": " way because it has this hard routing, you can scale up to a lot more parameters." }, { "end": 1110.1, "start": 1103.62, "text": " So 600 billion parameters, over 200 times more parameters than the deep model, and you" }, { "end": 1113.1, "start": 1110.1, "text": " can get a much better performance." }, { "end": 1119.6399999999999, "start": 1113.1, "text": " Okay, so this is what is different here, it scales into these experts rather than scaling" }, { "end": 1125.98, "start": 1119.64, "text": " into depth or, or size of the attention mechanism itself." }, { "end": 1131.94, "start": 1125.98, "text": " All right, the question, I guess that you come up with if you're a machine learner is" }, { "end": 1137.9, "start": 1131.94, "text": " how do you back propagate if you route if here you route to these different experts," }, { "end": 1143.0200000000002, "start": 1137.9, "text": " and you do a hard routing like here, how do you back propagate the signal because it seems" }, { "end": 1144.92, "start": 1143.0200000000002, "text": " like you need a soft routing." }, { "end": 1150.74, "start": 1144.92, "text": " But this has been handled, in fact, these mixture of experts has been introduced previously," }, { "end": 1156.66, "start": 1150.74, "text": " in a paper I think called outrageously large language models or something like this." }, { "end": 1161.26, "start": 1156.66, "text": " And so they've introduced that, you know, it, it still works." }, { "end": 1167.1000000000001, "start": 1161.26, "text": " So backprop still works through so basically you have a backprop path through here." }, { "end": 1172.78, "start": 1167.1000000000001, "text": " And because you put a little bit of noise in this routing, every path gets explored" }, { "end": 1177.42, "start": 1172.78, "text": " a few times, and therefore you have enough backprop signal to make it work." }, { "end": 1183.42, "start": 1177.42, "text": " It can it could technically fail, but they do observe generally that it does work if" }, { "end": 1186.78, "start": 1183.42, "text": " you do this kind of hard routing with a bit of noise." }, { "end": 1192.78, "start": 1186.78, "text": " All right, so where do we go from here, as I said, this is an engineering paper, and" }, { "end": 1194.24, "start": 1192.78, "text": " it's a long engineering paper." }, { "end": 1200.8999999999999, "start": 1194.24, "text": " So they, they set up their, they set up a lot of a lot of the details of engineering" }, { "end": 1205.5, "start": 1200.9, "text": " directly in the paper, which we're not used to in the machine learning world." }, { "end": 1212.98, "start": 1205.5, "text": " They really detail how they shard things and so on, which is pretty cool." }, { "end": 1217.7, "start": 1212.98, "text": " But I invite you to look at the paper yourself." }, { "end": 1220.74, "start": 1217.7, "text": " If you really want to know what's going on right here." 
}, { "end": 1228.8600000000001, "start": 1220.74, "text": " Suffice to say, they, as you can see right here, what they do is, this is the input right" }, { "end": 1236.58, "start": 1228.86, "text": " here, and then they have this weight matrix, which is a this routing, this is learned routing" }, { "end": 1237.58, "start": 1236.58, "text": " weights." }, { "end": 1238.58, "start": 1237.58, "text": " Okay." }, { "end": 1245.4599999999998, "start": 1238.58, "text": " So you have trainable weights that decide how to route the input, and that's dependent" }, { "end": 1246.54, "start": 1245.4599999999998, "text": " on the input." }, { "end": 1250.02, "start": 1246.54, "text": " So you have a bunch of inputs that comes from the lower layer." }, { "end": 1255.02, "start": 1250.02, "text": " And this matrix right here determines where to route them." }, { "end": 1260.06, "start": 1255.02, "text": " Probably says, okay, the input is a vector like this." }, { "end": 1264.1399999999999, "start": 1260.06, "text": " I know that must probably go to the expert number three." }, { "end": 1265.1399999999999, "start": 1264.1399999999999, "text": " Okay." }, { "end": 1267.2, "start": 1265.1399999999999, "text": " And you have a softmax across that." }, { "end": 1272.48, "start": 1267.2, "text": " So it's a really, it's an assignment to, it's a soft assignment to the experts." }, { "end": 1278.26, "start": 1272.48, "text": " So once you've done the soft assignment to the expert, you do a hard assignment by collecting" }, { "end": 1280.62, "start": 1278.26, "text": " the top two." }, { "end": 1287.4599999999998, "start": 1280.62, "text": " For each token, you say you collect the top two experts, and you only send it to the top" }, { "end": 1291.6, "start": 1287.4599999999998, "text": " two experts and you ignore all else, which is not a lot right there." }, { "end": 1296.9399999999998, "start": 1291.6, "text": " At times there are 2000 experts in the system." }, { "end": 1300.9599999999998, "start": 1296.9399999999998, "text": " And yeah, you distribute and you have some noise." }, { "end": 1306.8, "start": 1300.9599999999998, "text": " So with a random probability, you actually don't even send it to the second expert." }, { "end": 1310.8799999999999, "start": 1306.8, "text": " You just leave it at the first one." }, { "end": 1313.94, "start": 1310.8799999999999, "text": " And with some noise, you send it also to the second one." }, { "end": 1319.96, "start": 1313.94, "text": " And I think that that noise is part of what if what makes the system work a bit." }, { "end": 1325.1, "start": 1319.96, "text": " And then you also have this auxiliary loss right here that you add on top, which just" }, { "end": 1328.6399999999999, "start": 1325.1, "text": " makes sure that you distribute evenly." }, { "end": 1336.94, "start": 1328.64, "text": " So this encourages the system to distribute the tokens evenly, because sorry, what it" }, { "end": 1344.6200000000001, "start": 1336.94, "text": " penalizes is a this here is the mean assignment to each expert." }, { "end": 1353.0200000000002, "start": 1344.6200000000001, "text": " So it penalizes whenever the mean assignment is out of out of line, basically, so a distribution" }, { "end": 1357.3400000000001, "start": 1353.0200000000002, "text": " assignment to the expert or one expert gets a lot of tokens, because I don't know, it" }, { "end": 1358.82, "start": 1357.34, "text": " tends to be really good at something." 
}, { "end": 1360.9399999999998, "start": 1358.82, "text": " So all the tokens are routed to it." }, { "end": 1364.1799999999998, "start": 1360.9399999999998, "text": " And the other expert don't get a lot that's penalized." }, { "end": 1369.34, "start": 1364.1799999999998, "text": " So you encourage the system to distribute tokens evenly between those experts." }, { "end": 1373.8, "start": 1369.34, "text": " And then there are also like upper limits where you drop tokens and so on." }, { "end": 1382.9399999999998, "start": 1373.8, "text": " They really build a system that is out for performance rather than machine learning correctness." }, { "end": 1388.38, "start": 1382.94, "text": " So they demonstrate how to do this in in sort of code with their system." }, { "end": 1393.8600000000001, "start": 1388.38, "text": " And the cool thing about their system is that you don't have to do much." }, { "end": 1401.78, "start": 1393.8600000000001, "text": " What you'll have to do is just specify which tensors are sharded at along which dimensions" }, { "end": 1403.7, "start": 1401.78, "text": " and the system does the rest." }, { "end": 1405.8, "start": 1403.7, "text": " So this is pretty cool." }, { "end": 1414.18, "start": 1405.8, "text": " So this here is this mixture of experts, mixture of experts as you would write it in code." }, { "end": 1418.18, "start": 1414.18, "text": " And they make use a lot of this Einstein, this Einstein some notation." }, { "end": 1423.78, "start": 1418.18, "text": " If you don't know what the Einstein some notation is, it's a general notation to describe matrix" }, { "end": 1426.1, "start": 1423.78, "text": " or tensor multiplications." }, { "end": 1434.22, "start": 1426.1, "text": " So a for example, if you were to multiply two matrices, you could have a string there," }, { "end": 1441.7, "start": 1434.22, "text": " you describe it as a string and it comes from how Einstein wrote up the kind of tensor contractions" }, { "end": 1444.44, "start": 1441.7, "text": " in his work." }, { "end": 1454.18, "start": 1444.44, "text": " So if you want to multiply two matrices, you can you could put the string a b b c goes" }, { "end": 1457.32, "start": 1454.18, "text": " to a c." }, { "end": 1461.38, "start": 1457.32, "text": " So this and then you put two matrices right here." }, { "end": 1464.02, "start": 1461.38, "text": " This will tell it, okay, I have a one matrix." }, { "end": 1468.58, "start": 1464.02, "text": " I'm going to call the axis a and b, I have another matrix or tensor where I'm going to" }, { "end": 1470.9, "start": 1468.58, "text": " call the axis b and c." }, { "end": 1478.82, "start": 1470.9, "text": " Now I have the resulting tensor, and I want the first axis to be a and the a is this one." }, { "end": 1482.22, "start": 1478.82, "text": " And I want the last axis to be c and the c is this one." }, { "end": 1488.52, "start": 1482.22, "text": " And b is nowhere b is not in the output, which means it should contract over b." }, { "end": 1495.04, "start": 1488.52, "text": " So it should sum along b, sorry, it should multiply along b and then add such contract" }, { "end": 1496.04, "start": 1495.04, "text": " over b." }, { "end": 1501.6, "start": 1496.04, "text": " So this here describes a regular matrix matrix multiplication." 
}, { "end": 1508.74, "start": 1501.6, "text": " Now if I could do something else, I could do something like a just a element wise product," }, { "end": 1518.5, "start": 1508.74, "text": " an element wise product would be something like this a b comma a b goes to a b, which" }, { "end": 1527.86, "start": 1518.5, "text": " means here, I have a in the first input, and here I have a again." }, { "end": 1532.9, "start": 1527.86, "text": " And I'm so you already see that you can even though these are different tensors, you can" }, { "end": 1537.02, "start": 1532.9, "text": " call the axis the same, which means that they're going to somehow be multiplied together." }, { "end": 1541.18, "start": 1537.02, "text": " Now if you leave it away here, it means that it's going to be contracted and therefore" }, { "end": 1542.86, "start": 1541.18, "text": " the axis no longer exists." }, { "end": 1547.1, "start": 1542.86, "text": " But here we don't leave it away, which simply means that these axes are going to be multiplied" }, { "end": 1548.1, "start": 1547.1, "text": " together." }, { "end": 1549.78, "start": 1548.1, "text": " And the same for b right here." }, { "end": 1553.54, "start": 1549.78, "text": " So this describes an element wise." }, { "end": 1556.62, "start": 1553.54, "text": " This describes an element wise product, you can go really funky with this." }, { "end": 1566.98, "start": 1556.62, "text": " So this, this here would be a row wise dot product, where a is more it for all the a" }, { "end": 1573.38, "start": 1566.98, "text": " is it's element wise, but then over b, it's contracted." }, { "end": 1578.26, "start": 1573.38, "text": " So you know, you can go, you can go wild with the Einstein some notation, you can describe" }, { "end": 1581.78, "start": 1578.26, "text": " a lot of things with it." }, { "end": 1589.02, "start": 1581.78, "text": " So here is this algorithm to distribute the computation among these different experts." }, { "end": 1597.26, "start": 1589.02, "text": " So you have the inputs and the weight matrix for the, they call this the gates function." }, { "end": 1601.42, "start": 1597.26, "text": " That's the routing function to these experts." }, { "end": 1602.42, "start": 1601.42, "text": " So what do we do?" }, { "end": 1610.98, "start": 1602.42, "text": " We first of all, we have these tensors, these, they have these grouping, these grouping dimension" }, { "end": 1611.98, "start": 1610.98, "text": " right here." }, { "end": 1618.18, "start": 1611.98, "text": " So they come along to along groups, which in our case, we could maybe say these are" }, { "end": 1623.38, "start": 1618.18, "text": " batches or the batch dimension." }, { "end": 1629.22, "start": 1623.38, "text": " So they come across groups, and there is the sequence length and there is this M right" }, { "end": 1631.46, "start": 1629.22, "text": " here." }, { "end": 1638.6200000000001, "start": 1631.46, "text": " That's going to be the feature dimension, the M. And you can see the M is contracted." }, { "end": 1639.8400000000001, "start": 1638.6200000000001, "text": " So the M is no longer here." }, { "end": 1647.66, "start": 1639.8400000000001, "text": " So the gating function is going to route each input token right here to one of the experts" }, { "end": 1652.42, "start": 1647.66, "text": " for each thing in the group." }, { "end": 1656.74, "start": 1652.42, "text": " So you can see, you can express this with an Einstein some notation." 
}, { "end": 1663.9, "start": 1656.74, "text": " Then you have a top two gating, which selects the top two from each of the last, from each" }, { "end": 1668.5400000000002, "start": 1663.9, "text": " of the entries." }, { "end": 1674.4, "start": 1668.5400000000002, "text": " And that gives you this dispatch mask and the sorry, and the weights that you have to" }, { "end": 1676.1000000000001, "start": 1674.4, "text": " use at the end to combine." }, { "end": 1680.4599999999998, "start": 1676.1, "text": " You can use the dispatch mask in order to distribute the inputs." }, { "end": 1684.74, "start": 1680.4599999999998, "text": " So you have reshaped inputs, and so on." }, { "end": 1688.4599999999998, "start": 1684.74, "text": " So I'm not going to go through all of this right here, but you can express all of this" }, { "end": 1693.5, "start": 1688.4599999999998, "text": " in terms of the Einstein some notation." }, { "end": 1699.1, "start": 1693.5, "text": " And you can express pretty much any sort of computation that is along the line." }, { "end": 1703.3, "start": 1699.1, "text": " You can express the attention mechanism and so on." }, { "end": 1708.58, "start": 1703.3, "text": " You can express the feed forward layers in terms of these Einstein some notations and" }, { "end": 1715.3, "start": 1708.58, "text": " the underlying the underlined dimensions here are the dimensions where we want to shard" }, { "end": 1716.78, "start": 1715.3, "text": " the computation." }, { "end": 1727.54, "start": 1716.78, "text": " So here, because we have this G underlined, that means that we are interested in sharding" }, { "end": 1730.74, "start": 1727.54, "text": " the computation along this axis." }, { "end": 1733.02, "start": 1730.74, "text": " So this, I said, this is the batch dimension." }, { "end": 1738.94, "start": 1733.02, "text": " This is your classic data parallelism, which means that the first machine gets the first" }, { "end": 1743.58, "start": 1738.94, "text": " couple of data points, the second machine gets the second couple of data points, and" }, { "end": 1744.58, "start": 1743.58, "text": " so on." }, { "end": 1750.02, "start": 1744.58, "text": " And you can see in the weight matrix, there is no sharding, which means that the weight" }, { "end": 1756.34, "start": 1750.02, "text": " matrix lives on every machine as a copy of one another." }, { "end": 1766.74, "start": 1756.34, "text": " This is different from from here, where you can see that what we're now going to do is" }, { "end": 1771.6599999999999, "start": 1766.74, "text": " here it's still sharded according to the batch, but we now are going to shard this according" }, { "end": 1773.1399999999999, "start": 1771.6599999999999, "text": " to the different experts." }, { "end": 1782.22, "start": 1773.1399999999999, "text": " So we're going to route whatever the inputs are in to these experts." }, { "end": 1787.34, "start": 1782.22, "text": " And then we're going to execute the computations on the experts." }, { "end": 1790.6200000000001, "start": 1787.34, "text": " So this is now sharded according to the experts." }, { "end": 1794.94, "start": 1790.6200000000001, "text": " And at the end, right here, you can see this is still sharded according to the experts." }, { "end": 1798.04, "start": 1794.94, "text": " We're going to put it back together." }, { "end": 1801.92, "start": 1798.04, "text": " And now it's sharded according to the groups again." 
}, { "end": 1808.98, "start": 1801.92, "text": " That's what we said, we have the input right here, the inputs, and the inputs are maybe" }, { "end": 1814.1200000000001, "start": 1808.98, "text": " distributed according to the according to machines, right, we have these go through" }, { "end": 1817.66, "start": 1814.1200000000001, "text": " the first machine, these the second, these the third, and so on." }, { "end": 1820.2, "start": 1817.66, "text": " This is your classic data parallelism." }, { "end": 1824.98, "start": 1820.2, "text": " But then we have all of these experts." }, { "end": 1830.5, "start": 1824.98, "text": " And now all of a sudden, we're going to route these things to the individual experts." }, { "end": 1834.94, "start": 1830.5, "text": " And we're going to execute the computation in parallel on the experts." }, { "end": 1840.18, "start": 1834.94, "text": " And then after that, we're going to put back together from wherever we got them now have" }, { "end": 1841.18, "start": 1840.18, "text": " to." }, { "end": 1843.3, "start": 1841.18, "text": " So this goes here again." }, { "end": 1846.98, "start": 1843.3, "text": " And so this is just the reverse of what we did before." }, { "end": 1852.2, "start": 1846.98, "text": " So right, like that." }, { "end": 1854.38, "start": 1852.2, "text": " So you get all of the outputs again." }, { "end": 1857.52, "start": 1854.38, "text": " I hope you kind of can imagine how this happens." }, { "end": 1860.8200000000002, "start": 1857.52, "text": " So the first difference is, is that's sharded according to a different dimension." }, { "end": 1867.1, "start": 1860.82, "text": " And the second difference is, is that when we shard in data parallelism, we execute the" }, { "end": 1871.98, "start": 1867.1, "text": " same computation on all the machines, which means that we have the same weight matrix." }, { "end": 1881.06, "start": 1871.98, "text": " If we do x times w in a feet forward layer, and we shard this thing here in data parallelism," }, { "end": 1890.8, "start": 1881.06, "text": " what we do is we send the x to different machines, we split the x, we send it to different machines," }, { "end": 1894.18, "start": 1890.8, "text": " this is x1, next to x3, x4." }, { "end": 1898.98, "start": 1894.18, "text": " But we always multiply it with the same weight matrix that weight matrix lives on all of" }, { "end": 1903.7, "start": 1898.98, "text": " the machines and is regularly synchronized, it's kept synchronous in some way." }, { "end": 1911.3, "start": 1903.7, "text": " Whereas if we shard x to the experts, then the experts have individual functions." }, { "end": 1916.94, "start": 1911.3, "text": " So the expert one is different from the expert two is different from the expert three, and" }, { "end": 1919.62, "start": 1916.94, "text": " so on." }, { "end": 1924.3799999999999, "start": 1919.62, "text": " Which means that before it wasn't important where x was routed, because we would execute" }, { "end": 1925.54, "start": 1924.3799999999999, "text": " the same computation." }, { "end": 1930.34, "start": 1925.54, "text": " So we can just, you know, sharded according to you know, the first 10 go there, the next" }, { "end": 1931.34, "start": 1930.34, "text": " 10 go there." }, { "end": 1935.6799999999998, "start": 1931.34, "text": " But here, it's not crucially important where they are routed to to which expert." }, { "end": 1939.58, "start": 1935.6799999999998, "text": " And that's why we learn the function that is going to route them." 
}, { "end": 1944.34, "start": 1939.58, "text": " So this is learned, this is these first line here, these are the weights that we learn" }, { "end": 1948.62, "start": 1944.34, "text": " to route, then we route right here." }, { "end": 1955.7399999999998, "start": 1948.62, "text": " And we calculate your your, we calculate the feet forward layers on the expert, you see" }, { "end": 1962.06, "start": 1955.7399999999998, "text": " that this wi and wo, they are the weight matrices of the feet forward layer, the feet forward" }, { "end": 1970.2399999999998, "start": 1962.06, "text": " layers are, you have your input, you multiply it by wi, you have a ReLU, ReLU, and then" }, { "end": 1972.1999999999998, "start": 1970.2399999999998, "text": " you multiply it by wo." }, { "end": 1977.6399999999999, "start": 1972.1999999999998, "text": " So it's kind of a two layer feet forward network." }, { "end": 1982.5400000000002, "start": 1977.64, "text": " So this two layer feet forward network, as you can see, this is sharded according to" }, { "end": 1984.74, "start": 1982.5400000000002, "text": " the experts." }, { "end": 1992.94, "start": 1984.74, "text": " And then, and the important part is, of course, that here, the weight is also sharded according" }, { "end": 1993.94, "start": 1992.94, "text": " to the experts." }, { "end": 1996.74, "start": 1993.94, "text": " And that's what makes each expert different." }, { "end": 2000.0200000000002, "start": 1996.74, "text": " And then it's combined again down here." }, { "end": 2003.42, "start": 2000.0200000000002, "text": " So I hope you kind of get the idea of what this algorithm does." }, { "end": 2009.14, "start": 2003.42, "text": " But the fact that we shard according to these experts is in fact different than your regular" }, { "end": 2015.8200000000002, "start": 2009.14, "text": " sharding where you shard the data like the batch, the batches, but keep the model in" }, { "end": 2021.1000000000001, "start": 2015.8200000000002, "text": " parallel, keep the model synchronized." }, { "end": 2024.64, "start": 2021.1000000000001, "text": " With their system right now, this is how easy this is." }, { "end": 2028.66, "start": 2024.64, "text": " So before we simply stated our algorithm in Einstein's sumnotations, there is no way" }, { "end": 2033.94, "start": 2028.66, "text": " to underline code and that magically happened something that was simply for us to visualize." }, { "end": 2041.22, "start": 2033.94, "text": " Now we want to apply their system in order to make this actually sharded." }, { "end": 2046.5, "start": 2041.22, "text": " And with the Gshard system, and as I said, I don't know if the code is out or it will" }, { "end": 2051.32, "start": 2046.5, "text": " be out, but with the Gshard system, this is basically all that you have to do." }, { "end": 2056.26, "start": 2051.32, "text": " So you have these functions, they're called split and replicate." }, { "end": 2064.5400000000004, "start": 2056.26, "text": " What replicate does is it takes that weight tensor and it replicates it on all the machines" }, { "end": 2066.5800000000004, "start": 2064.5400000000004, "text": " and that keeps it synchronized." }, { "end": 2071.9, "start": 2066.5800000000004, "text": " This is a computation where we simply want to shard out the different to the different" }, { "end": 2074.0200000000004, "start": 2071.9, "text": " machines but keep it synchronized." 
}, { "end": 2081.3, "start": 2074.0200000000004, "text": " And you can see if you do this, this is the operation, then the system knows, ah, this" }, { "end": 2084.4, "start": 2081.3, "text": " here is replicated across the machines." }, { "end": 2090.98, "start": 2084.4, "text": " So that means I'm going to distribute the data points according to this G dimension," }, { "end": 2096.7000000000003, "start": 2090.98, "text": " according to the batch dimension and multiply it with this matrix according to this Einstein" }, { "end": 2099.64, "start": 2096.7000000000003, "text": " sum notation string on all of the machines." }, { "end": 2102.7000000000003, "start": 2099.64, "text": " And I'm going to keep this tensor in sync." }, { "end": 2113.58, "start": 2102.7000000000003, "text": " Okay, so the system knows as opposed to that you have you have the split tensor right here." }, { "end": 2125.58, "start": 2113.58, "text": " So the split, what it does is it splits a computation here the dispatch expert inputs," }, { "end": 2135.4, "start": 2125.58, "text": " it splits it according to a axis index onto D different machines or into D different parts." }, { "end": 2144.1600000000003, "start": 2135.4, "text": " So you see here you calculate the how you should do the routing and the resulting tensors" }, { "end": 2147.2200000000003, "start": 2144.1600000000003, "text": " first dimension is this E dimension." }, { "end": 2152.34, "start": 2147.2200000000003, "text": " And then you say that should be split, you know, according to this first dimension onto" }, { "end": 2156.2200000000003, "start": 2152.34, "text": " D different places and these D different places are now separate." }, { "end": 2160.26, "start": 2156.2200000000003, "text": " They don't have the they don't have to be kept in sync." }, { "end": 2162.7000000000003, "start": 2160.26, "text": " Everyone has their own weights." }, { "end": 2169.3799999999997, "start": 2162.7, "text": " And now when you do this, you know, according to this dimension, you can see because we" }, { "end": 2175.3599999999997, "start": 2169.3799999999997, "text": " know Einstein sum notation now, you can see this E appears here, here and here." }, { "end": 2182.12, "start": 2175.3599999999997, "text": " So this operation is going to be applied element wise, that means independent of each other" }, { "end": 2189.64, "start": 2182.12, "text": " in the direction of this dimension, the system understands that since this tensor is sharded" }, { "end": 2197.22, "start": 2189.64, "text": " according to that dimension, I have to execute this on each of these entries in separate" }, { "end": 2203.4, "start": 2197.22, "text": " with on each expert having their own weight matrix right here." }, { "end": 2209.2999999999997, "start": 2203.4, "text": " I hope this is a bit clear that their system makes it super easy." }, { "end": 2211.16, "start": 2209.2999999999997, "text": " You can basically do two things." }, { "end": 2217.18, "start": 2211.16, "text": " You can say this thing here is my classic parallelism where I want to keep it in sync." }, { "end": 2222.3599999999997, "start": 2217.18, "text": " And this thing here is where I want to split up and do different computation on the different" }, { "end": 2224.2599999999998, "start": 2222.3599999999997, "text": " parts." }, { "end": 2229.98, "start": 2224.2599999999998, "text": " And then they have also a general function that is more powerful." 
}, { "end": 2235.48, "start": 2229.98, "text": " Yeah, they and they you can auto partition and whatnot." }, { "end": 2243.7799999999997, "start": 2235.48, "text": " So they have a a a they have this we implemented the partitioner in the XLA compiler, which" }, { "end": 2250.7000000000003, "start": 2243.78, "text": " means that anything that can translate to XLA is a target for the system." }, { "end": 2255.6800000000003, "start": 2250.7000000000003, "text": " And that's, you know, TensorFlow and pytorch can do this." }, { "end": 2260.42, "start": 2255.6800000000003, "text": " So technically, this can come to any of those systems." }, { "end": 2264.82, "start": 2260.42, "text": " But of course, who has their 2000 TPUs lying around to make use of this?" }, { "end": 2265.82, "start": 2264.82, "text": " But no, I'm kidding." }, { "end": 2269.6000000000004, "start": 2265.82, "text": " I mean, this, I they here use it for transformers." }, { "end": 2275.62, "start": 2269.6, "text": " And I am very excited to to see what people can come up with for the system, I believe" }, { "end": 2281.2599999999998, "start": 2275.62, "text": " a system like this where it's super easy to to shard." }, { "end": 2287.36, "start": 2281.2599999999998, "text": " And they have some, you know, they talk about, okay, we do the single machine compiler." }, { "end": 2290.18, "start": 2287.36, "text": " So the compiler is also fast and so on." }, { "end": 2292.02, "start": 2290.18, "text": " I don't even want to go into this." }, { "end": 2294.8199999999997, "start": 2292.02, "text": " But this is very well engineered, it seems." }, { "end": 2304.48, "start": 2294.82, "text": " And they, they, they basically implement this for all of the operators." }, { "end": 2310.98, "start": 2304.48, "text": " So I'm very excited to see what people can come up with outside of the traditional applications." }, { "end": 2316.46, "start": 2310.98, "text": " I think there can be new types of models developed simply because we have a system like this" }, { "end": 2318.2200000000003, "start": 2316.46, "text": " that makes it easier." }, { "end": 2319.46, "start": 2318.2200000000003, "text": " So yeah, I'm excited." }, { "end": 2329.26, "start": 2319.46, "text": " So here, they show a bit how this works on the example of this Einstein, some notation." }, { "end": 2335.1, "start": 2329.26, "text": " So here, we want to do this thing here, which if you remember, this is the operation where" }, { "end": 2338.52, "start": 2335.1, "text": " we want to route the input to these experts." }, { "end": 2343.84, "start": 2338.52, "text": " So we want to start with something that is sharded according to the batch dimension." }, { "end": 2349.54, "start": 2343.84, "text": " That means that we, you know, we have different different parts of the batch on different" }, { "end": 2351.7400000000002, "start": 2349.54, "text": " machines." }, { "end": 2358.1400000000003, "start": 2351.7400000000002, "text": " And we want to route this and finally end up with something that is sharded on the different" }, { "end": 2360.3, "start": 2358.1400000000003, "text": " experts." }, { "end": 2366.5, "start": 2360.3, "text": " So this is what the system does is first you have these here are the different shards," }, { "end": 2367.6600000000003, "start": 2366.5, "text": " right?" 
}, { "end": 2375.46, "start": 2367.66, "text": " You want to multiply this, as you can see, this and this right here means that these" }, { "end": 2380.2599999999998, "start": 2375.46, "text": " this routing table is also sharded according to the same machines." }, { "end": 2385.2999999999997, "start": 2380.2599999999998, "text": " So you have the zero is all on the same machine, the one is all on the same machine, and so" }, { "end": 2387.3999999999996, "start": 2385.2999999999997, "text": " on." }, { "end": 2395.06, "start": 2387.3999999999996, "text": " So what you want to do is you want to contract is there you want to contract according to" }, { "end": 2404.58, "start": 2395.06, "text": " this s dimension, right, which we have we have omitted right here." }, { "end": 2409.9, "start": 2404.58, "text": " And if you multiply that, sorry, okay, we omit the s so this is not much of a this is" }, { "end": 2413.5, "start": 2409.9, "text": " not much of a graphic right here." }, { "end": 2418.22, "start": 2413.5, "text": " But then they have this reshard operation where they do and you don't have to worry" }, { "end": 2419.22, "start": 2418.22, "text": " about this." }, { "end": 2425.2599999999998, "start": 2419.22, "text": " So from here to here, there is this reshard operation that just shards it according to" }, { "end": 2426.54, "start": 2425.2599999999998, "text": " the according to E." }, { "end": 2435.7799999999997, "start": 2426.54, "text": " Yep." }, { "end": 2439.8999999999996, "start": 2435.7799999999997, "text": " I find this to be a bit more a bit more insightful." }, { "end": 2450.5, "start": 2439.9, "text": " So if you have something like this, this which is a regular matrix multiplication, right?" }, { "end": 2455.78, "start": 2450.5, "text": " And you want to contract along B, this is exactly the example we had before." }, { "end": 2462.6600000000003, "start": 2455.78, "text": " So here is a situation where our tensor is sharded according to the B dimension and this" }, { "end": 2466.02, "start": 2462.6600000000003, "text": " tensor is also sharded according to the B dimension." }, { "end": 2470.62, "start": 2466.02, "text": " You want to do a matrix multiplication of the whole tensor." }, { "end": 2474.98, "start": 2470.62, "text": " So what can you do, you're supposed to multiply these two matrices, but they are sharded on" }, { "end": 2476.58, "start": 2474.98, "text": " different machines." }, { "end": 2482.54, "start": 2476.58, "text": " If you consider what you actually have to do is you have to multiply each row here with" }, { "end": 2484.82, "start": 2482.54, "text": " each column here." }, { "end": 2486.82, "start": 2484.82, "text": " And that in an element wise fashion." }, { "end": 2493.74, "start": 2486.82, "text": " So that distributes according to you have to multiply this by this plus this by this" }, { "end": 2498.7799999999997, "start": 2493.74, "text": " plus this by this plus the red by the red." }, { "end": 2506.7, "start": 2498.7799999999997, "text": " So you can simply multiply the zero tensors together, the one tensors together, the two" }, { "end": 2510.2999999999997, "start": 2506.7, "text": " tensors together and the three tensors together." }, { "end": 2517.02, "start": 2510.2999999999997, "text": " Each one will give you a full matrix and then you can simply add all of them in order to" }, { "end": 2518.5, "start": 2517.02, "text": " get your full results." }, { "end": 2520.54, "start": 2518.5, "text": " This is illustrated down here." 
}, { "end": 2530.06, "start": 2520.54, "text": " So what machine one does, it simply multiplies its shard by its own shard of the second matrix," }, { "end": 2532.14, "start": 2530.06, "text": " which will give it this thing here." }, { "end": 2537.7799999999997, "start": 2532.14, "text": " And by the nature of how matrix multiplication is constructed, you can simply do an all reduce," }, { "end": 2542.02, "start": 2537.7799999999997, "text": " which means you reduce you sum across all of the machines, and that will give you the" }, { "end": 2544.46, "start": 2542.02, "text": " full result." }, { "end": 2547.7799999999997, "start": 2544.46, "text": " So this is a this is a an example of how this works." }, { "end": 2554.0600000000004, "start": 2547.78, "text": " This is, you know, pretty simple. And I believe you may have seen something like this already" }, { "end": 2559.92, "start": 2554.0600000000004, "text": " when you were looking at just parallelizing matrix multiplication, and so on." }, { "end": 2563.02, "start": 2559.92, "text": " So this system handles this transparently, right?" }, { "end": 2567.1000000000004, "start": 2563.02, "text": " If you're shorted like this, this is what the system will do." }, { "end": 2572.34, "start": 2567.1000000000004, "text": " However, if you are shorted differently, the system will act differently." }, { "end": 2576.5800000000004, "start": 2572.34, "text": " So here is a system you want to do the same matrix multiplication, but the first tensor" }, { "end": 2581.9, "start": 2576.58, "text": " happens to be shorted according to the A dimension, the second tensor happens to be shorted according" }, { "end": 2584.06, "start": 2581.9, "text": " to the C dimension." }, { "end": 2589.5, "start": 2584.06, "text": " And you want to end up with something that's shorted to the C dimension." }, { "end": 2594.94, "start": 2589.5, "text": " Now we have an additional constraint here that here you can see, we kind of assume that" }, { "end": 2602.22, "start": 2594.94, "text": " this full thing here fits into memory, mainly because we want to obtain the full result" }, { "end": 2605.62, "start": 2602.22, "text": " you see here, a and c should not be shorted." }, { "end": 2608.06, "start": 2605.62, "text": " So we assume that we can keep that in memory." }, { "end": 2615.02, "start": 2608.06, "text": " But here we want the final result to be shorted according to C, which imposes the additional" }, { "end": 2621.02, "start": 2615.02, "text": " constraint that it might be that the full matrix never fits into memory." }, { "end": 2623.8199999999997, "start": 2621.02, "text": " So how are we going to calculate all of that?" }, { "end": 2626.54, "start": 2623.8199999999997, "text": " We can't do the same trick anymore." }, { "end": 2633.38, "start": 2626.54, "text": " Now this G short system apparently realizes itself when something is out of memory, and" }, { "end": 2640.26, "start": 2633.38, "text": " it can do a smart move around being out of memory using a loop, which basically means" }, { "end": 2644.98, "start": 2640.26, "text": " that it will compute entry by entry or block by block." }, { "end": 2648.06, "start": 2644.98, "text": " So these are the matrices we have to multiply." }, { "end": 2653.38, "start": 2648.06, "text": " And you can see that if I want to do multiply this by this, that's fine, I can do this on" }, { "end": 2655.2200000000003, "start": 2653.38, "text": " one machine." 
}, { "end": 2657.1400000000003, "start": 2655.2200000000003, "text": " And that will give me the block up here." }, { "end": 2662.42, "start": 2657.1400000000003, "text": " But if I want the block up here, I have to multiply this by this, which is across two" }, { "end": 2665.1, "start": 2662.42, "text": " different machines." }, { "end": 2670.66, "start": 2665.1, "text": " So what this system does is it's going into a while loop because it realizes there's not" }, { "end": 2672.62, "start": 2670.66, "text": " enough memory." }, { "end": 2679.42, "start": 2672.62, "text": " And it kind of sends around these different slices to the different parts, each time computing" }, { "end": 2681.1800000000003, "start": 2679.42, "text": " a little piece." }, { "end": 2685.86, "start": 2681.1800000000003, "text": " So here, first, we do this by this, this is fine." }, { "end": 2693.6200000000003, "start": 2685.86, "text": " But then we grab ourselves from the we we grab ourselves this one here, calculate the" }, { "end": 2699.02, "start": 2693.6200000000003, "text": " next little piece up here, and then we grab ourselves the number two, calculate the piece" }, { "end": 2700.38, "start": 2699.02, "text": " here." }, { "end": 2705.7000000000003, "start": 2700.38, "text": " And then so this is from zero, this is from two, the one we already had, and then we grab" }, { "end": 2713.38, "start": 2705.7000000000003, "text": " ourselves piece three, and multiply that until here, until we have this final slice that" }, { "end": 2714.38, "start": 2713.38, "text": " we want." }, { "end": 2719.82, "start": 2714.38, "text": " Okay, so this goes in a while loop in multiple rounds, the system gets knows itself when" }, { "end": 2724.7000000000003, "start": 2719.82, "text": " it has to do this, and when it can calculate the full thing at once because it fits into" }, { "end": 2728.12, "start": 2724.7000000000003, "text": " in memory." }, { "end": 2732.26, "start": 2728.12, "text": " It's even smarter than that, and that it can do these halo exchanges." }, { "end": 2739.1, "start": 2732.26, "text": " So if you have to do something like this, a convolution, now in a convolution, what you'll" }, { "end": 2747.5, "start": 2739.1, "text": " do if you think of a think of an image, and you want to do a convolution on it, but the" }, { "end": 2750.1, "start": 2747.5, "text": " image happens to be sharded." }, { "end": 2755.18, "start": 2750.1, "text": " Let's say the image is so large, it's sharded across nine different machines like this." }, { "end": 2760.9, "start": 2755.18, "text": " Now, if you want to do a convolution, that's pretty cool, you know, here, here, here, but" }, { "end": 2767.08, "start": 2760.9, "text": " here, all of a sudden, your convolution is across two different machines." }, { "end": 2773.86, "start": 2767.08, "text": " So this system, G shard will adapt automatically, and do these halo exchanges where it kind" }, { "end": 2779.54, "start": 2773.86, "text": " of sends around from this machine, it'll send something to this machine such that it can" }, { "end": 2783.22, "start": 2779.54, "text": " do the convolution in that step, and vice versa." }, { "end": 2789.34, "start": 2783.22, "text": " And then this can be padded accordingly, as you can see." }, { "end": 2793.7799999999997, "start": 2789.34, "text": " This is I think this is this is this was like super ugly to implement." 
}, { "end": 2797.94, "start": 2793.78, "text": " If you just imagine that for each of these operations, you have to think about, okay," }, { "end": 2804.6600000000003, "start": 2797.94, "text": " how can you express this with these MPI primitives like dynamic slice and collective permute," }, { "end": 2806.2000000000003, "start": 2804.6600000000003, "text": " and so on." }, { "end": 2809.1000000000004, "start": 2806.2000000000003, "text": " It's just an absolute nightmare." }, { "end": 2814.5, "start": 2809.1000000000004, "text": " And I'm very happy that other people have done this, and I will probably just get to" }, { "end": 2816.28, "start": 2814.5, "text": " use it." }, { "end": 2821.0600000000004, "start": 2816.28, "text": " So there is a lot more to this system than I've just explained, I just try to give you" }, { "end": 2830.1, "start": 2821.06, "text": " a flavor of what building a system like this means and how easy it is to use it like this." }, { "end": 2835.94, "start": 2830.1, "text": " In order to implement all of this mixture of experts things, you simply go from this," }, { "end": 2844.54, "start": 2835.94, "text": " which is one single machine implementation, how you would write it to this, which is now" }, { "end": 2846.74, "start": 2844.54, "text": " the same, it's almost the same code." }, { "end": 2852.58, "start": 2846.74, "text": " But this now you can run on however many machines, and if you compile it with the system, it" }, { "end": 2857.8599999999997, "start": 2852.58, "text": " will do what you expect it to do in this shorted way." }, { "end": 2859.5, "start": 2857.8599999999997, "text": " Completely crazy." }, { "end": 2867.2799999999997, "start": 2859.5, "text": " Okay, so they apply this to massively multilingual massive machine translation." }, { "end": 2874.3599999999997, "start": 2867.2799999999997, "text": " So two things, it's massively multilingual, and it's massive machine, which means, I guess" }, { "end": 2877.6, "start": 2874.36, "text": " a lot of machines." }, { "end": 2880.3, "start": 2877.6, "text": " And the reason here is twofold." }, { "end": 2887.26, "start": 2880.3, "text": " So what they say is, we have massively multilingual translation." }, { "end": 2891.58, "start": 2887.26, "text": " Why don't they just look at single machine translation?" }, { "end": 2893.5, "start": 2891.58, "text": " And it has a very specific reason." }, { "end": 2898.38, "start": 2893.5, "text": " Namely, if you have massively multilingual translation, which means that you have a lot" }, { "end": 2904.58, "start": 2898.38, "text": " of different languages, and you all have to translate them, ideally, to all the other" }, { "end": 2908.46, "start": 2904.58, "text": " languages or, you know, every language pair." }, { "end": 2912.42, "start": 2908.46, "text": " But in this case, they only look at all the languages to English." }, { "end": 2919.46, "start": 2912.42, "text": " I don't exactly know why, but I guess there must be some kind of reason." }, { "end": 2930.5, "start": 2919.46, "text": " If you do this, then you can make use of a thing where there are languages that you just" }, { "end": 2932.42, "start": 2930.5, "text": " don't have much data on." }, { "end": 2936.7, "start": 2932.42, "text": " Like I don't know, Basque or something like this." }, { "end": 2940.9, "start": 2936.7, "text": " There's not that many people speaking Basque or Swiss German." 
}, { "end": 2945.44, "start": 2940.9, "text": " There's not even a written form, a standard written form of Swiss German." }, { "end": 2949.06, "start": 2945.44, "text": " So you just don't have as many resources." }, { "end": 2952.94, "start": 2949.06, "text": " And for other languages, you have giant amounts of resources." }, { "end": 2960.06, "start": 2952.94, "text": " And what you can make use of is this phenomenon called positive language transfer, where it" }, { "end": 2964.9, "start": 2960.06, "text": " happens that, for example, Swiss German is very close to German." }, { "end": 2971.2799999999997, "start": 2964.9, "text": " Now, they can't understand us, which is a giant advantage for us, but still it shares" }, { "end": 2974.66, "start": 2971.2799999999997, "text": " a lot of similarities with German." }, { "end": 2981.14, "start": 2974.66, "text": " So if you learn a lot about German, you can sort of transfer learn to Swiss German pretty" }, { "end": 2982.14, "start": 2981.14, "text": " easily." }, { "end": 2987.7799999999997, "start": 2982.14, "text": " So if you have a system that does German and Swiss German at the same time, you can perform" }, { "end": 2997.22, "start": 2987.7799999999997, "text": " better on both languages because the Swiss German part of your model, the part of your" }, { "end": 3002.7, "start": 2997.22, "text": " model that does Swiss German, profits from the German inputs as well." }, { "end": 3004.22, "start": 3002.7, "text": " Now don't understand me wrong." }, { "end": 3009.7, "start": 3004.22, "text": " There is not an individual part of your model that for each language, it's all done at the" }, { "end": 3011.4599999999996, "start": 3009.7, "text": " same time." }, { "end": 3015.3399999999997, "start": 3011.4599999999996, "text": " But still you can imagine that, you know, some of these things will specialize in some" }, { "end": 3016.64, "start": 3015.3399999999997, "text": " of the languages." }, { "end": 3021.98, "start": 3016.64, "text": " But the hope is that if you have German and Swiss German in the same training set, that" }, { "end": 3029.22, "start": 3021.98, "text": " if the model realizes what a question construct is in German, it will be able to apply that" }, { "end": 3032.3599999999997, "start": 3029.22, "text": " also to Swiss German with some minor modification." }, { "end": 3038.42, "start": 3032.36, "text": " So there is a benefit of having these many languages, especially for the low resource" }, { "end": 3039.42, "start": 3038.42, "text": " languages." }, { "end": 3040.42, "start": 3039.42, "text": " Okay." }, { "end": 3048.04, "start": 3040.42, "text": " So as the number of languages, sorry, as the number of language pairs to be modeled within" }, { "end": 3052.9, "start": 3048.04, "text": " a single translation model increases, positive language transfer starts to deliver large" }, { "end": 3056.6200000000003, "start": 3052.9, "text": " gains for low resource languages." }, { "end": 3060.98, "start": 3056.6200000000003, "text": " Given the number of languages considered, which I believe is a hundred here, M4 has" }, { "end": 3064.38, "start": 3060.98, "text": " a clear advantage on improving the low resource task." 
}, { "end": 3068.9, "start": 3064.38, "text": " On the contrary, for high resource languages, the increased number of tasks limit per task" }, { "end": 3075.18, "start": 3068.9, "text": " capacity within the model, resulting in lower translation quality compared to a models," }, { "end": 3079.76, "start": 3075.18, "text": " to a models trained on a single language pair." }, { "end": 3084.06, "start": 3079.76, "text": " This capacity bottleneck for high resource languages can be relaxed by increasing the" }, { "end": 3089.7, "start": 3084.06, "text": " model size to massive scale in order to satisfy the need for additional capacity." }, { "end": 3094.58, "start": 3089.7, "text": " So basically they're saying, if we train all of these languages together, that will help" }, { "end": 3098.5, "start": 3094.58, "text": " a lot for these low resource languages, but it might hurt the high resource languages" }, { "end": 3106.3799999999997, "start": 3098.5, "text": " because now we would have enough data technically to train a French to English model on this" }, { "end": 3107.3799999999997, "start": 3106.3799999999997, "text": " giant model." }, { "end": 3108.7799999999997, "start": 3107.3799999999997, "text": " We could train that." }, { "end": 3112.3799999999997, "start": 3108.7799999999997, "text": " And now that we have all these other languages in there, it just hurts us because we don't" }, { "end": 3113.98, "start": 3112.3799999999997, "text": " have enough parameters." }, { "end": 3116.96, "start": 3113.98, "text": " And we can solve this, of course, by simply adding more parameters." }, { "end": 3118.74, "start": 3116.96, "text": " So that's the solution." }, { "end": 3125.18, "start": 3118.74, "text": " Add more parameters and you increase the capacity of the model and you still get the benefits" }, { "end": 3128.54, "start": 3125.18, "text": " of the positive language transfer." }, { "end": 3134.2599999999998, "start": 3128.54, "text": " So their investigations is going to be into how much can we scale this?" }, { "end": 3141.3399999999997, "start": 3134.2599999999998, "text": " And is there like a sweet spot where because if you, if you increase the parameters too" }, { "end": 3146.08, "start": 3141.3399999999997, "text": " much, you counteract this positive language transfer again." }, { "end": 3151.46, "start": 3146.08, "text": " So since, you know, since Swiss German and German can sort of benefit from each other." }, { "end": 3157.9, "start": 3151.46, "text": " However, if we have too many parameters, so, and then we end up having all of these experts" }, { "end": 3162.9, "start": 3157.9, "text": " right here and the tokens are always routed to these experts and it always happens that" }, { "end": 3167.2999999999997, "start": 3162.9, "text": " all the Swiss German tokens are always routed to this expert and all the German tokens are" }, { "end": 3169.58, "start": 3167.2999999999997, "text": " always routed to that expert." }, { "end": 3172.02, "start": 3169.58, "text": " There will be no sharing of weights." }, { "end": 3177.94, "start": 3172.02, "text": " There will be this positive language transfer will not happen because we have too much capacity." }, { "end": 3183.32, "start": 3177.94, "text": " So the goal is to find a sweet spot between positive language transfer and this capacity" }, { "end": 3185.66, "start": 3183.32, "text": " bottleneck." 
}, { "end": 3195.14, "start": 3185.66, "text": " They do use an in-house data set, which we don't have access to, but they say the training" }, { "end": 3199.18, "start": 3195.14, "text": " corpus mined from the web contains parallel documents for a hundred languages to and from" }, { "end": 3202.66, "start": 3199.18, "text": " English adding up to a total of 25 billion training examples." }, { "end": 3209.46, "start": 3202.66, "text": " However, they only use from 100 languages to English." }, { "end": 3214.98, "start": 3209.46, "text": " This result in approximately 13 billion training examples to be used for model training." }, { "end": 3216.7799999999997, "start": 3214.98, "text": " So that's a lot." }, { "end": 3220.22, "start": 3216.7799999999997, "text": " It's a lot of data, especially for translation." }, { "end": 3224.62, "start": 3220.22, "text": " It's kind of a noisy translation because it's mined from the web, but still it's a lot of" }, { "end": 3226.3799999999997, "start": 3224.62, "text": " data." }, { "end": 3227.98, "start": 3226.3799999999997, "text": " They have baselines." }, { "end": 3234.34, "start": 3227.98, "text": " So the baselines are first of all, in order to form our baselines, we trained separate" }, { "end": 3238.34, "start": 3234.34, "text": " bilingual neural machine translation models for each language pair." }, { "end": 3243.9, "start": 3238.34, "text": " So that means a single model for each language to English, depending on the available training" }, { "end": 3247.58, "start": 3243.9, "text": " data per language." }, { "end": 3255.9, "start": 3247.58, "text": " And then they also have a baseline where they try open AI style to build as deep as single" }, { "end": 3257.82, "start": 3255.9, "text": " transformer as possible." }, { "end": 3264.94, "start": 3257.82, "text": " And by that, they mean we also include a variant of a dense 96 layer transformer encoder decoder" }, { "end": 3274.1600000000003, "start": 3264.94, "text": " network trained with G pipe pipeline parallelism on the same data set as another baseline." }, { "end": 3279.5800000000004, "start": 3274.1600000000003, "text": " So the difference again here is that this 96 layer is a dense transformer, which means" }, { "end": 3284.98, "start": 3279.5800000000004, "text": " that all of the tokens go through the same computation and we don't shard the computation" }, { "end": 3286.8, "start": 3284.98, "text": " out to these experts, right?" }, { "end": 3292.78, "start": 3286.8, "text": " We do shard according to the batch, but all of them go through the same parameters." }, { "end": 3298.78, "start": 3292.78, "text": " And that means we can we can only scale up the number of layers and that severely limits" }, { "end": 3304.54, "start": 3298.78, "text": " the that severely limits the computational efficiency." }, { "end": 3310.9, "start": 3304.54, "text": " Even if we have, you know, your pipeline parallelism and so on that hurts." }, { "end": 3319.6600000000003, "start": 3310.9, "text": " They say training to convergence took over six weeks on 2000 TPU course." }, { "end": 3323.02, "start": 3319.6600000000003, "text": " That's crazy." }, { "end": 3332.94, "start": 3323.02, "text": " But I guess, yeah, you know, I was saying earlier that that I always thought we were happy." 
}, { "end": 3337.42, "start": 3332.94, "text": " I always thought we were happy in machine learning because kind of the hip science fields" }, { "end": 3341.66, "start": 3337.42, "text": " being biology, like genetics and machine learning." }, { "end": 3346.02, "start": 3341.66, "text": " I was thought like, oh, but these biology people, they always need like million dollar" }, { "end": 3348.88, "start": 3346.02, "text": " grants from government to run their experiments." }, { "end": 3350.98, "start": 3348.88, "text": " And we can just sit down with a laptop." }, { "end": 3352.7400000000002, "start": 3350.98, "text": " This time is over." }, { "end": 3357.5, "start": 3352.7400000000002, "text": " If you start a PhD now start applying for money to get TPUs." }, { "end": 3358.5, "start": 3357.5, "text": " Yeah." }, { "end": 3359.5, "start": 3358.5, "text": " Okay." }, { "end": 3362.12, "start": 3359.5, "text": " In any case, here you can see what this does." }, { "end": 3365.2200000000003, "start": 3362.12, "text": " So they compare a bunch of models right here." }, { "end": 3371.06, "start": 3365.22, "text": " So this T, this is this big dense transformer that's going to be one of our baselines and" }, { "end": 3374.3399999999997, "start": 3371.06, "text": " the other baseline here is going to be the zero axis." }, { "end": 3381.3799999999997, "start": 3374.3399999999997, "text": " The zero axis means this is the single model for that language pair." }, { "end": 3390.1, "start": 3381.3799999999997, "text": " So only so for each language, they trained one model only on data from that language." }, { "end": 3396.7799999999997, "start": 3390.1, "text": " And that's going to be the worst thing here because this multi language translation in" }, { "end": 3400.02, "start": 3396.7799999999997, "text": " one model will generally help you if you have enough parameters." }, { "end": 3405.2599999999998, "start": 3400.02, "text": " You can see all the models here have enough parameters such that the difference here," }, { "end": 3413.08, "start": 3405.2599999999998, "text": " this is difference in blue is positive including this baseline model right here." }, { "end": 3418.54, "start": 3413.08, "text": " So the baseline model, as you can see, has 2.3 billion parameters, even though it takes" }, { "end": 3423, "start": 3418.54, "text": " that much longer to train and that's, as we said, a function of the fact that it's dense" }, { "end": 3427.62, "start": 3423, "text": " and deep, so that hurts in training efficiency." }, { "end": 3430.64, "start": 3427.62, "text": " And then you have these mixture of expert models." }, { "end": 3432.42, "start": 3430.64, "text": " They always consider two things." }, { "end": 3434.62, "start": 3432.42, "text": " They consider different numbers of experts." }, { "end": 3439.9, "start": 3434.62, "text": " You can see it goes from 128 to 2048 experts." }, { "end": 3447.2599999999998, "start": 3439.9, "text": " And they consider a number, different number of layers from 12 layers to 36 layers, 36" }, { "end": 3452.7400000000002, "start": 3447.26, "text": " layers still being way smaller than the 96 layer transformer here." }, { "end": 3455.34, "start": 3452.7400000000002, "text": " And that's the reason why it trains faster." }, { "end": 3460.6200000000003, "start": 3455.34, "text": " So it doesn't train faster." }, { "end": 3466.0600000000004, "start": 3460.6200000000003, "text": " So the reason it trains faster is because it has less layers." 
}, { "end": 3472.82, "start": 3466.0600000000004, "text": " And then the reason it has more parameters is because it has a lot of these experts." }, { "end": 3479.96, "start": 3472.82, "text": " And the art here is to constrain how much these more experts hurt you." }, { "end": 3484.5, "start": 3479.96, "text": " So you know, you could run into the same problem where if you scale up the experts, in fact," }, { "end": 3487.6600000000003, "start": 3484.5, "text": " you do, it doesn't fit into memory anymore." }, { "end": 3492.36, "start": 3487.6600000000003, "text": " And it's going to hurt you a lot in training efficiency, kind of like if you increase the" }, { "end": 3493.84, "start": 3492.36, "text": " number of layers." }, { "end": 3500.7000000000003, "start": 3493.84, "text": " But the G shard system prevents that it lets you up the number of experts without incurring" }, { "end": 3501.7000000000003, "start": 3500.7000000000003, "text": " the cost." }, { "end": 3505.8999999999996, "start": 3501.7, "text": " That being said, it does not let you up the number of layers, you're going to incur the" }, { "end": 3512.7, "start": 3505.8999999999996, "text": " same cost if you up the number of layers as you have with the dense transformers." }, { "end": 3513.98, "start": 3512.7, "text": " So does this help?" }, { "end": 3515.06, "start": 3513.98, "text": " It helps a lot." }, { "end": 3517.8999999999996, "start": 3515.06, "text": " As you can see right here, there's a general trend upwards." }, { "end": 3522.22, "start": 3517.8999999999996, "text": " And what's the x axis, the x axis is low resource languages." }, { "end": 3530.7799999999997, "start": 3522.22, "text": " So you can see that as we as we go to lower and lower resource languages, this multi task" }, { "end": 3537.34, "start": 3530.78, "text": " training, this multilingual translation improves significantly over the baseline where we only" }, { "end": 3540.6400000000003, "start": 3537.34, "text": " train a system for that language specifically." }, { "end": 3544.78, "start": 3540.6400000000003, "text": " And these 10k examples, it's it's it's quite a bit, but it's not that much, especially" }, { "end": 3548.1400000000003, "start": 3544.78, "text": " since it's noisy data." }, { "end": 3551.94, "start": 3548.1400000000003, "text": " So this is specifically good for low resource languages." }, { "end": 3558.28, "start": 3551.94, "text": " But you can see also the high resource languages here benefit from the multilingual translation." }, { "end": 3562.6200000000003, "start": 3558.28, "text": " And that's a function of the fact that we have, you know, large enough models." }, { "end": 3569.1400000000003, "start": 3562.6200000000003, "text": " In fact, you can see the larger the models, the more the difference in blue is, and there's" }, { "end": 3570.5, "start": 3569.1400000000003, "text": " not really an end in sight." }, { "end": 3574.42, "start": 3570.5, "text": " And they also see it say that they haven't seen convergence in training." }, { "end": 3578.3, "start": 3574.42, "text": " So you can technically train this forever." }, { "end": 3587.38, "start": 3578.3, "text": " Yeah, you can also see that the the lowest mixture of experts right here is almost on" }, { "end": 3593.46, "start": 3587.38, "text": " par with their big dense transformer that took so much longer to train." }, { "end": 3594.46, "start": 3593.46, "text": " Right." 
}, { "end": 3600.7400000000002, "start": 3594.46, "text": " So this lowest model right here, I believe it took I don't want to go back, but it took" }, { "end": 3608.1, "start": 3600.7400000000002, "text": " it took hours or so or few hours to train, whereas this 96 layer dense transformer took" }, { "end": 3612.5, "start": 3608.1, "text": " these six weeks to train." }, { "end": 3618.26, "start": 3612.5, "text": " So has to be said, the number of TPUs is not to be neglected, but if you're Google, you" }, { "end": 3622.1, "start": 3618.26, "text": " know, you just have them laying around." }, { "end": 3627.9, "start": 3622.1, "text": " What's also interesting here, and you can start seeing this two things." }, { "end": 3635.3, "start": 3627.9, "text": " First of all, you can see that the difference between here in between the dense transformer" }, { "end": 3642.98, "start": 3635.3, "text": " and this baseline model is very low for high resource languages, but gets larger for low" }, { "end": 3646.0800000000004, "start": 3642.98, "text": " resource languages." }, { "end": 3652.1000000000004, "start": 3646.0800000000004, "text": " This is an indication that the dense transformer, it does more to share parameters between the" }, { "end": 3656.5, "start": 3652.1000000000004, "text": " languages because it shares parameters between all the things because all the tokens go through" }, { "end": 3658.7000000000003, "start": 3656.5, "text": " the same computation." }, { "end": 3664.38, "start": 3658.7000000000003, "text": " So it is going to be a bit better in low resource languages, but still the general trend upwards" }, { "end": 3667.42, "start": 3664.38, "text": " holds even for the mixture of experts." }, { "end": 3674.6600000000003, "start": 3667.42, "text": " The second thing is that you see there is a crossover here in these in these big in" }, { "end": 3676.42, "start": 3674.6600000000003, "text": " these biggest models." }, { "end": 3677.7400000000002, "start": 3676.42, "text": " And what are the big models?" }, { "end": 3686.3, "start": 3677.7400000000002, "text": " One, the blue one is the one with 2048 experts and the green one is the one with 500 experts." }, { "end": 3689.7400000000002, "start": 3686.3, "text": " They're both as deep models." }, { "end": 3695.3799999999997, "start": 3689.74, "text": " But all of a sudden, over here for the high resource languages, it's still true that if" }, { "end": 3698.3399999999997, "start": 3695.3799999999997, "text": " you up the number of parameters, you get a benefit." }, { "end": 3701.56, "start": 3698.3399999999997, "text": " So up the number of experts as well, you get a benefit." }, { "end": 3707.3799999999997, "start": 3701.56, "text": " But over here for the low resource languages, it's it you see, it actually hurts you to" }, { "end": 3709.02, "start": 3707.3799999999997, "text": " up the number of experts." }, { "end": 3712.7799999999997, "start": 3709.02, "text": " And that's the phenomenon exactly we talked about before." }, { "end": 3718.12, "start": 3712.7799999999997, "text": " If you have too many of these experts, and you do a hard routing, that means all the" }, { "end": 3720.38, "start": 3718.12, "text": " tokens go a different way." }, { "end": 3725.9, "start": 3720.38, "text": " And that means you don't get any sharing benefit from the multilingual translation." }, { "end": 3727.66, "start": 3725.9, "text": " And they investigate a lot." 
}, { "end": 3732.98, "start": 3727.66, "text": " And they basically claim that their sweet spot of expert in their particular task appears" }, { "end": 3742.38, "start": 3732.98, "text": " to be somewhere in between these 2000 and this 500 expert number, where you can see" }, { "end": 3746.94, "start": 3742.38, "text": " it doesn't always help you to scale up the model." }, { "end": 3753.16, "start": 3746.94, "text": " So I have to say maybe the transformers, maybe they need a ResNet moment." }, { "end": 3758.4, "start": 3753.16, "text": " So I believe in computer vision, it was sort of the same problem that we try to build deeper" }, { "end": 3761.98, "start": 3758.4, "text": " models and why like, okay, this, this is more width." }, { "end": 3769.28, "start": 3761.98, "text": " But yeah, I think there might be some breakthrough on the horizon where someone just figures" }, { "end": 3774.6, "start": 3769.28, "text": " out how to train these giant models, even more giant transformer models with deeper" }, { "end": 3776.1, "start": 3774.6, "text": " layers." }, { "end": 3780.66, "start": 3776.1, "text": " And then there's a new era of transformers." }, { "end": 3782.06, "start": 3780.66, "text": " However, this is not that effect." }, { "end": 3785.14, "start": 3782.06, "text": " I'm sorry, I said this at the wrong place." }, { "end": 3786.14, "start": 3785.14, "text": " This is not that effect." }, { "end": 3794.2, "start": 3786.14, "text": " This is to show that in this case, we do benefit for the high resource languages because we" }, { "end": 3795.64, "start": 3794.2, "text": " increase capacity." }, { "end": 3800.8399999999997, "start": 3795.64, "text": " But for the low resource languages, we suffer if we up the number of experts too much, because" }, { "end": 3806.38, "start": 3800.84, "text": " they don't share any parameters anymore between the languages or between the different parts." }, { "end": 3812.58, "start": 3806.38, "text": " Like it's not a necessity that the different languages are going to be routed to different" }, { "end": 3814.46, "start": 3812.58, "text": " experts." }, { "end": 3816.42, "start": 3814.46, "text": " But it's probably going to happen, right?" }, { "end": 3821.02, "start": 3816.42, "text": " There's no hard coded thing that says if it's this language, it needs to go there." }, { "end": 3825.7200000000003, "start": 3821.02, "text": " It just probably is going to happen this way because the different languages are going" }, { "end": 3830.6, "start": 3825.72, "text": " to be needed to be treated differently and therefore the system learns to route first" }, { "end": 3834.3599999999997, "start": 3830.6, "text": " and foremost, those two different experts." }, { "end": 3841.12, "start": 3834.3599999999997, "text": " Here you can see the model sizes, including this 60 layer models model with 2000 experts" }, { "end": 3843.48, "start": 3841.12, "text": " that they didn't manage to train." }, { "end": 3848.02, "start": 3843.48, "text": " They said they had numerical instability, but that had one trillion parameters." }, { "end": 3850.52, "start": 3848.02, "text": " And I'm pretty sure they're, they're cool." }, { "end": 3853.2, "start": 3850.52, "text": " They must be quite mad about this, right?" 
}, { "end": 3858.24, "start": 3853.2, "text": " Like you have the trillion parameters, even though it's not that much bigger than the" }, { "end": 3863.2, "start": 3858.24, "text": " 600 billion, that the trillion, it would be cool to write a paper like a trillion parameter" }, { "end": 3865.3999999999996, "start": 3863.2, "text": " model." }, { "end": 3871.08, "start": 3865.3999999999996, "text": " But for now they are at the 600 billion mark and they simply want to tell you that they" }, { "end": 3876.4199999999996, "start": 3871.08, "text": " have actually compiled a model that's that big, just didn't manage to train it." }, { "end": 3877.6, "start": 3876.4199999999996, "text": " And yeah, that's here." }, { "end": 3882.48, "start": 3877.6, "text": " Here is where I wanted to say that maybe we're waiting for the ResNet moment where all of" }, { "end": 3888.84, "start": 3882.48, "text": " a sudden someone figures something out that makes the training of basically infinitely" }, { "end": 3891.16, "start": 3888.84, "text": " deep transformers possible." }, { "end": 3899.72, "start": 3891.16, "text": " Like we made the training for almost infinitely deep CNNs possible with ResNet." }, { "end": 3910.16, "start": 3899.72, "text": " Okay, so they conclude this and so they, that's the investigation of what the number of experts" }, { "end": 3911.64, "start": 3910.16, "text": " and so on gives you." }, { "end": 3918.06, "start": 3911.64, "text": " And here is a bit of a different investigation where they more care about training efficiency." }, { "end": 3926.3199999999997, "start": 3918.06, "text": " So they ask themselves, how many billion tokens of input do we need to reach a given cross" }, { "end": 3927.3199999999997, "start": 3926.3199999999997, "text": " entropy?" }, { "end": 3933.08, "start": 3927.3199999999997, "text": " So here, the more tokens you need, the lower your efficiency is, right?" }, { "end": 3939.16, "start": 3933.08, "text": " You can see that the general trend is the following." }, { "end": 3945.7599999999998, "start": 3939.16, "text": " If you up the number of layers, you get more efficient, you can see and just look at this" }, { "end": 3951.24, "start": 3945.7599999999998, "text": " column for now, this point seven column, you can see it already pretty clearly." }, { "end": 3958.22, "start": 3951.24, "text": " So here you go from 12 layers to 36, you gain efficiency, here you gain here you gain pretty" }, { "end": 3959.22, "start": 3958.22, "text": " predictable." }, { "end": 3964.98, "start": 3959.22, "text": " If you up the number of layers, you need to see fewer tokens to get to the same cross" }, { "end": 3971.8, "start": 3964.98, "text": " entropy. And in fact, you can get to a lower cross entropy altogether at the end." }, { "end": 3976.12, "start": 3971.8, "text": " We've known this for language models already." }, { "end": 3981.36, "start": 3976.12, "text": " The other effect is of course, what happens if we go not deeper, but wider, if we increase" }, { "end": 3986.14, "start": 3981.36, "text": " these number of experts, if we increase this sparse computation." }, { "end": 3990.62, "start": 3986.14, "text": " So here you can see, let's just look at the 12 layers for now." }, { "end": 3993.86, "start": 3990.62, "text": " Let's look at all the rows where there's 12 layers." }, { "end": 4002.1200000000003, "start": 3993.86, "text": " So here you get a significant advantage by upping the number of experts from 100 to 500." 
}, { "end": 4009.1800000000003, "start": 4002.1200000000003, "text": " But then you hurt upping the number of experts to 2000, right?" }, { "end": 4016.4, "start": 4009.1800000000003, "text": " So that's that's sort of your you're hurting efficiency by upping the number of experts" }, { "end": 4017.48, "start": 4016.4, "text": " too much." }, { "end": 4022.84, "start": 4017.48, "text": " And the same if we look at the 36 layer, so you gain massive efficiency by upping the" }, { "end": 4028.6400000000003, "start": 4022.84, "text": " number of experts, but you lose that a fish part of that efficiency again, by increasing" }, { "end": 4030.6800000000003, "start": 4028.6400000000003, "text": " it even more." }, { "end": 4037.84, "start": 4030.6800000000003, "text": " Now we saw that the this model is still the best model, but it's not as efficient as that" }, { "end": 4038.84, "start": 4037.84, "text": " model." }, { "end": 4043.6000000000004, "start": 4038.84, "text": " And that gives you another indication that there is sort of a sweet spot between these" }, { "end": 4050.48, "start": 4043.6000000000004, "text": " two things between the positive transfer and the bottleneck capacity that appears to be" }, { "end": 4055.36, "start": 4050.48, "text": " somewhere in between right here." }, { "end": 4057.48, "start": 4055.36, "text": " So that's pretty interesting." }, { "end": 4062, "start": 4057.48, "text": " Because we know about depth that you can basically up and up and up and get more efficient, but" }, { "end": 4065.32, "start": 4062, "text": " with not that much." }, { "end": 4072.6, "start": 4065.32, "text": " Yeah, the largest model can be trained in under four days to achieving the best quality." }, { "end": 4081.38, "start": 4072.6, "text": " Yes, yes, yes, but this is just a yeah." }, { "end": 4091.74, "start": 4081.38, "text": " So here, oh, you can see the batch size in in tokens is quite, quite a bit." }, { "end": 4097.5, "start": 4091.74, "text": " So yeah, if you have a 1000, if you have a context window of 1000, that means the batch" }, { "end": 4101.28, "start": 4097.5, "text": " size here was about 4000." }, { "end": 4104.12, "start": 4101.28, "text": " So as as expected." }, { "end": 4110.36, "start": 4104.12, "text": " Yeah, this is just easy peasy 22 TPU core years." }, { "end": 4115.2, "start": 4110.36, "text": " I've seen someone on Twitter saying this, this is the new measure for computer." }, { "end": 4117.04, "start": 4115.2, "text": " It's no longer like flops." }, { "end": 4120.759999999999, "start": 4117.04, "text": " It's TPU core years." }, { "end": 4123, "start": 4120.759999999999, "text": " Just mad, mad." }, { "end": 4125.5, "start": 4123, "text": " And yeah." }, { "end": 4128.5199999999995, "start": 4125.5, "text": " So 42 days to train this thing right here." }, { "end": 4131.72, "start": 4128.52, "text": " Crazy, crazy, crazy." }, { "end": 4133.1, "start": 4131.72, "text": " All right." }, { "end": 4138.42, "start": 4133.1, "text": " They also have a number of investigations in other parts of efficiency, like per device" }, { "end": 4140.820000000001, "start": 4138.42, "text": " memory consumption." 
}, { "end": 4149.56, "start": 4140.820000000001, "text": " You can see here that as you up the as you up the number of experts, you can see here," }, { "end": 4155.64, "start": 4149.56, "text": " here, here, your weights don't go up because as you up the number of experts, you can just" }, { "end": 4161.72, "start": 4155.64, "text": " up the number of machines and the per machine weight usage will be the same, right?" }, { "end": 4168.9800000000005, "start": 4161.72, "text": " Because the experts are independent of each other, each one has their own weight matrix." }, { "end": 4173.88, "start": 4168.9800000000005, "text": " So you can just add machines and you keep your weight requirements the same." }, { "end": 4179.5, "start": 4173.88, "text": " However, if you go deeper, then your weights increase because you're now deeper, you have" }, { "end": 4181.14, "start": 4179.5, "text": " more layers." }, { "end": 4187.76, "start": 4181.14, "text": " You have your so also your transformer weights will be higher and so on." }, { "end": 4190.360000000001, "start": 4187.76, "text": " So you go deeper right here." }, { "end": 4196.9800000000005, "start": 4190.360000000001, "text": " You see 3660 layers, your memory consumption increases for the weight." }, { "end": 4201.820000000001, "start": 4196.9800000000005, "text": " And also, this is the other big part in transformers, right?" }, { "end": 4207.400000000001, "start": 4201.820000000001, "text": " The activations that you have to save, because as we said, if you have a transformer and" }, { "end": 4213.679999999999, "start": 4207.4, "text": " I have layer, layer, layer, layer, I basically have to keep around each of these signals" }, { "end": 4217.16, "start": 4213.679999999999, "text": " in order to do back propagation." }, { "end": 4222.839999999999, "start": 4217.16, "text": " And that's why also the activation here increases as I go deeper." }, { "end": 4226.679999999999, "start": 4222.839999999999, "text": " Now you can see percentually, it decreases again here." }, { "end": 4228.4, "start": 4226.679999999999, "text": " So what's happening?" }, { "end": 4231.759999999999, "start": 4228.4, "text": " Technically, you don't have to keep these things around." }, { "end": 4236.62, "start": 4231.759999999999, "text": " You can also once the signal comes back, you can recompute them from the beginning or from" }, { "end": 4238.04, "start": 4236.62, "text": " an intermediate point." }, { "end": 4243.68, "start": 4238.04, "text": " Now this increases computation, but saves the need to store the activations." }, { "end": 4252.28, "start": 4243.68, "text": " And apparently G shard, yet another thing it does is it will recompute as necessary" }, { "end": 4257.62, "start": 4252.28, "text": " the activations if it realizes that you don't have enough memory to store them." }, { "end": 4262.24, "start": 4257.62, "text": " So all of this is pretty crazy, honestly." }, { "end": 4270.8, "start": 4262.24, "text": " And they look at where the different computations go." }, { "end": 4274.5199999999995, "start": 4270.8, "text": " And I don't want to go into this." }, { "end": 4280.76, "start": 4274.5199999999995, "text": " And they have these micro benchmarks where they really show that the increase in complexity" }, { "end": 4288.92, "start": 4280.76, "text": " is really according to square root of n, because that's how long it takes to distribute along" }, { "end": 4292.68, "start": 4288.92, "text": " these actors, sorry, along these experts." 
}, { "end": 4295.04, "start": 4292.68, "text": " There's a lot to this paper." }, { "end": 4297.6, "start": 4295.04, "text": " And there's no time to go through all of it." }, { "end": 4299.8, "start": 4297.6, "text": " I think this video is already way too long." }, { "end": 4305.04, "start": 4299.8, "text": " I hope I have given you an impression of what's possible with this system." }, { "end": 4309.78, "start": 4305.04, "text": " And as I said, I'm excited what people can come up with." }, { "end": 4315.28, "start": 4309.78, "text": " Just to say that in the appendix here, they detail that they have done this for all the" }, { "end": 4317.04, "start": 4315.28, "text": " operations in XLA." }, { "end": 4322.32, "start": 4317.04, "text": " So for example, convolution, this is so ugly, how you have to implement the convolution" }, { "end": 4327.76, "start": 4322.32, "text": " because you have to padding must be correct across these expert across the the sharded" }, { "end": 4328.76, "start": 4327.76, "text": " machine." }, { "end": 4330.16, "start": 4328.76, "text": " So there are no experts anymore." }, { "end": 4333.4, "start": 4330.16, "text": " This is just G shard, the padding has to be correct." }, { "end": 4335.84, "start": 4333.4, "text": " The strides have to be correct." }, { "end": 4340.2, "start": 4335.84, "text": " Data needs to be exchanged according to the machines, the window size needs to be correct," }, { "end": 4341.2, "start": 4340.2, "text": " blah, blah, blah." }, { "end": 4347.16, "start": 4341.2, "text": " So just thank you for doing this and not having to do it myself." }, { "end": 4354.08, "start": 4347.16, "text": " Yeah, I'm excited as soon as as the codes out, if I get a hold of it, I'll you know," }, { "end": 4357.04, "start": 4354.08, "text": " link it or you'll find it once it's out." }, { "end": 4360.08, "start": 4357.04, "text": " If it's already out, I'm just too dumb to see it." }, { "end": 4362.42, "start": 4360.08, "text": " I enjoyed reading this." }, { "end": 4364.4, "start": 4362.42, "text": " It's different than a machine learning paper." }, { "end": 4370.84, "start": 4364.4, "text": " It kind of shows you what goes into engineering a system like this, and how easy it can be" }, { "end": 4373.72, "start": 4370.84, "text": " if it's engineered well to then apply it." }, { "end": 4377.64, "start": 4373.72, "text": " I think this is going to be extremely helpful to the community." }, { "end": 4382.52, "start": 4377.64, "text": " And with that said, 23 pages later, see you next time." }, { "end": 4403.4800000000005, "start": 4382.52, "text": " Bye bye." } ]
DYBmD88vpiA
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Object-Centric Learning with Slot Attention (Paper Explained)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "google", "ethz", "vision", "objects", "slots", "attention mechanism", "gru", "lstm", "routing", "capsules", "permutation invariant", "encoder", "set", "detr", "embeddings", "transformer", "weight sharing", "disentanglement", "render", "tetris", "clevr", "cnn", "convolutional neural network", "attention" ]
Visual scenes are often comprised of sets of independent objects. Yet, current vision models make no assumptions about the nature of the pictures they look at. By imposing an objectness prior, this paper a module that is able to recognize permutation-invariant sets of objects from pixels in both supervised and unsupervised settings. It does so by introducing a slot attention module that combines an attention mechanism with dynamic routing. OUTLINE: 0:00 - Intro & Overview 1:40 - Problem Formulation 4:30 - Slot Attention Architecture 13:30 - Slot Attention Algorithm 21:30 - Iterative Routing Visualization 29:15 - Experiments 36:20 - Inference Time Flexibility 38:35 - Broader Impact Statement 42:05 - Conclusion & Comments Paper: https://arxiv.org/abs/2006.15055 My Video on Facebook's DETR: https://youtu.be/T35ba_VXkMY My Video on Attention: https://youtu.be/iDulhoQ2pro My Video on Capsules: https://youtu.be/nXGHJTtFYRU Abstract: Learning object-centric representations of complex scenes is a promising step towards enabling efficient abstract reasoning from low-level perceptual features. Yet, most deep learning approaches learn distributed representations that do not capture the compositional properties of natural scenes. In this paper, we present the Slot Attention module, an architectural component that interfaces with perceptual representations such as the output of a convolutional neural network and produces a set of task-dependent abstract representations which we call slots. These slots are exchangeable and can bind to any object in the input by specializing through a competitive procedure over multiple rounds of attention. We empirically demonstrate that Slot Attention can extract object-centric representations that enable generalization to unseen compositions when trained on unsupervised object discovery and supervised property prediction tasks. Authors: Francesco Locatello, Dirk Weissenborn, Thomas Unterthiner, Aravindh Mahendran, Georg Heigold, Jakob Uszkoreit, Alexey Dosovitskiy, Thomas Kipf Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://discord.gg/4H8xxDF BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Hi there! Today we'll look at Object-Centric Learning with Slot Attention by Francesco Locatello, Thomas Kipf and others of Google Brain, ETH Zurich and MPI. On a high level this paper recognizes scenes of objects from raw pixels, and it's best if I show you a picture of what's going on. So you have scenes like this where there is some sort of an arrangement of objects, and there are multiple tasks you can do here. Specifically they consider the task of unsupervised recognition of objects, which they call object discovery, and supervised classification of objects. The difficulty being that these are sets of objects, so there is no ordering to the sets. They do this via a thing they call slot attention, which basically is a permutation invariant attention mechanism over these objects, in both the supervised and unsupervised domain, and they do this in a fashion where they iteratively route the attention in order to make the different slots compete for attention over these objects. So that's the sort of high level. If you are in this field you probably know right now what's going on. If you're not, we'll dive into it together, so stay tuned. If you like content like this consider sharing it out, leaving a like or telling me what you think about it in the comments. I appreciate any suggestions for making these videos better so people can learn more from them. Alright, so the problem. I've already described the problem a little bit, but let's go a bit deeper here. You have images like this, and the images we're considering are going to be images that have some sort of arrangement of objects, or what we humans would call objects. In this case you can see there is this gray square, sorry, this gray cube right here. There is a smaller green cube and then there is a yellow cylinder. Now in the task of object discovery what you're supposed to do is simply say that there is an object right here, there is an object about here and there is an object here. So basically you're supposed to point to the pixels where there are objects, and you're supposed to segment the objects from each other. You can see right here that this model, we don't know how it works yet, but it separates the left cube here, the bottom cube here and the top right cylinder right here. In the task of set prediction you're supposed to say what objects there are. So you're supposed to say there is a gray cube right here, a green cube right here and there is a yellow cylinder right there. Actually you don't have to say where they are, I guess. There are many different variants of this task, but mainly you're supposed to classify them, meaning you have to say there is a gray cube there. I believe in this case it's with coordinates, but you can do it without. The difficulty here of course being that these are sets, so there is no natural order in them. So if you say there is a green cube and a yellow cylinder, it's going to be the same as there is a yellow cylinder and a green cube. So you have to build an architecture that is somehow invariant with respect to the ordering of the labels. We've seen a lot of the concepts in this paper before. This paper is sort of a mash-together of different concepts from other places. So what you'll see is, for example, this property, the fact that here you see the labels for these objects.
This could be there is a green cube, there is a gray cube, and you'll have to come up with an architecture such that if you predict that green cube here, you consider it correct even though the corresponding label isn't the one for the green cube. And we saw this for example in this DETR architecture by Facebook, where they use a matching loss, but we'll get into that. Okay, so these are the tasks. The tasks are object discovery and set prediction. So how does this paper deal with this? They use this thing called a slot attention module. Now the slot attention module is, in essence, pretty simple. What it does is it has these different slots right here, as you can see, and it divides the input into features. So you can see there is a CNN encoder; because we're working with pixels it's natural that we want to encode these with a CNN. This CNN will probably downsample the image a bit and subdivide it into this grid right here. So you have a fairly coarse grid. The grid is actually a bit finer than you see here, this is just for example, but ultimately you'll have a number of features, so each pixel right here is going to be a feature. Each feature will have not only this one channel as you see here, but many, many channels of information down here. So the CNN will encode each of these regions in the picture into a feature vector, and then you have these slots. So what you'll want to do, and maybe we look at this, so you'll have the features right here. These are your features, and you'll have the slots, and let's say there are fewer slots than features. Three slots, four slots as in this case. What you'll want to do is you'll want to assign the features to the slots. So you maybe say, okay, this feature right here and this feature right here, they go to this slot, and then these two features go to this slot, and then these two go to this, and that feature goes to that, and that's equivalent to basically subdividing the picture into these slots. Ultimately your goal is going to be to say that these features right here, these pixels right here, are going maybe into that slot, and then these ones right here are going into that slot, and these ones here are going into that slot, and the rest, so all of the background, is going into that one. You can see that if you have a system like this, and you can train it correctly, then it becomes pretty easy to classify right here, because you can just take each slot and independently classify it. Because you already have assigned all the pixels where the object appears into that slot, you can super easily predict a class from it. So we're almost at the end: you now predict for each slot a class, or a description of the object, whatever you want to predict, and this is the exact same thing as in this Facebook paper, where for each of these slots we've predicted a bounding box. The question is how do you assign this to the labels, and that's pretty easy: there's this thing called Hungarian matching, where basically what you're saying is you want to be as forthcoming as possible, right? So if you predict a gray cube somewhere and there is a gray cube somewhere here, you want to match them. You'll say, okay, I'm going to give you the benefit of the doubt and I'm going to assume that with the gray cube you meant that gray cube right here, and if you predict the yellow rectangle and there is the yellow rectangle somewhere over there, you don't incur any penalty, as long as you predict the correct things.
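To make this matching concrete, here is a minimal sketch of such a set-prediction loss, assuming a classification-only cost and made-up shapes — my own illustration using scipy's Hungarian solver, not the authors' code:

```python
# Minimal sketch of a Hungarian-matching set loss (illustrative only).
import numpy as np
from scipy.optimize import linear_sum_assignment

def set_prediction_loss(pred_logits, target_classes):
    """pred_logits: (n_slots, n_classes) raw scores, one prediction per slot.
    target_classes: (n_targets,) ground-truth class ids in arbitrary order."""
    # Numerically stable log-probabilities per slot.
    z = pred_logits - pred_logits.max(axis=-1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=-1, keepdims=True))
    # cost[i, j] = negative log-likelihood of slot i predicting target j's class.
    cost = -log_probs[:, target_classes]
    # Optimal one-to-one assignment between slots and targets: the "benefit of the doubt".
    rows, cols = linear_sum_assignment(cost)
    # Average loss under the best matching; the matching itself is treated as a
    # constant, gradients would flow through log_probs in an autodiff framework.
    return cost[rows, cols].mean()

# e.g. 4 slots, 10 classes, 4 labeled objects given in arbitrary order
loss = set_prediction_loss(np.random.randn(4, 10), np.array([2, 7, 7, 0]))
```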
Now, only whenever you predict, like, a second yellow rectangle, so both of these slots now, this slot and this slot, for some reason predict a yellow rectangle, this one correctly, and this one, sorry, the other way around, this one incorrectly predicts a yellow rectangle where there is no second yellow rectangle in our label set, there's only maybe this green cube, then this will be a mistake, because it can't be matched. It will be matched to the one where it has the least loss, but it will be matched to something that's not a yellow rectangle, and therefore that's going to be a mistake. So this is how you calculate the loss function with this matching algorithm, and you can calculate that matching in a deterministic fashion, so you can backpropagate through it. So you can see, if this slot assignment works, we'll have a pretty easy time then calculating the classes and coming up with a loss. The same for the unsupervised object discovery: what we'll do is we'll run these things through this slot decoder. Now this slot decoder is very similar to a generator in GANs, for example. It takes a hidden representation as input, the hidden representation here being these slots, and it's going to upsample it into an image. If we have a good slot assignment mechanism, we can pretty easily train a decoder like this, right, with any method you want. In this case I believe they use some sort of upsampling convolution architecture right here, and they use the L2 loss: they minimize the reconstruction error between the output image and the input image, so it's sort of like a variational autoencoder, or just an autoencoder objective in this case. Alright, so we know how to encode a picture into a hidden representation using a standard convolutional neural network, and once our slot attention mechanism works, we pretty much know how to go from there. So the question is, what is this slot attention mechanism? Now what we're supposed to do is, again, assign each one of the features to a slot, and in a very specific fashion. So if you think about the pixels right here, there can be multiple of these pixels, or multiple of the regions, multiple features, assigned to one slot, but we'd rather not have the same feature assigned to multiple slots. So each slot takes in many features, but the features should be divided between the slots such that only one slot attends to a feature, and by me saying attend, you probably already know where this is going. So if you have the features, and you consider the slots, and we just look at a single feature for now, what we'll do is we'll have an attention mechanism from the slots going into the features. So if you don't know what an attention mechanism is, I have this video called Attention Is All You Need where I explain this, but briefly, the features will emit something that's called a key, which is a vector, and then the slots will emit a query, which are also vectors, and the information is now routed by agreement of key and query. In this case, this feature right here would be routed to this slot. Now, it would be routed to both slots, but it wouldn't be routed as much to the bottom slot, and we make sure that this happens by using a softmax assignment. So if this is like 9 and this is 4, what we'll do is a softmax assignment such that after that we have a proper distribution, which after the softmax would be something like 0.9 and 0.1 right here. So you can see that the attention is fairly hard; this is basically a differentiable way to assign these things.
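Here is a tiny sketch of that softmax assignment, assuming invented shapes and random projections; the crucial detail is that the softmax is taken over the slot axis, so the slots compete for each feature:

```python
# Sketch of the competitive softmax assignment (illustrative, made-up shapes).
import numpy as np

def softmax(x, axis):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

d = 32
keys = np.random.randn(64, d)    # one key per input feature
queries = np.random.randn(4, d)  # one query per slot

logits = keys @ queries.T / np.sqrt(d)  # (n_features, n_slots) agreement scores
attn = softmax(logits, axis=1)          # normalize over SLOTS: weights per feature sum to 1
# A feature with logits like [9, 4] over two slots ends up heavily skewed toward
# the better-matching slot, so the assignment is fairly hard but still differentiable.
```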
Okay, so an attention mechanism fulfills the property that we want, to basically assign features to the slots in a way that the slots compete for the features. As you can see right here, if this slot here matches the feature the best, it outcompetes the other slot, because at the end this has to be normalized to one because of the softmax. So this competition is the heart of the slot attention mechanism, and this is how it works. So this is the slot attention module, as you can see. You'll take your inputs, and they have lots of layer norms in here, but disregard the layer norms. What you'll do is you'll calculate the agreement between the inputs and the slots. Now you might wonder: in a standard attention mechanism you'll have an input signal coming from here, which is like, maybe these are the input signals, and then you construct the keys and the queries for the next layer all from that input signal, and also the values, by the way, you construct everything from that input signal. But in this case we'll have many features and we'll only have a fixed amount of slots right here. So where do these slots come from, where does the signal for the queries come from? In the Facebook DETR paper we saw that these are learned embeddings. However, in this case right here, these are not learned: the slots are initialized randomly. So at the beginning of each forward pass, the slots are initialized randomly. You can think of this as an attention mechanism where you have the attention module right here, and then at the beginning you simply have randomly initialized positional embeddings, or randomly initialized slots, and then the image is going to be encoded through a CNN right here, giving you a bunch of these features, and then you'll have cross attention between these features and the slots, and that will give you the next layer right here. Alright, so you want to calculate the routing between the inputs and the slots, and then you want to perform a softmax over the slots, which will give you this competitive nature between the slots. So all the slots are going to compete for the features to be routed to them. And then this is simply the second part of the attention mechanism, so you will have a weighted mean. Now this is slightly different from a standard attention mechanism, because in a real attention mechanism you'll have a weighted sum right here, whereas here you will have a weighted mean, but it's basically such that you can have a different amount of slots and the scale of the values will stay the same; that's why you do the mean. So you weight them up, and the values are simply a function of the inputs, like in a standard attention mechanism. Then, what you'll do, you can see that this is now called updates. Okay, so you start with the slots randomly, and then you use the slots to route the information: you take the inputs, and you use that information routing to calculate the updates. Now you put the updates through a GRU, with the state being the previous slots, and then you'll add that to the slots; this says optional residual MLP, so you can have a residual MLP on top, or not. This is a fairly complicated thing, but if you think of it, it is just a transformer. The purpose of this GRU here, of course, is that the GRU is a recurrent unit, and you can see right here that they do this multiple times.
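Putting the pieces together, a compact sketch of the whole loop might look as follows — my own simplified numpy version with untrained random projections, where I replace the paper's layer norms and learned GRU with a plain residual update for brevity:

```python
# Compact sketch of the slot attention loop (simplified, not the paper's code).
import numpy as np

def softmax(x, axis):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

d = 32
rng = np.random.default_rng(0)
W_k, W_q, W_v = (rng.normal(0, 0.1, (d, d)) for _ in range(3))  # shared across iterations

def slot_attention(inputs, n_slots=4, n_iters=3):
    slots = rng.normal(size=(n_slots, d))        # random init on every call, NOT learned
    k, v = inputs @ W_k, inputs @ W_v            # keys/values come from the inputs, fixed
    for _ in range(n_iters):                     # iterative routing with weight sharing
        q = slots @ W_q                          # queries come from the current slots
        attn = softmax(k @ q.T / np.sqrt(d), axis=1)    # softmax over slots: competition
        attn = attn / attn.sum(axis=0, keepdims=True)   # renormalize per slot, so the
        updates = attn.T @ v                            # update is a weighted MEAN of inputs
        slots = slots + updates                  # simplified update (paper: GRU + res. MLP)
    return slots

features = rng.normal(size=(64, d))              # e.g. a flattened CNN feature map
print(slot_attention(features).shape)            # -> (4, 32)
```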
So once you start with the random slots, you then update the slots and you go again. So you do this: first of all you'll have the features and you'll just have random slots, and then you do a bit of routing. Okay, so now we have a bit of routing, cool; you update these slots to be the next set of slots, and then you take the same features and route them again. And this is supposed to be kind of an iterative procedure. You might have seen this in capsule networks, I've done a video on capsule networks, where there's exactly this type of iterative routing: you always have the same routing functions, the functions for value and key and query are always the same, but you do this iteratively many times in a row. This is like a transformer with weight sharing, it's exactly the same, right? So you have these slots, you initialize them randomly, you do your queries times keys, your softmax times the values right here, and the transformer even has this plus-MLP layer right here, the transformer has that in there, and then you simply do it again. So up here you have the next transformer layer, but instead of being its own layer, it's weight shared: it's a transformer with weight sharing between the modules, and the inputs are also copied up here, these side inputs. Otherwise it's the same thing, except that these aren't produced by an encoder that is also a transformer, they're actually produced by a CNN, and the weights here are shared. The only difference is that in between here they also have this GRU thing, but they do an ablation on it and it's actually not that important, so you might as well just leave it away, it brings only very few benefits. So this is how I think of this model: this is a multi-layer, T-layer transformer with weight sharing for the individual layers, where the input positional encodings are randomly initialized each time. Okay, now they really stress this random initialization, because this differs from the DETR paper, in that in the DETR paper these things here are learned, and in the DETR paper we also have this kind of object detection thing. What will happen when you learn these is that, for example, this one right here might specialize in objects that are sort of on the top left of the image, and this one might specialize in objects that are kind of long and in the middle, and so on, and this one might specialize to something else. Now I can't tell you what works better or not. It seems like, if I were to implement something like this, I might want to go with the Facebook one and then just have more slots right here. In this paper they opt for having fewer, but because they're fewer, if you learn them, they become, I guess, too specialized, and you will need to keep them agnostic. So you don't want to learn them, you simply want to randomly initialize them each time, and via the iterative routing and the weight sharing they will be sort of assigned correctly. Alright, I hope you could follow this. If you want to anthropomorphize this, you could think: each of these slots starts out just randomly, and then, just by sheer coincidence, through this attention mechanism they happen to be assigned a couple of these features. Now, because we train the model to perform well, and because they're already assigned these features, in the next layer they'll basically ask, through the query function, for more of that.
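Continuing the sketch above (it reuses `slot_attention`, `rng`, `d` and `features` from there), the design difference just discussed looks like this; in the actual paper the slots are sampled from a Gaussian with learned mean and variance, while here it's just a standard normal:

```python
# DETR-style: learned query embeddings, fixed after training, so they can specialize.
detr_style_queries = rng.normal(size=(4, d))     # stands in for learned parameters

# Slot attention: slots are resampled on every forward pass, which keeps them
# interchangeable and even lets you change the number of slots at test time.
out_train = slot_attention(features, n_slots=4)  # trained with 4 slots...
out_test = slot_attention(features, n_slots=7)   # ...but 7 slots also work at inference
```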
If you want to anthropomorphize this, you could think of each slot starting out random, and then, just by sheer coincidence, the attention mechanism happens to assign it a couple of the features. Because we train the model to perform well, and because a slot is already assigned these features, in the next layer it will ask, through the query function, for more of the same. It will basically say: oh, I'm now responsible for the gray pixels, give me more of the gray pixels. And in the next layer even more of that, and so on. In the investigations into what happens, you see exactly this type of thing.

If we skip ahead to the experiments where they show what happens through the iterations, you can see it in the attention maps of the slots. After the first step, slot two is assigned both of these objects, and slot three is already reasonable, so the first step learns to segment the image a little bit, but not too well; slot four's attention map is still pretty wonky. The next step is crucial: the slots specialize. Slot two realizes, well, I have a lot of these blue pixels, give me more of those, and so it gets all the blue pixels. Slot four has a lot of these golden pixels and says, give me more of that golden stuff, which is also spatially right next to what it has. And since the two compete (I'm pretty sure slot two would also ask for more of the golden pixels, because it has some, but it competes with slot four through the softmax), all of the golden pixels are assigned to slot four and not slot two, while all of the blue pixels, which slot four surely asks for as well, are assigned to slot two in the next iteration.

So I actually consider iteration one to be where you take the randomly initialized slots and assign them stuff; this is mainly the transformer layer learning to segment. Step two is where the magic really happens: the slots realize what has been assigned to them and ask for more of it, and through the competition you get this separation into objects. The whole thing is trained end to end, which means these functions get really good at doing this kind of segmentation, and in subsequent iterations you just see the effect compounding more and more. You might even want to separate step one from the subsequent steps, because step one seems fundamentally different from steps two, three, four and so on: step one is the assignment process, and the later steps are refinement. If I were to take this model and make it better, I would try not sharing weights between the first step and the subsequent steps. But what do I know; apparently this works.

You can also look at the reconstructions, since their objective is to reconstruct. Each slot outputs a picture of the reconstruction, and if each slot is responsible for one object, you might say: this slot gives me a picture with just the object it is supposed to reconstruct, and that slot gives me a picture with just its object. Now how do you combine these pictures, especially since they might be overlapping? The way you do it is to have each slot output four channels, R, G, B and A, A being an alpha channel, so each slot also has to decide where the object it reconstructs is. For one slot, everything on its object (including maybe a little shadow) would be alpha one and everything else alpha zero. The alpha maps are then combined via a softmax, to ensure that they sum to one per pixel, and you combine the pictures weighted by their alpha maps. That means you can read off from the slots where the objects are.
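A sketch of that combination step, assuming each slot's decoder emits an RGBA image (the tensor layout here is my own choice, not necessarily the paper's):

```python
import torch

def combine_slot_reconstructions(decoded):
    # decoded: (batch, num_slots, 4, H, W); per slot the channels are R, G, B
    # plus an unnormalized alpha logit (the layout is an assumption for this sketch)
    rgb, alpha_logits = decoded[:, :, :3], decoded[:, :, 3:4]
    alpha = alpha_logits.softmax(dim=1)  # softmax over slots: alphas sum to 1 per pixel
    return (alpha * rgb).sum(dim=1)      # alpha-weighted blend -> (batch, 3, H, W)
```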
You'll notice that they often use, for example, four slots here even though the image has three objects. Why? Because you need to reconstruct the entire image, so you need at least one slot for the background, and that's what you can see right here: in the reconstruction, slot two over the iterations reconstructs the cube, slot three reconstructs the ball, slot four reconstructs the yellow cylinder, and slot one reconstructs the background. Also, in the attention masks you see that slot one is responsible for the background (the background is significantly darker there than in the others), though they do say the background doesn't really tend to go to one slot in particular; it tends to spread out across all the slots, which might merit more investigation.

They have these different tasks, for example segmenting these Tetris blocks, and you can see the segmentation works pretty well. Now why does this work so well? Probably because of the datasets. These kinds of datasets are produced by a generator, and the generator specifically places these objects and arranges them in an independent fashion. The background is really clean, the objects themselves are really clean and geometric, they're arranged at random, and then there's a render of that. So this is a super duper clean dataset, and I guess that has a lot to do with why these methods work so well: they can just assume that an object is some spatially compact geometric shape that is pretty independent from its surroundings, and they're trained with objects that are essentially uncorrelated; there is close to zero correlation between the objects in the training data. So I wouldn't yet apply this too much to real-world problems, but it's an interesting thought.

So that's the idea behind the paper; I hope you got it. They do a lot of experiments, and here is a bit where my quarrels start. In the unsupervised object discovery experiments they use this dataset called CLEVR, which has images with, I believe in CLEVR6, up to six different objects. This is already one of the things (not specifically a quarrel with this model): if your dataset has at most six things, they give seven slots, because they know the dataset has at most six things, which means they can always cover everything. It works when there are fewer objects, but I think knowing how many objects there are going to be is also a big part of why these models work, and why maybe they're not entirely ready for the real world. Anyway, they compare to two baselines: IODINE, which also employs a kind of recurrent architecture but without an attention mechanism, and MONet.
They say, quote: for the MONet, IODINE and DSPN baselines we compare with the published numbers, as we use the same experimental setup. So they use the published numbers from the respective papers instead of re-implementing these models, which is something you can do; machine translation papers and so on often do this, just because it's a lot of work to run all these things. However, here I'm a bit skeptical. First, because it is Google, so they do have a lot of resources available to actually run these things; I've seen that at least MONet has an implementation by the authors, the other one has an implementation too, and there are eight authors on this particular paper. They say "as we use the same experimental setup", and if you truly have the same setup, using published numbers is more acceptable, but it really depends on you actually having the same setup, and this is where it falls down a bit.

For example, they say: we train the model using the Adam optimizer with some learning rate, on a single GPU, and we further make use of learning-rate warm-up to prevent early saturation of the attention mechanism, plus an exponential decay schedule on the learning rate, which we found to reduce variance. Now, I've checked these other models, and none of them talks about learning-rate warm-up, and nowhere in their code is there a learning-rate warm-up.
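For what it's worth, a warm-up-plus-exponential-decay schedule of the kind they describe might look like the following sketch; the constants are placeholders of mine, not values from the paper:

```python
def learning_rate(step, base_lr=4e-4, warmup_steps=10_000,
                  decay_rate=0.5, decay_steps=100_000):
    # linear warm-up to base_lr, then smooth exponential decay
    warmup = min(1.0, step / warmup_steps)
    decay = decay_rate ** (step / decay_steps)
    return base_lr * warmup * decay
```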
You might argue this warm-up is specific to this model, that it might need it. But if you look at the results, they don't outperform the other models by much. One result is on par, another outperforms a little bit, and the star denotes that one outlier was excluded from evaluation, which is valid if it's a true outlier, but in this case I would categorize this model as a different way of doing things rather than as outperforming the others. Also, in the ablations the differences are minuscule: every single trick gives them a little bit of a boost, and together they just make it across the line to state of the art. I'd rather have research move in a direction where we just show cool ideas and that they work, and that is what this paper does, to be fair.

What I do have more of a problem with is this: on CLEVR6 we can use a batch size of up to 64 on a single V100 GPU, as opposed to 4 in the IODINE baseline; compared to IODINE, our model is significantly more efficient in terms of both memory consumption and runtime. Which I believe, but consider this characterization. I've read the IODINE paper, and it does say they use a batch size of 4 on one GPU, so they also use one GPU, but they say their GPU has 12 gigabytes of RAM, and a 12-gigabyte GPU points to something like a 1080 Ti or a 2080 Ti. That is not a V100: V100s come with 16 gigabytes, or, probably in Google's case, the 32-gigabyte version, and they cost five or ten times more than the Ti GPUs. To simply say "we also use one GPU and can run a batch size of up to 64, while they can only run a batch size of 4" seems to overstate things. Maybe I'm wrong; maybe they actually tested the other model and concluded that on their GPUs, too, it only runs at a batch size of 4, but I highly doubt it, because the IODINE paper is cited, and it explicitly states a batch size of 4 on their 12-gigabyte GPUs. So that pulls through the whole thing: there are the minuscule improvements, there are the ablations where every trick gives a little bit, and there is this very favorable, somewhat wordsmithed comparison, which gives a bitter taste to what I think is actually a very cool method.

Because why is this method so cool? For example, these slots are trained to also absorb the background, so you can technically increase the number of slots at inference time, even though you've trained with just a few slots, and the model can just handle it. They show this: one dataset has six objects and another has ten; the model has only ever been trained on six objects, and you can just up the number of slots at inference time and it still works very well. You can also up the number of iterations, since these are all weight-shared (we've looked at this: there is weight sharing between the iterations), so there is nothing stopping you from piling on more. Because it's weight-shared you don't need any additional weights, and since the iterations refine the attention masks anyway, you might as well refine them some more at inference time. They have an ablation showing that two or three iterations at training time give the best result, I guess because of gradient propagation, since more layers mean you have to propagate the gradient back further, but at inference time you can just up the iterations and, as you can see, you get better and better. So these results are pretty cool.
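Nothing in the earlier sketch ties the trained weights to a particular slot count or iteration count, so with that toy module you could, hypothetically, just ask for more of both at test time:

```python
# reuses SlotAttentionSketch and torch from the sketch above
model = SlotAttentionSketch(dim=64, num_iters=3)
feats = torch.randn(1, 32 * 32, 64)     # stand-in for CNN features on a 32x32 grid
train_like = model(feats, num_slots=7)  # settings as during training
model.num_iters = 7                     # more refinement iterations, same weights
test_time = model(feats, num_slots=11)  # more slots, still no new parameters
```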
They also respect the property that sets should be permutation-invariant, and so on. This routing view of the transformer is pretty cool: you can look at it as a transformer with weight sharing, or as an iterative routing protocol like in capsules. All of this I find to be a very cool idea, and I think that's how we should look at this paper. So before I am too critical, I want to say that I really like the idea and the algorithm here. So that was the paper.

Lastly, I want to look at the broader impact statement, just because I've complained about the need for broader impact statements before, so I want to read them and see how the different companies, institutions and people, the community, react to them and craft them. This one I find particularly interesting; let's go through it. It says: the slot attention module allows to learn object-centric representations from perceptual input. As such, it is a general module that can be used in a wide range of domains and applications. In our paper we only consider artificially generated datasets under well-controlled settings where slots are expected to specialize to objects. However, the specialization of our model is implicit and fully driven by the downstream task. We remark that, as a concrete measure to assess whether the module specializes in unwanted ways, one can visualize the attention masks to understand how the input features are distributed across the slots. While more work is required to properly address the usefulness of the attention coefficients in explaining the overall predictions of the network, especially if the input features are not human-interpretable, we argue that they may serve as a step towards more transparent and interpretable predictions.

I mean, it's a fine statement, but it's not a broader impact statement. If you've followed a bit what the broader impact statement is supposed to be, this is not one. The closest it comes is "as such, it is a general module that can be used in a wide range of domains and applications", and maybe a little bit the part about visualizing the attention masks to understand how the input features are distributed. But the broader impact statement is supposed to give you a preview of how this might affect society at large, while this just lists properties of the model for the research community: the applications of the model and the introspection of the model itself. It says nothing about society as such. So I think the smarter people will turn the broader impact statement into more of an introduction section, because this is the kind of thing you usually put in a conclusion or an introduction: look, here are some things our model can do, this is what it might be useful for, and this is how you could introspect it. And since, especially at NeurIPS, you were allowed to put the broader impact statement in the main paper rather than the appendix, without it counting towards your page limit, it's pretty foreseeable that people are simply going to put more of their paper into the broader impact section, cloaked in the veneer of a broader impact statement. This is clearly not what the broader impact statement was originally supposed to be. Now, I don't know if this is good or bad; I just think these authors are doing a good thing here by telling us something actually useful about the model. But that's just my opinion.

Thank you for being with me here. I know this was a bit ranty, flip-flopping back and forth between different things, and we haven't looked at set prediction at all, only at these masks, but I invite you to go through the paper yourself and check it out. It's pretty cool, and they describe a lot of things in great detail; the appendix is very long and has many ablations, which is something I appreciate. And with that, bye bye, and see you next time.
[ { "end": 4.44, "start": 0, "text": " Hi there! Today we'll look at object-centric learning with slot attention" }, { "end": 9.36, "start": 4.44, "text": " by Francesco Locotello, Thomas Kipf and others of Google Brain, ETH Zurich and" }, { "end": 16.12, "start": 9.36, "text": " MPI. On a high level this paper recognizes scenes of objects from single" }, { "end": 20.76, "start": 16.12, "text": " pixels and it's best I show you a picture of what's going on. So you have" }, { "end": 25.560000000000002, "start": 20.76, "text": " scenes like this where there is some sort of an arrangement of objects and" }, { "end": 29.6, "start": 25.560000000000002, "text": " there are multiple tasks you can do here. Specifically they consider the task of" }, { "end": 34.24, "start": 29.6, "text": " unsupervised recognition of objects which they call object discovery and" }, { "end": 38.96, "start": 34.24, "text": " supervised classification of objects. The difficulty being that these are sets of" }, { "end": 45.32, "start": 38.96, "text": " objects so there is no ordering to the sets. They do this via a thing they call" }, { "end": 51.84, "start": 45.32, "text": " slot attention that basically is a permutation invariant attention mechanism" }, { "end": 56.8, "start": 51.84, "text": " over these objects in both the supervised and unsupervised domain and" }, { "end": 62.4, "start": 56.8, "text": " they do this in a fashion where they iteratively route the attention in order" }, { "end": 68.72, "start": 62.4, "text": " to make the different slots compete for attention over these objects. So that's" }, { "end": 73.52, "start": 68.72, "text": " the sort of high level. If you are in this field you probably know right now" }, { "end": 80.34, "start": 73.52, "text": " what's going on. If you're not we'll dive into it together so stay tuned. If you" }, { "end": 85.12, "start": 80.34, "text": " like content like this consider sharing it out, leaving a like or tell me what" }, { "end": 89.64, "start": 85.12, "text": " you think about it in the comments. I appreciate any suggestion for making" }, { "end": 94.96000000000001, "start": 89.64, "text": " these videos better so people can learn more from it." }, { "end": 100.28, "start": 94.96000000000001, "text": " Alright so the problem I've already described the problem a little bit but" }, { "end": 104.92, "start": 100.28, "text": " let's go a bit deeper here. You have images like this and the images we're" }, { "end": 109.4, "start": 104.92, "text": " considering are going to be images that have some sort of arrangement of objects" }, { "end": 113.64, "start": 109.4, "text": " or what we humans would call objects. In this case you can see there is this gray" }, { "end": 120.2, "start": 113.64, "text": " square, not sorry, this gray cube right here. There is a smaller green cube and" }, { "end": 127.8, "start": 120.2, "text": " then there is a yellow cylinder. Now in the task of object discovery what you're" }, { "end": 132.76, "start": 127.8, "text": " supposed to do is you're simply supposed to say that there is an object right" }, { "end": 140.4, "start": 132.76, "text": " here, there is an object about here and there is an object here. So basically" }, { "end": 145.68, "start": 140.4, "text": " you're supposed to point to the pixels where there are objects and you're" }, { "end": 150.08, "start": 145.68, "text": " supposed to segment the objects from each other. 
You can see right here that" }, { "end": 156.32, "start": 150.08, "text": " this model, we don't know how it works yet, but it separates the left cube here," }, { "end": 163.92000000000002, "start": 156.32, "text": " the bottom cube here and the top right cylinder right here. In the task of set" }, { "end": 170.76, "start": 163.92, "text": " prediction you're supposed to say what objects there are. So you're supposed to" }, { "end": 176.11999999999998, "start": 170.76, "text": " say there is a gray cube right here, a green cube right here and there is a" }, { "end": 181.67999999999998, "start": 176.11999999999998, "text": " yellow cylinder right there. Actually you don't have to say where they are I guess." }, { "end": 186.79999999999998, "start": 181.67999999999998, "text": " There are many different variants of this task but mainly you're supposed to" }, { "end": 193.44, "start": 186.79999999999998, "text": " classify them, meaning you have to say there is a gray cube there. I believe in" }, { "end": 197.88, "start": 193.44, "text": " this case it's with coordinates but you can do it without. The difficulty here of" }, { "end": 202.88, "start": 197.88, "text": " course being that these are sets so there is no natural order in it. So if" }, { "end": 207.07999999999998, "start": 202.88, "text": " you say there is a green cube and a yellow cylinder it's going to be the" }, { "end": 214.16, "start": 207.07999999999998, "text": " same as there is a yellow cylinder and a green cube. So you have to build an" }, { "end": 220.32, "start": 214.16, "text": " architecture that is somehow invariant with respect to the labels. We've" }, { "end": 224.68, "start": 220.32, "text": " seen a lot of the concepts in this video in this paper before. This video is sort" }, { "end": 230.72, "start": 224.68, "text": " of a kind of a mash together of different concepts of other places. So" }, { "end": 236.6, "start": 230.72, "text": " what you'll see is for example this property of the fact that here you see" }, { "end": 241.72, "start": 236.6, "text": " are the labels for these objects. This could be there is a green cube, there is" }, { "end": 247.76, "start": 241.72, "text": " a gray cube and you'll have to come up with an architecture that if here you" }, { "end": 253, "start": 247.76, "text": " predict that green cube you consider it correct even though the corresponding" }, { "end": 258.44, "start": 253, "text": " label isn't the one for the green cube. And we saw this for example in this DETR" }, { "end": 262.71999999999997, "start": 258.44, "text": " architecture by Facebook where they use a matching loss but we'll get into that." }, { "end": 268, "start": 262.71999999999997, "text": " Okay so these are the tasks. The tasks are object discovery and set prediction." }, { "end": 274.71999999999997, "start": 268, "text": " So how does this paper deal with this? They use this thing called a slot" }, { "end": 281.24, "start": 274.72, "text": " attention module. Now the slot attention module is in essence it's pretty simple." }, { "end": 288.56, "start": 281.24, "text": " What it does is it has these different slots right here as you can see and it" }, { "end": 294.24, "start": 288.56, "text": " divides the input into features. So you can see there is a CNN encoder because" }, { "end": 298.6, "start": 294.24, "text": " we're working with pixels it's natural that we want to encode these into a CNN." 
}, { "end": 305.92, "start": 298.6, "text": " This CNN will probably down sample the image a bit and divide subdivided into" }, { "end": 310.24, "start": 305.92, "text": " this grid right here. So you have a fairly coarse grid. The grid is actually" }, { "end": 316.28000000000003, "start": 310.24, "text": " not a bit finer than you see here. This is just for example but you'll have" }, { "end": 321.08000000000004, "start": 316.28000000000003, "text": " ultimately a number of features so each pixel right here is going to be a" }, { "end": 326.04, "start": 321.08000000000004, "text": " feature. Each feature will have not only this one channel as you see here but" }, { "end": 331.96000000000004, "start": 326.04, "text": " many many channels of information down here. So the CNN will encode each of" }, { "end": 338, "start": 331.96000000000004, "text": " these regions in the picture into a feature vector and then you have these" }, { "end": 343.08000000000004, "start": 338, "text": " slots. So what you'll want to do and we maybe look at this so you'll have the" }, { "end": 350.6, "start": 343.08000000000004, "text": " features right here. These are your features and you'll have the slots and" }, { "end": 357.36, "start": 350.6, "text": " the slots let's say there are fewer slots than features. Three slots," }, { "end": 365.28000000000003, "start": 357.36, "text": " four slots as in this case. What you'll want to do is you'll want to" }, { "end": 372, "start": 365.28000000000003, "text": " assign the features to the slots. So you maybe say okay this feature right here" }, { "end": 378.32000000000005, "start": 372, "text": " and this feature right here they go to this slot and then these two features go" }, { "end": 382.88, "start": 378.32, "text": " to this slot and then these two go to this and that feature goes to that and" }, { "end": 387.4, "start": 382.88, "text": " that's equivalent to basically subdividing the picture into these" }, { "end": 392.24, "start": 387.4, "text": " slots. Ultimately your goal is going to be to say that these features right here" }, { "end": 398.6, "start": 392.24, "text": " these pixels right here are going maybe into that slot and then these ones" }, { "end": 404.24, "start": 398.6, "text": " right here are going into that slot and these ones here going into that slot and" }, { "end": 409.2, "start": 404.24, "text": " the rest so all of their background is going into that one. You can see that if" }, { "end": 413.84000000000003, "start": 409.2, "text": " you have a system like this if you can train it correctly then it becomes pretty" }, { "end": 419.24, "start": 413.84000000000003, "text": " easy to classify it right here because you can just take each" }, { "end": 424.6, "start": 419.24, "text": " slot and independently classify it. Because you already know you already" }, { "end": 429.68, "start": 424.6, "text": " have assigned all the pixels where the object appears into that slot you can" }, { "end": 436.16, "start": 429.68, "text": " just super easily predict a class from it. So we're almost at the end so you" }, { "end": 441.2, "start": 436.16, "text": " now predict for each slot a class or a description of the object whatever you" }, { "end": 446.24, "start": 441.2, "text": " want to predict and this is the exact same thing as in this Facebook paper now" }, { "end": 452.76, "start": 446.24, "text": " where for each of these slots we've predicted a bounding box. 
The" }, { "end": 457.72, "start": 452.76, "text": " question is how do you assign this to the labels and that's pretty easy that" }, { "end": 465.52000000000004, "start": 457.72, "text": " there's this thing called the Hungarian matching that basically what you're" }, { "end": 470.92, "start": 465.52000000000004, "text": " saying is you want to be as forthcoming as possible right so if you predict a" }, { "end": 475.72, "start": 470.92, "text": " gray cube somewhere and there is a gray cube somewhere here you want to match" }, { "end": 479.84000000000003, "start": 475.72, "text": " them you'll say okay I'm going to give you the benefit of the doubt and I'm" }, { "end": 485.40000000000003, "start": 479.84000000000003, "text": " going to do your model I'm going to assume with the gray cube you meant that" }, { "end": 492.08, "start": 485.4, "text": " gray cube right here and if there is the yellow rectangle and the yellow" }, { "end": 496.12, "start": 492.08, "text": " rectangle somewhere over there you don't incur any penalty as long as you" }, { "end": 501.84, "start": 496.12, "text": " predict the correct things. Now only whenever you predict like a second" }, { "end": 508.08, "start": 501.84, "text": " yellow rectangle so both of these slots now so this slot and this slot for some" }, { "end": 512.28, "start": 508.08, "text": " reason they predict a yellow rectangle this one correctly and this one was" }, { "end": 516.88, "start": 512.28, "text": " assigned this object and it incorrectly predicts a yellow rectangle oh sorry" }, { "end": 520.88, "start": 516.88, "text": " other way around this one incorrectly predicts a yellow rectangle where there" }, { "end": 525.24, "start": 520.88, "text": " is no second yellow rectangle in our label set there's only this maybe this" }, { "end": 531.88, "start": 525.24, "text": " green cube then this will be a mistake because it can't be matched it will be" }, { "end": 534.88, "start": 531.88, "text": " matched to the one where it has the least loss but it will be matched to" }, { "end": 538.68, "start": 534.88, "text": " something that's not a yellow rectangle and therefore that's going to be a" }, { "end": 542.8, "start": 538.68, "text": " mistake so this is how you calculate the loss function with this matching" }, { "end": 547.52, "start": 542.8, "text": " algorithm and you can calculate that matching in a deterministic fashion so" }, { "end": 553.28, "start": 547.52, "text": " you can back propagate through it so you can see if this slot assignment works" }, { "end": 559.68, "start": 553.28, "text": " we'll have a pretty easy time then calculating the classes coming up with a" }, { "end": 565.4, "start": 559.68, "text": " loss the same for the unsupervised object discovery what we'll do is we'll" }, { "end": 570.52, "start": 565.4, "text": " run these things through this slot decoder now this slot decoder is very" }, { "end": 577.76, "start": 570.52, "text": " similar to an a generator in GANs for example it takes a hidden representation" }, { "end": 582.64, "start": 577.76, "text": " as input now the hidden representation here is going to be these these slots" }, { "end": 589.64, "start": 582.64, "text": " and it's going to up sample it into an image if we train the whole if if we" }, { "end": 595.68, "start": 589.64, "text": " have a good slot assignment mechanism we can pretty easily train a decoder like" }, { "end": 600.6, "start": 595.68, "text": " this right with any method you want in this case I believe they use some yeah" }, { "end": 
606.72, "start": 600.6, "text": " some sort of up sampling up convolution architecture right here and they use the" }, { "end": 613.68, "start": 606.72, "text": " L2 they minimize the reconstruction error between the end the output image" }, { "end": 619.16, "start": 613.68, "text": " and the input image so it's sort of like a variational autoencoder or just" }, { "end": 626.92, "start": 619.16, "text": " autoencoder objective in this case all right so we know how to encode a picture" }, { "end": 631.36, "start": 626.92, "text": " into hidden representation using a standard convolutional neural network" }, { "end": 637.0799999999999, "start": 631.36, "text": " and we know once our slot attention mechanism works we pretty much know how" }, { "end": 642.52, "start": 637.0799999999999, "text": " to go from there so the question is what is this slot attention mechanism now" }, { "end": 647.64, "start": 642.52, "text": " what we're supposed to do is we're supposed to again assign each one of the" }, { "end": 652.8, "start": 647.64, "text": " features into a slot and in a very specific fashion so if you think about" }, { "end": 657.84, "start": 652.8, "text": " the pixels right here there can be multiple of these pixels or multiple of" }, { "end": 663.28, "start": 657.84, "text": " the regions multiple features can be assigned to one slot but we'd rather not" }, { "end": 672.72, "start": 663.28, "text": " have the same feature assigned to multiple slots so each slot takes in" }, { "end": 679.8000000000001, "start": 672.72, "text": " many features but the features should be this divided between the slots such that" }, { "end": 684.6, "start": 679.8000000000001, "text": " only one slot attends to a feature and by me saying attend you probably already" }, { "end": 691.44, "start": 684.6, "text": " know where this is going so if you have the features and you consider the slots" }, { "end": 696.96, "start": 691.44, "text": " right and we just look at a single feature for now what we'll do is we'll" }, { "end": 702.76, "start": 696.96, "text": " have an attention mechanism from the slots going into the features so if you" }, { "end": 706.52, "start": 702.76, "text": " don't know what an attention mechanism is I have this video called attention is" }, { "end": 711.96, "start": 706.52, "text": " all you need where I explained this but briefly the features they will emit" }, { "end": 717.88, "start": 711.96, "text": " something that's called a key which is a vector and then the slots will emit a" }, { "end": 727.8, "start": 717.88, "text": " query which are also vectors and the sir the the information is now routed by" }, { "end": 733.8, "start": 727.8, "text": " agreement of key and query in this case this thing this this feature right here" }, { "end": 738.96, "start": 733.8, "text": " would be routed to this slot now it would be routed to both slots but it" }, { "end": 744.56, "start": 738.96, "text": " wouldn't be routed as much to the bottom slot and we make sure that this happens" }, { "end": 751.16, "start": 744.56, "text": " by using a softmax assignment so if this is like 9 and this is 4 what we'll do is" }, { "end": 756.4, "start": 751.16, "text": " a softmax assignment such that after that so we have a proper distribution" }, { "end": 762.1199999999999, "start": 756.4, "text": " which would be something like after the softmax be something like 0.9 and 0.1" }, { "end": 769.4799999999999, "start": 762.1199999999999, "text": " right here so you can see that the attention is fairly hard so this 
is" }, { "end": 775.6, "start": 769.48, "text": " basically it's a differentiable way to assign these things okay so an attention" }, { "end": 781.6800000000001, "start": 775.6, "text": " mechanism fulfills the property that we want to basically assign features to the" }, { "end": 786.6, "start": 781.6800000000001, "text": " slots in a way that the slots compete for the features as you can see right" }, { "end": 793.3000000000001, "start": 786.6, "text": " here if this slot here matches the feature the best it come it out" }, { "end": 798, "start": 793.3000000000001, "text": " competes the other slot because at the end this has to be normalized to one" }, { "end": 803.2, "start": 798, "text": " because of the softmax so this competition is the heart of the slot" }, { "end": 813.24, "start": 803.2, "text": " attention mechanism and this is this is how it works so this is the slot" }, { "end": 818.56, "start": 813.24, "text": " attention module as you can see so you'll take your inputs and they have" }, { "end": 823.6, "start": 818.56, "text": " lots of layer norms in here but disregard the the layer norms so what" }, { "end": 828.4, "start": 823.6, "text": " you'll do is you'll calculate the agreement between the inputs and the" }, { "end": 835.9200000000001, "start": 828.4, "text": " slots now you might wonder in a standard attention mechanism you'll have input" }, { "end": 839.8000000000001, "start": 835.9200000000001, "text": " signal coming from here which is like maybe these are the input signals and" }, { "end": 845.52, "start": 839.8000000000001, "text": " then you construct the keys and the queries for the next layer you" }, { "end": 851.3000000000001, "start": 845.52, "text": " construct all from that input signal right and also the values by the way you" }, { "end": 857.92, "start": 851.3, "text": " construct everything from that input signal but in this case will have many" }, { "end": 862.64, "start": 857.92, "text": " features and will only have a fixed amount of slots right here so where do" }, { "end": 867.76, "start": 862.64, "text": " these slots come from where do the the signal for the keys come from in the" }, { "end": 873.4, "start": 867.76, "text": " Facebook DETR paper we saw that these are learned embeddings however in this" }, { "end": 878.7199999999999, "start": 873.4, "text": " case right here these are not learned the slots are initialized randomly so at" }, { "end": 883.88, "start": 878.72, "text": " the beginning of each thing the slots are initialized randomly you can think" }, { "end": 889.2, "start": 883.88, "text": " of this as an attention mechanism where you have the attention module right here" }, { "end": 895.12, "start": 889.2, "text": " and then at the beginning you simply have randomly initialized positional" }, { "end": 900.94, "start": 895.12, "text": " embedding or randomly initialized slots and then the image is going to be" }, { "end": 908.6800000000001, "start": 900.94, "text": " encoded through a CNN right here giving you a bunch of these features and then" }, { "end": 913.5999999999999, "start": 908.68, "text": " you'll have cross attention between these features and the slots and that" }, { "end": 920.0799999999999, "start": 913.5999999999999, "text": " will give you the next layer right here okay" }, { "end": 926.8399999999999, "start": 920.0799999999999, "text": " all right so you want to calculate the routing between the inputs and the" }, { "end": 932.64, "start": 926.8399999999999, "text": " slots and then you want to perform a softmax 
over the slots which will give" }, { "end": 935.92, "start": 932.64, "text": " you this competitive nature between the slots so all the slots are going to" }, { "end": 943.24, "start": 935.92, "text": " compete for the features to be routed to them and then this is" }, { "end": 948.8399999999999, "start": 943.24, "text": " simply the second part of the attention mechanism and so you will have a weighted" }, { "end": 952.4399999999999, "start": 948.8399999999999, "text": " mean now this is a slightly different from an attention mechanism because in" }, { "end": 956.3199999999999, "start": 952.4399999999999, "text": " a real attention mechanism you'll have a weighted sum right here here you will" }, { "end": 960.4, "start": 956.3199999999999, "text": " have a weighted mean but it's basically such that you can have a different" }, { "end": 966.4, "start": 960.4, "text": " amount of slots and the kind of values will stay the same that's why you do the" }, { "end": 972.12, "start": 966.4, "text": " mean so you weight them up and the values are simply a function of the" }, { "end": 977.52, "start": 972.12, "text": " inputs this is like in a standard attention mechanism then what you'll do" }, { "end": 983.4399999999999, "start": 977.52, "text": " you can see that this is now called updates okay so you start with the" }, { "end": 990.6, "start": 983.44, "text": " slots randomly and then you use the slots to route the information" }, { "end": 996.8800000000001, "start": 990.6, "text": " you take the inputs and you use that information routing to calculate the" }, { "end": 1004.7600000000001, "start": 996.8800000000001, "text": " updates now you put the updates through a GRU with the state being the previous" }, { "end": 1014.56, "start": 1004.76, "text": " slots and then you'll add that to the slots either this says optional residual" }, { "end": 1021.76, "start": 1014.56, "text": " MLP so what you can do is you will have a residual MLP or not this is a fairly" }, { "end": 1032.96, "start": 1021.76, "text": " complicated thing but if you think of it it is just a transformer so what they" }, { "end": 1038.96, "start": 1032.96, "text": " describe here sorry the purpose of this GRU here of course is that the GRU is a" }, { "end": 1044.28, "start": 1038.96, "text": " recurrent unit and you can see right here that they do this multiple times so" }, { "end": 1050.48, "start": 1044.28, "text": " once you start with the random slots right but then you update the slots and" }, { "end": 1057.04, "start": 1050.48, "text": " you go you go again okay so you do this first of all you'll have the features" }, { "end": 1063.32, "start": 1057.04, "text": " and you'll just have random slots and then you do a bit of routing okay okay" }, { "end": 1069.52, "start": 1063.32, "text": " so now we have a bit of routing cool you update these slots to be the next set of" }, { "end": 1079.08, "start": 1069.52, "text": " slots and then you take the same features and route them again so you you" }, { "end": 1083.44, "start": 1079.08, "text": " route them again and this is supposed to be kind of this iterative procedure you" }, { "end": 1087.04, "start": 1083.44, "text": " might have seen this in capsule networks I've done a video on capsule networks" }, { "end": 1090.88, "start": 1087.04, "text": " where exactly this type of iterative routing you always have the same" }, { "end": 1097.72, "start": 1090.88, "text": " routing functions right these the functions for value and key and query" }, { "end": 1104.76, "start": 1097.72, 
"text": " are always the same but you do this iteratively many times in a row this is" }, { "end": 1111.24, "start": 1104.76, "text": " like a transformer with weight sharing it's exactly the same right so you have" }, { "end": 1118.92, "start": 1111.24, "text": " these slots you initialize them randomly you do your query times keys your soft" }, { "end": 1125.56, "start": 1118.92, "text": " max times the value right here and this the transformer even has this plus this" }, { "end": 1131.68, "start": 1125.56, "text": " MLP layer right here like this the transformer has that in there and then" }, { "end": 1137.96, "start": 1131.68, "text": " you simply do it again so up here you have the next transformer layer but" }, { "end": 1145.04, "start": 1137.96, "text": " instead of being its own layer you'll copy the so it's it's weight shared it's" }, { "end": 1150.88, "start": 1145.04, "text": " a transformer with weight sharing between the modules and the inputs they" }, { "end": 1159.4, "start": 1150.88, "text": " are also copied up here this these side inputs all right it's otherwise it's the" }, { "end": 1163.32, "start": 1159.4, "text": " it's the same thing except that these aren't produced by an encoder that is" }, { "end": 1167.8799999999999, "start": 1163.32, "text": " also a transformer they're actually produced by a CNN and the weights here" }, { "end": 1172.84, "start": 1167.8799999999999, "text": " are shared the only difference is that in between here they also have like this" }, { "end": 1178.04, "start": 1172.84, "text": " GRU this GRU thing but they do an ablation on it and it's actually not" }, { "end": 1182.56, "start": 1178.04, "text": " that important so you could might as well just leave it away bring it brings" }, { "end": 1190.52, "start": 1182.56, "text": " only very few benefits so this is how I want how I think of this model this is a" }, { "end": 1197.48, "start": 1190.52, "text": " multi layer a t layer transformer with weight sharing in for the individual" }, { "end": 1204.76, "start": 1197.48, "text": " layers where the inputs the input positional encoding are randomly" }, { "end": 1211.04, "start": 1204.76, "text": " initialized each time okay now they they really stress this random" }, { "end": 1216.6399999999999, "start": 1211.04, "text": " initialization because this differs from the DETR paper in that in the DETR paper" }, { "end": 1221.24, "start": 1216.64, "text": " these things here are learned and the DETR paper we have also this kind of" }, { "end": 1226.68, "start": 1221.24, "text": " object detection thing and it what will happen when you learn these is that for" }, { "end": 1231.8400000000001, "start": 1226.68, "text": " example this one right here might specialize in objects that are sort of" }, { "end": 1236.68, "start": 1231.8400000000001, "text": " on the top left of the image and this one might specialize in objects that are" }, { "end": 1240.3000000000002, "start": 1236.68, "text": " kind of long and in the middle and so on and this one might specialize to" }, { "end": 1245.96, "start": 1240.3000000000002, "text": " something else now I can't tell you what works better or what not it seems like" }, { "end": 1251.4, "start": 1245.96, "text": " you can if I were to implement something like this I might want to go with the" }, { "end": 1256.4, "start": 1251.4, "text": " Facebook one and then just have more right here in this paper they opt for" }, { "end": 1261.4, "start": 1256.4, "text": " having fewer but because they're fewer if you learn them 
they become I guess" }, { "end": 1267.08, "start": 1261.4, "text": " too specialized and you will need to keep them agnostic so you don't want to" }, { "end": 1272.24, "start": 1267.08, "text": " learn them you simply want to randomly initialize them each time and via via" }, { "end": 1277.6, "start": 1272.24, "text": " the iterative routing via the weight sharing they will be sort of assigned" }, { "end": 1287.88, "start": 1277.6, "text": " correctly all right I hope you could follow this yeah if you if you want to" }, { "end": 1292.08, "start": 1287.88, "text": " anthropomorphize this you could think if each of these slots starts out just" }, { "end": 1296.6, "start": 1292.08, "text": " randomly and then just by sheer coincidence through this attention" }, { "end": 1302.06, "start": 1296.6, "text": " mechanism they happen to be assigned a couple of these features now because we" }, { "end": 1305.9199999999998, "start": 1302.06, "text": " train the model to perform well because they're already assigned these features" }, { "end": 1309.56, "start": 1305.9199999999998, "text": " in the next layer they'll basically ask through the query function they'll ask" }, { "end": 1314.48, "start": 1309.56, "text": " for more of that they'll basically say oh I'm now responsible kind of for the" }, { "end": 1318.1599999999999, "start": 1314.48, "text": " gray pixels give me more of the gray pixels right give me give me more of" }, { "end": 1321.3999999999999, "start": 1318.1599999999999, "text": " that and then in the next layer even more of that even more of that and" }, { "end": 1327.9199999999998, "start": 1321.3999999999999, "text": " you'll see in the in the investigations into what happens exactly this type of" }, { "end": 1333.2, "start": 1327.92, "text": " thing happening so if we skip ahead to the experiments where they show what" }, { "end": 1339.24, "start": 1333.2, "text": " happens through the iterations you can see this right here so the attention" }, { "end": 1346.8400000000001, "start": 1339.24, "text": " the attention maps of these slots you can see that in after the first step you" }, { "end": 1351.2, "start": 1346.8400000000001, "text": " can see that you know it's slot two right here is assigned kind of these" }, { "end": 1355.88, "start": 1351.2, "text": " both of these objects slot three is already pretty so the first step kind of" }, { "end": 1360.72, "start": 1355.88, "text": " learns to segment a little bit of the image but not you know too well slot" }, { "end": 1367.3600000000001, "start": 1360.72, "text": " four it also the attention map here is pretty pretty wonky but if you in the" }, { "end": 1373.5600000000002, "start": 1367.3600000000001, "text": " next step and this is kind of crucial basically the these slots they" }, { "end": 1378.2, "start": 1373.5600000000002, "text": " specialize there's a slot to realize as well I have a lot of these these blue" }, { "end": 1381.92, "start": 1378.2, "text": " pixels I'm gonna give me more of those right give me more of those so it gets" }, { "end": 1386.76, "start": 1381.92, "text": " all the blue pixel well slot four has a lot of these golden pixels says give me" }, { "end": 1391.04, "start": 1386.76, "text": " more of that of those golden stuff that's also regionally right next to" }, { "end": 1395.6000000000001, "start": 1391.04, "text": " that and since these two compete I'm pretty sure slot two would also ask for" }, { "end": 1400.88, "start": 1395.6000000000001, "text": " more of the golden pixels because it has a lot of 
golden pixels but it competes" }, { "end": 1405.72, "start": 1400.88, "text": " with slot four because of the softmax so all of the golden pixels are assigned to" }, { "end": 1411.3400000000001, "start": 1405.72, "text": " slot four and not slot two well all of the blue pixels that slot four surely" }, { "end": 1416.76, "start": 1411.34, "text": " asks for as well are assigned to slot two in the next iteration so I actually" }, { "end": 1423, "start": 1416.76, "text": " consider iteration one is for you take the randomly initialized slots and you" }, { "end": 1429, "start": 1423, "text": " kind of assign them stuff so this is mainly the this is now mainly the" }, { "end": 1435.52, "start": 1429, "text": " transformer layer learning to segment but then step two is where the magic" }, { "end": 1440.52, "start": 1435.52, "text": " really happens is where the slots they kind of realize what's assigned to them" }, { "end": 1445.36, "start": 1440.52, "text": " and they ask for more of it and through the competition you'll get this" }, { "end": 1450.56, "start": 1445.36, "text": " separation into objects right so the whole thing is trained end to end which" }, { "end": 1454.24, "start": 1450.56, "text": " basically means that these functions get really good at doing this kind of" }, { "end": 1458.68, "start": 1454.24, "text": " segmentation alright and then in subsequent iterations you can just see" }, { "end": 1466.12, "start": 1458.68, "text": " this effect multiplying even more and more right but I might even you might" }, { "end": 1470.4399999999998, "start": 1466.12, "text": " even be able to think that you might want to separate step one and the" }, { "end": 1474.9599999999998, "start": 1470.4399999999998, "text": " subsequent steps because step one is sort of seems fundamentally different" }, { "end": 1479.6, "start": 1474.9599999999998, "text": " from steps two three four and so on because step one is this kind of" }, { "end": 1484.2399999999998, "start": 1479.6, "text": " assignment process and then the other steps are refinement so if I were to" }, { "end": 1490.7199999999998, "start": 1484.2399999999998, "text": " take this model and make it better I would try to have a special like not" }, { "end": 1497.68, "start": 1490.72, "text": " way sharing between steps the first step and the subsequent steps but what do I" }, { "end": 1503.1200000000001, "start": 1497.68, "text": " know this apparently it works okay you can also look at the reconstruction" }, { "end": 1510.08, "start": 1503.1200000000001, "text": " since their objective is to reconstruct so basically what each slot outputs each" }, { "end": 1515.1200000000001, "start": 1510.08, "text": " slot out if you reconstruct each slot here we these are the different slots" }, { "end": 1520.52, "start": 1515.1200000000001, "text": " each slot is supposed to output a picture of the reconstruction now if we" }, { "end": 1525.08, "start": 1520.52, "text": " consider that each slot is responsible for an object you might very well say" }, { "end": 1529.56, "start": 1525.08, "text": " okay this slot here gives me a picture with just the object in it that it's" }, { "end": 1534.08, "start": 1529.56, "text": " supposed to reconstruct and then this lot here gives me a picture with just" }, { "end": 1537.92, "start": 1534.08, "text": " the object that it is supposed to reconstruct now how do you know how to" }, { "end": 1545, "start": 1537.92, "text": " combine these pictures especially since they might be overlapping and so on so" }, { "end": 
1552.16, "start": 1545, "text": " the way you do it is you actually output four channels so you output R G B and A" }, { "end": 1558.56, "start": 1552.16, "text": " so a being the alpha channel so each slot also has to decide where the object" }, { "end": 1565.72, "start": 1558.56, "text": " is that it reconstructs and so each so this this here might be okay everything" }, { "end": 1572, "start": 1565.72, "text": " here is alpha one and including the shadow maybe maybe there's a little" }, { "end": 1579.36, "start": 1572, "text": " shadow and everything else is alpha zero and then the alpha maps you combine also" }, { "end": 1584.76, "start": 1579.36, "text": " via a softmax to ensure that they sum up to one so you combine the pictures" }, { "end": 1589.84, "start": 1584.76, "text": " including their alpha maps but that means you can basically reconstruct from" }, { "end": 1598.84, "start": 1589.84, "text": " the slots where the where the objects are now you'll know you'll notice this" }, { "end": 1603.56, "start": 1598.84, "text": " is this thing here you'll notice that they often use for example here four" }, { "end": 1608.1999999999998, "start": 1603.56, "text": " different slots because even though the image has three different objects why is" }, { "end": 1614.52, "start": 1608.1999999999998, "text": " that because you need to reconstruct the entire image so you need at least one" }, { "end": 1620.1999999999998, "start": 1614.52, "text": " slot for the background and that's always what you can see right here so if" }, { "end": 1626.6399999999999, "start": 1620.1999999999998, "text": " you have the sorry the reconstruction you'll see that slot too with time with" }, { "end": 1631.4, "start": 1626.64, "text": " iterations it reconstructs this cube slot three reconstructs the ball slot" }, { "end": 1636.76, "start": 1631.4, "text": " four reconstructs the yellow cylinder and slot one reconstructs the background" }, { "end": 1647.64, "start": 1636.76, "text": " okay also here if you see the attention masks you see that the slot one will be" }, { "end": 1651.64, "start": 1647.64, "text": " responsible for the background here the background is significantly darker than" }, { "end": 1656.0400000000002, "start": 1651.64, "text": " in these others though they do say the background doesn't really tend to go to" }, { "end": 1660.56, "start": 1656.04, "text": " one slot in particular it tends to kind of spread out across all the slots and" }, { "end": 1667.72, "start": 1660.56, "text": " this might mean more investigation yeah so they have these different tasks right" }, { "end": 1673.36, "start": 1667.72, "text": " here for example to segment these Tetris blocks here and you can see the" }, { "end": 1680.6399999999999, "start": 1673.36, "text": " segmentations it works pretty pretty well now why does this work so well it's" }, { "end": 1686.5600000000002, "start": 1680.64, "text": " probably because of the data sets so these kinds of data sets they come you" }, { "end": 1690.1200000000001, "start": 1686.5600000000002, "text": " know they they're produced by a generator and the generator specifically" }, { "end": 1696.0400000000002, "start": 1690.1200000000001, "text": " has these these objects right here and it sort of in it arranges them in an" }, { "end": 1699.8400000000001, "start": 1696.0400000000002, "text": " independent fashion the background is really clean right the objects" }, { "end": 1703.8400000000001, "start": 1699.8400000000001, "text": " themselves are really clean and geometric 
and so on and they're they're" }, { "end": 1710.2, "start": 1703.8400000000001, "text": " kind of arranged in a random fashion and then there's a render of that so this is" }, { "end": 1715.48, "start": 1710.2, "text": " like super super duper clean data set and I guess that has a lot to do with" }, { "end": 1721, "start": 1715.48, "text": " why these methods work so well because they can just assume okay an object is" }, { "end": 1725.92, "start": 1721, "text": " generally you know spatially something some geometric shape that I know it's" }, { "end": 1730.2, "start": 1725.92, "text": " close together it's pretty independent from its surrounding and it's trained" }, { "end": 1734.48, "start": 1730.2, "text": " with objects that are almost zero correlated like there is zero" }, { "end": 1740.32, "start": 1734.48, "text": " correlation between the objects in the training data set so I wouldn't yet" }, { "end": 1747.4, "start": 1740.32, "text": " apply this much to to real-world problems but is an interesting thought" }, { "end": 1754.4, "start": 1747.4, "text": " right here so that's the sort of idea behind the paper I hope you got that" }, { "end": 1763.44, "start": 1754.4, "text": " they do a lot of experiments and here is a bit where my quarrels start so they" }, { "end": 1772.0800000000002, "start": 1763.44, "text": " say that they compare for example with the with these in the unsupervised" }, { "end": 1777.3200000000002, "start": 1772.0800000000002, "text": " object discovery experiments they have this data set called clever and this" }, { "end": 1784.24, "start": 1777.3200000000002, "text": " data set has these images with sort of I believe clever six has all up to six" }, { "end": 1788.44, "start": 1784.24, "text": " different objects now this is already one of the things this is not a" }, { "end": 1793.2, "start": 1788.44, "text": " specifically quarrel with this model but if your data set has six things they all" }, { "end": 1799.02, "start": 1793.2, "text": " they give like they give seven slots because they know that the data set has" }, { "end": 1803.76, "start": 1799.02, "text": " at most six things which means they can always cover all the things now it works" }, { "end": 1807.8, "start": 1803.76, "text": " when there's less objects but I think the knowledge of how many objects" }, { "end": 1813.28, "start": 1807.8, "text": " there's gonna be is also a big part of why these models work and why maybe it's" }, { "end": 1821.28, "start": 1813.28, "text": " not entirely ready yet for the real world but anyway they compare to these" }, { "end": 1827.3799999999999, "start": 1821.28, "text": " two baselines they're called iodine which also employs kind of a recurrent" }, { "end": 1835.68, "start": 1827.3799999999999, "text": " architecture but not with an attention mechanism and Monet and they say" }, { "end": 1842.82, "start": 1836.24, "text": " yada yada yada this replaces lot of them no that's not it for the Monet iodine" }, { "end": 1848, "start": 1842.82, "text": " DSP and baselines we compare with the published numbers as we use the same" }, { "end": 1853.04, "start": 1848, "text": " experimental setup so they say they use the same experimental setup and that's" }, { "end": 1858.08, "start": 1853.04, "text": " why they don't re-implement these models but they use the published numbers in" }, { "end": 1864.2, "start": 1858.08, "text": " their respective papers which is something you can do this is often I" }, { "end": 1869.04, "start": 1864.2, "text": " guess these machine 
translation papers and so on they do this just because you" }, { "end": 1875.16, "start": 1869.04, "text": " know it's a lot to run these things however here I'm a bit skeptical first" }, { "end": 1880.96, "start": 1875.16, "text": " because it is Google so they do have a lot of resources available to" }, { "end": 1887.64, "start": 1880.96, "text": " technically run these things I've seen at least MONet has an implementation by" }, { "end": 1891.68, "start": 1887.64, "text": " the author or I've seen one of them the other one also has an implementation" }, { "end": 1899.32, "start": 1891.68, "text": " and there's eight authors on this particular paper so yeah this would" }, { "end": 1903.84, "start": 1899.32, "text": " be okay they say as we use the same experimental setup so even in that case" }, { "end": 1910.08, "start": 1903.84, "text": " if you have the same setup it's more okay but it really depends on you" }, { "end": 1918.64, "start": 1910.08, "text": " really having the same setup and this is a bit where it kind of falls so for" }, { "end": 1924.76, "start": 1918.64, "text": " example one example right here is they say we train the model using the Adam" }, { "end": 1930, "start": 1924.76, "text": " optimizer with a learning rate and so on they use a single GPU we further make" }, { "end": 1934.1, "start": 1930, "text": " use of learning rate warm-up to prevent early saturation of the attention" }, { "end": 1939.88, "start": 1934.1, "text": " mechanism and an exponential decay schedule in the learning rate which we" }, { "end": 1944.72, "start": 1939.88, "text": " found to reduce variance so I've checked these other models and none of them" }, { "end": 1948.36, "start": 1944.72, "text": " talks about learning rate warm-up and nowhere in their code is there a" }, { "end": 1955.52, "start": 1948.36, "text": " learning rate warm-up now you might argue okay this is specific to" }, { "end": 1959.36, "start": 1955.52, "text": " this model it might need this but if you look at the results right here for" }, { "end": 1964.56, "start": 1959.36, "text": " example you see that they don't outperform these other models by too" }, { "end": 1970.92, "start": 1964.56, "text": " much so you can see right here this is on par this here outperforms this one a" }, { "end": 1977.44, "start": 1970.92, "text": " little bit but then also the star here" }, { "end": 1982.4, "start": 1977.44, "text": " denotes that one outlier was excluded from evaluation I guess which is valid" }, { "end": 1990.84, "start": 1982.4, "text": " if it's a super outlier but in this case I would categorize this model as a" }, { "end": 1997.52, "start": 1990.84, "text": " different way of doing things and not necessarily outperforming the others" }, { "end": 2002.56, "start": 1997.52, "text": " so also if you look at the ablations the differences here are" }, { "end": 2007.92, "start": 2002.56, "text": " minuscule and in these ablations that they show every single thing they do" }, { "end": 2013.88, "start": 2007.92, "text": " like gives them a little bit of a boost and you just make it kind of across the" }, { "end": 2018.44, "start": 2013.88, "text": " line to reach state-of-the-art I'd rather have research move in a direction" }, { "end": 2021.92, "start": 2018.44, "text": " where we 
just show cool ideas and that they work and that's what this paper" }, { "end": 2029.28, "start": 2021.92, "text": " does to be fair what I do have more of a problem with a little bit is this" }, { "end": 2036.16, "start": 2029.28, "text": " here on CLEVR6 we can use a batch size of up to 64 on a single V100 GPU" }, { "end": 2041.28, "start": 2036.16, "text": " as opposed to four in this IODINE baseline right compared to IODINE our" }, { "end": 2045, "start": 2041.28, "text": " model is significantly more efficient in terms of both memory consumption and" }, { "end": 2051.28, "start": 2045, "text": " runtime which is you know something I believe but this characterization that" }, { "end": 2057.48, "start": 2051.28, "text": " they use a batch size of four and here in this paper they can use a size up to" }, { "end": 2064.72, "start": 2057.48, "text": " 64 on a single V100 GPU I've read the IODINE paper and the IODINE paper says" }, { "end": 2071.56, "start": 2064.72, "text": " yes they do use a batch size of four on one GPU so they also use one GPU but they" }, { "end": 2080.44, "start": 2071.56, "text": " say their GPU has a RAM of 12 gigabytes and 12 gigabyte RAM GPUs that points to" }, { "end": 2087.56, "start": 2080.44, "text": " something like a Ti I guess a 1080 Ti or a 2080 Ti or something like this this" }, { "end": 2094.24, "start": 2087.56, "text": " is not a V100 the V100 comes in 16 or probably Google has the 32 gigabyte" }, { "end": 2102.64, "start": 2094.24, "text": " version so this is a 32 gigabyte GPU that is significantly better than the Ti" }, { "end": 2110.24, "start": 2102.64, "text": " GPUs these V100s they cost like five or ten times more than the Ti GPUs and to" }, { "end": 2115.88, "start": 2110.24, "text": " simply say we have also one GPU and we can run up to a batch size of 64 and" }, { "end": 2121.72, "start": 2115.88, "text": " they can only run a batch size of four it seems I don't know it seems sort of" }, { "end": 2126.32, "start": 2121.72, "text": " overstating what you can do now maybe I'm wrong maybe they have actually tested" }, { "end": 2131.16, "start": 2126.32, "text": " this other model and concluded also on their GPUs it can only run a batch" }, { "end": 2136.16, "start": 2131.16, "text": " size of four but I highly doubt it because" }, { "end": 2140.84, "start": 2136.16, "text": " here the paper is cited and in their paper they explicitly note that they" }, { "end": 2151.2, "start": 2140.84, "text": " use a batch size of four for their 12 gigabyte GPUs so yeah this" }, { "end": 2155.72, "start": 2151.2, "text": " just kind of pulls through so there's the minuscule" }, { "end": 2160.04, "start": 2155.72, "text": " improvements and then there is the ablations of all these tricks where" }, { "end": 2164.28, "start": 2160.04, "text": " everyone just gives you a little bit and then there is this kind of very" }, { "end": 2174.72, "start": 2164.28, "text": " favorable comparison wordsmithing which gives a bit of a bitter taste" }, { "end": 2179.76, "start": 2174.72, "text": " to what I think is actually a very cool method because why is this" }, { "end": 2186.44, "start": 2179.76, "text": " method so cool because for example these slots here they are trained to also" }, { "end": 2192.28, "start": 2186.44, "text": " 
absorb the background right so you can technically at inference time increase" }, { "end": 2197.48, "start": 2192.28, "text": " the number of slots even though you've trained with just a few slots right" }, { "end": 2202.4, "start": 2197.48, "text": " you can increase the number of slots and the model can just handle it" }, { "end": 2212.8, "start": 2202.4, "text": " and they show it right here in these results here this data set has" }, { "end": 2216.8, "start": 2212.8, "text": " six objects and this data set has ten objects now the model has only been" }, { "end": 2221, "start": 2216.8, "text": " trained ever on six objects and they can just up the number of slots at inference" }, { "end": 2226.76, "start": 2221, "text": " time and it'll work also very well also they can now up the number of iterations" }, { "end": 2230.12, "start": 2226.76, "text": " since these are all weight shared these iterations right we've looked at it" }, { "end": 2236.44, "start": 2230.12, "text": " there's weight sharing between the iterations there is nothing stopping" }, { "end": 2240.44, "start": 2236.44, "text": " you from just piling on here because it's weight shared you don't need any" }, { "end": 2244.88, "start": 2240.44, "text": " more weights you can just refine this iteration and since the iterations" }, { "end": 2250.28, "start": 2244.88, "text": " themselves are refining these attention masks anyway you might as well at" }, { "end": 2254.24, "start": 2250.28, "text": " inference time refine them some more they have an ablation where they show" }, { "end": 2259.04, "start": 2254.24, "text": " that technically two or three iterations at training time gives them" }, { "end": 2263.76, "start": 2259.04, "text": " the best result I guess just because of gradient propagation because more layers" }, { "end": 2268.48, "start": 2263.76, "text": " means you have to propagate the gradient back more but at inference time you can" }, { "end": 2271.8, "start": 2268.48, "text": " just up these iterations and as you can see right here you get better and better" }, { "end": 2278.2, "start": 2271.8, "text": " so these results are pretty cool and they respect the property that" }, { "end": 2284.48, "start": 2278.2, "text": " sets should be permutation invariant and so on this routing view of the" }, { "end": 2288.6, "start": 2284.48, "text": " transformer is pretty cool even though you can look at it as a" }, { "end": 2292.92, "start": 2288.6, "text": " transformer with weight sharing or an iterative routing protocol like in" }, { "end": 2299.68, "start": 2292.92, "text": " capsules so all of this I find to be a very cool idea and I think that's" }, { "end": 2306.12, "start": 2299.68, "text": " how we should look at this paper so before I am too critical of this paper I" }, { "end": 2312.8, "start": 2306.12, "text": " want to say that I really like the idea and the algorithm here and the" }, { "end": 2319.64, "start": 2312.8, "text": " implementation yeah so that was the paper at last I actually want to look at" }, { "end": 2323.96, "start": 2319.64, "text": " the broader impact statement just because I've complained about" }, { "end": 2328.36, "start": 2323.96, "text": " the need for broader impact statements so I just want to kind of go" }, { "end": 2333.88, "start": 2328.36, "text": " read them and just look at how the different companies how" }, { 
"end": 2337.88, "start": 2333.88, "text": " different institutions how the different people how the community reacts to them" }, { "end": 2342.4, "start": 2337.88, "text": " crafts them and so on so this one I find particularly interesting" }, { "end": 2346.56, "start": 2342.4, "text": " let's go through it says the slot detention module allows to learn object" }, { "end": 2351.76, "start": 2346.56, "text": " centric representation from perceptual input okay as such it is a general module" }, { "end": 2356.56, "start": 2351.76, "text": " that can be used in a wide range of domains and applications in our paper we" }, { "end": 2360.1600000000003, "start": 2356.56, "text": " only consider artificially generated data set under well controlled settings" }, { "end": 2364.7200000000003, "start": 2360.1600000000003, "text": " where slots are expected to specialize to objects however the specializations" }, { "end": 2369, "start": 2364.7200000000003, "text": " of our model is implicit and fully driven by the downstream task we remark" }, { "end": 2373.4, "start": 2369, "text": " that as a concrete measure to assess whether the module is specialized in" }, { "end": 2377.28, "start": 2373.4, "text": " unwanted ways one can visualize the attention masks to understand how the" }, { "end": 2382.68, "start": 2377.28, "text": " input features are distributed across the slots while more work is required to" }, { "end": 2387.36, "start": 2382.68, "text": " properly address the usefulness of the attention coefficients in explaining the" }, { "end": 2391.48, "start": 2387.36, "text": " overall predictions of the network especially if the input features are not" }, { "end": 2395.56, "start": 2391.48, "text": " human interpretable we argued that they may serve as a step towards more" }, { "end": 2400.24, "start": 2395.56, "text": " transparent and interpretable predictions this is a I mean it's a fine" }, { "end": 2404.68, "start": 2400.24, "text": " statement but it's not a broader impact statement right if you followed a bit" }, { "end": 2409.2, "start": 2404.68, "text": " what the broader impact statement is supposed to be this is not one okay the" }, { "end": 2413.32, "start": 2409.2, "text": " closest this comes to a broader impact statement is said as such it is a" }, { "end": 2417, "start": 2413.32, "text": " general model that can be used in a wide range of domains and applications and" }, { "end": 2421.88, "start": 2417, "text": " maybe a little bit that you can visualize the attention masks to" }, { "end": 2425.88, "start": 2421.88, "text": " understand how the input features are distributed but the broader impact" }, { "end": 2431.28, "start": 2425.88, "text": " statement is supposed to give you a preview of how this might affect society" }, { "end": 2436.6, "start": 2431.28, "text": " at large while this here just kind of lists properties of the model for the" }, { "end": 2443.1600000000003, "start": 2436.6, "text": " research community and sort of for this for the application of this model as as" }, { "end": 2448.92, "start": 2443.1600000000003, "text": " you know the introspection of the model itself this says nothing about society" }, { "end": 2455.76, "start": 2448.92, "text": " as such so maybe you know maybe that's I think that the smarter people will turn" }, { "end": 2460.52, "start": 2455.76, "text": " the broader impact statement into more of an introduction section because" }, { "end": 2465.04, "start": 2460.52, "text": " that's something you usually put in a conclusion or in an 
introduction where" }, { "end": 2468.88, "start": 2465.04, "text": " you say look here are some things our model can do and this is what it might" }, { "end": 2475.04, "start": 2468.88, "text": " be useful for and this is how you could introspect it and so on and since the" }, { "end": 2479.36, "start": 2475.04, "text": " broader impact statement especially at NURIPS you were allowed to put the" }, { "end": 2483.72, "start": 2479.36, "text": " broader impact statement on the main paper so not in the appendix but it" }, { "end": 2488.88, "start": 2483.72, "text": " wouldn't count towards your page limit it's I guess pretty foreseeable without" }, { "end": 2494.16, "start": 2488.88, "text": " what people are gonna start to do is simply put more of their paper into the" }, { "end": 2500.24, "start": 2494.16, "text": " broader impact section kind of cloaked in the veneer of a broader impact" }, { "end": 2507.64, "start": 2500.24, "text": " statement but I this is this is clearly not what what the broader impact" }, { "end": 2511.9199999999996, "start": 2507.64, "text": " statement was originally supposed to be now I don't know if this is good or bad" }, { "end": 2517.2, "start": 2511.9199999999996, "text": " I just think these authors are you know they're doing I think the a good thing" }, { "end": 2522.24, "start": 2517.2, "text": " here by simply telling us actually something useful about the model but" }, { "end": 2528.64, "start": 2522.24, "text": " that's just my opinion I do thank you for being with me here I know this was a" }, { "end": 2532.72, "start": 2528.64, "text": " bit ranty flip-flopping back and forth between the different things we haven't" }, { "end": 2538.2799999999997, "start": 2532.72, "text": " looked at set prediction at all we've only looked at these kind of masks but" }, { "end": 2542.72, "start": 2538.2799999999997, "text": " I invite you to go through the paper yourself and check it out it's pretty" }, { "end": 2548.64, "start": 2542.72, "text": " cool and they do describe a lot of things in pretty detail the appendix is" }, { "end": 2554.3599999999997, "start": 2548.64, "text": " very long and has very many ablations and this is something I do appreciate and" }, { "end": 2559.48, "start": 2554.36, "text": " with that bye bye and see you next time" } ]
V79rRI05Lj4
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Set Distribution Networks: a Generative Model for Sets of Images (Paper Explained)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "sets", "images", "cnn", "convolutional neural network", "gan", "generator", "encoder", "discriminator", "prior", "mean", "made", "latent", "binary", "conditional", "noise", "distribution", "probability", "energy-based", "energy", "apple", "research", "sdn", "variational", "elbo" ]
We've become very good at making generative models for images and classes of images, but not yet of sets of images, especially when the number of sets is unknown and can contain sets that have never been encountered during training. This paper builds a probabilistic framework and a practical implementation of a generative model for sets of images based on variational methods. OUTLINE: 0:00 - Intro & Overview 1:25 - Problem Statement 8:05 - Architecture Overview 20:05 - Probabilistic Model 33:50 - Likelihood Function 40:30 - Model Architectures 44:20 - Loss Function & Optimization 47:30 - Results 58:45 - Conclusion Paper: https://arxiv.org/abs/2006.10705 Abstract: Images with shared characteristics naturally form sets. For example, in a face verification benchmark, images of the same identity form sets. For generative models, the standard way of dealing with sets is to represent each as a one hot vector, and learn a conditional generative model p(x|y). This representation assumes that the number of sets is limited and known, such that the distribution over sets reduces to a simple multinomial distribution. In contrast, we study a more generic problem where the number of sets is large and unknown. We introduce Set Distribution Networks (SDNs), a novel framework that learns to autoencode and freely generate sets. We achieve this by jointly learning a set encoder, set discriminator, set generator, and set prior. We show that SDNs are able to reconstruct image sets that preserve salient attributes of the inputs in our benchmark datasets, and are also able to generate novel objects/identities. We examine the sets generated by SDN with a pre-trained 3D reconstruction network and a face verification network, respectively, as a novel way to evaluate the quality of generated sets of images. Authors: Shuangfei Zhai, Walter Talbott, Miguel Angel Bautista, Carlos Guestrin, Josh M. Susskind Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://discord.gg/4H8xxDF BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Hi there, today we're looking at Set Distribution Networks, a generative model for sets of images, by Shuangfei Zhai, Walter Talbott, Miguel Angel Bautista, Carlos Guestrin and Josh M. Susskind of Apple. So this paper introduces a generative model for sets, and it does so in an energy-based model fashion. It will have an encoder, a decoder in the form of a generator, it will have a discriminator, and it will have all kinds of math, but the end result is a model that can generate sets of images. By sets we mean it can generate different kinds of views on the same image identity, and you'll see what that means, and it can even generate sets that it has never seen before, which makes it different from a class conditional GAN or something like this. So I can't really describe it on a high level in a very concise fashion; you'll just have to stick around and see what's going on right here. If you like content like this, feel also free to share it out and leave it a like, and tell me in the comments what you like. This is going to be a fairly math heavy paper and I'll try my best to kind of distill it down to what's happening, because ultimately it's not that difficult. Alright, so if you have a look at these samples right here, these are examples of sets of images. The top and bottom rows have some meaning here: the top row is always a row from the actual data set, and the bottom row is the reconstruction of that set. Now you'll see that the images don't really have a correspondence, so it's the same truck in the top and the bottom row, but the orientation here isn't really shared or anything, and that's because, as we said, this is a set network. So what you want to do in this problem setting is build a model that can take this set right here from the data set and encode it into a latent description that we call z; z simply describes the set as a whole. So z here would be, sorry, would be "truck", right? It would sort of be the 3D model, so not the class "truck", but the 3D information of the truck, without having any information about the different views. And then you want to build another model that, from this low-level representation of the set, can generate the different views, like each one of these, by sort of rotating it. So we want to build a model that understands, just from the pixels, that here we have sets of things that always share a commonality; in this case they always share their 3D structure. What they don't share is the view they're rendered from. So our model is supposed to kind of parse the two apart: encode that 3D structure, just from the pixels, in this z variable, and encode the fact that you can rotate it and look at it from different views into the generative model that then produces the different views. And the reason why there is no correspondence between the views is that we simply regard these things as sets. So our final objective is simply going to be that the set on top, which is different views of that particular truck, is very similar to the set on the bottom, which is also different views of that particular truck; that's what our model is supposed to do. Now you might know something like this and say: well, this looks like a class conditional GAN, right?
I simply have the class "truck", I feed this to my generator and my discriminator and my encoder and so on, and it will produce the same truck, and here the bench, and so on. The problem, I guess, becomes more apparent when you go to this different data set. This is a face data set, again with the top row being the input and the bottom row being the output. Now, what is supposed to be preserved, you can kind of see in the images, is sort of the identity of the person in the photo. You have to kind of gloss over your human bias here: you as a human can tell extremely tiny differences between faces, and therefore none of the actual identities are going to be preserved. So this on the top, I believe, is Ali, and on the bottom here it's not Ali. You'll have to sort of gloss over this, and you'll see that what is preserved, or what is supposed to be preserved, is something like the rough identity of the person in the picture, and also a little bit of the image composition. So you see here in the background you often have sort of these sports backgrounds, where there's kind of a washed-out stadium or whatnot, and you can see that this is also preserved here. I think this is kind of a glamour shot, this must be some sort of glamour model, and you'll see that this as well is preserved. What is different within each set is, of course, the different views on the same identity: you have the same person, and you have pictures of them in different views, different lighting, different hairstyles and so on. Here you even have a black and white image, and the set that is produced also contains some black and white images. So this model, and this is already the trained model here, is doing a fairly good job. Here you can see that almost all the pictures have some sort of doubling, like two people with one being sort of half in frame, and it leads to a fairly strange output; I'm gonna guess the model hasn't seen lots of that during training. I like it, particularly this one; this is pretty good. Also, we know that a lot of these face data sets don't really have bald people all too often, so you can see that this results in sort of a weird Richard Branson type picture. In any case it does a fairly good job, as you can see right here. And what's the problem with these faces? Can't we just do a class conditional GAN, where we basically say: here, this is Ali, and that's one class in our latent vector, and so on? What we want to do is train something where we can feed in a new identity that we haven't seen during training. In fact, we want to train something where we don't even know how many sets there are going to be in the end. We simply want to train it in a way where we say: look, I'm going to give you a set, and the set will have images of the same person, and you're going to sort of reconstruct that in a way where you output a set of images of that person, conserving the identity of the person and the rough style of the pictures. So this is really different from a class conditional GAN, in that we don't know how many classes there are, and there can be new, unseen ones during testing. So how do we go about something like this? And here is where the thing starts. We're going to dive into a bit of the math here and then into a bit of the reasoning, but ultimately what they're going to do is build three things: an encoder, a discriminator, and a generator.
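Before we go through them one by one, here is a rough shape-level summary of those three components as I understand them; this is my own sketch with made-up names, not the authors' code:

```python
# Rough component signatures (illustration only, names are hypothetical):
#   encoder(x_set)          : set of n images (n, C, H, W) -> binary set code z in {-1, +1}^d
#   generator(z, psi_1..m)  : set code plus m noise vectors -> m images of that identity
#   discriminator(x_set, z) : candidate set plus set code   -> scalar energy
#                             (low energy = "this looks like a real set for identity z")
```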
So what does the encoder do? And remember, our task here is going to be, the way we train it is going to be, by reconstruction, at least one way we train it. So the encoder is going to take this set of images that we give it, for example here different views of this car, and it is going to produce this set representation. Now what should a property of the set representation be? For example, it should be independent of the ordering of the inputs. A set is simply a collection of objects; it's independent of the ordering, and it's also independent of the size, in some way. Of course a bigger set gives you more information, but the set identity, the fact that this is this particular car, is independent of how many views you have. So what they do is they put each image through an encoder, which is a convolutional neural network, and I guess that gives them a hidden representation, an encoding, an embedding for each image, and then they have this operation called "pool and binarize". This does two things. First of all, it pools: it simply averages these things, so it computes 1/n times the sum of C(x_i), the image encodings. This simply averages the encodings right here, and this already fulfills our property: the average is independent of the order of the images, and also I can add more and I can add fewer, and in expectation the average will result in the same thing. So with this we have basically lost the information about the exact ordering and so on; this is simply an average of these images, which is a good representation for a set. If we give enough images, this will be independent of the particular rendering position, and will only depend on the fact that this is that particular car, if it's trained well, of course. And then the second thing is "binarize". And here you have to understand what exactly this set latent representation is: how do we encode a set in latent space? As far as I understand it, what they do is the following. Since they don't know how many sets there are, they can't simply do the classic one-hot vector. So what you would do in a class conditional GAN is you would say: I have a vector, and maybe I have ten classes, so I'll make ten entries right here (is that ten? I don't know). So if I have C classes, I'll make C entries, and I'll put a zero in all of them and a one where my class is, or something like this. This would be a valid encoding for a class conditional GAN to represent the identity of the class. Here, however, no: we don't know the number of classes. And also we can't really make this continuous, because if you make this continuous, you wouldn't really encode the identity of the set; you would encode more of a continuous latent space, and then that becomes kind of different when you have new sets and so on. So what they really want to do is make this representation here be a description of the set itself, but not a one-hot. So what do they do? They do the same thing: they have a vector, but not of size C, because they don't know C, but of some dimensionality d. This can be 10, this can be 4, this can be whatever; let's say it's 10 again, but we don't know how many classes there are. What the model can do is encode each class as a binary vector, a binary combination of negative ones and ones. So it can put, like, a negative one here, a one, negative one, negative one, one, one, negative one, and so on. So what does that give you? Now you can encode much more than ten classes.
In fact, with this you can encode 2 to the 10 classes. So it's not that they can encode an unlimited number of set identities, but in this manner they can encode a lot: they can encode this many sets using a representation like this. So this binarize operation will take this output right here and basically clamp it to either one or negative one. So the set will be encoded by a binary vector like this, and then the generator and the discriminator take that information. We'll go over what this architectural choice means, but right now, see that this is a way to encode a large number of set identities in a low-dimensional vector. So there are two things left: the discriminator and the generator. First of all, the generator is pretty easy. The generator's task is to take this z that we just saw, which is the set identity, and to generate different instances of that set. For that it needs this noise here. If you know a generator from a GAN, it always kind of needs input noise in order to produce different outputs, and that's this thing on the right here, these z-primes. They simply come from some sort of latent distribution; I think they call this p of psi, which is a uniform or a Gaussian or something. You just sample some noise, and you combine it with this thing right here, which is the set identity: you concatenate them, and then each one of these different z-primes will produce one different view right here. So the generator's task is simply to take the set identity, combine it with some noise, and produce some views of that set. Now the discriminator's task right here is going to be to decide: it's going to get a set of pictures, and it's going to have to decide whether this set is coming from the generator or coming from the data set. Now you can't simply compare images to each other, like you would do in a regular GAN, because they don't correspond to each other. But what should correspond to each other is this identity, this z identity. So the discriminator is going to take two inputs as well: it's going to take this set right here, say from the data set, and it's going to take this z right here. This z is going to be the set identity, and that you get from the encoder, the same as you get right here: which set you should produce, and the same goes here. So the set discriminator knows, for example: I'm trying to produce that particular car, and it gets a set of images that is supposedly of that particular car, and needs to decide whether it comes from the data set or from the generator. It uses the same encoder pipeline here, just a CNN giving you a latent representation, and then the discriminator has two different tasks. First of all, this here is the regular GAN path: there's an MLP, simply a pipeline that outputs a number, and the number says whether this comes from the data set or from the generator. But then there is an additional pipeline that they have found to be vital to train the objective, which is a reconstruction pipeline. So this is more like a sort of autoencoder pipeline, where they have a decoder, they try to reconstruct the input set, and they then compare it to the input using mean squared error. Note that here they try to reconstruct the input set really picture by picture, not as a set, so that's different from the set generator. This pipeline is just there to stabilize the training, but it also goes into the output of the discriminator.
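To make the pool-and-binarize encoder and the noise-concatenating generator concrete, here is a minimal PyTorch sketch. This is my own illustration under stated assumptions, not the paper's code: the CNN and the decoder bodies are tiny stand-ins, the latent sizes are made up, and the binarization uses a straight-through estimator (which, as we'll see in the architecture section, is what the paper does as well).

```python
import torch
import torch.nn as nn

class BinarizeSTE(torch.autograd.Function):
    """Forward: clamp to {-1, +1}. Backward: pass the gradient straight through."""
    @staticmethod
    def forward(ctx, x):
        return torch.sign(x)  # note: sign(0) is 0; fine for a sketch

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output    # backprop as if no clamping had happened

class SetEncoder(nn.Module):
    """Encodes a whole set of images into one binary identity code z."""
    def __init__(self, d=64):
        super().__init__()
        self.cnn = nn.Sequential(  # stand-in per-image encoder
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, d),
        )

    def forward(self, x_set):              # x_set: (n, 3, H, W), any n, any order
        h = self.cnn(x_set)                # (n, d) per-image embeddings
        pooled = h.mean(dim=0)             # average pool: permutation/size invariant
        return BinarizeSTE.apply(pooled)   # (d,) vector of -1 / +1 entries

class SetGenerator(nn.Module):
    """Produces m views of identity z by pairing z with fresh noise each time."""
    def __init__(self, d=64, noise_dim=32):
        super().__init__()
        self.noise_dim = noise_dim
        self.net = nn.Sequential(          # stand-in for a proper image decoder
            nn.Linear(d + noise_dim, 256), nn.ReLU(),
            nn.Linear(256, 3 * 32 * 32), nn.Tanh(),
        )

    def forward(self, z, m):
        psi = torch.randn(m, self.noise_dim)                        # one noise per view
        zn = torch.cat([z.unsqueeze(0).expand(m, -1), psi], dim=1)  # identity + noise
        return self.net(zn).view(m, 3, 32, 32)
```

Usage would be something like `z = SetEncoder()(x_set)` followed by `views = SetGenerator()(z, m=8)`; the straight-through trick is what lets gradients reach the encoder despite the hard clamping.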
So, sort of, the discriminator is happier the more it can reconstruct the images, which seems kind of weird at the beginning, but they say it has helped in other GANs; I'm not super familiar with the GAN literature, but it's just another objective that you can add. So this is going to be the overview right here. If everything works well, we should be able to take a set X from the data set, which is going to be different images from the same person; we should be able to feed that to the encoder and get a latent representation z for that set that somehow encodes the identity of the person. And if the encoder works really well, we don't have to have seen that person before; it will simply somehow encode the identity in that binary vector. Then we feed that to the generator, together with some noise, and we'll get out a set of pictures of different views of that same person, or a person with a similar identity, and pictures with a similar kind of picture style. And if our discriminator works well, those will look very similar to, or really be, images of that same person, and our discriminator, if we plug that set in right here and plug in the z right here, will agree. Okay, that's the overview; now the math. They go about this in a sort of probabilistic framework. What they say is: we denote an image set of size n as X, and it comes from the space of sets of images. So capital X right here is going to be a set of images, as you see here, and this here is the space of all sets of images. What they want to do is build a probabilistic model of that: a model where you can input a set and it'll tell you how likely that set is. Now you don't actually have to have a number as an output right here; what they often do is start with a formulation like this, and what they end up with is simply a model that allows you to sample from this distribution, from which you can estimate the probability. But ultimately what we want is a generator that can sample from this. So how do they build it? They decompose this into two parts, and this is just a standard decomposition of probability, where you say: the probability of a set X is the probability of the latent code of that set, times the conditional probability of that set given the latent code. So we ask ourselves: what's the probability of X? If I look at X, it has these different images of whatever is on the image. I can first ask: what is the probability of that particular thing on the image? And then, conditioned on that, what is the probability that I'll get these particular images? This simple decomposition already kind of builds up this encoding-decoding model, so we'll go through it step by step. This here is going to be a deterministic function, our encoder, and this here is going to be a probabilistic function, the decoder. And it's probabilistic because every time you call it, basically every time you call the generator, it's going to give you a different output, because you're going to feed it different noise at the beginning.
Okay, so this is going to be our encoder, and this is going to be our decoder. They say the encoder is a deterministic function that maps a set X to an element z in a discrete space Z. This is a discrete space, as opposed to maybe a regular autoencoder where you have a continuous space; here we want a discrete space, and a lot of mathematical problems are going to arise from the fact that this Z is discrete and not continuous. Then, here, p is a prior distribution with support given by the set of z vectors that have some sort of set associated with them, which is a subset of all possible z. That basically means that, for a given encoder, not all of these binary vectors are going to be used: even if we plug all the world's faces into our encoder, it might not fill all of the binary combinations that we have available. So this prior is only defined on that support, and here you already kind of see what kind of mathematical hurdles you have to go through if you do something like this; most of the math here is going to deal with the fact that we have this discrete thing, and so on. A little bit of a caveat here also is that this here, they mention, is a prior: you need a prior distribution on your z variables, and this is also not easy. So, really quickly: what does it mean to have a prior distribution on this kind of thing? Usually, in a regular variational autoencoder, your latent code will have a prior on it, and that prior can be some continuous thing like a Gaussian; and even in a regular GAN, as I said, you have your noise distribution and so on. What is a prior on this thing now? You could say, oh, a uniform prior, but again we would like to learn this prior such that it matches the data set well. Now they use a prior from a paper that's called MADE, M-A-D-E, and really quickly, what it does is it sort of decomposes this thing. What you'll have is a neural network that outputs binary vectors like this, and it will sort of output them autoregressively: it will output one entry, and then, conditioned on that, it will output the next, conditioned on that, the next, and so on. This is such that the probability of this binary vector, minus one, one, one, minus one, and so on, is going to be decomposed into the probability that there's a minus one here, times the probability that there is a one here given that there is a minus one here, and so on (roughly, p(z) as a product of p(z_d given the previous entries), possibly in a different order). I don't really want to go into this, but just to show you that there is a lot of mathematical consideration if you really want to go about this sort of thing in a formal fashion. So they define two things: first of all this prior, a prior distribution that they can learn from the data set, and then there is this conditional distribution, what you might call a generator. If you're given a z, one of these binary codes, what's the probability of a given set? So I tell you: here, it's Scarlett Johansson; what's the probability of these pictures being different views of Scarlett Johansson? That is going to be built as an energy-based model. What you'll have to do is define an energy, and we'll just quickly discuss what that is.
Then you can build a construct like this, where you say: the probability of a given set is going to be the energy assigned to that set, divided by the energy assigned to all of these other sets. So this is a form of an energy-based model. You can phrase very many things in terms of these energy-based models, and Yann LeCun gave a talk about this at ICLR, I believe, where he gives a lot of different examples of energy-based models, so I invite you to check this out; I've also done a video on some of these energy-based models and what you can do with them. Here it's simply used to define this probabilistic model. So what we need to do are two things: we need to know what this energy is. What is this energy supposed to do? This energy is going to be a function that gives you, and now I have to think, a high value if you are unhappy with the input, and a low value if you are happy with the input. You see the negative exponential here, and also the energy is always positive, so... maybe I have it wrong, maybe you output a really high number when you're really happy? I'm not sure, but it's one of the two; this comes from a physics background. No, no, no, I'm right: if you're not happy, you'll output a super high number here, which will make this negative exponential be really close to zero, and therefore the probability should be close to zero. However, if you're really happy, you'll output a low number right here. The energy always has to be greater or equal to zero, but the lower you go, the higher this probability is going to be. The bottom thing here is simply to normalize the distribution, because for a probability distribution you always have to normalize, since otherwise it's not a probability. And this is what most of these models are basically fighting over: how to normalize the distribution. And what we're going to do is simply normalize it by sampling, which is what most of these things do; you can build energy-based models without this, and GANs are a variant of that. So what we need to do is come up with this energy function that is going to be a high number when we're not happy with the input. Now, what is the input? The input is X and z. What does it mean, we're not happy with the input? It means that the image x (and you see x here is one of the images of the set), that particular image, isn't really congruent with the z, with the identity. So you either say: what, this isn't really a picture of Scarlett Johansson, so I'm going to assign this a high value; or, if this is really some picture of that person, then you're going to assign it a low value. And how better to do this than to build a neural network to do it? And this is going to be our discriminator: our discriminator is going to take the role of this energy function. Okay, cool. Now, I said you need to normalize, and I kind of said it off the cuff; but the problem here is, again, that we have these kinds of sets and so on. So our probability, as you'll notice, is the probability of X given z, so we are already given the identity of the person. So what do we need to normalize by?
We can't simply normalize by all the sets of images in the world, like in the integral here; we need to normalize by all the sets of images that are mapped to that same identity. That's why these indicator functions are here: the part here is simply a normalization, where you say I'm going to consider all the other possible sets of images that map to the same person, and I'll simply divide by the energy of those. In fact, if you do it correctly, this particular X is also in that normalization set; usually it's going to be a fairly small part of it, but to properly normalize, of course, you have to consider it as well. Now this bottom part here, as I already said, is going to be the main problem of most of these probabilistic methods, and it's usually approximated by simply sampling a bunch of these sets, not by enumerating all the possible sets of images; and this sampling is going to create some further problems, as we'll see down here. So here is what we optimize, or one of the things we optimize. They say: we apply maximum likelihood estimation to estimate the parameters of the thing we just defined, where the negative log-likelihood loss for an observed set in the training split is this. So this is simply the negative log-likelihood; the log decomposes into a sum, so this is going to be your prior, and this is going to be your generator-discriminator combo: the generator producing images from that binary code, and then the discriminator assigning high or low values to those produced images, and also to images from the data set; and the term over here is simply going to be the prior over the z distribution that we briefly discussed. Now again, they have to do some tricks here, where they say: okay, we can get rid of this support issue by using a normalized distribution over Z, which is a bound on that true prior, and so on; so they're going to replace the p with a p-bar, which is over the entire space of Z, and that's going to be a bound. But the more interesting part, I feel, is here, where they consider the loss of this conditional distribution given z. You'll see the exact same quantity right here, but now our loss is going to be the negative log of that. Since it's a negative log, you can decompose the division into a sum, and in this part up here you'll see the indicator function is a bit unnecessary, because the z we're considering is going to be the z of that particular set that we're considering, so this equality holds on the top; so disregard it. And this down here, as I said, is simply a filter, filtering the space of all sets to the ones that correspond to the z that we have in the energy function. And this goes here because it's simply a log of an exponential, and the negative signs cancel, so you end up with this. What does it mean? You want to minimize this loss right here.
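To keep track of the pieces, here is, in my own notation (H for the deterministic encoder, E for the energy, i.e. the discriminator), roughly what this section has built up; the paper's actual margin-based losses differ in the details:

```latex
% Decomposition with a deterministic encoder z = H(X):
p(X) = p(z)\, p(X \mid z), \qquad z = H(X)

% Energy-based conditional, normalized only over sets carrying the same code:
p(X \mid z) = \frac{\exp\big(-E(X, z)\big)}
                   {\sum_{X'} \mathbb{1}\left[H(X') = z\right] \exp\big(-E(X', z)\big)}

% Negative log-likelihood of an observed set X:
\mathcal{L}(X) = -\log p\big(H(X)\big) \;+\; E\big(X, H(X)\big)
                 \;+\; \log \sum_{X'} \mathbb{1}\left[H(X') = z\right] \exp\big(-E(X', z)\big)
```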
Part of minimizing that loss is going to be minimizing the energy function of these inputs; this is the case when X and z come from the data set. So when X and z come from the data set, and E is your discriminator, then you want to make the output of the discriminator really small, which means that you want to train the discriminator to say: I'm really, really happy that this particular image comes together with this particular identity encoding. If it comes from the generator, of course, you want to do the exact opposite: you want to assign it a high value; remember, low energy means happy with the input. And then the normalization down here is, as I said, the problem. You'll see it stated right here: because it's under the division, it's going to pick up a negative sign, which combines with this negative sign, which gives you a positive sign right here. Now this part right here is going to be intractable, because it's not feasible to enumerate all the sets of images; it's not even feasible to enumerate all the sets of images that just correspond to that particular identity. And in fact it's not even feasible to sample from that, because we have no clue, right? We can't simply generate, out of the ether, other true pictures of a particular person. What we can do, of course, is use our model to produce more images of that particular identity. So what we'll do is replace this distribution with a variational distribution, and we'll sample from that. Now this isn't exactly the same; this isn't that log probability anymore, and that's why, first of all, we have a bound here and not an equality. This is called a variational approximation: we bound this quantity, and we can only bound it if, down here, we introduce the entropy of the variational distribution. This is a fairly standard trick in variational approximation methods; if you want to look more into this, look into videos or articles like "variational autoencoders explained" or anything like this, which will teach you how these methods work. And in exchange, we can replace the distribution with a distribution we can actually produce. And what does that distribution look like? What we're supposed to do is produce a set of images given a particular z, and we can do that: that's our generator. So we can use our generator to produce those samples, and that's what they say here: "here we have derived a lower bound by introducing a variational distribution, which we parameterized in the form of a generator". So the generator is going to produce that distribution, using this noise as input: as you know, the generator takes two things, the identity encoding and a bit of noise, and it's going to produce an output set; for each noise, it's going to produce one output image. Very cool.
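For reference, the standard variational bound being applied to that intractable log-partition term looks, I believe, roughly like this, with the generator playing the role of q (equality would hold if q exactly matched the Gibbs distribution defined by the energy):

```latex
\log \sum_{X'} \mathbb{1}\left[H(X') = z\right] e^{-E(X', z)}
  \;\ge\; \mathbb{E}_{X' \sim q(\cdot \mid z)}\big[-E(X', z)\big]
  \;+\; \mathcal{H}\big(q(\cdot \mid z)\big)
```

Since this term enters the loss with a positive sign, swapping it for the bound means we end up optimizing a surrogate rather than the true negative log-likelihood, which is exactly the bound-versus-true-loss gap discussed a bit further down.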
So that's the kind of math formulation behind this model. Now they have the model architectures right here, and this is all fairly standard, except for the prior: they learn the prior on the z space. You have z being these binary vectors, and they say: "we use a standard autoregressive model, MADE, with three fully connected layers, mainly for its simplicity and robustness". Again, it took me a while to get what this prior does. In a GAN you have the z vectors always coming from some sort of standard noise, but what you can also do is learn a better noise distribution, a better input distribution, for your GAN, by basically learning a model of your input distribution. So what you do is you have a z-zero right here, and then you learn a model on top of it to give you better input distributions. This is what you do here with this prior on z; this is more standard in VAEs than it is in GANs, but it exists. For the encoder, they say: "an encoder for a set needs to satisfy the permutation invariance property; we opt to use a simple architecture design, where we let this be the average" right here. So, as you can see, this is the average, and then they use this binarize operation, and the binarize operation here is clamping the values to one or negative one; it is a straight-through estimator, which means that you backprop through it as if you hadn't clamped, but you forward-prop through it with clamping. This is kind of a trick to get through discretization steps. "The discriminator's job is to assign low energy to observed images and high energy to generated images given a set code z. We use an autoencoder-based energy function implementation, similar to [25]," and here they say "we have found that this choice is important, as it enables effective learning in early stages of training". So usually, in a discriminator, the energy would be equal to this thing right here, which is a small MLP that maps the input to a real number: high energy, I'm not happy; low energy, I'm very happy. Here they also include this thing right here, which is a decoder. You can maybe think of it as another little generator, or the generator part of an autoencoder, that takes as input the encoding of the particular image and the identity, and produces something that's close to the output; it's trying to reconstruct that particular image, because we have its input encoding right here. That's not the same as the set generator, which is just asked to produce some view, something that corresponds to this particular identity vector. "The generator generates a set conditioned on a set code z by sampling n random variables, each of which is concatenated with z, and generates an image independently." Cool. As for the losses, they now introduce some margin losses on the things here, but basically you can just translate what we had on top, where we formulated the negative log-likelihood, into the losses right here. They do have some simplifications: for example, to train the encoder, I think you have to make a bit of an approximation, in that matching the encoder output to this z vector isn't differentiable by itself, so they have this sort of L1 approximation right here. They leave the entropy term out of the loss, and have found that to work well, and they introduce these margin losses right here; I don't want to go into that too much, but basically, with some approximations, they optimize this log-likelihood from above, in a way where they always alternate: they keep the generator constant and optimize the rest of the pipeline, so the encoder, the discriminator and the prior; and then they keep that rest fixed, and they optimize the generator.
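As a sketch of that alternating scheme, here is roughly what one training iteration could look like. This is my own illustration with hypothetical names (`prior.log_prob` and so on), and the margin losses, the entropy term, and the L1 encoder approximation from the paper are all omitted, so it only shows the sign conventions: real sets get pushed towards low energy, generated sets towards high energy, and the generator then chases low energy.

```python
def train_step(x_set, encoder, generator, energy, prior, opt_rest, opt_gen):
    n = x_set.shape[0]

    # Step 1: update encoder, discriminator (energy) and prior; generator frozen.
    z = encoder(x_set)                                # binary code, straight-through grads
    x_fake = generator(z.detach(), n).detach()        # frozen generator, no grads back
    loss_rest = (energy(x_set, z).mean()              # happy (low energy) on real sets
                 - energy(x_fake, z.detach()).mean()  # unhappy (high energy) on fakes
                 - prior.log_prob(z.detach()).mean()) # fit the prior to observed codes
    opt_rest.zero_grad(); loss_rest.backward(); opt_rest.step()

    # Step 2: update the generator; everything else frozen.
    z = encoder(x_set).detach()
    x_fake = generator(z, n)
    loss_gen = energy(x_fake, z).mean()               # generator seeks low energy
    opt_gen.zero_grad(); loss_gen.backward(); opt_gen.step()
```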
So what does that do? Remember, right here we had this approximation, where we said we're not really optimizing this quantity, we're minimizing a lower bound on it. So here's a quantity that we want to minimize, but here's a lower bound, and we'll just push that lower bound down by optimizing it. Now that by itself doesn't tell us anything about the quantity on top, but there is actually more to it. By optimizing the discriminator and the encoder and so on, we do minimize this lower bound: whenever we adjust our discriminator, we adjust that energy function; whenever we adjust our encoder, we adjust the part that generates the z vectors right here; so we push this down. But whenever we optimize our generator, that's when we make this gap here smaller. So we always do two steps: in one step we reduce this, and in the other step we bring these two closer together. And as a result, of course, we hope that it's not just the bottom one going up, down, up, down, but that both of them reduce with time, because the top one is the one we actually want to reduce: that's our actual loss, our log-likelihood. And that is, I guess, going to happen in practice. So what does this give us? As we already saw, here on top you have a set, you feed that through the encoder, and that gives you a z identity; then you feed that to the generator. You can ask the generator for any number of images; you don't have to produce the same amount, they just chose to produce the same amount. There's no correspondence, but you see it's the same truck. And here they manually aligned these: on the left is the data set, and on the right, I guess, they just produced like a hundred images and then selected whichever looked closest, so they ordered them by hand. And that is to show that, for example, look at the lighting on the car right here: it's fairly similar. I guess this one has red taillights and the other one hasn't, but you can see that the different views are pretty well captured by the generator, and all of these are created from one binary encoding of this here. So this is binary-encoded to z, and then all of these different views are created; there's no image correspondence. That's pretty cool. And another problem you have with sets is: how do you evaluate sets? You can't go and check for image closeness and so on. So they have to do some 3D modeling: they actually take these images right here, approximate their 3D shape, and then compare that 3D shape with the 3D shape of the original thing, in order to quantitatively estimate how well the model is doing. For the faces, it's the same thing: you input the top row into the encoder and you get back the bottom row; we've already looked at that. But again, to evaluate this, you actually have to go and use some sort of a face detector to recognize whether it's even the same person, always. So you can evaluate two things: you can evaluate whether these right here are all the same person, so you can have a face detector kind of tell you whether or not these are the same people; and the second thing is whether these down here are the same person as the ones up here. Those are the kinds of things you can evaluate, and they've done this. It's fairly interesting, and the results here are not surprising when you look at the images. So these are curves from this face detector.
The resulting curves are fairly interesting, and not surprising when you look at the images. These are curves from this face detector, and of course for real images the curve is simply the performance of the face detector itself: you do get some false positives if you want more true positives. It's a standard trade-off curve, because these face detectors are not perfect. In a given row, even one from the real data set, the face detector will sometimes fail and say that's not the same person, even though from the data set we know it is; and for matching the actual childhood photo of Ali with his adult photos, you can forgive the face detector. So the real-data curve is the gold standard we're trying to achieve, and you can see that within the reconstructed sets it is achieved fairly well; compared to uniform samples this is fairly close. What is less close is this "recon and real" curve, which I believe is when you compare the identity of the real row with the identity of the reconstructed row. That already tells you that the model doesn't always preserve the actual identity as seen by a face detector, and I don't know what to say except that's what you see in the data.

You also see "free samples" here. You can do two things: you can give the model a set, a row, encode that into Z, and then decode it again, basically reconstructing it; or, since you've learned a prior on the Z variable, you can simply sample. You can say: give me some new identity, maybe one I've never seen before; you draw some binary vector and ask the generator for images of that identity. These two rows here are actually sampled like that, and again it's remarkable that within the same row the rough identity of the person is conserved. These free samples do better than the comparison between reconstructed and real, but they don't do as well as when you input real data and then reconstruct it, and this might be an indication that the prior isn't really working all too accurately.

I do have my problems with this binary encoding, because, maybe I'm misunderstanding something, but consider these binary vectors. As we said, the reason you do one-hot encoding in class-conditional GANs is the following. You could ask: why am I doing a one-hot encoding at all? I could simply say Z equals three for class three and Z equals four for class four; that would be so much easier. The reason is that these models see everything in a linear fashion. If I have class three, class four, and class nine, the model doesn't see three unrelated classes; it sees three and four as somehow closer together than three and nine. So the reason we use one-hot vectors is that the model cannot do this: it has one independent dimension for each class, and whenever that particular dimension is high it knows that that particular class is active. What this binary encoding does is go partway back to the other setting: it's like you have mini-classes, and the identity of whatever set you consider is encoded as a combination of these mini-classes. I'm going to guess the first bit might be something like "does that person have blonde hair", and the second bit might be "does the image set as a whole look generally bright or dark", and so on. So I'm guessing attributes like these are what gets encoded, and it'll end up being kind of a discrete GAN, or rather a discrete autoencoder, instead of what they intend; but maybe that was their goal all along and I'm misunderstanding. I just don't think this binarization gives you the hoped-for expressiveness; I think there's still a lot of dependence between whether particular bits are on or off.
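Here is a small numpy illustration of that argument; it's my own toy example, not anything from the paper. It contrasts raw integer labels, which impose a spurious distance structure, one-hot vectors, where all classes are equidistant, and d-dimensional binary codes as used here, which can address 2^d identities but reintroduce a Hamming-style notion of closeness between them, which is exactly the dependence I'm worried about.

```python
import numpy as np

def pairwise_dist(x):
    # Euclidean distance between all rows of x.
    diff = x[:, None, :] - x[None, :, :]
    return np.sqrt((diff ** 2).sum(-1))

classes = np.array([3, 4, 9])

# Option 1: raw integer labels. Class 3 ends up "closer" to 4 than to 9,
# even though the classes are unrelated; a spurious metric.
print(pairwise_dist(classes[:, None].astype(float)))

# Option 2: one-hot over C classes. Every pair of distinct classes is
# exactly equidistant, so no false structure, but you need one dimension
# per class and must fix C in advance.
C = 10
one_hot = np.eye(C)[classes]
print(pairwise_dist(one_hot))

# Option 3: d-bit {-1, +1} codes: 2**d identities fit into d dimensions...
d = 4
codes = np.array([[(i >> b) & 1 for b in range(d)] for i in range(2 ** d)])
codes = 2 * codes - 1  # map {0, 1} -> {-1, +1}
print(d, "bits address", len(codes), "identities")
# ...but distances now depend on how many bits two codes share, so some
# identities are again "closer" than others.
print(pairwise_dist(codes.astype(float))[:4, :4])
```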
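And going back to the learned prior on Z that the free samples come from: here is a toy sketch of how sampling from an autoregressive prior over binary codes differs from sampling uniformly. The conditional probabilities below are invented stand-ins (in the paper a small MADE-style network would supply them after training); the point is only the mechanics, namely that each bit is sampled conditioned on the bits before it, so a learned prior can concentrate on the codes that actually occur in the data, while the uniform sampler treats all 2^d codes as equally likely.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # code length

def toy_conditional(prefix):
    # Invented stand-in for a trained conditional p(z_i = +1 | z_<i).
    # Rule of thumb here: prefer repeating the previous bit, so sampled
    # codes have long runs; a crude example of "structure" in the prior.
    if not prefix:
        return 0.5
    return 0.9 if prefix[-1] == 1 else 0.1

def sample_autoregressive():
    # Sample bits one at a time, each conditioned on the prefix so far.
    z = []
    for _ in range(d):
        z.append(1 if rng.random() < toy_conditional(z) else -1)
    return z

def sample_uniform():
    # Uniform prior: independent fair coins, all 2**d codes equally likely.
    return list(rng.choice([-1, 1], size=d))

print("learned-prior samples (structured, mass on few codes):")
for _ in range(3):
    print(sample_autoregressive())
print("uniform samples (no structure):")
for _ in range(3):
    print(sample_uniform())
```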
But enough ranting. I want to look at some more of the samples, because I've only shown you the reconstructions, and what I also find interesting are the free samples. Here you can see uncurated ShapeNet samples: on the left from the learned autoregressive prior, and on the right from a uniform prior, and you can see the effect of learning the prior. If I learn the prior, it gives me back fairly okay objects. If I learned the prior really, really well, that would basically mean I only ever produce sets that were in the training data: a perfect prior would notice that a particular identity never shows up and simply never output it. The uniform prior, on the other hand, might output it, and since the generator was never trained on codes from that uniform prior, it just gives you kind of crap.

In the faces you see the same thing. Again, I don't think identity per se is what's being encoded; I think it's encoding these micro-characteristics, probably hair color, head shape and so on, one in each of these dimensions, and that is what then gets produced. Each row here is one sample from the prior; on the left the prior is learned, which you can see works pretty well in terms of the output, and on the right it's the uniform prior. You see, first, that identity within a row is approximately preserved, but not as well under the uniform prior, and second, that the images are much worse, which means the generator doesn't have as much training on those codes, because, I guess, they come from a prior it hasn't seen during training.

Lastly, they show reconstructions when you give a different number of views. The top row, I guess, is the input; the next row is when you input just four different views, I guess just the first four or something like this, and the bottom one is when you input the full eight views. You can see that the accuracy of the identity increases the more views of the set you input. They have a bunch of other things in the appendix, which I do invite you to look at, and I hope you got a bit of a sense of how you would go about something like this. I found it quite challenging, the math, because I'm mainly not used to this kind of variational math, but I hope this gives you an impression. All right, this was it from me. Tell me what you think, and I'll see you next time. Bye bye.
GAN another another another little generator or the the generator" }, { "end": 2615, "start": 2608.7599999999998, "text": " part of a VAE or of an autoencoder sorry not a VAE an autoencoder that takes as" }, { "end": 2621.3999999999996, "start": 2615, "text": " input the encoding of the particular image and the identity and produces is" }, { "end": 2627.12, "start": 2621.3999999999996, "text": " going to produce something that's close to the output against observe that this" }, { "end": 2631.2, "start": 2627.12, "text": " is now with respect to a particular image so here we're trying to reconstruct" }, { "end": 2635.64, "start": 2631.2, "text": " that particular image because we have its input thing right here and we're" }, { "end": 2642.7999999999997, "start": 2635.64, "text": " it's not the same as the generator that is just asked to produce a some view" }, { "end": 2649.3199999999997, "start": 2642.7999999999997, "text": " something that corresponds to this particular identity vector okay the" }, { "end": 2654.3599999999997, "start": 2649.3199999999997, "text": " generator generates a set conditioned on a set code by sampling and random" }, { "end": 2658.5, "start": 2654.3599999999997, "text": " variables each of which is concatenated with Z and generates an image" }, { "end": 2668.12, "start": 2658.5, "text": " independently cool so what the losses they they now introduce some margin" }, { "end": 2674.56, "start": 2668.12, "text": " losses on the things here but basically you can just translate the what we have" }, { "end": 2679.56, "start": 2674.56, "text": " on top where we formulated negative log likelihood into the losses right here" }, { "end": 2686.32, "start": 2679.56, "text": " they do have some simplifications for example this to train the prior you what" }, { "end": 2696.28, "start": 2686.32, "text": " you are to train to train the encoder I think you have to make a bit of an" }, { "end": 2702.96, "start": 2696.28, "text": " approximation in that the encoder is supposed to match this this Z vector" }, { "end": 2709.6400000000003, "start": 2702.96, "text": " right and that's not differentiable by itself so they have this sort of l1" }, { "end": 2714.6000000000004, "start": 2709.6400000000003, "text": " approximation right here they leave away the entropy from the loss and they have" }, { "end": 2720.3199999999997, "start": 2714.6, "text": " found that to work well they introduced this margin losses right here I I don't" }, { "end": 2726.16, "start": 2720.3199999999997, "text": " want to go into that too much but basically they simply in a way with some" }, { "end": 2730.48, "start": 2726.16, "text": " approximations they approximate I hear it is the indicator function they" }, { "end": 2735.92, "start": 2730.48, "text": " approximate is this I was looking for that they they they optimize this log" }, { "end": 2740, "start": 2735.92, "text": " likelihood from above in the way where they always optimize they keep the" }, { "end": 2744.56, "start": 2740, "text": " generator constant and they optimize the rest of the pipeline so the encoder and" }, { "end": 2751.04, "start": 2744.56, "text": " the discriminator in the prior and then they keep that rest fixed and they encode" }, { "end": 2757.68, "start": 2751.04, "text": " the generator so what does that do before remember right here we had this" }, { "end": 2764.12, "start": 2757.68, "text": " this approximation right here where we said you know what comes out of this we" }, { "end": 2767.6, "start": 2764.12, "text": " were not 
really optimizing this we're optimizing we're minimizing a lower" }, { "end": 2772.2799999999997, "start": 2767.6, "text": " bound on it right so here's a quantity that we want to minimize but here's a" }, { "end": 2776.7200000000003, "start": 2772.28, "text": " lower bound and we'll just push that lower bound down by optimizing it now" }, { "end": 2781.4, "start": 2776.7200000000003, "text": " that doesn't tell us anything about this thing right here but there is actually" }, { "end": 2788.1200000000003, "start": 2781.4, "text": " more to it so by optimizing the discriminator and the encoder and so on" }, { "end": 2793.4, "start": 2788.1200000000003, "text": " we do minimize this lower bound so that this this loss right here you see this" }, { "end": 2801.6000000000004, "start": 2793.4, "text": " energy function will adjust that whenever we adjust our whenever we adjust our" }, { "end": 2806.3399999999997, "start": 2801.6, "text": " our that particular loss our discriminator will adjust that energy" }, { "end": 2813.4, "start": 2806.3399999999997, "text": " function whenever we adjust our encoder we are going to adjust the part that" }, { "end": 2820.68, "start": 2813.4, "text": " generates the Z vectors right here so we'll push this down but whenever we" }, { "end": 2826.54, "start": 2820.68, "text": " optimize our generator that's when we make this gap here smaller okay so we" }, { "end": 2833.92, "start": 2826.54, "text": " always do two steps first we or first or second in one step we reduce this and in" }, { "end": 2838.3, "start": 2833.92, "text": " the other step we'll bring these two closer together and as a result of" }, { "end": 2842.2799999999997, "start": 2838.3, "text": " course we hope that it's not just the bottom one going to up down up down up" }, { "end": 2847.72, "start": 2842.2799999999997, "text": " down but we hope that both of them reduce with time because the top one is" }, { "end": 2852.02, "start": 2847.72, "text": " the one will actually want to reduce that's our actual loss or our log" }, { "end": 2859.7, "start": 2852.02, "text": " likelihood and that is I guess going to happen in practice so what does this do" }, { "end": 2866.88, "start": 2859.7, "text": " so as I as we already saw here on top you have a set and you feed that through" }, { "end": 2873.04, "start": 2866.88, "text": " the encoder feed that through the encoder that gives you a Z identity and" }, { "end": 2878.6, "start": 2873.04, "text": " then you feed that to the generator and the generator you can ask it you don't" }, { "end": 2882.4, "start": 2878.6, "text": " have to produce the same amount of images you can produce any amount of images you" }, { "end": 2886.44, "start": 2882.4, "text": " like they just chose to produce the same amount there's no correspondence but you" }, { "end": 2892.2799999999997, "start": 2886.44, "text": " see it's the same truck and here they manually align these so they just" }, { "end": 2896.08, "start": 2892.2799999999997, "text": " produce a bunch of images on the left is the data set and on the right I guess" }, { "end": 2900.08, "start": 2896.08, "text": " they just produced like a hundred images and then selected wherever the car" }, { "end": 2906.3199999999997, "start": 2900.08, "text": " looked like the closest to so they ordered them by by hand and that is to" }, { "end": 2912.88, "start": 2906.32, "text": " show that for example look at the the lighting on the car right here it's it's" }, { "end": 2918.84, "start": 2912.88, "text": " fairly similar I 
guess this one has red taillights and the other one hasn't but" }, { "end": 2923.84, "start": 2918.84, "text": " you can see that the the different views are pretty well captured by the" }, { "end": 2930.76, "start": 2923.84, "text": " generator and that just from all of these are created from one one binary" }, { "end": 2935.28, "start": 2930.76, "text": " encoding of this here so this is binary encoded to Z and then all of these" }, { "end": 2941.36, "start": 2935.28, "text": " different views are created there's no image correspondence so that's pretty" }, { "end": 2946.84, "start": 2941.36, "text": " cool and another problem you have with sets is how do you evaluate sets you" }, { "end": 2953.32, "start": 2946.84, "text": " can't you can't go and check for images or image closeness and so on so they" }, { "end": 2959.6800000000003, "start": 2953.32, "text": " have to do some 3d modeling they actually take it now they take these" }, { "end": 2963.76, "start": 2959.6800000000003, "text": " images right here and they have to approximate their 3d shape and then" }, { "end": 2970, "start": 2963.76, "text": " compare that 3d shape with the 3d shape of the original thing in order to just" }, { "end": 2977.5200000000004, "start": 2970, "text": " quant quantitatively estimate how well they're doing in the faces the the same" }, { "end": 2983.32, "start": 2977.5200000000004, "text": " thing you input the top row into the encoder and you get back the bottom row" }, { "end": 2989.28, "start": 2983.32, "text": " we've already looked at that but again to evaluate this they you actually have" }, { "end": 2994.6000000000004, "start": 2989.28, "text": " to go and use some sort of a face detector to recognize is that even is" }, { "end": 3000.6800000000003, "start": 2994.6000000000004, "text": " that the same person always and is it so you can evaluate two things you can" }, { "end": 3007.8, "start": 3000.6800000000003, "text": " evaluate are these right here all the same people so you can have a a face" }, { "end": 3013.4, "start": 3007.8, "text": " detector kind of tell you whether or not these are the same people and the" }, { "end": 3019.7200000000003, "start": 3013.4, "text": " second thing is are these down here the same person as these up here right so" }, { "end": 3024.32, "start": 3019.7200000000003, "text": " those are the the kind of things how you can evaluate this and they've done this" }, { "end": 3029.1600000000003, "start": 3024.32, "text": " and it's a fairly interesting and the results here are not surprising when you" }, { "end": 3035.2400000000002, "start": 3029.1600000000003, "text": " look at the images so these are curves curves from this face detector and of" }, { "end": 3039.8, "start": 3035.2400000000002, "text": " course for real images as you can see the this is simply the performance of" }, { "end": 3047.6400000000003, "start": 3039.8, "text": " the face detector so you do get some false positives if you if you want more" }, { "end": 3052, "start": 3047.6400000000003, "text": " true positives right so this is a standard curve right here because these" }, { "end": 3059.76, "start": 3052, "text": " face detectors are not perfect so in a given row right here in a given row even" }, { "end": 3063.6800000000003, "start": 3059.76, "text": " if that's from the real data set the face detector would sometimes fail and" }, { "end": 3067.6000000000004, "start": 3063.6800000000003, "text": " say no that's not the same person even though from the data set you know it is" }, { "end": 
3074.48, "start": 3067.6, "text": " though the the to match the actual child photo from Ali with his adult photos is" }, { "end": 3080.8399999999997, "start": 3074.48, "text": " even like you can forgive the face detector so that's sort of the the gold" }, { "end": 3085.96, "start": 3080.8399999999997, "text": " standard we're trying to achieve and you can see within the reconstructed sets" }, { "end": 3092.4, "start": 3085.96, "text": " that that is achieved fairly fairly well so compared to uniform samples this is" }, { "end": 3102.4, "start": 3092.4, "text": " you know fairly fairly cool fairly close what is less close is this reckon and" }, { "end": 3108.4, "start": 3102.4, "text": " real and I believe that's when you compare the identity of the real row" }, { "end": 3113.76, "start": 3108.4, "text": " with the identity of the reconstructed row and that's here so that tells you" }, { "end": 3120.6, "start": 3113.76, "text": " already that it's the GAN or sorry the model doesn't always preserve the actual" }, { "end": 3127.2, "start": 3120.6, "text": " identity as seen by a face detector and I don't know what to say except yes" }, { "end": 3133.56, "start": 3127.2, "text": " that's what you see in the data right also you see that free samples I guess" }, { "end": 3139.2, "start": 3133.56, "text": " so you can do two things right you can give it a set like a row and encode that" }, { "end": 3146.3199999999997, "start": 3139.2, "text": " into the Z and then you can decode that again and basically reconstruct or you" }, { "end": 3151.6000000000004, "start": 3146.32, "text": " can just sample since you've learned a prior on the Z variable you can simply" }, { "end": 3157.6000000000004, "start": 3151.6000000000004, "text": " sample you can simply say give me some new identity maybe that I've never seen" }, { "end": 3163.88, "start": 3157.6000000000004, "text": " before right you have some binary vector and now generator please give me" }, { "end": 3169.44, "start": 3163.88, "text": " images of that identity and these two here are actually sampled like this and" }, { "end": 3174.88, "start": 3169.44, "text": " you can see again here it's remarkable that within the same row it's pretty" }, { "end": 3181.84, "start": 3174.88, "text": " much the the rough identity of the person is conserved right and these are these" }, { "end": 3188.1600000000003, "start": 3181.84, "text": " free samples right here I guess and they they do better than whenever you compare" }, { "end": 3193.12, "start": 3188.1600000000003, "text": " the reconstructed and real but they don't do as well as when you actually" }, { "end": 3201.48, "start": 3193.12, "text": " input a real data and then reconstruct this so this might be an indication that" }, { "end": 3209.76, "start": 3201.48, "text": " this prior isn't really working you know all too accurately and I do have my" }, { "end": 3216.2400000000002, "start": 3209.76, "text": " problems with this binary encoding right here because maybe I'm" }, { "end": 3220.32, "start": 3216.2400000000002, "text": " misunderstanding something but if you have these binary vectors as we said" }, { "end": 3225.68, "start": 3220.32, "text": " here the reason you know the reason why you do one hot encoding in class" }, { "end": 3229.2400000000002, "start": 3225.68, "text": " conditional GANs is you could you could simply say what am I doing a one hot" }, { "end": 3234.7999999999997, "start": 3229.24, "text": " encoding I'll simply say Z equals three for class three and Z equals four for" }, { 
"end": 3239.16, "start": 3234.7999999999997, "text": " class four like that it should be so easy why am I doing one hot and that's" }, { "end": 3244.64, "start": 3239.16, "text": " because these models see everything in a linear fashion so if you have class" }, { "end": 3252.4399999999996, "start": 3244.64, "text": " three and then I have class four and then I have class nine the model doesn't" }, { "end": 3257.68, "start": 3252.4399999999996, "text": " see that as three different classes the model sees this as these two are somehow" }, { "end": 3265.72, "start": 3257.68, "text": " closer together than this right so the reason why we do one hot vectors is that" }, { "end": 3270.16, "start": 3265.72, "text": " the model cannot do this the model has one independent dimension for each of" }, { "end": 3275.7599999999998, "start": 3270.16, "text": " the classes and whenever that particular dimension is high then it knows that" }, { "end": 3281.8799999999997, "start": 3275.7599999999998, "text": " that particular class is activated what this binary encoding here does is sort" }, { "end": 3287.52, "start": 3281.8799999999997, "text": " of it goes back to this thing right here where it says okay there are all of these" }, { "end": 3293.68, "start": 3287.52, "text": " different categories here it's like you have mini classes and the identity of" }, { "end": 3299.72, "start": 3293.68, "text": " whatever set you consider is now encoded in these mini classes so that I'm going" }, { "end": 3304.48, "start": 3299.72, "text": " to guess the first thing here might be something like does that person have a" }, { "end": 3309.8, "start": 3304.48, "text": " blonde hair and the second thing might be does the image look generally bright" }, { "end": 3315.7599999999998, "start": 3309.8, "text": " or the images image set as a whole look generally bright or dark and and so on" }, { "end": 3320.84, "start": 3315.76, "text": " so I'm gonna guess these things are encoded here and it'll sort of just end" }, { "end": 3329.76, "start": 3320.84, "text": " up being kind of a discrete GAN or a discrete autoencoder rather than what" }, { "end": 3334.44, "start": 3329.76, "text": " they believe but maybe that was their goal all along and I'm misunderstanding" }, { "end": 3340.6400000000003, "start": 3334.44, "text": " right here I just don't think this this binarization is gives you this sort of" }, { "end": 3347, "start": 3340.64, "text": " hoped expressiveness I think there's still a lot of dependence of whether or" }, { "end": 3355.72, "start": 3347, "text": " not a particular thing is on or off okay but enough ranting right here I want to" }, { "end": 3360.3599999999997, "start": 3355.72, "text": " look at the at some more of the samples because I've only shown you the" }, { "end": 3365.12, "start": 3360.3599999999997, "text": " reconstructions what I also find interesting is the free samples so here" }, { "end": 3371.68, "start": 3365.12, "text": " you can see uncurated shape net samples and on the left so here you can see this" }, { "end": 3377.04, "start": 3371.68, "text": " effect on from the learned order regressive prior and a uniform prior on" }, { "end": 3381.52, "start": 3377.04, "text": " the right and here you can see this effect of learning this prior so if I" }, { "end": 3386.44, "start": 3381.52, "text": " learn the prior it's going to give me back fairly okay objects if I don't" }, { "end": 3394.24, "start": 3386.44, "text": " learn the prior oh but if I learn the prior you know if I learn the prior" }, { 
"end": 3400.16, "start": 3394.24, "text": " really really well that basically means I'm only going to ever produce sets" }, { "end": 3405.2, "start": 3400.16, "text": " that were in the training data right if I learned like a perfect prior I'll see" }, { "end": 3409.3199999999997, "start": 3405.2, "text": " like wait this you know this particular identity here never shows up so I'm not" }, { "end": 3414.06, "start": 3409.3199999999997, "text": " going to output it and the uniform prior might actually output it and the" }, { "end": 3419.3999999999996, "start": 3414.06, "text": " generator is not going to be trained on that uniform prior so it's just going to" }, { "end": 3427.92, "start": 3419.4, "text": " give you kind of crap and here in the in the faces you see the same thing now" }, { "end": 3431.88, "start": 3427.92, "text": " again what I think I don't think that's happening what I think is happening is" }, { "end": 3436.8, "start": 3431.88, "text": " encoding these kind of micro characteristics not per se identity but" }, { "end": 3441.6800000000003, "start": 3436.8, "text": " it's encoding probably you know hair color what not head shape and so on" }, { "end": 3447.84, "start": 3441.6800000000003, "text": " things like this and in each of these dimensions and that's what is then going" }, { "end": 3454.1600000000003, "start": 3447.84, "text": " to produce so these each row here is an is one sample from that prior on the" }, { "end": 3460.32, "start": 3454.1600000000003, "text": " left is learned which you see is working pretty well in terms of the output and" }, { "end": 3467.84, "start": 3460.32, "text": " on the right you see it's from the uniform prior now you also see here first" }, { "end": 3474.44, "start": 3467.84, "text": " of all that approximately identity is preserved but not as much in this" }, { "end": 3479.48, "start": 3474.44, "text": " uniform prior that's first and second you see that the images are much worse" }, { "end": 3484.8, "start": 3479.48, "text": " which means that the generator doesn't have as much training on that particular" }, { "end": 3489.56, "start": 3484.8, "text": " thing because I guess it comes from a prior that it hasn't seen during" }, { "end": 3495.44, "start": 3489.56, "text": " training alright and here lastly they have reconstructions if you give" }, { "end": 3501.68, "start": 3495.44, "text": " different number of views so the top row I guess is the input the this row is" }, { "end": 3505, "start": 3501.68, "text": " when you just have four different views so I guess just the first four or" }, { "end": 3508.72, "start": 3505, "text": " something like this input and the bottom one is when you have the full eight" }, { "end": 3518.24, "start": 3508.72, "text": " views and you can I guess see or even more that this increases with number of" }, { "end": 3523.9199999999996, "start": 3518.24, "text": " views so the the accuracy of this identity increases the more views you" }, { "end": 3529.56, "start": 3523.9199999999996, "text": " input of the set and they have a bunch of other things right here in the" }, { "end": 3537.7599999999998, "start": 3529.56, "text": " appendix I I do invite you to look at this and I hope you sort of saw into a" }, { "end": 3544.04, "start": 3537.7599999999998, "text": " bit how you would go about something like this I I found it quite challenging" }, { "end": 3549.56, "start": 3544.04, "text": " the math because I'm mainly not used to this kind of variational math but I hope" }, { "end": 3554.6, "start": 3549.56, 
"text": " this gives you sort of an impression alright this was it from me tell me" }, { "end": 3562, "start": 3554.6, "text": " what you think and I'll see you next time bye bye" } ]
eI8xTdcZ6VY
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Context R-CNN: Long Term Temporal Context for Per-Camera Object Detection (Paper Explained)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "vision", "cnn", "convolutional neural network", "coco", "object detection", "region of interest", "rcnn", "r-cnn", "attention", "attention mechanism", "google", "caltech", "gazelle", "wildlife", "wild trap", "traffic", "object", "car", "bus", "vehicle", "lighting", "time", "sampling", "frames", "memory", "long-term", "query" ]
Object detection often does not occur in a vacuum. Static cameras, such as wildlife traps, collect lots of irregularly sampled data over a large time frame and often capture repeating or similar events. This model learns to dynamically incorporate other frames taken by the same camera into its object detection pipeline. OUTLINE: 0:00 - Intro & Overview 1:10 - Problem Formulation 2:10 - Static Camera Data 6:45 - Architecture Overview 10:00 - Short-Term Memory 15:40 - Long-Term Memory 20:10 - Quantitative Results 22:30 - Qualitative Results 30:10 - False Positives 32:50 - Appendix & Conclusion Paper: https://arxiv.org/abs/1912.03538 My Video On Attention Is All You Need: https://youtu.be/iDulhoQ2pro Abstract: In static monitoring cameras, useful contextual information can stretch far beyond the few seconds typical video understanding models might see: subjects may exhibit similar behavior over multiple days, and background objects remain static. Due to power and storage constraints, sampling frequencies are low, often no faster than one frame per second, and sometimes are irregular due to the use of a motion trigger. In order to perform well in this setting, models must be robust to irregular sampling rates. In this paper we propose a method that leverages temporal context from the unlabeled frames of a novel camera to improve performance at that camera. Specifically, we propose an attention-based approach that allows our model, Context R-CNN, to index into a long term memory bank constructed on a per-camera basis and aggregate contextual features from other frames to boost object detection performance on the current frame. We apply Context R-CNN to two settings: (1) species detection using camera traps, and (2) vehicle detection in traffic cameras, showing in both settings that Context R-CNN leads to performance gains over strong baselines. Moreover, we show that increasing the contextual time horizon leads to improved results. When applied to camera trap data from the Snapshot Serengeti dataset, Context R-CNN with context from up to a month of images outperforms a single-frame baseline by 17.9% mAP, and outperforms S3D (a 3d convolution based baseline) by 11.2% mAP. Authors: Sara Beery, Guanhang Wu, Vivek Rathod, Ronny Votel, Jonathan Huang Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://discord.gg/4H8xxDF BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Hi there, today we'll look at Context R-CNN, long-term temporal context for per-camera object detection, by Sara Beery, Guanhang Wu, Vivek Rathod, Ronny Votel, and Jonathan Huang. So, on a high level, this paper tries to do object detection for cameras where the camera is in the same place for a long time. For example, these wildlife trap cameras or traffic cameras right here. It proposes to do object detection by incorporating data from the images that the camera has seen in the past to help the detection in the current frame. And it does so via an attention mechanism that it runs over a memory of past data. So we're going to take a look at how this is done and how well it works. And yeah, stick around if you want to know more. As always, if you enjoy content like this, then consider sharing it out, telling your friends about it. Subscribe if you haven't, and tell me what you think in the comments. So, the paper starts off and describes the problem. And the problem is fairly simple. You want to do object detection in images. Object detection is the task of, basically, if I give you an image, you should tell me what is on the image and where. So in this case, here you would have to draw me this bounding box and say, this is a deer. On the bottom, you would have to draw bounding boxes. Maybe they have to be rectangular, maybe not. And say, this is a bus. And here is a truck. And here is another truck. And here is a car. And so on. So there can be many objects in an image. There can be one object. There can be objects of different classes, or there can be no objects at all. So this is just object detection. And there have been many papers on this. And specifically, there has been this R-CNN. And this is the model that we're going to extend. So the R-CNN model, or specifically the Faster R-CNN model that we're going to build on, is a model that simply detects these bounding boxes in single images. But now we consider the situation where we have a camera that records images for a long, long time. So these wildlife trap cameras often sit there for months. And it's not that easy to make use of them because, in addition to there being a lot of data, they have motion triggers. So it could be that there is nothing for a long time, and then an animal walks into the trap, and then you have a bunch of images, like one per second for 10 seconds. And then you have nothing again for like a day or two days. And then you have 10 images again because another animal walks in, or maybe doesn't. And so on. So you have irregular sampling frequencies. You have very, very different distances between the frames. All of this makes it not at all suited for models like temporal convolutions or things like LSTMs, because they don't work super well with data like this. Now I know there are formulations where LSTMs can do this, but they don't work very well with these super long contexts and irregular sampling frequencies and so on. So the idea is, if we have a frame right here, like this one, and we want to detect what's on it, we should be able to pull information from other frames that the same camera has seen, like from this one or from this one or from this one right here. And we should be able to do so in a dynamic way. Now why could that help? If you look at, for example, down here, these images have been taken, they say, on separate days. But you can see this thing right here is in both images, or a very similar thing. It's probably that bus's regular route.
So in order to classify whether or not this here is a bus, it might be very helpful to also look at this picture right here and see, ah, you know, it's at about the same location, it looks the same, and it also looks like a bus. So you know, that kind of gives evidence that this other thing could also be a bus. Then also there are background objects. So sometimes the single frame detectors get confused. It might be labeling this here as a car because the exact lighting in this picture is just off by the right amount that it is confused. But considering this picture over here, maybe it recognizes, no, that's not a car. And it can bring over this evidence to that frame and consider, ah, maybe you know this is the same thing, so it's not a car. So this is not the same as simply adding training data. We really consider the fact here that these images come from the same camera that is in the same location, or maybe, you know, that is filming the same thing. So all of this is going to be within the same camera, not just adding IID training data. And with animals as well, often the same animal has like its regular route, or within these bursts of ten images, the same animal will kind of walk around a bit, and maybe, you know, here it's half occluded, but maybe in a different image you see, ah, here I see the nose. So it helps you make a better prediction. Also, animals are often in kind of crowds, and that helps: if you see that there are other deer around, the probability that this is a deer increases rapidly. So how are we going to do this? What we're going to do is we're going to build an attention mechanism that can do these kinds of looks into the past, and also a little bit into the future as we will see, but mainly we'll look into other images from the same camera in a dynamic way, and we'll learn how to address those other images from a memory bank. So the architecture is described right here. Now as you can see, we are still in the business of doing object detection. So what we'll do is we'll sort of hijack an existing object detector, and the object detector we're going to hijack is going to be this Faster R-CNN object detector. That's an object detector for the single frame problem. So that means you have one image and you're supposed to detect what's on it. It has two stages, as you can see. So stage one, if you have an image and let's say there's some stuff on it, stuff, stuff, stuff, there's stuff. What stage one is supposed to do is extract regions of interest. This could be, okay, all of these are regions of interest. So it simply says, well, there might be something right here in these regions of interest. And then it describes each of these regions of interest using features. So it extracts these regions of interest, and each region of interest gets features assigned to it. I think these are like seven by seven by 2048 features, but let's just say for the sake of describing it that these are just a vector of features for each region of interest. So each region of interest is going to be associated with one vector of features that this model extracts. And the next region of interest also has a vector, and the next region of interest also has a vector, and so on. Stage two then takes each one of these vectors and assigns a class to it. So this would be deer right here.
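Just to make that two-stage structure concrete, here is a minimal sketch in Python of the vanilla single-frame pipeline. This is my own simplification, not the authors' code: the function names and shapes are placeholders, and the real Faster R-CNN additionally has anchor generation, non-maximum suppression, box refinement and so on.

def detect_single_frame(stage_one, stage_two, image):
    # Stage 1: propose regions of interest plus one feature vector per region.
    rois, roi_features = stage_one(image)   # roi_features: (num_rois, feat_dim)
    # Stage 2: classify each region of interest (box refinement omitted here).
    class_logits = stage_two(roi_features)  # (num_rois, num_classes)
    return rois, class_logits

Context R-CNN, as we'll see, keeps stage one and stage two as they are and only changes the roi_features in between.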
Okay, so stage one proposes regions of interest along with features. Then stage two takes each of these regions of interest and classifies them, basically. And I guess there are many in-between stages. Like, this is massively simplified. There's non-maximum suppression. There is kind of an alignment stage where you can refine the bounding box, and so on. But in essence, these are two stages. And you can see that this system here goes in between the two stages. So all of this right here, we shove in between the two stages. So we'll still use the stage one and we'll still use the stage two, but in between, in this thing right here, we'll try to sort of pimp these features such that the stage two detector has an easier time classifying. So now we're going to pimp these features by incorporating other frames, because these features right now, if we just do it vanilla, are just from the current frame. And we're going to add to them information from other frames of the same camera. And we're going to do it in two different ways. So the first way, as you can see here, is this short term memory. And the second way is the long term memory. Now the two are slightly different. As you can guess, the short term memory is going to be only over a short time period around the current frame. And the long term memory is going to be basically across a very long time horizon into the past. You can see we're trying to classify this blue frame right here, what we call the key frame. So what we'll do is we'll run it through stage one. Cool. So we're going to add to the features for each region of interest. And then you can see this goes here, and through these residual connections, this goes into stage two over here. So basically, stage two still receives the same input, it receives whatever stage one outputs for the key frame. But we're going to add to that twice. So we're going to add two things, as I said. So the short term memory is added right here. Now how do we build the short term memory? We build the short term memory simply by considering all the frames around the key frame. And this you can see right here, the current window around the key frame, which can be like one frame around it or two frames or three frames, just a few frames around the current frame. And this can be fairly helpful. As we said, for example, if the deer moves a bit, the car moves a bit, you know, it gets into a slightly different lighting, and so on. This can help us very much to classify the current key frame if we also have features from the surrounding frames. So for each of these surrounding frames, we also run them through the stage one detector to also extract regions of interest, and all of these features go into this short term memory bank right here. There are different strategies: you don't always have to extract all of the regions of interest, you can also extract just the top one and so on, or you can extract the mean, since these are fairly, you know, consistent, the camera is at the same place. There are many ways you can do this. But what you ultimately end up with is a short term memory bank: you'll have a bank, and you have lots of these feature vectors in there for the regions of interest of the surrounding frames.
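As a rough sketch of what building that short term memory bank could look like, assuming stage one is a callable that returns regions of interest together with per-region feature vectors (the function name, the window logic, and the keep-everything strategy are my own placeholder choices, not the authors'):

import torch

def build_short_term_memory(stage_one, frames, key_index, window=1):
    # Gather ROI features from the few frames surrounding the key frame.
    bank = []
    lo = max(0, key_index - window)
    hi = min(len(frames), key_index + window + 1)
    for i in range(lo, hi):
        if i == key_index:
            continue  # the key frame itself goes through the main pipeline
        _, feats = stage_one(frames[i])  # feats: (num_rois, feat_dim)
        bank.append(feats)
    # Gradients flow through stage_one here, so the extractor can learn
    # to produce features that are useful as memory entries.
    return torch.cat(bank, dim=0)  # (memory_size, feat_dim)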
Now if this here is your half occluded deer, and you want to consider information from the surrounding frames, maybe in the next frame, so maybe this is three frames, like 1, 2, 3, and 2 is the key frame, maybe in the next frame the deer moves a bit and you see its nose, and then this particular region of interest here is relevant. So how do you now get, from this entire memory, the feature vector that would be helpful? And the answer is: you get it through an attention mechanism. You can see that right here, the way the short term memory is added is through this attention block. They describe the attention block right here. It is a fairly standard attention mechanism. I've done a video on Attention Is All You Need; if you don't know what an attention mechanism is, go check it out. But you can see it's very, very standard. So you have these input features, which are the features that come from the key frame. And you have the context features, which are all the features in your memory bank. You encode the input features into a query using fully connected layers, and the context features into keys. And then you match the queries with the keys with a softmax in order to get a weighting over the context features. And then you aggregate the values from the context features. So this is a standard attention mechanism. What does it mean? It basically means that each of these vectors right here will emit a key that kind of describes what kind of information is contained in that vector. The vector over here will emit a query that describes what sort of information it is looking for in order to describe what's in the region of interest as well as possible. And then you simply match the query with the keys to determine which key fits best to that query, and whichever one fits best, let's say this one here, then you take that vector from the memory bank and incorporate it together with your current information that you already have. So that's how you address things from other frames using an attention mechanism. Okay, now if this were all, you know, we could train this right now. We could train all of this because all of this is differentiable, right? This stage one detector right here is differentiable, it goes here and here, you know, the attention mechanism is differentiable, the stage two detector is differentiable, all differentiable. Cool. We can train this end to end. Now what's the problem? The problem is this long term memory right here. So in this memory, ideally, we would want to fit, let's say, an entire day, an entire week, or even an entire month of data from one of these cameras. And it's just not feasible that we expand this current window here to an entire month or an entire week for many of those cameras, because even though they have a low frame rate and so on, it's still too much in order for all of it to be differentiable, all backpropagatable, and so on. So we can't really backprop for this long term memory. In essence, what we want to do is exactly the same: we want to build up a memory of all of the regions of interest, or maybe selected regions, or all of the best regions of interest, whatever heuristic strategy we have, of the past, whatever this camera has seen, let's say in the last month or in the current week or something like this. We want to build all of this up and then use an attention mechanism, just the same, in order to incorporate it.
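To pin down the attention block they describe, here is a minimal sketch in PyTorch of one way such a memory-addressing step could look. Again, this is my own illustration, not the authors' code: the feature and attention dimensions, the single linear layer per projection, and the residual add-back are assumptions.

import torch
import torch.nn as nn

class MemoryAttention(nn.Module):
    # Attends from key-frame ROI features into a bank of context features.
    def __init__(self, feat_dim=2048, attn_dim=256):
        super().__init__()
        self.query_proj = nn.Linear(feat_dim, attn_dim)  # input features -> queries
        self.key_proj = nn.Linear(feat_dim, attn_dim)    # context features -> keys
        self.value_proj = nn.Linear(feat_dim, feat_dim)  # context features -> values
        self.scale = attn_dim ** 0.5

    def forward(self, input_feats, context_feats):
        # input_feats:   (num_rois, feat_dim), from the key frame
        # context_feats: (memory_size, feat_dim), from the memory bank
        q = self.query_proj(input_feats)    # (num_rois, attn_dim)
        k = self.key_proj(context_feats)    # (memory_size, attn_dim)
        v = self.value_proj(context_feats)  # (memory_size, feat_dim)
        # Match queries against keys, softmax to weight the memory entries.
        weights = torch.softmax(q @ k.t() / self.scale, dim=-1)
        attended = weights @ v              # (num_rois, feat_dim)
        # Add the aggregated context back onto the key-frame features.
        return input_feats + attended

The same module can then be applied once with the short term bank and once with the long term bank, which matches the two additions to the key-frame features described above.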
But we have to come up with these things right here in some other way, a way where we don't need to backprop. So we can't really use this stage one detector right here, because this is the one we're training, and so we would have to backprop through it. Now an easy proposal is to simply use it anyway but do like a stop gradient on it, so we don't backprop through it. That is one way, but the paper decides on a different way. The paper decides that for all of the past, basically right here, right here and so on, we'll take a pre-trained object detector. So not the one we're training currently, but we'll take a pre-trained one that was pre-trained either on something like COCO, which is an object detection data set, or you can pre-train it on COCO and then fine-tune it on a task you're interested in, in a single frame fashion. Whichever way, we'll take a pre-trained object detector or region of interest extractor, and that will give us, for each frame in the past, the regions of interest along with the features. And these are the features that we then go and put into the memory bank. Sorry, my tablet just crashed a bit. There we go. Okay, so we'll take a pre-trained extractor right here, that will give us features for regions of interest, we'll put those into the memory bank, and then we will use an attention mechanism to incorporate them. Now the attention mechanism we can train, but we cannot train the extractor for the features. And this is the difference to the short term memory, where we can actually train the feature extractor in order to help us with building the memory. Now the memory is simply built without a goal in mind, basically. And the attention mechanism basically has to learn to work with features that aren't meant for its task: it works with features that have originally been created for a different task, and they're not going to change. But as we'll see, this, you know, can be handled. So that's what they do. They incorporate short term and long term memory into their stage two prediction, and then the stage two prediction simply takes in all of those features and classifies the object. And that's the architecture of Context R-CNN. It's R-CNN with long and short term context. So they describe, you know, different ways of how they build the memory and how they build the features. I kind of glossed over this right now. There's a lot of consideration in building these things, and you have to look at the paper for how exactly they do this. I'm more interested in the high level architecture and the sort of ideas behind it. So when they do this, they do outperform the single frame baselines by quite a bit. So this SS and this CCT are the wildlife data sets, whereas this CC, I think this is CityCam, is the street data set. As you can see, they do outperform the single frame baseline by quite a bit. Now interestingly, as you can see right here, as they increase the time horizon of this long term memory, so they can now choose how much information they want to put in that long term memory, as they increase the time horizon to one minute, one hour, one day, and so on, the performance goes up and up and up, which is a strong indication that these features from the longer time horizon actually help, because you don't have more parameters. You simply increase the amount of information in the memory bank.
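A minimal sketch of how the long term memory differs: the features come from a separate, frozen, pre-trained extractor, so nothing here requires backprop. The names, the returned score vector, and the top-k selection are assumptions on my part; keeping only the best proposals is just one of the heuristic strategies mentioned above.

import torch

@torch.no_grad()  # frozen: we never backprop through the pre-trained extractor
def build_long_term_memory(pretrained_extractor, past_frames, top_k=1):
    bank = []
    for frame in past_frames:  # e.g. everything this camera saw in the past month
        _, feats, scores = pretrained_extractor(frame)  # scores: (num_rois,)
        keep = scores.topk(min(top_k, scores.numel())).indices
        bank.append(feats[keep])  # keep only the highest-scoring proposals
    return torch.cat(bank, dim=0)  # only the attention over this bank is trained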
And if the performance goes up, you can make a very strong claim that this is actually due to the fact that you have more information in that memory bank. I couldn't really guess any other explanation right here. So they do investigate different memory strategies. They do a lot of ablations right here, where they also say, okay, what if we only have the short term attention? What if we only have the long term attention? What if we only have self attention? That means attention only into the current frame, but across regions of interest. That's interesting if you have like a herd of animals and so on. And they all help. But as you can see, the long term attention tends to help the most in this data set, and the short term attention helps a lot in this data set, if you just compare to the other one. These are two different metrics, not data sets, sorry about that. But in essence, it helps the most when you combine the two, and that's, you know, pretty cool to see. So they do some qualitative results, which I find very interesting. For example, they can visualize what the attention weights of their models are. So here, you always have a very long time frame, I think an entire month, in this memory bank of the long term memory. Now in the top classification, you see the large thing here, the large frame, is the one you actually want to classify. And the other frames are the frames where the top attention scores are, so where the attention weights are the highest. So here, in order to classify this, what does the model pay attention to? Or which other frames does the model pay attention to? And you can see right here, they are all spread across the entire month. Here is the timeline. The most attended-to pictures are spread across the entire month, and almost all of them actually have that warthog in here. So this must be like its regular route. And the model recognizes that and pulls in information from all these other images in order to correctly classify it here. On the other hand, in the next example, this gazelle (and my tablet crashed right here), it also puts all the weight on top of images of that same gazelle. But you can see maybe that gazelle was only there for this one particular moment, and all the pictures this camera has of it are, you know, from the very few moments that the gazelle was around. You can see they all come from the same point in time, or very, very close points in time. And you can see that it puts a lot of weight on wherever the gazelle is. So you know, that's a pretty strong indication that it actually learns to pull in the correct information, be that from a long time horizon, or from a short time horizon if necessary. You can also see right here, they visualize where the top attention weights go, in terms of how far away the frames that the attention goes to are from the frame that they're trying to classify. So these graphics are somewhat kind of weird to interpret. This here always means how much the total time of the buffer is. So the memory buffer here always contains pictures from one hour before until one hour after the key frame you want to classify. So this is the frame you want to classify, at minute zero, and the memory buffer contains images from 60 minutes before to 60 minutes after. So it's not real time, right? You go back through your footage and you try to classify. You can also pull out images from the future.
You can see that most attention is on the current frame, which makes sense: you're trying to classify the current frame, and it kind of falls off as you go further and further away. This is across the entire data set, so this is not a specific example, which also makes sense. Probably, most of the time, the relevant information is closer in time rather than farther away. But you can also see that the distribution is pretty spread out, so the model makes use of the entire range of time. And you can see that throughout: even if you have an entire day in the buffer, or two days, even if you have an entire week before and after in the buffer, and even if you have an entire month here. And especially if you look at when you have an entire week in the buffer, you can see the periodicity through the days. So that means the model tends to pay attention to images that are from the same time of day as the current key frame. That's a fairly good indication that the model has actually learned to address this memory by its content, right? Now night and day isn't super difficult, because you can just go on the brightness and so on. But still, it's pretty cool to see that this is actually happening. They do have some failure cases of the single frame model that their model is able to handle, up here. And they make a lot of sense. So here you can see that there is an object that's moving out of frame, and the single frame detector wasn't able to recognize this, probably because it's moving out of frame, whereas this new Context R-CNN is able to detect it, probably because it looked at the frame just before it, where the car was somewhere back here, and it could correctly classify it. Well, just disregard my drawings. Here it managed to recognize this animal in the back, whereas this old model, the single frame model, hasn't, also probably by looking either at frames next to it, or by looking at other frames of herds of animals and realizing that usually, when there are two elephants, there are more. Here you can see that the object is highly occluded. So we're talking about object at the very edge of the frame, object poorly lit. This is particularly impressive. And also an example where the animals are often in herds. And if you see one deer, the likelihood that there are other deer is very high in this particular camera. And by aggregating information from different frames, you can see that maybe it's always the same patch of the area that comes by. And here, the single frame detector detects this patch here as a vehicle where it shouldn't. And of course, the new model, the Context R-CNN, is able to recognize that this is present in all of the frames, and in most frames, the single object detector doesn't detect it as a vehicle, and so it can kind of carry over that information. Now you can already see sort of what the downsides might be: if the single object detector is very, very sure in a single frame that this is a car, it could carry over that information to the other frames. So even though the single frame detector might have failed in that particular frame, if it fails super hard, it might, you know, shout that to all the other frames, basically dominate the memory, saying like, look, this is a car, I'm pretty sure. And it will carry over that information to all of the other frames. And they say, in one of these high confidence mistakes, it basically detected the same tree as a giraffe over and over again.
What I find particularly interesting is what they look at here: they have this curve where on the bottom, you have the confidence threshold, so how confident the model is, and on the y-axis, you have the number of false positives. And you can see that in the low confidence regime, the Context R-CNN has fewer false positives than the single frame detector. And the green line here is when you only have positive boxes, so when you only include regions of interest where there is an actual object, which in this case is sort of hurtful: you also want the regions of interest where there is nothing, because that helps you avoid false positives in other frames. That's why the orange line is below the green line. But strangely, here in the high confidence regime, you can see that the single frame model has fewer false positives than the Context R-CNN. And I like the text that they have to this: in Figure 7, we can see that adding empty representations reduces the number of false positives across all confidence thresholds compared to the same model with only positive representations. We investigated the 100 highest confidence false positives from Context R-CNN and found that in almost all of them, in 97 out of 100, the model had correctly found and classified animals that were missed by human annotators. So basically, these graphs are even underestimating how good that model is, because the model appears to be better than the human annotators of the test set. I find that to be pretty, pretty impressive. And here you can see failure modes, where they say, for example: when exploring the confident false positives on the Snapshot Serengeti data set, the three out of 100 images (so whatever was not a human failure) where Context R-CNN erroneously detected an animal were all of the same tree, highly confidently predicted to be a giraffe. So this is a failure mode: when the model is highly confident, it might spill that over to other frames, because we now aggregate the information within the same camera across the frames. It has to be said, of course, that their train-test split is such that the same camera is never in both the training data and the testing data. They have entirely different cameras in the testing data than in the training data, just so there is no information leakage. So that's the model right here, and how it works. It's pretty cool. It kind of wedges itself in between any single frame object detector that has these two stages. And it's a pretty neat idea to bring in context from the past, or even the future, of the same camera. Just a quick glance at the appendix: they have lots of different examples right here. In one example, their camera kind of fell over, and they say, well, it still worked. The system was still able to kind of do attention across this failure, this kind of tipping over of the camera. They have more examples right here, which I find pretty impressive, like these super low light things where it correctly detects like the possum. And yeah, I invite you to check out the paper; the code, they say, should be out soon. And I'll see you next time. Bye bye.
[ { "end": 6.16, "start": 0, "text": " Hi there, today we'll look at context R-CNN, long-term temporal context for per-camera" }, { "end": 11.96, "start": 6.16, "text": " object detection by Sarah Beery, Guan Hong Wu, Vivek Rathod, Ronnie Votel, and Jonathan" }, { "end": 12.96, "start": 11.96, "text": " Huang." }, { "end": 19.84, "start": 12.96, "text": " So, on a high level, this paper tries to do object detection for cameras where the camera" }, { "end": 22.04, "start": 19.84, "text": " is in the same place for a long time." }, { "end": 27, "start": 22.04, "text": " For example, these wild trap cameras or traffic cameras right here." }, { "end": 34.2, "start": 27, "text": " It proposes to do object detection by incorporating data from the images that the camera has seen" }, { "end": 39.08, "start": 34.2, "text": " in the past to help the detection in the current frame." }, { "end": 47.32, "start": 39.08, "text": " And it does so via an attention mechanism that it runs over a memory of past data." }, { "end": 51.400000000000006, "start": 47.32, "text": " So we're going to take a look at how this is done and how well it works." }, { "end": 54.84, "start": 51.400000000000006, "text": " And yeah, stick around if you want to know." }, { "end": 59.900000000000006, "start": 54.84, "text": " As always, if you enjoy content like this, then consider sharing it out, telling your" }, { "end": 61.52, "start": 59.900000000000006, "text": " friends about it." }, { "end": 65.72, "start": 61.52, "text": " Subscribe if you haven't and tell me what you think in the comments." }, { "end": 70.4, "start": 65.72, "text": " So, the paper starts off and describes the problem." }, { "end": 72.36, "start": 70.4, "text": " And the problem is fairly simple." }, { "end": 75.16, "start": 72.36, "text": " You want to do object detection in images." }, { "end": 79.80000000000001, "start": 75.16, "text": " Object detection is the task of basically, if I give you an image, you should tell me" }, { "end": 81.92, "start": 79.80000000000001, "text": " what is on the image and where." }, { "end": 88.76, "start": 81.92, "text": " So in this case, here you would have to draw me this bounding box and say, this is a deer." }, { "end": 92.56, "start": 88.76, "text": " On the bottom, you would have to draw bounding boxes." }, { "end": 94.44, "start": 92.56, "text": " Maybe they have to be rectangular, maybe not." }, { "end": 96.44, "start": 94.44, "text": " And say, this is a bus." }, { "end": 98.18, "start": 96.44, "text": " And here is a truck." }, { "end": 99.74000000000001, "start": 98.18, "text": " And here is another truck." }, { "end": 102.44, "start": 99.74000000000001, "text": " And here is a car." }, { "end": 103.44, "start": 102.44, "text": " And so on." }, { "end": 106.36, "start": 103.44, "text": " So, there can be many objects in an image." }, { "end": 107.68, "start": 106.36, "text": " There can be one object." }, { "end": 113.52000000000001, "start": 107.68, "text": " There can be objects of different classes or there can be no objects at all." }, { "end": 115.48, "start": 113.52000000000001, "text": " So this is just object detection." }, { "end": 117.82000000000001, "start": 115.48, "text": " And there have been many papers on this." }, { "end": 121.56, "start": 117.82000000000001, "text": " And specifically, there has been this RCNN." }, { "end": 123.96000000000001, "start": 121.56, "text": " And this is the model that we're going to extend." 
}, { "end": 130.56, "start": 123.96000000000001, "text": " So the RCNN model or specifically the faster RCNN model that we're going to build on is" }, { "end": 138.18, "start": 130.56, "text": " a model that simply detects these bounding boxes in single images." }, { "end": 144.28, "start": 138.18, "text": " But now we consider the situation where we have a camera that records images for a long," }, { "end": 145.84, "start": 144.28, "text": " long time." }, { "end": 151.8, "start": 145.84, "text": " So in these wild trap cameras, they often sit there for months." }, { "end": 157.84, "start": 151.8, "text": " And it's not that easy to make use of them because in addition to there being a lot of" }, { "end": 160.96, "start": 157.84, "text": " data, they have motion triggers." }, { "end": 164.14000000000001, "start": 160.96, "text": " So that could be there is no nothing for a long time." }, { "end": 167.46, "start": 164.14000000000001, "text": " And then there's the animal walks in the trap." }, { "end": 173.4, "start": 167.46, "text": " And then you have a bunch of images, like one per second for 10 seconds." }, { "end": 176.32, "start": 173.4, "text": " And then you have nothing again for like a day or two days." }, { "end": 181.08, "start": 176.32, "text": " And then you have 10 images again because another animal walks in or maybe doesn't." }, { "end": 182.72, "start": 181.08, "text": " And so on and another." }, { "end": 185.76, "start": 182.72, "text": " So you have irregular sampling frequencies." }, { "end": 190.88, "start": 185.76, "text": " You have very, very different distance between the frames." }, { "end": 198.29999999999998, "start": 190.88, "text": " All of this makes it very, very not suited for models like temporal convolutions or things" }, { "end": 202.92, "start": 198.29999999999998, "text": " like LSTMs because they don't work super well with data like this." }, { "end": 210.28, "start": 202.92, "text": " Now I know there are formulations where LSTMs can do this, but they don't work very well" }, { "end": 215.32, "start": 210.28, "text": " with these super long contexts and irregular sampling frequencies and so on." }, { "end": 222.07999999999998, "start": 215.32, "text": " So the idea is if we have a frame right here, like this one, and we want to detect what's" }, { "end": 228.4, "start": 222.07999999999998, "text": " on it, we should be able to pull information from other frames that the same camera has" }, { "end": 234.64, "start": 228.4, "text": " seen like from this one or from this one or from this one right here." }, { "end": 236.95999999999998, "start": 234.64, "text": " And we should be able to do so in a dynamic way." }, { "end": 238.56, "start": 236.95999999999998, "text": " Now why could that help?" }, { "end": 242.6, "start": 238.56, "text": " If you look at, for example, down here, these images have been taken." }, { "end": 246.84, "start": 242.6, "text": " They say images were taken on separate days." }, { "end": 251.24, "start": 246.84, "text": " But you can see this thing right here is in both images." }, { "end": 253.32, "start": 251.24, "text": " Or a very similar thing." }, { "end": 256.6, "start": 253.32, "text": " It's probably that bus's regular route." }, { "end": 263.9, "start": 256.6, "text": " So in order to classify whether or not this here is a bus, it might be very helpful to" }, { "end": 269.32, "start": 263.9, "text": " also look at this picture right here and see, ah, you know, it's about at the same location." 
}, { "end": 272.52, "start": 269.32, "text": " It looks the same and also it looks like a bus." }, { "end": 277.85999999999996, "start": 272.52, "text": " So you know, that kind of gives evidence that this could be, this other thing could also" }, { "end": 279.34, "start": 277.85999999999996, "text": " be a bus." }, { "end": 282.03999999999996, "start": 279.34, "text": " Then also there are background objects." }, { "end": 286.88, "start": 282.03999999999996, "text": " So sometimes the single frame detectors get confused." }, { "end": 292.2, "start": 286.88, "text": " It might be labeling this here as a car because just the lighting, the exact lighting in this" }, { "end": 297.12, "start": 292.2, "text": " picture is just off by the correct amount that it is confused." }, { "end": 302.28, "start": 297.12, "text": " But considering this picture over here, maybe it recognizes here, no, that's not a car." }, { "end": 310.79999999999995, "start": 302.28, "text": " And it can bring over this evidence to that frame and consider, ah, maybe you know this" }, { "end": 312.53999999999996, "start": 310.79999999999995, "text": " is the same thing." }, { "end": 314.64, "start": 312.53999999999996, "text": " So it's not a car." }, { "end": 318, "start": 314.64, "text": " So this is not the same than simply adding training data." }, { "end": 323.64, "start": 318, "text": " We really consider the fact here that these images, they come from the same camera that" }, { "end": 330.76, "start": 323.64, "text": " is in the same location or maybe, you know, that is filming the same thing." }, { "end": 337.88, "start": 330.76, "text": " So this, all of this is going to be within the same camera, not just adding, adding IID" }, { "end": 339, "start": 337.88, "text": " training data." }, { "end": 345.44, "start": 339, "text": " And with animals as well, like often the same animal has like its regular route or within" }, { "end": 352.56, "start": 345.44, "text": " these 10, these burst of tens, the same animal will kind of walk around a bit and maybe," }, { "end": 357.7, "start": 352.56, "text": " you know, here it's half occluded, but maybe in a different image you see, ah, here I see" }, { "end": 358.8, "start": 357.7, "text": " the nose." }, { "end": 363.92, "start": 358.8, "text": " So it helps you make a better prediction." }, { "end": 370.8, "start": 363.92, "text": " Also animals are often in kind of crowds and that helps if you see that there are other" }, { "end": 377.48, "start": 370.8, "text": " deer around, the probability that this is a deer increases rapidly." }, { "end": 380.36, "start": 377.48, "text": " So how are we going to do this?" }, { "end": 386.04, "start": 380.36, "text": " What we're going to do is we're going to build an attention mechanism that can do these kinds" }, { "end": 394.40000000000003, "start": 386.04, "text": " of look into the past and some, also a little bit of the future as we will see, but mainly" }, { "end": 400.26000000000005, "start": 394.40000000000003, "text": " we'll look into other images from the same camera in a dynamic way and we'll learn how" }, { "end": 405.46000000000004, "start": 400.26000000000005, "text": " to address those other images from a memory bank." }, { "end": 410.64000000000004, "start": 405.46000000000004, "text": " So the architecture is described right here." }, { "end": 416.36, "start": 410.64, "text": " Now as you can see, we are still in the business of doing object detection." 
}, { "end": 422.68, "start": 416.36, "text": " So what we'll do is we'll sort of hijack a existing object detector and the object detector" }, { "end": 429.76, "start": 422.68, "text": " we're going to hijack is going to be this FRCNN, this faster RCNN object detector." }, { "end": 435.41999999999996, "start": 429.76, "text": " That's an object detector for a single frame problem." }, { "end": 439.96, "start": 435.41999999999996, "text": " So that means you have one image and you're supposed to detect what's on it." }, { "end": 442.15999999999997, "start": 439.96, "text": " It has two stages, as you can see." }, { "end": 448.08, "start": 442.15999999999997, "text": " So stage one, if you have an image and let's say there's some stuff on it, stuff, stuff," }, { "end": 450.76, "start": 448.08, "text": " stuff, there's stuff." }, { "end": 456.2, "start": 450.76, "text": " What stage one is supposed to do is it's supposed to extract regions of interest." }, { "end": 460.03999999999996, "start": 456.2, "text": " This could be, okay, all of these are regions of interest." }, { "end": 465.58, "start": 460.03999999999996, "text": " So it simply says, well, there is something, there might be something right here in these" }, { "end": 467.47999999999996, "start": 465.58, "text": " regions of interest." }, { "end": 473.92, "start": 467.48, "text": " And then it describes each of these regions of interest using features." }, { "end": 479.40000000000003, "start": 473.92, "text": " So it extracts these regions of interest and each region of interest gets features assigned" }, { "end": 480.40000000000003, "start": 479.40000000000003, "text": " to it." }, { "end": 487.84000000000003, "start": 480.40000000000003, "text": " So, well, these are, I think these are like seven by seven by 2048 features, but let's" }, { "end": 495.46000000000004, "start": 487.84000000000003, "text": " just say for the sake of describing it that these are just a vector of features for each" }, { "end": 496.46000000000004, "start": 495.46000000000004, "text": " region of interest." }, { "end": 502.35999999999996, "start": 496.46, "text": " So each region of interest is going to be associated with one vector of features that" }, { "end": 504.4, "start": 502.35999999999996, "text": " this model extracts." }, { "end": 505.4, "start": 504.4, "text": " Okay." }, { "end": 510.2, "start": 505.4, "text": " And the next region of interest also has a vector and the next region of interest also" }, { "end": 513.12, "start": 510.2, "text": " has a vector and so on." }, { "end": 521.9, "start": 513.12, "text": " Stage two then takes each one of these, takes each one of these vectors and assigns a class" }, { "end": 522.9, "start": 521.9, "text": " to it." }, { "end": 524.8, "start": 522.9, "text": " So this would be deer right here." }, { "end": 528.92, "start": 524.8, "text": " Okay, so stage one proposes regions of interest along with features." }, { "end": 536.56, "start": 528.92, "text": " Then stage two takes each of these regions of interest and classifies them basically." }, { "end": 539.16, "start": 536.56, "text": " And I guess there's many in between stages." }, { "end": 540.68, "start": 539.16, "text": " Like this is massively simplified." }, { "end": 542.16, "start": 540.68, "text": " There's non-maximum suppression." }, { "end": 549.16, "start": 542.16, "text": " There is kind of an alignment stage where you can refine the bounding box and so on." 
}, { "end": 551.3599999999999, "start": 549.16, "text": " But in essence, these are two stages." }, { "end": 556.4, "start": 551.36, "text": " And you can see that this system here, it goes in between the two stages." }, { "end": 562.2, "start": 556.4, "text": " So all of this right here, we shove in between the two stages." }, { "end": 568.36, "start": 562.2, "text": " So we'll still use the stage one and we'll still use the stage two, but in between in" }, { "end": 574.48, "start": 568.36, "text": " this thing right here, we'll try to sort of pimp these features such that the stage two" }, { "end": 577.5600000000001, "start": 574.48, "text": " detector has an easier time classifying." }, { "end": 578.5600000000001, "start": 577.5600000000001, "text": " Okay." }, { "end": 583.4, "start": 578.56, "text": " So now we're going to pimp these features by incorporating in because these features" }, { "end": 588.76, "start": 583.4, "text": " right now, if we just do it vanilla, these are just from the current frame." }, { "end": 594.8399999999999, "start": 588.76, "text": " And we're going to add to them information from other frames of the same camera." }, { "end": 598.3599999999999, "start": 594.8399999999999, "text": " And we're going to do it in two different ways." }, { "end": 603.8399999999999, "start": 598.3599999999999, "text": " So the first way, as you can see here, the first way is this short term memory." }, { "end": 607.76, "start": 603.8399999999999, "text": " And the second way is the long term memory." }, { "end": 610.4, "start": 607.76, "text": " Now the two are slightly different." }, { "end": 615.56, "start": 610.4, "text": " As you can guess, the short term memory is going to be only over a short time period" }, { "end": 617.52, "start": 615.56, "text": " around the current frame." }, { "end": 625.02, "start": 617.52, "text": " And the long term memory is going to be basically across a very long time horizon into the past." }, { "end": 631.2, "start": 625.02, "text": " You can see we're trying to classify this blue frame right here, what we call the key" }, { "end": 632.2, "start": 631.2, "text": " frame." }, { "end": 633.92, "start": 632.2, "text": " So what we'll do is we'll run it through stage one." }, { "end": 634.92, "start": 633.92, "text": " Cool." }, { "end": 638, "start": 634.92, "text": " So we're going to add two features for each region of interest." }, { "end": 643.24, "start": 638, "text": " And then you can see this goes here and through these residual connections, this goes into" }, { "end": 645.5999999999999, "start": 643.24, "text": " stage two over here." }, { "end": 651.66, "start": 645.5999999999999, "text": " So basically, stage two still receives the same input, it receives whatever stage one" }, { "end": 653.8, "start": 651.66, "text": " outputs for the key frame." }, { "end": 656.8, "start": 653.8, "text": " But we're going to add to that twice." }, { "end": 660.8199999999999, "start": 656.8, "text": " So we're going to add two things as I said." }, { "end": 665, "start": 660.82, "text": " So the short term memory is added right here." }, { "end": 668.2800000000001, "start": 665, "text": " Now how do we build the short term memory?" }, { "end": 673.74, "start": 668.2800000000001, "text": " We build the short term memory simply by considering all the frames around the key frame." 
}, { "end": 677.9200000000001, "start": 673.74, "text": " And this you can see right here, the current window around the key frame, which can be" }, { "end": 684, "start": 677.9200000000001, "text": " like one frame around it or two frames or three frames, just a few frames around the" }, { "end": 685.32, "start": 684, "text": " current frame." }, { "end": 686.9200000000001, "start": 685.32, "text": " And this can be fairly helpful." }, { "end": 692.86, "start": 686.92, "text": " As we said, for example, if the deer moves a bit, the car moves a bit, you know, it gets" }, { "end": 695.88, "start": 692.86, "text": " into a slightly different lighting and so on." }, { "end": 703.42, "start": 695.88, "text": " This can help us very much to classify the current key frame if we also have features" }, { "end": 705.4799999999999, "start": 703.42, "text": " from the surrounding frames." }, { "end": 713.16, "start": 705.4799999999999, "text": " So for each of these surrounding frames, we also run them through the stage one detector" }, { "end": 716.56, "start": 713.16, "text": " to also extract regions of interest." }, { "end": 723.78, "start": 716.56, "text": " And that all of these features go into this memory short term memory bank right here." }, { "end": 728.16, "start": 723.78, "text": " There's different strategies, you don't always have to extract all of the regions of interest." }, { "end": 734.2399999999999, "start": 728.16, "text": " You can also extract just the top one and so on, or you can extract the mean, since" }, { "end": 737.8199999999999, "start": 734.2399999999999, "text": " these are fairly, you know, consistent, the cameras at the same place." }, { "end": 739.3599999999999, "start": 737.8199999999999, "text": " There are many ways you can do this." }, { "end": 746.0799999999999, "start": 739.3599999999999, "text": " But what you ultimately end up with is a short term memory bank that kind of is so you'll" }, { "end": 754.9000000000001, "start": 746.08, "text": " have a bank and you have lots of these feature vectors in here for your region, your regions" }, { "end": 757.9200000000001, "start": 754.9000000000001, "text": " of interest of the surrounding frames." }, { "end": 763.62, "start": 757.9200000000001, "text": " Now if this here, if this here is your half occluded deer, right, so this is the half" }, { "end": 771.8000000000001, "start": 763.62, "text": " occluded deer, and you want to consider information from the surrounding frames, maybe in the" }, { "end": 777.9599999999999, "start": 771.8, "text": " next frame, so maybe this is three frames, like 123 and two is the key frame, maybe in" }, { "end": 781.4399999999999, "start": 777.9599999999999, "text": " the next frame, the deer moves a bit and you see its nose." }, { "end": 785.8599999999999, "start": 781.4399999999999, "text": " And that this particular region of interest here is relevant." }, { "end": 794.24, "start": 785.8599999999999, "text": " So how do you know how do you now get from this entire memory, this feature vector that" }, { "end": 796.04, "start": 794.24, "text": " would be helpful?" }, { "end": 799.9599999999999, "start": 796.04, "text": " And the answer is you get it through an attention mechanism." }, { "end": 804.4000000000001, "start": 799.96, "text": " You can see that right here, the way the short term memory is added is through this attention" }, { "end": 805.5400000000001, "start": 804.4000000000001, "text": " block." 
}, { "end": 807.76, "start": 805.5400000000001, "text": " They describe the attention block right here." }, { "end": 810.6800000000001, "start": 807.76, "text": " It is a fairly standard attention mechanism." }, { "end": 813.2800000000001, "start": 810.6800000000001, "text": " So I've done a video on attention is all you need." }, { "end": 818.6800000000001, "start": 813.2800000000001, "text": " If you don't know what an attention mechanism is, go check it out." }, { "end": 821.88, "start": 818.6800000000001, "text": " But you can see it's it's very, very standard." }, { "end": 827.2, "start": 821.88, "text": " So you have these input features, which are the features that come from the key frame." }, { "end": 832.2800000000001, "start": 827.2, "text": " And you have the context features, which are all the features in your memory bank, you" }, { "end": 838.36, "start": 832.2800000000001, "text": " encode the input features with into a query using a fully connected layers and the context" }, { "end": 840.1600000000001, "start": 838.36, "text": " features into keys." }, { "end": 846.24, "start": 840.1600000000001, "text": " And then you match the queries with the keys and the softmax in order to get a weighting" }, { "end": 848.2800000000001, "start": 846.24, "text": " over the context features." }, { "end": 852.4000000000001, "start": 848.2800000000001, "text": " And then you aggregate the values from the context features." }, { "end": 855.4200000000001, "start": 852.4000000000001, "text": " So this is a standard attention mechanism." }, { "end": 856.4200000000001, "start": 855.4200000000001, "text": " What does it mean?" }, { "end": 863.68, "start": 856.42, "text": " It basically means that each of these vectors right here, they will emit a key that kind" }, { "end": 868.9, "start": 863.68, "text": " of describes what kind of information is contained in that vector." }, { "end": 876.04, "start": 868.9, "text": " The vector over here will emit a query that describes what sort of information it is looking" }, { "end": 882.1999999999999, "start": 876.04, "text": " for to in order to describe what's in the region of interest as well as possible." }, { "end": 888.5200000000001, "start": 882.2, "text": " And then you simply match the query with the keys to determine which key fits best to that" }, { "end": 895.24, "start": 888.5200000000001, "text": " query and whichever one fits best, let's say this one here, then you take that vector from" }, { "end": 902.9200000000001, "start": 895.24, "text": " the memory bank and incorporate it together with your current information that you already" }, { "end": 903.9200000000001, "start": 902.9200000000001, "text": " have." }, { "end": 910.32, "start": 903.9200000000001, "text": " So that's how you address things that from other frames using an attention mechanism." }, { "end": 916.08, "start": 910.32, "text": " Okay, now if this were all, you know, we could train this right now." }, { "end": 921.72, "start": 916.08, "text": " We could train all of this because all of this is differentiable, right?" }, { "end": 924.8000000000001, "start": 921.72, "text": " This stage one detector right here is differentiable." }, { "end": 932, "start": 924.8000000000001, "text": " It goes here and here, you know, the information, the attention mechanism is differentiable." }, { "end": 935.5200000000001, "start": 932, "text": " The stage two detector is differentiable, all differentiable." }, { "end": 936.5200000000001, "start": 935.5200000000001, "text": " Cool." 
}, { "end": 938.5600000000001, "start": 936.5200000000001, "text": " We can train this end to end." }, { "end": 939.5600000000001, "start": 938.5600000000001, "text": " Now what's the problem?" }, { "end": 942.7399999999999, "start": 939.56, "text": " The problem is this long term memory right here." }, { "end": 948.76, "start": 942.7399999999999, "text": " So in this memory, ideally, we would want to fit, let's say an entire day, an entire" }, { "end": 953.64, "start": 948.76, "text": " week or even an entire month of data from one of these cameras." }, { "end": 960.8, "start": 953.64, "text": " And it's just not feasible that we expand this current window here to an entire month" }, { "end": 966.2199999999999, "start": 960.8, "text": " or an entire week for many of those of those cameras, because even though they have a low" }, { "end": 973.1600000000001, "start": 966.22, "text": " frame rate and so on, it's still too much in order to then be all differentiable, all" }, { "end": 976.22, "start": 973.1600000000001, "text": " backpropagatable and so on." }, { "end": 982.24, "start": 976.22, "text": " So we can't really backprop in for these long term memory." }, { "end": 984.76, "start": 982.24, "text": " In essence, what we want to do is exactly the same." }, { "end": 992.44, "start": 984.76, "text": " We want to build up a memory of all of the regions of interest or maybe selected regions" }, { "end": 999.0400000000001, "start": 992.44, "text": " or all of the best regions of interest, whatever heuristic strategy we have of the past, whatever" }, { "end": 1003.96, "start": 999.0400000000001, "text": " this camera has seen, let's say in the last month or in the current week or something" }, { "end": 1009.1600000000001, "start": 1003.96, "text": " like this, we want to build all of this up and then use an attention mechanism, just" }, { "end": 1011.7600000000001, "start": 1009.1600000000001, "text": " the same in order to incorporate it." }, { "end": 1019.2800000000001, "start": 1011.7600000000001, "text": " But we have to come up with these things right here in some other way than a way where we" }, { "end": 1020.2800000000001, "start": 1019.2800000000001, "text": " can backprop." }, { "end": 1028.68, "start": 1020.28, "text": " So we can't really use this stage one detector right here because this is the one we're training" }, { "end": 1030.76, "start": 1028.68, "text": " and so we have to backprop through it." }, { "end": 1037.12, "start": 1030.76, "text": " Now an easy proposal is to simply use it anyway, but do like a stop gradient on it so we don't" }, { "end": 1038.68, "start": 1037.12, "text": " backprop through it." }, { "end": 1042.96, "start": 1038.68, "text": " That is one way but this the paper decides on a different way." }, { "end": 1051.92, "start": 1042.96, "text": " The paper decides that all of the past basically right here, right here and so on, we'll take" }, { "end": 1054.8400000000001, "start": 1051.92, "text": " a pre-trained object detector." }, { "end": 1061.08, "start": 1054.8400000000001, "text": " So not the one we're training currently, but we'll take a pre-trained one that was pre-trained" }, { "end": 1067.96, "start": 1061.08, "text": " either on something like Cocoa, which is an object detection data set, or you can pre-train" }, { "end": 1074.52, "start": 1067.96, "text": " it on Cocoa and then fine tune it on a task you're interested in, in a single frame fashion." 
}, { "end": 1082.16, "start": 1074.52, "text": " For whatever way, we'll take a pre-trained object detector or region of interest extractor" }, { "end": 1088.68, "start": 1082.16, "text": " and that will give us for each frame in the past will give us also the regions of interest" }, { "end": 1093.56, "start": 1088.68, "text": " along with the features." }, { "end": 1100.52, "start": 1093.56, "text": " And these are the features that we then go and put into the memory bank." }, { "end": 1103.52, "start": 1100.52, "text": " Sorry, my tablet just crashed a bit." }, { "end": 1105.52, "start": 1103.52, "text": " There we go." }, { "end": 1111.32, "start": 1105.52, "text": " Okay, so we'll take a pre-trained extractor right here." }, { "end": 1115.6399999999999, "start": 1111.32, "text": " That will give us features for regions of interest, we'll put that into the memory bank," }, { "end": 1119.28, "start": 1115.6399999999999, "text": " and then we will use an attention mechanism to incorporate it." }, { "end": 1126.32, "start": 1119.28, "text": " Now the attention mechanism we can train, but we cannot train the extractor for the" }, { "end": 1127.32, "start": 1126.32, "text": " features." }, { "end": 1131.84, "start": 1127.32, "text": " And this is the difference to the short term memory where we can actually train the feature" }, { "end": 1137.12, "start": 1131.84, "text": " extractor in order to help us with building the memory." }, { "end": 1141.72, "start": 1137.12, "text": " Now the memory is simply built without a goal in mind basically." }, { "end": 1148.2, "start": 1141.72, "text": " And the attention mechanism basically has to learn that it doesn't work with features" }, { "end": 1150.16, "start": 1148.2, "text": " that are meant for its task." }, { "end": 1155.04, "start": 1150.16, "text": " It works with features that have been originally created for a different tasks and they're" }, { "end": 1156.6000000000001, "start": 1155.04, "text": " not going to change." }, { "end": 1160.6000000000001, "start": 1156.6000000000001, "text": " But as we'll see, this, you know, can be handled." }, { "end": 1162.44, "start": 1160.6000000000001, "text": " So that's what they do." }, { "end": 1168.4, "start": 1162.44, "text": " They incorporate short term and long term memory into their stage two prediction and" }, { "end": 1173.16, "start": 1168.4, "text": " then the stage two prediction simply takes in all of those features and classifies the" }, { "end": 1176.1200000000001, "start": 1173.16, "text": " class of the object." }, { "end": 1179.8, "start": 1176.12, "text": " And that's the architecture of context rcnn." }, { "end": 1185.56, "start": 1179.8, "text": " It's rcnn with long and short term context." }, { "end": 1191.1, "start": 1185.56, "text": " So they describe very different ways of you know, how they built the memory and so on," }, { "end": 1192.6, "start": 1191.1, "text": " how they built the features." }, { "end": 1195.8799999999999, "start": 1192.6, "text": " I didn't, I kind of glossed over this right now." }, { "end": 1201.4399999999998, "start": 1195.8799999999999, "text": " There's a lot of consideration in building these things and you have to look at the paper" }, { "end": 1204.2399999999998, "start": 1201.4399999999998, "text": " how they exactly do this." }, { "end": 1209.96, "start": 1204.24, "text": " I'm more interested in the high level architecture and the sort of ideas behind it." 
}, { "end": 1218.8, "start": 1209.96, "text": " So when they do this, they do outperform the, they do outperform the current or the single" }, { "end": 1220.98, "start": 1218.8, "text": " frame baselines by quite a bit." }, { "end": 1227.32, "start": 1220.98, "text": " So this SS and this CCT are these wildlife data sets, whereas this CC, I think this is" }, { "end": 1233.44, "start": 1227.32, "text": " the city something city cam, these are this street data set." }, { "end": 1239.52, "start": 1233.44, "text": " As you can see, they do outperform the single frame baseline by quite a bit." }, { "end": 1244.88, "start": 1239.52, "text": " Now interesting, as you can see right here, as they increase the time horizon of this" }, { "end": 1250.28, "start": 1244.88, "text": " long term memory, so they, they can, they can now choose how much information do they" }, { "end": 1252.76, "start": 1250.28, "text": " want to put in that long term memory." }, { "end": 1259.4, "start": 1252.76, "text": " As they increase the time horizon for one minute, one hour, one day and so on, the performance" }, { "end": 1267.64, "start": 1259.4, "text": " goes up and up and up, which is a strong indication that these features actually help from the" }, { "end": 1271.1000000000001, "start": 1267.64, "text": " time horizon, because you don't have more parameters." }, { "end": 1277.0400000000002, "start": 1271.1000000000001, "text": " You simply increase the amount of information in the memory bank." }, { "end": 1284.16, "start": 1277.0400000000002, "text": " And if the performance goes up, you can make a very strong claim that these, this is actually" }, { "end": 1288.52, "start": 1284.16, "text": " due to the fact that you have more information in that memory bank." }, { "end": 1293.36, "start": 1288.52, "text": " Couldn't really guess any other explanation right here." }, { "end": 1298.6399999999999, "start": 1293.36, "text": " So they, they do, they do investigate different memory strategies." }, { "end": 1303.72, "start": 1298.6399999999999, "text": " They do a lot of ablations right here, where they also say, okay, what if we only have" }, { "end": 1305.86, "start": 1303.72, "text": " the short term attention?" }, { "end": 1307.96, "start": 1305.86, "text": " What if we only have the long term attention?" }, { "end": 1312.84, "start": 1307.96, "text": " What to only, if we only have self attention, that means attention only into the current" }, { "end": 1316.72, "start": 1312.84, "text": " frame, but of across regions of interest." }, { "end": 1321.04, "start": 1316.72, "text": " That's interesting if you have like a herd of animals and so on, and they all help." }, { "end": 1328.24, "start": 1321.04, "text": " But as you can see, the long term attention tends to help the most in this data set." }, { "end": 1332.18, "start": 1328.24, "text": " And the short term attention help helps a lot in this data set." }, { "end": 1337.84, "start": 1332.18, "text": " If you just compare to the other owner, these are two different metrics, not data sets." }, { "end": 1339.32, "start": 1337.84, "text": " Sorry about that." }, { "end": 1347.32, "start": 1339.32, "text": " But in essence, it helps the most when you combine the two and that's, you know, that's" }, { "end": 1351.86, "start": 1347.32, "text": " pretty cool to see." }, { "end": 1356.7, "start": 1351.86, "text": " So they do some qualitative results, which I find very interesting." 
}, { "end": 1363.2, "start": 1356.7, "text": " For example, they can visualize what the attention weights of their models are." }, { "end": 1371.1000000000001, "start": 1363.2, "text": " So here, you always have a very long timeframe, I think an entire month in this, in this memory" }, { "end": 1374.48, "start": 1371.1000000000001, "text": " bank of the long term memory." }, { "end": 1379.16, "start": 1374.48, "text": " Now in the top classification, you see the large thing here, the large frame is the one" }, { "end": 1382.18, "start": 1379.16, "text": " you actually want to classify." }, { "end": 1389.42, "start": 1382.18, "text": " And the other frames are the frames where the top attention score is so that the attention" }, { "end": 1393.72, "start": 1389.42, "text": " weights are the highest." }, { "end": 1398.28, "start": 1393.72, "text": " So here in order to classify this, what does the model pay attention to?" }, { "end": 1401.44, "start": 1398.28, "text": " Or which other frames does the model pay attention to?" }, { "end": 1406.74, "start": 1401.44, "text": " And you can see right here, they are all spread across the entire month." }, { "end": 1408.8200000000002, "start": 1406.74, "text": " Here is the timeline." }, { "end": 1412.98, "start": 1408.8200000000002, "text": " The most attended to pictures are spread across the entire month." }, { "end": 1419.14, "start": 1412.98, "text": " And almost all of them actually have that work on in here." }, { "end": 1422.22, "start": 1419.14, "text": " So this must be like its regular route." }, { "end": 1428.5800000000002, "start": 1422.22, "text": " And the model recognizes that and pulls in information from all these other images in" }, { "end": 1432.96, "start": 1428.5800000000002, "text": " order to correctly classify it here." }, { "end": 1444.3200000000002, "start": 1432.96, "text": " On the other hand, on the next example, this gazelle and tablet crashed right here." }, { "end": 1450.6799999999998, "start": 1444.32, "text": " It also puts all the weight on top of images of that same gazelle." }, { "end": 1456.26, "start": 1450.6799999999998, "text": " But you can see maybe that gazelle was only there for this one particular moment." }, { "end": 1462.32, "start": 1456.26, "text": " And all the pictures this camera has of it is, you know, in the very few moments that" }, { "end": 1463.7, "start": 1462.32, "text": " the gazelle was around." }, { "end": 1469.54, "start": 1463.7, "text": " You can see they all come here from the same point in time, or very, very close points" }, { "end": 1470.54, "start": 1469.54, "text": " in time." }, { "end": 1477.22, "start": 1470.54, "text": " And you can see that it puts a lot of weight on wherever the gazelle is." }, { "end": 1481.74, "start": 1477.22, "text": " So you know, that's a pretty strong indication that it actually learns to pull in the correct" }, { "end": 1489, "start": 1481.74, "text": " information be that from long time horizon, or from a short time horizon if necessary." }, { "end": 1497.28, "start": 1489, "text": " You can also see right here, they visualize the top attention, where the top attention" }, { "end": 1505.96, "start": 1497.28, "text": " weights go, in terms of how long the frames where the attention goes to is away from the" }, { "end": 1508.96, "start": 1505.96, "text": " frame that they're trying to classify." }, { "end": 1514.54, "start": 1508.96, "text": " So these graphics are somewhat kind of weird to interpret." 
}, { "end": 1519.3999999999999, "start": 1514.54, "text": " This here always means how much is the total time of the buffer." }, { "end": 1526.18, "start": 1519.3999999999999, "text": " So the memory buffer here contains always pictures from the total from one hour before" }, { "end": 1529.68, "start": 1526.18, "text": " until one hour after the key frame you want to classify." }, { "end": 1533.5, "start": 1529.68, "text": " So this is the frame you want to classify at minute zero." }, { "end": 1540.18, "start": 1533.5, "text": " And the memory buffer contains images from 60 minutes before to 60 minutes after." }, { "end": 1541.74, "start": 1540.18, "text": " So it's not real time, right?" }, { "end": 1545.14, "start": 1541.74, "text": " You go back to your through your footage and you try to classify." }, { "end": 1549.7, "start": 1545.14, "text": " You can also pull out images from the future." }, { "end": 1554.18, "start": 1549.7, "text": " You can see there's most attention is on the current frame, which makes sense." }, { "end": 1558.9, "start": 1554.18, "text": " You're trying to classify the current frame and it kind of falls off as you go further" }, { "end": 1560.74, "start": 1558.9, "text": " and further away." }, { "end": 1562.5800000000002, "start": 1560.74, "text": " This is across the entire data set." }, { "end": 1565.74, "start": 1562.5800000000002, "text": " So this is not a specific example, which also makes sense." }, { "end": 1572.7, "start": 1565.74, "text": " Probably in most of the time, the relevant information is closer in time rather than" }, { "end": 1573.96, "start": 1572.7, "text": " farther away." }, { "end": 1577.54, "start": 1573.96, "text": " But also you can see that the distribution is pretty spread out." }, { "end": 1582.18, "start": 1577.54, "text": " So it makes the model makes use of the entire range of time." }, { "end": 1587.3400000000001, "start": 1582.18, "text": " And you can see that throughout, even if you have an entire day in the buffer or two days," }, { "end": 1592.18, "start": 1587.3400000000001, "text": " even if you have entire week before and week after in the buffer, and even if you have" }, { "end": 1595.0800000000002, "start": 1592.18, "text": " an entire month here." }, { "end": 1600.5, "start": 1595.0800000000002, "text": " And especially if you look at when you have an entire week in the buffer, you can see" }, { "end": 1605.24, "start": 1600.5, "text": " the periodicity through the days." }, { "end": 1612.14, "start": 1605.24, "text": " So that means the model tends to pay attention to images that are from the same time of day." }, { "end": 1615.7800000000002, "start": 1612.14, "text": " Compared to the current key frame." }, { "end": 1621.66, "start": 1615.7800000000002, "text": " That's fairly, fairly good indication that the model has actually learned to address" }, { "end": 1625.0200000000002, "start": 1621.66, "text": " these this memory by its content, right?" }, { "end": 1630.1000000000001, "start": 1625.0200000000002, "text": " Now night and day isn't super difficult because you can just go on the brightness and so on." }, { "end": 1635.14, "start": 1630.1000000000001, "text": " But still, it's pretty cool to see that this is actually happening." }, { "end": 1641.66, "start": 1635.14, "text": " They do have some failure cases of the single frame model that their model is able to handle" }, { "end": 1643.5800000000002, "start": 1641.66, "text": " up here." 
}, { "end": 1646.8400000000001, "start": 1643.5800000000002, "text": " And they make a lot of sense." }, { "end": 1652.74, "start": 1646.8400000000001, "text": " So here you can see that there is an object that's moving out of frame." }, { "end": 1659.7, "start": 1652.74, "text": " And the single frame detector wasn't able to recognize this probably because it's moving" }, { "end": 1660.8600000000001, "start": 1659.7, "text": " out of frame." }, { "end": 1666.02, "start": 1660.8600000000001, "text": " Whereas this new this context rcnn is able to detect it probably because it looked at" }, { "end": 1673.22, "start": 1666.02, "text": " the frame just before it where the car was somewhere back here and it could correctly" }, { "end": 1674.22, "start": 1673.22, "text": " classify it." }, { "end": 1679.42, "start": 1674.22, "text": " Well, that's well, I just disregard my drawings." }, { "end": 1686.54, "start": 1679.42, "text": " Here it managed to recognize this animal in the back, whereas this old model, the single" }, { "end": 1693.66, "start": 1686.54, "text": " frame model hasn't also probably by looking either at frames next to it or by looking" }, { "end": 1700.0600000000002, "start": 1693.66, "text": " at other frames of herds of animals and realizing that usually when there's two elephants, there's" }, { "end": 1703.3000000000002, "start": 1700.0600000000002, "text": " more." }, { "end": 1706.26, "start": 1703.3000000000002, "text": " Here you can see that the object highly occluded." }, { "end": 1713.14, "start": 1706.26, "text": " So we're talking about the object like at the very edge of the frame object poorly lit." }, { "end": 1716.14, "start": 1713.14, "text": " This is particularly impressive." }, { "end": 1721.26, "start": 1716.14, "text": " And also an example where the animals are often in herds." }, { "end": 1726.54, "start": 1721.26, "text": " And if you see one deer, the likelihood that there's other deer is very high in this particular" }, { "end": 1729.06, "start": 1726.54, "text": " camera." }, { "end": 1734.58, "start": 1729.06, "text": " And by aggregating information from different frames, you can see that maybe it's always" }, { "end": 1737.58, "start": 1734.58, "text": " the same patch of the air that comes by." }, { "end": 1746.06, "start": 1737.58, "text": " And here, the single frame detector detects this patch here as a vehicle where it shouldn't." }, { "end": 1751.7, "start": 1746.06, "text": " And of course, the new model, the context RCNN is able to recognize that this is present" }, { "end": 1753.94, "start": 1751.7, "text": " in all of the frames." }, { "end": 1761.94, "start": 1753.94, "text": " And in most frames, the single object detector doesn't detect it as a vehicle." }, { "end": 1765.3, "start": 1761.94, "text": " And so it can kind of carry over that information." }, { "end": 1770.58, "start": 1765.3, "text": " Now you can already see sort of what the downsides might be if the single object detector is like" }, { "end": 1777.6599999999999, "start": 1770.58, "text": " very, very, very sure that this is in a single frame that this is a car." }, { "end": 1780.8799999999999, "start": 1777.6599999999999, "text": " It could carry over that information to the other frame." 
}, { "end": 1786.78, "start": 1780.8799999999999, "text": " So even though the single frame detector might have failed in that particular frame, if it" }, { "end": 1791.46, "start": 1786.78, "text": " fails super hard, it might, you know, shout that to all the other frames basically dominate" }, { "end": 1796.34, "start": 1791.46, "text": " the memory saying like, look, this is a car, I'm like pretty sure." }, { "end": 1801.34, "start": 1796.34, "text": " And it will carry over that information to all of the other frames." }, { "end": 1808.8999999999999, "start": 1801.34, "text": " And they say in one of these high confidence mistakes, it basically detected the same tree" }, { "end": 1813.34, "start": 1808.8999999999999, "text": " as a giraffe over and over again." }, { "end": 1822.98, "start": 1813.34, "text": " What I find particularly interesting is they do look at, so here they have this curve of" }, { "end": 1828.06, "start": 1822.98, "text": " on the bottom, you have confidence threshold, so how confident the model is." }, { "end": 1832.92, "start": 1828.06, "text": " And on the y-axis, you have the number of false positives." }, { "end": 1842.14, "start": 1832.92, "text": " And you can see that in the low confidence regime, the context RCNN has lower false positives" }, { "end": 1846.04, "start": 1842.14, "text": " than the single frame detector." }, { "end": 1849.3600000000001, "start": 1846.04, "text": " And the green line here is when you only have positive boxes." }, { "end": 1856.4199999999998, "start": 1849.36, "text": " So when you only include regions of interest where there is an actual object, which in" }, { "end": 1862.26, "start": 1856.4199999999998, "text": " this case is sort of hurtful, you also want the regions of interest where there is nothing" }, { "end": 1867.34, "start": 1862.26, "text": " because that helps you avoid false positives in other frames." }, { "end": 1869.1799999999998, "start": 1867.34, "text": " That's why the orange line is below the green line." }, { "end": 1874.5, "start": 1869.1799999999998, "text": " But strangely here in the high confidence regime, you can see that the single frame" }, { "end": 1878.7199999999998, "start": 1874.5, "text": " model has fewer false positives than the context RCNN." }, { "end": 1885.18, "start": 1878.72, "text": " And I like the text that they have to this." }, { "end": 1889.42, "start": 1885.18, "text": " In figure 7, we can see that adding empty representations reduces the number of false" }, { "end": 1894.26, "start": 1889.42, "text": " positives across all confidence threshold compared to the same model with only positive" }, { "end": 1895.82, "start": 1894.26, "text": " representations." }, { "end": 1901.66, "start": 1895.82, "text": " We investigated the 100 highest confidence false positives from context RCNN and found" }, { "end": 1908.18, "start": 1901.66, "text": " that in almost all of them, in 97 out of 100, the model had correctly found and classified" }, { "end": 1912.3400000000001, "start": 1908.18, "text": " animals that were missed by human annotators." }, { "end": 1923.46, "start": 1912.3400000000001, "text": " So basically these graphs are even underestimating how good that model is because the model appears" }, { "end": 1928.02, "start": 1923.46, "text": " to be better than the human annotators of the test set." }, { "end": 1933.8200000000002, "start": 1928.02, "text": " I find that to be pretty, pretty impressive." 
}, { "end": 1939.6799999999998, "start": 1933.82, "text": " And here you can see failure modes where they say, for example, when exploring the confident" }, { "end": 1946.86, "start": 1939.6799999999998, "text": " false positives on the snapshot Serengeti data set, the three out of 100 images, so" }, { "end": 1955.82, "start": 1946.86, "text": " whatever was not a human failure, where context RCNN erroneously detected an animal, where" }, { "end": 1961.06, "start": 1955.82, "text": " all of the same tree highly confidently predicted to be a giraffe." }, { "end": 1966.94, "start": 1961.06, "text": " So this is a failure mode when the model is highly confident it might spill that over" }, { "end": 1974.06, "start": 1966.94, "text": " to other frames because we now aggregate the information within the same camera across" }, { "end": 1976.1399999999999, "start": 1974.06, "text": " the frames." }, { "end": 1983.06, "start": 1976.1399999999999, "text": " To be said, of course, their train test split is such that there's not the same camera in" }, { "end": 1985.06, "start": 1983.06, "text": " the training data as in the testing data." }, { "end": 1992.62, "start": 1985.06, "text": " They have entirely different cameras in the testing data than in the training data, just" }, { "end": 1996.54, "start": 1992.62, "text": " so there is no information leakage." }, { "end": 2001.4199999999998, "start": 1996.54, "text": " So that's the model right here, how it works." }, { "end": 2002.5, "start": 2001.4199999999998, "text": " It's pretty cool." }, { "end": 2009.3799999999999, "start": 2002.5, "text": " It kind of wedges itself in between any single frame object detector that has these two stages." }, { "end": 2018.3400000000001, "start": 2009.38, "text": " And it's a pretty neat idea to bring in context from the past or even the future of the same" }, { "end": 2019.5200000000002, "start": 2018.3400000000001, "text": " camera." }, { "end": 2023.8200000000002, "start": 2019.5200000000002, "text": " Just a quick glance at the appendix, they have lots of different examples right here." }, { "end": 2028.46, "start": 2023.8200000000002, "text": " In one example, their camera kind of fell over and they say, well, it still worked." }, { "end": 2036.3000000000002, "start": 2028.46, "text": " The system was still able to kind of do attention across this failure, this kind of tipping" }, { "end": 2039.3000000000002, "start": 2036.3000000000002, "text": " over of the camera." }, { "end": 2044.46, "start": 2039.3, "text": " They have more examples right here, which I find pretty impressive, like these super" }, { "end": 2052.14, "start": 2044.46, "text": " low light things where it correctly detects like the possum." }, { "end": 2059.22, "start": 2052.14, "text": " And yeah, I invite you to check out the paper, the code they say should be out soon." }, { "end": 2061.22, "start": 2059.22, "text": " And I'll see you next time." }, { "end": 2069.8999999999996, "start": 2061.22, "text": " Bye bye." } ]
Hdo81GtLC_4
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Direct Feedback Alignment Scales to Modern Deep Learning Tasks and Architectures (Paper Explained)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "gnn", "transformer", "graph", "biology", "neurons", "axon", "dendrites", "plausible", "biologically plausible", "backprop", "backpropagation", "dfa", "feedback alignment", "random projections" ]
Backpropagation is one of the central components of modern deep learning. However, it's not biologically plausible, which limits the applicability of deep learning to understand how the human brain works. Direct Feedback Alignment is a biologically plausible alternative and this paper shows that, contrary to previous research, it can be successfully applied to modern deep architectures and solve challenging tasks. OUTLINE: 0:00 - Intro & Overview 1:40 - The Problem with Backpropagation 10:25 - Direct Feedback Alignment 21:00 - My Intuition why DFA works 31:20 - Experiments Paper: https://arxiv.org/abs/2006.12878 Code: https://github.com/lightonai/dfa-scales-to-modern-deep-learning Referenced Paper by Arild Nøkland: https://arxiv.org/abs/1609.01596 Abstract: Despite being the workhorse of deep learning, the backpropagation algorithm is no panacea. It enforces sequential layer updates, thus preventing efficient parallelization of the training process. Furthermore, its biological plausibility is being challenged. Alternative schemes have been devised; yet, under the constraint of synaptic asymmetry, none have scaled to modern deep learning tasks and architectures. Here, we challenge this perspective, and study the applicability of Direct Feedback Alignment to neural view synthesis, recommender systems, geometric learning, and natural language processing. In contrast with previous studies limited to computer vision tasks, our findings show that it successfully trains a large range of state-of-the-art deep learning architectures, with performance close to fine-tuned backpropagation. At variance with common beliefs, our work supports that challenging tasks can be tackled in the absence of weight transport. Authors: Julien Launay, Iacopo Poli, François Boniface, Florent Krzakala Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://discord.gg/4H8xxDF BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Hi there, today we'll look at Direct Feedback Alignment Scales to Modern Deep Learning Tasks and Architectures by Julien Launay, Iacopo Poli, François Boniface and Florent Krzakala.

On a high level, this paper replaces the back propagation algorithm in deep learning architectures with an algorithm called direct feedback alignment, which is more biologically plausible. The algorithm has been around for a while, but it hadn't yet been shown to be applicable to really modern, big deep learning architectures and to perform on par with backprop on modern deep learning tasks. This paper, as I understand it, is the first one to demonstrate that it can do that. So this is very much an engineering paper, an applied paper, and we're going to mostly go into direct feedback alignment as such. I don't think we're going to go too much into the actual empirical findings, because even though they're impressive and it's a good piece of engineering, I think they can be summarized pretty much by: it works, not yet on par with back propagation, but it's a promising direction.

Alright, as always, if you like content like this, consider sharing it out and leaving a like, and tell me in the comments what you like. Of course, subscribe if you aren't yet; that is, you know, essential, otherwise how are we gonna hear from me in the future? Okay, let's dive in.

They say: despite being the workhorse of deep learning, the back propagation algorithm is no panacea. It enforces sequential layer updates, thus preventing efficient parallelization of the training process. Furthermore, its biological plausibility is being challenged. Alternative schemes have been devised; yet, under the constraint of synaptic asymmetry, none have scaled to modern deep learning tasks and architectures. Here, we challenge this perspective and study the applicability of direct feedback alignment to neural view synthesis, recommender systems, geometric learning and natural language processing. In contrast with previous studies limited to computer vision tasks, our findings show that it successfully trains a large range of state-of-the-art deep learning architectures, with performance close to fine-tuned back propagation. At variance with common beliefs, our work supports that challenging tasks can be tackled in the absence of weight transport.

So there's a lot to unpack in this particular abstract. First of all, what's the problem with back propagation? They have two quarrels with it right here. First of all, it prevents efficient parallelization of the training process. So what does that mean? In back propagation, and I'm pretty sure you all know basic back propagation, you have an input to a neural network, and the neural network has a bunch of layers, so the input travels layer by layer, and at the end you get some output. Your output y hat, let's call it that, is what the neural network, let's say it's a classifier, thinks the class of this particular x should be. Now, in the data set you have your true label y, and you compare that to your output label and compute a loss function. The whole question of the back propagation algorithm is: how do I need to change the layers of my neural network in order to make the loss as small as possible? And for that you can use back propagation, which means you can take that loss and back propagate it down the layers in order to update each layer individually.
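To make this concrete before it gets replaced, here is a minimal NumPy sketch of such a backprop training step. This is my own illustration, not code from the paper; the layer sizes, the tanh non-linearity and the squared loss are arbitrary simplifications.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny fully connected network: sizes[0] inputs, sizes[-1] outputs.
sizes = [784, 256, 128, 10]
Ws = [rng.normal(0.0, 0.05, (m, n)) for n, m in zip(sizes[:-1], sizes[1:])]

def dtanh(a):
    return 1.0 - np.tanh(a) ** 2      # derivative of the tanh non-linearity

def forward(x):
    hs, activs = [x], []              # hs[i] is the input to layer i
    for W in Ws:
        a = W @ hs[-1]                # pre-activation of this layer
        activs.append(a)
        hs.append(np.tanh(a))         # layer output, next layer's input
    return hs, activs

def backprop_step(x, y, lr=0.01):
    hs, activs = forward(x)
    # error signal at the top (squared loss for simplicity)
    delta = (hs[-1] - y) * dtanh(activs[-1])
    # sequential backward pass, relaying delta through the W transposes
    for i in reversed(range(len(Ws))):
        grad = np.outer(delta, hs[i])
        if i > 0:
            delta = (Ws[i].T @ delta) * dtanh(activs[i - 1])
        Ws[i] -= lr * grad
```

The `Ws[i].T @ delta` line is exactly what the paper takes issue with: the error is relayed down through the transposed forward weights, one layer after another.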
So the first problem they have with the back propagation algorithm, and it's kind of a secondary problem, is that it is sequential. In order to update a given layer, you need to have already back propagated to the layer above it, and then you need to back propagate further down, layer by layer. It's a sequential task: you back propagate down the layers one after another, whereas it would be more efficient if we could somehow update all the layers in parallel. But this is the minor quarrel.

The bigger one is that back propagation isn't biologically plausible. We know that in real neurons you have your dendrites, your inputs, and your axon, and the signal only travels in one direction. We don't know of a feedback mechanism in real neurons in the brain that would allow information to flow in the opposite direction in the way back propagation requires. There is information flowing in the opposite direction, but it's too slow, and it's not analogous to back propagation; there's nothing in the brain that would take the role of the back propagation algorithm. Specifically, if each layer is characterized by a weight matrix, what back propagation does is use the transpose of that weight matrix to back propagate. So the arrows to the front use the weight matrices, and the arrows to the back use the transposes of the weight matrices. The transposes of the weight matrices relay the information of what needs to change, that is, what needs to change to make the loss as small as possible, down to the other layers, and we don't know of any biological analogy to this mechanism. The transpose acts as sort of a layer inverse, and that is called weight transport. Weight transport means that you can do something like apply the transpose of the weights to bring information from the next layer back to this layer. In biology we don't have this, and in direct feedback alignment we don't have this either.

So direct feedback alignment, the next thing here in this abstract, is the algorithm that they are going to apply. We'll go into what it is, but it is more biologically plausible in that what it does is take the loss and distribute it globally to all of the layers directly. And it does so without requiring these transposes and also without requiring these sequential steps. So both of their stated problems would be solved by this.

They say that this is in contrast with previous studies limited to computer vision tasks. What people have tried to do is apply this DFA algorithm to computer vision tasks. But in computer vision most architectures are CNNs, and as far as I understand it, DFA can right now only be applied to linear layers, so something that is Wx plus b followed by a non-linearity. Even though you can write a CNN as a linear layer with constraints, as I read this paper I think the right interpretation is that you can only apply DFA to fully connected layers, or things that look kind of like fully connected layers. So what they're going to do in their experiments is take these big architectures, like transformers, and train the parts that act as fully connected layers with DFA updates.
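As a sketch of what such a DFA update could look like for fully connected layers, reusing the toy network and `forward` pass from the backprop sketch above (again my own illustration rather than the authors' code), each hidden layer receives the top-layer error through its own fixed random matrix:

```python
# One fixed random feedback matrix per hidden layer, mapping the
# output-sized error straight to that layer's width. Never trained.
Bs = [rng.normal(0.0, 0.05, (m, sizes[-1])) for m in sizes[1:-1]]

def dfa_step(x, y, lr=0.01):
    hs, activs = forward(x)
    e = (hs[-1] - y) * dtanh(activs[-1])  # global error at the top layer
    for i in range(len(Ws)):
        if i == len(Ws) - 1:
            delta = e                     # top layer: same as in backprop
        else:
            delta = (Bs[i] @ e) * dtanh(activs[i])  # direct random projection
        Ws[i] -= lr * np.outer(delta, hs[i])
```

Note that there is no transpose anywhere, and that given e the per-layer updates are independent of each other, so in principle they could run in parallel, which addresses both quarrels at once.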
To be clear, they're not going to replace the layers themselves; they're going to replace the backpropagation part of the update with DFA updates. It should be said that they still use backpropagation in some places where they can't replace the updates with DFA, that is, where a layer isn't a fully connected layer or, I guess, where it's too big. They somehow have to make it work, so often they will not update, for example, the embedding layers and things like that. So why do they move away from computer vision tasks? Because in computer vision, CNNs rule that world, and if you can only handle feed-forward, fully connected layers, you're going to lose from the start. It's kind of an unfair fight in that sense. But even setting that aside, they apply this to neural view synthesis, recommender systems, geometric learning and natural language processing. These are quite diverse tasks, and the architectures they apply it to are quite diverse too. For example, in geometric learning I believe they use graph neural networks, and there they replace the updates of the fully connected layers that connect the vertices and the edges together and compute properties of them. That's a pretty good fit for DFA, because what you're looking for is state-of-the-art tasks and architectures that still employ fully connected layers; that's where this algorithm can shine. So that's the setup, and they're basically going to show that the performance is close to fine-tuned backpropagation. Alright, so what is DFA, this direct feedback alignment? For that I actually want to jump papers and go to another paper that describes DFA in a more graphic fashion: Direct Feedback Alignment Provides Learning in Deep Neural Networks by Arild Nøkland, which also shows some theoretical properties of DFA. I don't want to go into the theory or the math right here; I mainly like this paper for one particular graphic. In the backpropagation algorithm, as you can see there, you forward propagate using the weight matrices and then backpropagate using the transposes of the weight matrices. One step away from that is a scheme called feedback alignment, which is not the same thing as direct feedback alignment. In feedback alignment you simply say: I won't backprop using these transposes, because that's not biologically possible; instead I'll use other matrices, and these other matrices are random. By a random matrix we really mean a matrix of the correct shape, the same shape as this W transpose, with each entry sampled from a Gaussian. You fix this matrix once at the beginning of training by sampling it, and then you leave it there; that's the matrix you use for relaying the signal back through the layers.
Now you might protest and say: wait, that's not going to work, because you need to know the weights to know what needs to change in the lower layers; that information has to be in there somehow, so how are you going to know what to change? That's a valid question, and I'll give my opinion of why this works in a moment. First, to recap: this is feedback alignment, which simply uses fixed random matrices to backpropagate, so to speak. Direct feedback alignment then goes a step further, because feedback alignment still proceeds sequentially, layer by layer. Direct feedback alignment instead takes whatever the change to the top layer should be (how do I need to change the top layer?) and relays that, in a global fashion, to all the layers directly, each through its own random matrix. The figure also shows a variant called IFA, which we're not going to look at today because it's not relevant for this paper, but I hope you can see the overview. So let's go back to the paper. Here is the mathematical formulation of all of this, and it pays to look at it to understand what's going on. They characterize a neural network as having N layers, where each layer takes the output of the previous layer, multiplies it by a weight matrix to get the quantity a, and puts a through a non-linearity to obtain the next layer's input. So h is the output of one layer and the input of the next. At the very end, the last output is your estimate of the labels, so the last non-linearity is probably going to be something like a softmax. How can we hold this in our heads? You have the neural network, you forward propagate, always using the weight matrix W and then the non-linearity of that particular layer, and in the last layer you get your y hat, as we saw before. Now the question is: how can we adjust the weights W to push y hat more into the direction of y? Here it's useful to think of the last layer as a vector output. Usually we think in terms of the loss function, but all of these algorithms start from the derivative of the loss with respect to the last layer's output a_y, right before the final non-linearity; if this is a classifier, a_y holds the logits. So instead of thinking about scalar y and y hat, think of the output as a vector and the desired output as another vector; in classification, the desired output is a one-hot vector. If you think of it like this, you'll recognize: if this is my estimated output and I want to achieve that target, I need to change my output in that direction, to make it point more toward the output I want. The entire question now becomes: how do I tell the lower layers about this change? This is the change I want to make; how do I get the lower layers to provide me with the green signal instead of the red signal? Somehow I need to propagate this blue difference downward.
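To pin the notation down, here is my own transcription of what I just described verbally; the closed form for the error in the second line assumes the usual softmax-plus-cross-entropy output, which is an assumption on my part:

```latex
a_i = W_i h_{i-1}, \qquad h_i = f(a_i), \qquad \hat{y} = \mathrm{softmax}(a_N)
e = \frac{\partial \mathcal{L}}{\partial a_N} = \hat{y} - y \quad \text{(for softmax with cross-entropy)}
```

This vector e is exactly the blue difference: the direction in which the logits should move. Everything that follows is about how e reaches the layers below.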
In the backpropagation algorithm, you can simply ask the system. We've built entire frameworks (TensorFlow, PyTorch, JAX) around being able to backpropagate, because with backprop we can just pose the question: how should I change the weights of my layer to make the loss smaller? You ask for the gradient of the loss with respect to your weights, and the negative sign is there because you want to make the loss smaller. That's a straightforward calculation, and it involves two quantities. Whenever you want to update a layer's weights in backpropagation, you need whatever came from the bottom during the forward pass, and whatever comes from the top during the backward pass. The quantity from the top is basically how you need to change the next layer's output to make the loss happier, and by using the transpose of the weight matrix you pull it back to this layer: how do I need to change the output of this particular layer to make the loss happier? There you see that dreaded transpose of the weight matrix. This is what we can't do in biology, but it's what backpropagation does: it pulls the required change of the next layer back to this layer. So the green part is how the output of the layer needs to change, and multiplied by the blue part, the signal from the bottom, it tells you how the weights need to change. Of course the derivative of the non-linearity is in there as well, but let's leave it aside because it isn't important for this particular point. That's what backprop does. What does DFA do? DFA again asks: how should I change the weights of layer i? And DFA says: first compute the derivative of the loss with respect to a_y, the output of the last layer; in our case, the logits. Note that this is still a gradient, so it's not that we can't differentiate anymore; we simply don't backpropagate from layer to layer. So we have this quantity, how the last layer's output needs to change, and we simply feed it through a fixed random matrix and then multiply it by the signal from the bottom, just as before. If I get my colors right: again you have your neural network, you want to update a layer's weights, and the green is what comes from the top. But it no longer comes from the next layer; the green comes all the way from the end, and the blue comes from down below. And this is weird, right? Especially because the top signal is just modulated by a random matrix. How can this possibly work? That's the question. I had some thoughts, but I haven't read too much about it, so I might be completely wrong, or this might be well known in the community; I have no idea. I'll just give my opinion right here.
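To make the contrast concrete, here is a minimal numpy sketch of one DFA training step on a tiny fully connected classifier. This is my own illustration, not the paper's code: the layer sizes, the tanh non-linearity, the 0.1 scales and the learning rate are all made up, and a real implementation would batch inputs and scale the matrices more carefully.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny fully connected classifier: 32 -> 64 -> 64 -> 10.
sizes = [32, 64, 64, 10]
W = [rng.normal(0, 0.1, (m, n)) for n, m in zip(sizes[:-1], sizes[1:])]
# DFA feedback matrices: one per hidden layer, mapping the 10-dim output
# error back to that layer. Sampled once, then fixed for all of training.
B = [rng.normal(0, 0.1, (m, sizes[-1])) for m in sizes[1:-1]]

def dtanh(a):
    return 1.0 - np.tanh(a) ** 2

def softmax(a):
    z = np.exp(a - a.max())
    return z / z.sum()

def dfa_step(x, y_onehot, lr=0.1):
    # Forward pass, storing pre-activations a[i] and activations h[i].
    h, a = [x], []
    for i, Wi in enumerate(W):
        a.append(Wi @ h[-1])
        h.append(softmax(a[-1]) if i == len(W) - 1 else np.tanh(a[-1]))

    # Global error signal at the output (softmax plus cross-entropy).
    e = h[-1] - y_onehot

    # Hidden layers: project e through a fixed random matrix. No transpose
    # of W, no layer-to-layer chain. Backprop would instead compute
    # delta[i] = (W[i+1].T @ delta[i+1]) * dtanh(a[i]), i.e. weight transport.
    for i in range(len(W) - 1):
        delta = (B[i] @ e) * dtanh(a[i])    # green: desired output change
        W[i] -= lr * np.outer(delta, h[i])  # times blue: the forward input

    # The output layer sees e directly, exactly as in backprop.
    W[-1] -= lr * np.outer(e, h[-2])

dfa_step(rng.normal(size=32), np.eye(10)[3])
```

Notice that every hidden layer's update can be computed as soon as e is known; nothing waits on the layer above, which is exactly the parallelization argument from the beginning.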
First of all, you have to compare this to backprop. What's actually changing between the two is the green part; we agree that this is the thing that changes. And what does the green part mean? It tells you how the output of this layer should change: by adjusting the weights in the direction of the right-hand side of the update equation, you change the layer's output in the direction of that green part. In backpropagation, the green part tells you how the output of this layer should change in order to make the loss as happy as possible. We don't have that anymore here: we simply change the layer's output in the direction of a random transformation of the change we would like to see at the network's output. So the first point is that we understand what's different, and we understand what the green quantity means: how should the output of our layer change. Second point: consider what the last layer of a neural network, the logits layer, actually does. Say it's three-dimensional, which means you have three classes, and each axis represents one class, because you encode the classes as one-hot vectors. One axis is class c = 0, one is c = 1, one is c = 2. If you forward propagate something through your neural network and it comes out pointing in some direction, you classify it as whichever class has the biggest inner product with that vector; say that's the c = 0 class. Now, what is the green quantity going to be, i.e. how should you update this output to make the loss happier? That depends on your true label. Let's say the true label is actually 0: then you want to pull the output toward the c = 0 axis, so that it is more aligned with it. What happens if you pull that update back through a random matrix? The thing you have to know about random matrices like this is that they approximately preserve distances and angles. So pulling things back induces a kind of coordinate system in that other space, which can be higher- or lower-dimensional; frankly, I don't care which. And what do you pull through that B matrix? Remember, this B_i matrix is fixed: you fix it at the beginning of training, it's always the same, and it approximately preserves distances and angles. You pull back that green arrow. So think of the output vector at this layer, the one that came up from the lower layers during the forward pass; maybe it pointed here, we don't know. If we pull back the green thing, it might point over there. Since it's a random matrix, we don't know where exactly; we only know that the angles are approximately preserved, and the lengths are approximately preserved relative to each other. On its own that doesn't tell you much. So why is it useful? To see why, you need to consider other inputs; we don't just input this one vector.
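That distance-and-angle claim is easy to check numerically. Here is a quick sanity check of my own; the dimensions are arbitrary, and the 1/sqrt(d) scaling is just a standard choice that keeps the projected inner products comparable:

```python
import numpy as np

rng = np.random.default_rng(0)

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

d_out, d_hidden = 10, 512
# A fixed random feedback matrix, as in FA/DFA.
B = rng.normal(0, 1 / np.sqrt(d_hidden), (d_hidden, d_out))

pairs = rng.normal(size=(1000, 2, d_out))
before = np.array([cosine(u, v) for u, v in pairs])
after = np.array([cosine(B @ u, B @ v) for u, v in pairs])

print(np.corrcoef(before, after)[0, 1])  # close to 1
print(np.mean(np.abs(before - after)))   # small average distortion
```

The distortion shrinks as the projected dimension grows, which is the usual random-projection story; the only point here is that a fixed Gaussian matrix roughly keeps the geometry intact.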
We input a whole bunch of data, so let's consider two other vectors. First, this blue vector, which also has label 0: its update is going to pull it toward the c = 0 axis as well. And then this red vector, which is of class 1: its update is going to pull it toward the c = 1 axis, away from the others. Now consider the red and the blue vectors in the lower space; let me just draw them somewhere, say blue here and red there. What I do know is that angles and distances are approximately preserved. So what are the pulled-back green updates going to look like? The update for the blue vector will look roughly like the update for our first vector, and the update for the red vector will point away from those. So what happens in that lower space? You'll notice that the two vectors that are supposed to be in the same class get pulled together. The direction they're pulled in is determined by the random matrix, but we know they're pulled together, because they are pulled together in the final space. And they get pulled apart from the red vector, because the red vector is pulled toward a different class in the last space; since distances and angles are approximately preserved, it is pulled away from them in this space too. So, in my opinion, what this induces is a coordinate system in which, because you make the last layer axis-aligned in order to classify, things that belong to the same class get clustered together in the earlier representation spaces. And if you do this layer by layer, then doing it at layer k makes the job easier for layer k + 1: the things in the same class already sit fairly close together. You map them through a weight matrix and a non-linearity, and they might intertwine a bit again, but they are more together than they would be otherwise. That makes the job of the next layer easier, which means it can cluster things even better, and so on. What you end up with in the next-to-last layer is basically a clustering where everything that's supposed to be in the same class sits together, far apart from the other classes. And since the last layer is the classification layer, it has a really easy job separating those clusters and performing good classification. That's what I think is happening in this algorithm: even though the lower layers don't know how to change in order to help the last layer, the fixed random matrices, by relaying these updates, induce a clustering that makes the last layer's job really easy, and that's all the classifier needs. Again, this is my opinion, not anything of established value; it's just my hypothesis of why something like this could work.
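You can poke at this hypothesis numerically as well. The sketch below is entirely my own toy construction with made-up dimensions: I take output-layer errors e = y_hat minus y for two samples of class 0 and one sample of class 1, pull all three back through the same fixed random matrix, and compare directions:

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(a):
    z = np.exp(a - a.max())
    return z / z.sum()

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

# One fixed feedback matrix for a 3-class problem, pulling the error
# back into a 256-dimensional hidden space.
B = rng.normal(0, 1 / np.sqrt(256), (256, 3))

# Output errors e = y_hat - y: two samples of class 0, one of class 1.
e_blue1 = softmax(rng.normal(size=3)) - np.eye(3)[0]
e_blue2 = softmax(rng.normal(size=3)) - np.eye(3)[0]
e_red = softmax(rng.normal(size=3)) - np.eye(3)[1]

g1, g2, g3 = B @ e_blue1, B @ e_blue2, B @ e_red  # pulled-back updates

print(cosine(g1, g2))  # same class: clearly positive, updates roughly agree
print(cosine(g1, g3))  # different class: much smaller, typically negative
```

Same-class updates keep pointing roughly the same way even after the random projection, which is the pulled-together effect described above.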
And the feedback alignment paper I showed you before actually provides some evidence for this. They run these experiments with DFA and visualize the features: the top row shows features obtained with backpropagation, the bottom row shows features obtained with DFA. I'm not sure where exactly in the network these features sit, but you can see that this clustering clearly emerges. From left to right it's the input images, then the first, second and third hidden layer, and you can see that the clustering gets better and better from layer to layer, both with backprop and with DFA. So maybe part of the reason backprop is good is simply that it also induces clusterings like this; I don't know. Maybe backprop does something on top of that, because backprop has all the properties of this and more. But this is at least congruent with my hypothesis of what's happening. So what do they do with the algorithm? They take it and apply it to these architectures. Let's look at one of them: neural view synthesis with neural radiance fields. A neural radiance field (NeRF) is a type of model for the task where you get a bunch of views around an object in 3D and you're supposed to render a new view. You can see that the DFA-updated NeRF model is pretty close to the backpropagation-updated one; it's a bit more blurry, but it works. And I think that's really what this paper is trying to show: look, this works. It doesn't work extremely well, but it works, and it works at a level that hasn't been seen before. If you consider the results (higher is better on the synthetic dataset), you see that the same model trained with backprop performs better than with DFA, but the DFA version of that model performs better than the other baseline models, which were themselves trained with backpropagation. So it's definitely moving in the direction of being competitive, and that's the same story they tell with all of these experiments. They also apply this to graph networks and to transformers, and as I said, it's not there yet. In the transformers they have two settings: macro, where they use DFA only between the individual blocks, and micro, where they use it for each layer. As I already told you, within the attention mechanism you still have to use backprop. But this is much more plausible an algorithm than backpropagation through the entire network, and they show that if they appropriately tweak the hyperparameters, they get into the direction of something performant, at least with the macro strategy. It's nowhere close to what the backpropagation algorithm achieves, but it's an indication that if the community worked on this as much as it has worked on backpropagation, we could probably push it to a place where it performs on par with backprop, or very close to it. So I do invite you to go and look at the experiments. They have a lot of details on how they did it, exactly how you have to change the architectures to make DFA work, the hyperparameters and so on, which is really cool, and they have some more outputs of the view synthesis as well. If you're interested in that, go have a look. I don't want to disrespect the work; it's just that there isn't much point in me going over every table, since the results are always sort of the same: DFA is not there yet, but it's a promising direction. I hope this was informative. Let me know if you disagree with my assessment of DFA; I could be completely wrong, or this
could all be well known to people already. So yeah, see you next time.
[ { "end": 4.84, "start": 0, "text": " Hi there, today we'll look at direct feedback alignment scales to modern deep" }, { "end": 10.66, "start": 4.84, "text": " learning tasks and architectures by Julia Alonet, Jacopo Poli, François Boniface" }, { "end": 16.8, "start": 10.66, "text": " and Florian Krzakala. So this paper on a high level it replaces the back" }, { "end": 21.52, "start": 16.8, "text": " propagation algorithm in deep learning architectures with this algorithm called" }, { "end": 27.240000000000002, "start": 21.52, "text": " direct feedback alignment which is more biologically plausible. The algorithm has" }, { "end": 31.439999999999998, "start": 27.24, "text": " been around for a while but it hasn't yet been shown to be applicable to" }, { "end": 36.92, "start": 31.439999999999998, "text": " really modern big deep learning architectures and then perform on par" }, { "end": 42.04, "start": 36.92, "text": " with backprop on modern deep learning tasks. This paper as I understand it is" }, { "end": 46.879999999999995, "start": 42.04, "text": " the first one to demonstrate that it can do that. So this is very much an" }, { "end": 54.72, "start": 46.879999999999995, "text": " engineering paper, an applied paper and we're going to mostly go into direct" }, { "end": 58.92, "start": 54.72, "text": " feedback alignment as such and I don't think we're going to go too much into" }, { "end": 64.16, "start": 58.92, "text": " what the actual empirical findings are because they even though they're" }, { "end": 67.56, "start": 64.16, "text": " impressive and it's a good piece of engineering I think they can be" }, { "end": 74, "start": 67.56, "text": " summarized pretty much by it works not yet on par with back propagation but" }, { "end": 80.4, "start": 74, "text": " into a promising direction. Alright as always if you like content like this" }, { "end": 85.60000000000001, "start": 80.4, "text": " consider sharing it out and leaving a like and tell me in the comments what" }, { "end": 91.36000000000001, "start": 85.60000000000001, "text": " you like. Of course subscribe if you aren't yet that that is you know" }, { "end": 97.60000000000001, "start": 91.36000000000001, "text": " essential otherwise how are we gonna hear from me in the future? Okay let's" }, { "end": 101.64000000000001, "start": 97.60000000000001, "text": " dive in. They say despite being the workhorse of deep learning the back" }, { "end": 106.60000000000001, "start": 101.64000000000001, "text": " propagation algorithm is no panacea. It enforces sequential layer updates thus" }, { "end": 111.28, "start": 106.6, "text": " preventing efficient parallelization of the training process. Furthermore its" }, { "end": 116.47999999999999, "start": 111.28, "text": " biological plausibility is being challenged. Alternative schemes have been" }, { "end": 121.88, "start": 116.47999999999999, "text": " devised yet under the constraints of synaptic asymmetry. None have scaled to" }, { "end": 126.24, "start": 121.88, "text": " modern deep learning tasks and architectures. Here we challenge this" }, { "end": 131.35999999999999, "start": 126.24, "text": " perspective and study the applicability of direct feedback alignment to neural" }, { "end": 136.4, "start": 131.35999999999999, "text": " view synthesis, recommender systems, geometric learning and natural language" }, { "end": 142.72, "start": 136.4, "text": " processing. 
In contrast with previous studies limited to computer vision tasks" }, { "end": 146.4, "start": 142.72, "text": " our findings show that it successfully trains a large range of state-of-the-art" }, { "end": 150.72, "start": 146.4, "text": " deep learning architectures with performance close to fine-tuned back" }, { "end": 156.12, "start": 150.72, "text": " propagation. At variance with common beliefs our work supports that" }, { "end": 160.72, "start": 156.12, "text": " challenging tasks can be tackled in the absence of weight transport. So there's a" }, { "end": 167.68, "start": 160.72, "text": " lot to unpack in this particular abstract right here. So first of all what's the" }, { "end": 172.28, "start": 167.68, "text": " problem with back propagation? Back propagation they have they have two" }, { "end": 177.48, "start": 172.28, "text": " corals right here. First of all it's preventing efficient parallelization of" }, { "end": 183.32, "start": 177.48, "text": " the training process. So what does that mean? So in back propagation I'm pretty" }, { "end": 187.8, "start": 183.32, "text": " sure you all know it's basic back propagation but you have an input to a" }, { "end": 191.16000000000003, "start": 187.8, "text": " neural network and the neural network has a bunch of layers so the input will" }, { "end": 196.4, "start": 191.16000000000003, "text": " travel layer by layer and at the end you'll get some output and your output" }, { "end": 201.4, "start": 196.4, "text": " y hat, let's call it here what the neural network thinks the let's say it's a" }, { "end": 206.36, "start": 201.4, "text": " classifier, thinks that the class of this particular X should be. Now in the" }, { "end": 212.96, "start": 206.36, "text": " data set you have your true label and then you compare that to your output" }, { "end": 218.32000000000002, "start": 212.96, "text": " label and you can compute a loss function. Now the whole question of the" }, { "end": 222.56, "start": 218.32000000000002, "text": " back propagation algorithm is how do I need to change my layers of the neural" }, { "end": 227.60000000000002, "start": 222.56, "text": " network in order to make the loss as small as possible? And for that you can" }, { "end": 231.08, "start": 227.60000000000002, "text": " use back propagation that means you can take that loss and you can back" }, { "end": 238.12, "start": 231.08, "text": " propagate it down the layers in order to update each layer individually. So the" }, { "end": 241.68, "start": 238.12, "text": " first problem they have here with the back propagation algorithm and it's not" }, { "end": 247.08, "start": 241.68, "text": " I mean it's kind of a secondary problem but it is that is sequential. So in order" }, { "end": 252.48000000000002, "start": 247.08, "text": " to update this layer right here you need to have already back propagated to this" }, { "end": 256.56, "start": 252.48000000000002, "text": " layer and then you need to back propagate further to this and to this" }, { "end": 261.08, "start": 256.56, "text": " layer so it's a sequential task you need to back propagate down the layers again" }, { "end": 267.44, "start": 261.08, "text": " whereas what is more plausible but what would be more efficient if we could" }, { "end": 272.36, "start": 267.44, "text": " somehow update all the layers in parallel but this is a minor quarrel. The" }, { "end": 277.52, "start": 272.36, "text": " bigger one is that back propagation isn't biologically plausible. 
We know" }, { "end": 283.04, "start": 277.52, "text": " that in real neurons you have your dendrites, your inputs and your" }, { "end": 289.64, "start": 283.04, "text": " axon and the signal only travels in one direction. We don't know of a feedback" }, { "end": 295.32, "start": 289.64, "text": " mechanism in true neurons in the brain that would allow for information sort of" }, { "end": 299.92, "start": 295.32, "text": " to flow in the opposite direction. There is information flowing in the" }, { "end": 305.59999999999997, "start": 299.92, "text": " opposite direction but I think it's too slow and it's so it's not" }, { "end": 312.44, "start": 305.59999999999997, "text": " really it can't be there's no analogous way of back propagation. There's no" }, { "end": 318.36, "start": 312.44, "text": " nothing in the brain that would take the role of the back propagation algorithm." }, { "end": 325.40000000000003, "start": 318.36, "text": " Specifically if each layer is characterized by a weight matrix right here" }, { "end": 332.76, "start": 325.40000000000003, "text": " what back propagation does is it uses the transpose of that weight matrix to" }, { "end": 339.92, "start": 332.76, "text": " back propagate. So these arrows to the front right here they use the" }, { "end": 345.44, "start": 339.92, "text": " weight matrices and these arrows to the back they use the transposes of the" }, { "end": 351.48, "start": 345.44, "text": " weight matrices. So the transposes of the weight matrices sort of relay the" }, { "end": 355.8, "start": 351.48, "text": " information of what needs to change that would be the loss. What needs to change" }, { "end": 361.64, "start": 355.8, "text": " to make the losses small as possible. They relay this information down to the" }, { "end": 367.6, "start": 361.64, "text": " other layers and we don't know of any biological analogy to this" }, { "end": 372.92, "start": 367.6, "text": " mechanism right here. This transpose it acts as sort of a layer inverse and that" }, { "end": 379.6, "start": 372.92, "text": " is called weight transport. So weight transport means that you can you can do" }, { "end": 384.52000000000004, "start": 379.6, "text": " something like the transpose of the weights basically to carry to bring" }, { "end": 390.84000000000003, "start": 384.52000000000004, "text": " information from the next layer back to this layer. And in biology we don't have" }, { "end": 395.72, "start": 390.84000000000003, "text": " this and in direct feedback alignment we don't have this either. So direct" }, { "end": 401.32, "start": 395.72, "text": " feedback alignment the next thing here in this abstract is the algorithm that" }, { "end": 405.88, "start": 401.32, "text": " they are going to apply here. Direct feedback alignment and we'll go into" }, { "end": 410.36, "start": 405.88, "text": " what it is but it is more biologically plausible in that what it does is it" }, { "end": 416.08, "start": 410.36, "text": " takes the loss somehow and it distributes it globally to all of these" }, { "end": 423.64, "start": 416.08, "text": " layers like this. And it does so without requiring these transposes and also" }, { "end": 429.56, "start": 423.64, "text": " without requiring these sequential steps. So both of their proposed problems here" }, { "end": 440.08, "start": 429.56, "text": " would be solved by this. They say that in contrast with previous studies" }, { "end": 445.36, "start": 440.08, "text": " limited to computer vision tasks. 
So what people have tried to do is they have" }, { "end": 452.32, "start": 445.36, "text": " tried to apply this DFA algorithm to computer vision tasks. But in computer" }, { "end": 457.32, "start": 452.32, "text": " vision most architectures are CNNs and as I understand it as far as I" }, { "end": 464.04, "start": 457.32, "text": " understand it DFA can only right now be applied to linear layers. So something" }, { "end": 470.68, "start": 464.04, "text": " that is WX plus B and then a non-linearity. It cannot even though you" }, { "end": 476.84, "start": 470.68, "text": " can write the CNN as like a linear layer with constraints as I read this paper I" }, { "end": 482.4, "start": 476.84, "text": " think to interpret that you can only apply DFA to fully connected layers or" }, { "end": 487.03999999999996, "start": 482.4, "text": " things that look kind of like fully connected layers. So what they're going to" }, { "end": 490, "start": 487.04, "text": " do in their experiments is they're going to take these big architectures like" }, { "end": 495.92, "start": 490, "text": " transformers and replace parts of them with the parts that act as fully" }, { "end": 501.20000000000005, "start": 495.92, "text": " connected layers with DFA updates. So well they're not going to replace" }, { "end": 505.16, "start": 501.20000000000005, "text": " the layers but they're going to replace the back propagation part of it with DFA" }, { "end": 509.68, "start": 505.16, "text": " updates. It remains to say that they still use back propagation at some" }, { "end": 515.28, "start": 509.68, "text": " places where they can't replace the updates with DFA and that means where" }, { "end": 519.8399999999999, "start": 515.28, "text": " the layer isn't you know a fully connected layer or I guess it's too big." }, { "end": 522.9599999999999, "start": 519.8399999999999, "text": " They somehow have to make it work so often they will not update for example" }, { "end": 529.24, "start": 522.9599999999999, "text": " the embedding layers and things like this. Okay so what they're saying is they" }, { "end": 533.6, "start": 529.24, "text": " go away from computer vision tasks because if you go to computer vision and" }, { "end": 540.3, "start": 533.6, "text": " CNNs rule that world right you can only do for feet-forward layers fully" }, { "end": 547.1999999999999, "start": 540.3, "text": " connected layers you're gonna lose already. So yeah it's kind of an" }, { "end": 553.0799999999999, "start": 547.1999999999999, "text": " unfair fight in that sense but even in absence of that they say we" }, { "end": 558.04, "start": 553.0799999999999, "text": " apply this to neural view synthesis, recommender systems, geometric learning" }, { "end": 561.76, "start": 558.04, "text": " and natural language processing. So these are quite diverse tasks and they're" }, { "end": 565.68, "start": 561.76, "text": " going to be quite diverse architectures that they are applying it to. For example" }, { "end": 571, "start": 565.68, "text": " in geometric learning I believe they do graph neural networks and there" }, { "end": 576.88, "start": 571, "text": " they replace the usually in graph neural networks there are fully connected" }, { "end": 582.2399999999999, "start": 576.88, "text": " layers that connect the two the vertices and the edges together and compute" }, { "end": 587.8, "start": 582.2399999999999, "text": " properties of them. 
So that's a pretty good point for using DFA right because" }, { "end": 591.76, "start": 587.8, "text": " what you're looking for is state-of-the-art tasks and architectures" }, { "end": 597.96, "start": 591.76, "text": " that still employ fully connected layers because there your algorithm can shine." }, { "end": 604.56, "start": 597.96, "text": " Okay so that's it and they're basically going to show that this is performance" }, { "end": 612.04, "start": 604.56, "text": " is close to fine-tuned back propagation. Alright so what is DFA? What is this" }, { "end": 617.3199999999999, "start": 612.04, "text": " direct feedback alignment? And for that I actually want to jump papers right here" }, { "end": 623.4000000000001, "start": 617.32, "text": " and go to this other paper that describes DFA in a bit in a bit not" }, { "end": 628.5200000000001, "start": 623.4000000000001, "text": " more detail but in a graphic fashion. So this paper right here direct feedback" }, { "end": 633.2800000000001, "start": 628.5200000000001, "text": " alignment provides learning in deep neural networks by Arl Noecklund" }, { "end": 641.0400000000001, "start": 633.2800000000001, "text": " sorry Noecklund shows some theoretical properties about DFA. Now I don't want to" }, { "end": 645.9200000000001, "start": 641.0400000000001, "text": " go into the theory right here or in the math but I mainly like this paper for" }, { "end": 651.1999999999999, "start": 645.92, "text": " this particular graphic right here. So in the back propagation algorithm as you" }, { "end": 655.92, "start": 651.1999999999999, "text": " can see you forward propagate using these weight matrices and then you back" }, { "end": 662.12, "start": 655.92, "text": " propagate using the transposes of the weight matrices. Now one step after that" }, { "end": 666.4599999999999, "start": 662.12, "text": " is this thing right here it's called feedback alignment. It's not the same" }, { "end": 671.48, "start": 666.4599999999999, "text": " thing as a direct feedback alignment. In feedback alignment you simply say well I" }, { "end": 675.4, "start": 671.48, "text": " won't back prop using these transposes because I can't because that's not" }, { "end": 682.12, "start": 675.4, "text": " biologically possible. What I'll do is I'll use other matrices and these other" }, { "end": 688.28, "start": 682.12, "text": " matrices are going to be random matrices and by random matrices we really mean a" }, { "end": 693.76, "start": 688.28, "text": " matrix that is of you know the correct shape the same shape as this W transpose" }, { "end": 701.12, "start": 693.76, "text": " but each entry is going to be sampled from a like a random Gaussian right now" }, { "end": 706.8, "start": 701.12, "text": " I don't mean like the distribution of Gaussians but you fix this matrix once" }, { "end": 712.4, "start": 706.8, "text": " at the beginning of training by sampling from Gaussian and then you leave it" }, { "end": 717, "start": 712.4, "text": " there and that's going to be the matrix that you use for relaying the signal" }, { "end": 722.44, "start": 717, "text": " back through the layers. 
Now you might protest and say wait that's not gonna" }, { "end": 728.16, "start": 722.44, "text": " work because specifically this thing right here it you know that you need to" }, { "end": 732.6, "start": 728.16, "text": " know the weights here to know what you need to change in the lower layers you" }, { "end": 737.48, "start": 732.6, "text": " need to somehow have that information in there how are you gonna know what to" }, { "end": 743.04, "start": 737.48, "text": " change and that's a valid question and I will give my opinion of why this works" }, { "end": 750, "start": 743.04, "text": " okay in a second in two seconds first this is feedback alignment so simply use" }, { "end": 755.92, "start": 750, "text": " random matrices to back propagate so to say and then you have direct feedback" }, { "end": 760, "start": 755.92, "text": " alignment and direct feedback alignment goes a step further because in feedback" }, { "end": 764.24, "start": 760, "text": " alignment you still do this in a sequential manner direct feedback" }, { "end": 770.9599999999999, "start": 764.24, "text": " alignment simply takes whatever the top change should be the change to the top" }, { "end": 776.88, "start": 770.9599999999999, "text": " layer so how do I need to change the top layer and it back propagates that in a" }, { "end": 783.4, "start": 776.88, "text": " this global fashion to all the layers directly using random matrices okay and" }, { "end": 788.24, "start": 783.4, "text": " then this IFA we're not gonna look at today because that's not relevant for" }, { "end": 795.8, "start": 788.24, "text": " this other paper but I hope you can sort of see the overview here so let's go back" }, { "end": 802.16, "start": 797.4399999999999, "text": " scroll scroll scroll scroll scroll scroll scroll okay so here is the" }, { "end": 807.6, "start": 802.16, "text": " mathematical formulation of all of this and it pays to look at it to understand" }, { "end": 811.88, "start": 807.6, "text": " what's going on so they characterize a neural network right here as having n" }, { "end": 817.88, "start": 811.88, "text": " layers each neural network is the following each neural each layer takes" }, { "end": 823, "start": 817.88, "text": " whatever is the output of the last layer multiplies it by a weight matrix and" }, { "end": 830.56, "start": 823, "text": " that's going to be your a quantity you put a through a non-linearity to obtain" }, { "end": 836.2, "start": 830.56, "text": " the next layers input okay so the H is the output of this layer and the input" }, { "end": 843.24, "start": 836.2, "text": " of the next layer at the very end your last output is going to be your" }, { "end": 848.72, "start": 843.24, "text": " estimation of the labels so your last non-linearity is probably going to be" }, { "end": 857.32, "start": 848.72, "text": " something like a a softmax or something like this okay so how can we how can we" }, { "end": 863.72, "start": 857.32, "text": " have this as a concept in our heads if you have the neural network right here" }, { "end": 868.76, "start": 863.72, "text": " what you want to do is you want to forward prop always using your weight" }, { "end": 876.5600000000001, "start": 868.76, "text": " matrix W and then your non-linearity of that particular layer and then the last" }, { "end": 883.84, "start": 876.5600000000001, "text": " in the last layer you get your y hat as we saw before now the question is how can" }, { "end": 891.9200000000001, "start": 883.84, "text": " we adjust how can we adjust this 
W right here to make y hat more into the" }, { "end": 898.16, "start": 891.92, "text": " direction of y and here it's in here it's useful to think of the last layer" }, { "end": 905.4, "start": 898.16, "text": " as a vector output like usually we think of the loss function but in all of these" }, { "end": 910.4, "start": 905.4, "text": " algorithms they always start with the derivative of the loss function with" }, { "end": 918.3199999999999, "start": 910.4, "text": " respect to the last layer output so ay and ay is here right before the" }, { "end": 926.32, "start": 918.32, "text": " non-linearity if you remember this was f of ay and this here I guess is the softmax" }, { "end": 932.08, "start": 926.32, "text": " so if this is a classifier the ay here those are the logits and that's the" }, { "end": 942.08, "start": 932.08, "text": " output of your last layer so it instead of having y and y hat right sorry y hat" }, { "end": 951.0400000000001, "start": 942.08, "text": " right here it pays to maybe think of the output as a vector and the desired" }, { "end": 956.32, "start": 951.0400000000001, "text": " output as another vector and the desired output is of course going to be one hot" }, { "end": 963.36, "start": 956.32, "text": " vector in the case of in the case of a classification but it you know if you" }, { "end": 970.88, "start": 963.36, "text": " think of it like this then you'll recognize okay I need to change if this" }, { "end": 975.84, "start": 970.88, "text": " is my estimated output and I want to achieve this output I need to change it" }, { "end": 981.36, "start": 975.84, "text": " into this direction right to get more into the same direction as the output I" }, { "end": 988.04, "start": 981.36, "text": " want the entire question now becomes how do I tell the lower layers about this" }, { "end": 993.2, "start": 988.04, "text": " change right here this is the change that I want to make in the lower layers" }, { "end": 1000.12, "start": 993.2, "text": " how do I get the lower layers such that they provide me with that signal" }, { "end": 1005.12, "start": 1000.12, "text": " with with the green signal instead of the red signal so I need to propagate" }, { "end": 1011.28, "start": 1005.12, "text": " this blue difference in the back propagation algorithm you can simply ask" }, { "end": 1016.76, "start": 1011.28, "text": " the system right so we've built entire frameworks on being able to back" }, { "end": 1022.64, "start": 1016.76, "text": " propagate tensorflow pytorch jacks whatever because with back propagation" }, { "end": 1028.08, "start": 1022.64, "text": " we can simply ask the system this question so here is how should I change" }, { "end": 1034.12, "start": 1028.08, "text": " the weights of my layer to make the loss smaller you can just ask that you can" }, { "end": 1040.74, "start": 1034.12, "text": " say what's the gradient of the loss with respect to the to my weights and the" }, { "end": 1046.12, "start": 1040.74, "text": " night negative sign here is because you want to make the loss smaller okay and" }, { "end": 1051.8, "start": 1046.12, "text": " that is going to be a straightforward calculation how does that calculation go" }, { "end": 1062.6399999999999, "start": 1051.8, "text": " it's going to involve this right here is the last layers output this right here" }, { "end": 1071.36, "start": 1062.6399999999999, "text": " as you can see over here is going to be this is going to be whatever comes back" }, { "end": 1075.84, "start": 1071.36, "text": " from the back 
propagation so in back propagation you always have to think of" }, { "end": 1080.04, "start": 1075.84, "text": " if you want to update these weights you need two quantities you need whatever" }, { "end": 1084.12, "start": 1080.04, "text": " comes from the bottom or came from the bottom during the forward pass and" }, { "end": 1092.1599999999999, "start": 1084.12, "text": " whatever comes from the top during the backward pass and this quantity here is" }, { "end": 1098.92, "start": 1092.1599999999999, "text": " going to be the one that came from the top and it's basically how you need to" }, { "end": 1104.76, "start": 1098.92, "text": " change the next layer in order to make the loss happier and by using this right" }, { "end": 1109.8799999999999, "start": 1104.76, "text": " here you pull it back to this layer so how do I need to change this layer and" }, { "end": 1115.24, "start": 1109.88, "text": " here you see that dreaded transpose of that weight matrix this is what we can't" }, { "end": 1120.4, "start": 1115.24, "text": " do in biology but this is what back propagation does so it pulls back how you" }, { "end": 1126.1200000000001, "start": 1120.4, "text": " need to change the next layer it pulls it back to this layer so this quantity" }, { "end": 1131.9, "start": 1126.1200000000001, "text": " right here is basically how do I need to change the output of this particular" }, { "end": 1138.0200000000002, "start": 1131.9, "text": " layer in order to make the loss happier and then you multiply it by the signal" }, { "end": 1142.6399999999999, "start": 1138.02, "text": " that comes from the bottom and that will give you how you need to change your" }, { "end": 1148.16, "start": 1142.6399999999999, "text": " weights okay so the green part is how does the output of the layer need to" }, { "end": 1153.2, "start": 1148.16, "text": " change and the multiplied by the blue part it's how do the weights need to" }, { "end": 1158.56, "start": 1153.2, "text": " change and of course the non-linearity is in there as well but let's let's just" }, { "end": 1162.52, "start": 1158.56, "text": " leave the non-linearity away because it's really not important for this" }, { "end": 1170.8, "start": 1162.52, "text": " particular thing so this is what backprop does what does DFA do DFA here" }, { "end": 1178.12, "start": 1170.8, "text": " again asks how should I change the weights of layer I and DFA says well" }, { "end": 1183.68, "start": 1178.12, "text": " first you need to compute this thing right here this is you see the derivative" }, { "end": 1189.8, "start": 1183.68, "text": " of the loss with respect to a y now a y is the output of the last layer these" }, { "end": 1195.68, "start": 1189.8, "text": " are in in our case for example your log it's okay note that this is still a" }, { "end": 1200.72, "start": 1195.68, "text": " gradient so it's not like we can't differentiate anymore we simply can't do" }, { "end": 1206.8, "start": 1200.72, "text": " back propagation from layer to layer okay so this is the quantity how do we" }, { "end": 1213.44, "start": 1206.8, "text": " need to change the last layers output and we're going to take that and simply" }, { "end": 1218.84, "start": 1213.44, "text": " feed it through this random matrix and then multiply again let's leave this" }, { "end": 1225.08, "start": 1218.84, "text": " away multiply it by the by this thing right here so if I get my colors" }, { "end": 1229.84, "start": 1225.08, "text": " correct like this again you have your neural network you want to 
update these" }, { "end": 1236.12, "start": 1229.84, "text": " weights the green is what comes from the top now it doesn't come from the next" }, { "end": 1242.36, "start": 1236.12, "text": " layer but the green actually comes from all the way at the end sorry you can't" }, { "end": 1247.84, "start": 1242.36, "text": " see that I still have to get used to that new frame of view so the green" }, { "end": 1256, "start": 1247.84, "text": " comes all the way from the end and the blue comes from down here okay so this" }, { "end": 1260.72, "start": 1256, "text": " is weird right because especially because this is just modulated by a" }, { "end": 1268.84, "start": 1260.72, "text": " random matrix so how can this possibly work that's the question and I you know" }, { "end": 1271.9199999999998, "start": 1268.84, "text": " I had some thoughts but I haven't read too much about it so I might be" }, { "end": 1276.72, "start": 1271.9199999999998, "text": " completely wrong or this might be completely known in the community I have" }, { "end": 1283.8, "start": 1276.72, "text": " no idea I'll just give my opinion right here so first of all you have to see you" }, { "end": 1289.2, "start": 1283.8, "text": " have to compare this to backprop so what's actually changing is this green" }, { "end": 1293.96, "start": 1289.2, "text": " part right here right we agree that this is the thing that's changing and what" }, { "end": 1299.68, "start": 1293.96, "text": " do we say does the green part mean the green part basically tells you how do" }, { "end": 1306.68, "start": 1299.68, "text": " you how should the output of this layer change okay by adjusting the weights" }, { "end": 1310.3600000000001, "start": 1306.68, "text": " in the direction of the thing on the right side of the equality sign you're" }, { "end": 1314.6000000000001, "start": 1310.3600000000001, "text": " going to change the output of the layer into the direction of that green part" }, { "end": 1320.24, "start": 1314.6000000000001, "text": " now in backpropagation the green part basically tells you how should the" }, { "end": 1326.52, "start": 1320.24, "text": " output of this layer change in order to make the loss as happy as possible now" }, { "end": 1331.76, "start": 1326.52, "text": " we don't have that anymore here we simply change the output of the layer" }, { "end": 1339.68, "start": 1331.76, "text": " into the into the direction of a random transformation of the of the change we" }, { "end": 1344.96, "start": 1339.68, "text": " would like to have in the output now okay that's the the first thing is we" }, { "end": 1349, "start": 1344.96, "text": " understand what's different and we understand what the green quantity means" }, { "end": 1354.24, "start": 1349, "text": " green quantity means how should the output of our layer change okay second" }, { "end": 1360.96, "start": 1354.24, "text": " thing if you look at the last layer of a neural network that that logits layer" }, { "end": 1366.2, "start": 1360.96, "text": " right what does it actually do let's say we have that's a three-dimensional last" }, { "end": 1370.72, "start": 1366.2, "text": " layer which means you have three classes right if your last layer is" }, { "end": 1376.64, "start": 1370.72, "text": " three-dimensional you have three classes each axis represents one class because" }, { "end": 1381.44, "start": 1376.64, "text": " you encode the classes as one hot vectors so this might be C the class" }, { "end": 1388.92, "start": 1381.44, "text": " label equals zero this might be C 
equals one this might be C equals two if you" }, { "end": 1392.8400000000001, "start": 1388.92, "text": " have something that you forward propagate through your neural network" }, { "end": 1399.8000000000002, "start": 1392.8400000000001, "text": " and let's say it comes out to be like this what would you classify that as now" }, { "end": 1408.64, "start": 1399.8000000000002, "text": " you classify that as the whatever class has the the biggest inner product with" }, { "end": 1415.92, "start": 1408.64, "text": " that vector which would be the C equals zero class right here and what is this" }, { "end": 1421.6000000000001, "start": 1415.92, "text": " quantity going to be how should you update this output in order to make the" }, { "end": 1426.1200000000001, "start": 1421.6000000000001, "text": " loss happier now that depends on your true label but let's say your true label" }, { "end": 1431.8400000000001, "start": 1426.1200000000001, "text": " is actually the zero label now what you want to do is you want to update that" }, { "end": 1438.72, "start": 1431.8400000000001, "text": " thing into the direction here right such that it is more aligned with the axis so" }, { "end": 1444.2, "start": 1438.72, "text": " what happens if you pull that back through a random matrix now the thing" }, { "end": 1448.32, "start": 1444.2, "text": " you have to know about random matrices like this is that they do approximately" }, { "end": 1456.72, "start": 1448.32, "text": " preserve distances and angles so technically if you pull this back what" }, { "end": 1461.32, "start": 1456.72, "text": " you're going to induce is another coordinate system in that other space now" }, { "end": 1467.0800000000002, "start": 1461.32, "text": " this can be a higher or a lower dimensional space I frankly I don't care" }, { "end": 1474.6399999999999, "start": 1467.08, "text": " but what you're going to induce is a coordinate system and what do you pull" }, { "end": 1479.4399999999998, "start": 1474.6399999999999, "text": " through that B matrix so this is the BI matrix you fix it right this is really" }, { "end": 1482.76, "start": 1479.4399999999998, "text": " important you fix it at the beginning of training it's always the same it" }, { "end": 1488.76, "start": 1482.76, "text": " preserves distances and angles approximately you pull back that quantity" }, { "end": 1493.76, "start": 1488.76, "text": " which is the okay my colors are all screwed which is the green arrow over" }, { "end": 1503.4, "start": 1493.76, "text": " here you pull back this green arrow here so what does it mean what so the output" }, { "end": 1508.76, "start": 1503.4, "text": " right here the output vector that came from the lower layers right that's you" }, { "end": 1512.6, "start": 1508.76, "text": " forward propagated that through your network so maybe in this layer it" }, { "end": 1519.24, "start": 1512.6, "text": " actually pointed here we don't know but let's say it pointed here if we pull" }, { "end": 1526.28, "start": 1519.24, "text": " back the green thing it might point here okay now this is since it's a random" }, { "end": 1530.16, "start": 1526.28, "text": " matrix we don't know we know that the angle is approximately preserved okay" }, { "end": 1534.44, "start": 1530.16, "text": " but you know that and these lengths are approximately preserved with relative to" }, { "end": 1543.2, "start": 1534.44, "text": " each other but it doesn't really tell you too much so why is this useful and" }, { "end": 1549.28, "start": 1543.2, "text": " to see why 
it's useful you need to consider other inputs we don't just in" }, { "end": 1554.88, "start": 1549.28, "text": " input this one vector we input a whole bunch of data now let's consider two" }, { "end": 1563.0800000000002, "start": 1554.88, "text": " other vectors so first I want to consider this this blue vector right here now the" }, { "end": 1568.04, "start": 1563.0800000000002, "text": " blue vector is also going to have a label of zero so what does the blue" }, { "end": 1572.68, "start": 1568.04, "text": " vectors update look like the blue vector is going to be pulled into this" }, { "end": 1578.48, "start": 1572.68, "text": " direction and I also want to consider this red vector right here the red" }, { "end": 1584.2, "start": 1578.48, "text": " vector is of class one so what does the red vectors update going to look like" }, { "end": 1593.0800000000002, "start": 1584.2, "text": " like this and if I consider now the red and the blue vector in this space right" }, { "end": 1601.1200000000001, "start": 1593.0800000000002, "text": " let's I just draw them at random like so okay what I do know actually that's" }, { "end": 1605.86, "start": 1601.12, "text": " that's for consistent let's draw the blue somewhere here and the red" }, { "end": 1611.4399999999998, "start": 1605.86, "text": " somewhere here what I do know is that the angles and distances are preserved" }, { "end": 1615.9599999999998, "start": 1611.4399999999998, "text": " so what is the green thing going to look like the update for the blue vector is" }, { "end": 1620.56, "start": 1615.9599999999998, "text": " going to be something like this and the update for the red vector is going to" }, { "end": 1627.8, "start": 1620.56, "text": " maybe be something like this you know away from from those so what is" }, { "end": 1632.04, "start": 1627.8, "text": " happening in that lower space you'll notice that the two vectors that are" }, { "end": 1637.08, "start": 1632.04, "text": " supposed to be in the same class this and this they are going to be pulled" }, { "end": 1642.84, "start": 1637.08, "text": " together now the direction they're pulled in that's determined by this" }, { "end": 1647.68, "start": 1642.84, "text": " random matrix but we know they're going to be pulled together because they are" }, { "end": 1654.24, "start": 1647.68, "text": " pulled together in this space in the final space okay and they're going to" }, { "end": 1661.92, "start": 1654.24, "text": " be pulled apart from the red vector okay because that red vector is going to to" }, { "end": 1666.8, "start": 1661.92, "text": " be pulled towards a different class in the in the last space and since the" }, { "end": 1670.8, "start": 1666.8, "text": " distances and angles are approximately preserved it's going to be pulled away" }, { "end": 1679.76, "start": 1670.8, "text": " from these in in this space so what this induces in my opinion is some sort of it" }, { "end": 1687.72, "start": 1679.76, "text": " induces this coordinate system where if you make the last layer axis aligned" }, { "end": 1693.36, "start": 1687.72, "text": " because you want to classify it it kind of clusters things that belong in the" }, { "end": 1700.16, "start": 1693.36, "text": " same class in these previous weight spaces right and because and if you do" }, { "end": 1707.84, "start": 1700.16, "text": " this layer by layer so if you do this in layer K and then you make the job easier" }, { "end": 1712.8, "start": 1707.84, "text": " for any layer K plus one that's in between here right because 
they are now" }, { "end": 1717.32, "start": 1712.8, "text": " the things in the same class are already together pretty okay now you map it" }, { "end": 1721.32, "start": 1717.32, "text": " through a weight and the non-linearity they might you know intertwine a bit" }, { "end": 1727.1599999999999, "start": 1721.32, "text": " again but there's they're more together than they would be otherwise so you make" }, { "end": 1733.32, "start": 1727.1599999999999, "text": " the job for the next layer easier which means that the next layer can also can" }, { "end": 1739.36, "start": 1733.32, "text": " even better cluster things and what you'll end up with in this last layer is" }, { "end": 1745.8799999999999, "start": 1739.36, "text": " the is a basically a class or next to last layer is basically a clustering" }, { "end": 1751, "start": 1745.8799999999999, "text": " where everything that's supposed to be in the same class is together and far" }, { "end": 1756.8, "start": 1751, "text": " apart from each other and since the last layer is the classification layer it's" }, { "end": 1761.24, "start": 1756.8, "text": " going to have a really easy job separating those classes and performing" }, { "end": 1767.28, "start": 1761.24, "text": " good classification so that's what I think is happening in this algorithm so" }, { "end": 1773.32, "start": 1767.28, "text": " even though the layers don't know how to change to help the last layer by the" }, { "end": 1779.84, "start": 1773.32, "text": " fact that these random matrices and induce a clustering together you know by" }, { "end": 1786.28, "start": 1779.84, "text": " back propagating these updates here it helps the last layer make it makes its" }, { "end": 1792.76, "start": 1786.28, "text": " job really easy and you know that's all the classifier needs and I want to I" }, { "end": 1799.16, "start": 1792.76, "text": " want to show again this is my opinion this is not anything of value it's just" }, { "end": 1803.28, "start": 1799.16, "text": " my hypothesis of why something like this could work I want to show you in this" }, { "end": 1806.84, "start": 1803.28, "text": " paper that I've shown you before right here they do actually do these" }, { "end": 1814.28, "start": 1806.84, "text": " experiments with DFA and they show that you can see top row shows feature" }, { "end": 1819.48, "start": 1814.28, "text": " obtained with back propagation bottom row shows features obtained with DFA I" }, { "end": 1825.96, "start": 1819.48, "text": " think these are input and features I'm not sure where exactly they are in the" }, { "end": 1832.36, "start": 1825.96, "text": " network but you can see that this clustering clearly emerges so oh yeah" }, { "end": 1836.72, "start": 1832.36, "text": " here from left to right input images first hidden layer second hidden layer" }, { "end": 1841.24, "start": 1836.72, "text": " third hidden layer so you can see that the clustering from layer to layer in" }, { "end": 1848.56, "start": 1841.24, "text": " backprop and also in DFA is better and better so the reason why backprop is" }, { "end": 1853.88, "start": 1848.56, "text": " good maybe it's just that because it also really induces clusterings like" }, { "end": 1857.76, "start": 1853.88, "text": " this I don't know maybe backprop does even does something on top of that" }, { "end": 1864, "start": 1857.76, "text": " because I mean backprop has all the properties of this and more right but" }, { "end": 1871.04, "start": 1864, "text": " still this this is congruent with my hypothesis of 
what's happening so what" }, { "end": 1877.44, "start": 1871.04, "text": " do they do with it they take this algorithm and they apply it to these" }, { "end": 1883.08, "start": 1877.44, "text": " architectures now let's for example look at one of them this neural view" }, { "end": 1889.3999999999999, "start": 1883.08, "text": " synthesis with neural radiance fields so neural radiance fields is a type of" }, { "end": 1897.24, "start": 1889.3999999999999, "text": " model to do this task of where you get a bunch of views of an object in 3d or you" }, { "end": 1902.2, "start": 1897.24, "text": " know a bunch of views around an object and you're supposed to render a new view" }, { "end": 1910.92, "start": 1902.2, "text": " and you can see that the DFA parameter or the DFA updated nerve neural radiance" }, { "end": 1917.68, "start": 1910.92, "text": " field model is pretty close to the back propagation updated one you can see it's" }, { "end": 1922.92, "start": 1917.68, "text": " a bit more blurry but it it works right and I think the this paper is really" }, { "end": 1929.1200000000001, "start": 1922.92, "text": " trying to show that look this works it doesn't work you know extremely well but" }, { "end": 1935.68, "start": 1929.1200000000001, "text": " it works and it works on a level that hasn't been seen before so here if you" }, { "end": 1940.3600000000001, "start": 1935.68, "text": " consider these results higher is better on the synthetic data set here even you" }, { "end": 1944.76, "start": 1940.3600000000001, "text": " see that if you have the same model with backprop it performs better than with" }, { "end": 1952.5600000000002, "start": 1944.76, "text": " DFA but the DFA for that model performs better than these other baseline models" }, { "end": 1958.36, "start": 1952.56, "text": " that have themselves been trained with back propagation so it's definitely in" }, { "end": 1965.76, "start": 1958.36, "text": " the direction of being competitive and that's the same thing they show with all" }, { "end": 1970.12, "start": 1965.76, "text": " of these experiments so they apply this to graph networks apply this to" }, { "end": 1975.32, "start": 1970.12, "text": " transformers and as I said it's it's not there yet you see that so in the" }, { "end": 1979.6799999999998, "start": 1975.32, "text": " transformers they have these settings where macro they just use it DFA for the" }, { "end": 1984.88, "start": 1979.68, "text": " individual blocks and micro they use it for each layer and already told you that" }, { "end": 1989.48, "start": 1984.88, "text": " you still in the attention mechanism you still have to use backprop within the" }, { "end": 1996.8, "start": 1989.48, "text": " attention mechanism but it is much more of a plausible algorithm than the back" }, { "end": 2001.76, "start": 1996.8, "text": " propagation through the entire network and they show that if they appropriately" }, { "end": 2007.44, "start": 2001.76, "text": " tweak the hyper parameters they do get into the direction of something that's" }, { "end": 2012.64, "start": 2007.44, "text": " performant at least with this macro strategy now this is nowhere close to" }, { "end": 2018.52, "start": 2012.64, "text": " this is nowhere close to what the to what the back propagation algorithm" }, { "end": 2024.96, "start": 2018.52, "text": " achieves but it's sort of it's sort of an indication that if the community could" }, { "end": 2031.52, "start": 2024.96, "text": " work as much on this as it has worked on back propagation then 
probably will make" }, { "end": 2037.1200000000001, "start": 2031.52, "text": " a lot of like we could we could push this to a place where it does perform on" }, { "end": 2043.32, "start": 2037.12, "text": " par with backprop or very close to it so I do invite you to go and look at the" }, { "end": 2050.64, "start": 2043.32, "text": " experiments they have a lot of lot of details on how they did it and exactly" }, { "end": 2055.24, "start": 2050.64, "text": " how you have to change the architectures to make DFA work and the hyper parameters" }, { "end": 2060.3599999999997, "start": 2055.24, "text": " and so on so that's really cool and they have some more outputs right here of the" }, { "end": 2066.8399999999997, "start": 2060.3599999999997, "text": " view synthesis and so on yeah if you are interested in that thing I again I don't" }, { "end": 2071, "start": 2066.84, "text": " want to disrespect it it's just I don't think there is much point in me going" }, { "end": 2076.6000000000004, "start": 2071, "text": " over it it's the results are always sort of the same that DFA it it's not there" }, { "end": 2083.8, "start": 2076.6000000000004, "text": " yet but it's a good direction yeah I hope this was informative let me know if" }, { "end": 2089.8, "start": 2083.8, "text": " you disagree about my assessment of DFA I could be completely wrong or you know" }, { "end": 2096.76, "start": 2089.8, "text": " I yeah or or this could be like well known to people already so yeah see you" }, { "end": 2123.0400000000004, "start": 2096.76, "text": " next time" } ]
cuyM63ugsxI
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
On the Measure of Intelligence by François Chollet - Part 3: The Math (Paper Explained)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "chollet", "keras", "google", "francois", "intelligence", "iq", "iq test", "deep neural networks", "prior", "skill", "performance", "measurement", "measure", "test", "number", "intelligent", "smart", "learning", "generalization", "ability", "experience", "humans", "evolution", "nature", "nurture", "psychometrics", "range", "adaptability", "arc", "kaggle", "difficulty", "entropy", "core knowledge", "objectness", "navigation", "contact", "agent", "goal" ]
In this part, we go over the formal definition of the measure of intelligence. In order to do this, we have to frame and quantify the notions of generalization difficulty, priors, and experience in terms of algorithmic complexity. OUTLINE: 0:00 - Intro & Recap 2:50 - Concept Schema 10:00 - Algorithmic Complexity 13:00 - Definitions 15:25 - Generalization Difficulty 18:55 - Developer Aware Generalization Difficulty 22:40 - Priors 25:10 - Experience 30:50 - The Measure Of Intelligence 38:00 - An Ideal Intelligence Benchmark 42:30 - Conclusion Paper: https://arxiv.org/abs/1911.01547 Part 1: https://youtu.be/3_qGrmD6iQY Part 2: https://youtu.be/THcuTJbeD34 Abstract: To make deliberate progress towards more intelligent and more human-like artificial systems, we need to be following an appropriate feedback signal: we need to be able to define and evaluate intelligence in a way that enables comparisons between two systems, as well as comparisons with humans. Over the past hundred years, there has been an abundance of attempts to define and measure intelligence, across both the fields of psychology and AI. We summarize and critically assess these definitions and evaluation approaches, while making apparent the two historical conceptions of intelligence that have implicitly guided them. We note that in practice, the contemporary AI community still gravitates towards benchmarking intelligence by comparing the skill exhibited by AIs and humans at specific tasks such as board games and video games. We argue that solely measuring skill at any given task falls short of measuring intelligence, because skill is heavily modulated by prior knowledge and experience: unlimited priors or unlimited training data allow experimenters to "buy" arbitrary levels of skills for a system, in a way that masks the system's own generalization power. We then articulate a new formal definition of intelligence based on Algorithmic Information Theory, describing intelligence as skill-acquisition efficiency and highlighting the concepts of scope, generalization difficulty, priors, and experience. Using this definition, we propose a set of guidelines for what a general AI benchmark should look like. Finally, we present a benchmark closely following these guidelines, the Abstraction and Reasoning Corpus (ARC), built upon an explicit set of priors designed to be as close as possible to innate human priors. We argue that ARC can be used to measure a human-like form of general fluid intelligence and that it enables fair general intelligence comparisons between AI systems and humans. Authors: François Chollet Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://discord.gg/4H8xxDF BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Hello and welcome to the third part on On the Measure of Intelligence by François Chollet. Now this is a multi-part series. If you haven't seen the first two parts, I recommend you watch at least one of them. They're somewhat overlapping, but we've basically gone over the history of intelligence measurement and the foundations of what a measurement of intelligence for an AI system should look like. Today we're going to get into the formal definition of the intelligence that Chollet proposes right here. So this sentence pretty much sums up what we're interested in: the intelligence of a system is a measure of its skill-acquisition efficiency over a scope of tasks, with respect to priors, experience, and generalization difficulty. So these are the things that we've established so far, basically. The intelligence of a system, that's the thing we want to measure, is a measure of its skill-acquisition efficiency. So how fast does it acquire new skills? Important here is that we are measuring it over a scope of tasks. So it's not arbitrary skills, it is a scope that we define, and this is going to be mostly the human scope: the scope of tasks that humans can solve and differ at. What we need to factor in are priors, which is what is already built into a system, because that doesn't count as intelligence, that's already built in. If your ability to solve a problem is already built into you, you don't have to use intelligence to solve the problem. Second, experience. If you have had lots and lots of experience at the particular task you're asked to solve, you don't have to use intelligence, you can simply rely on your experience. And the third is generalization difficulty, and that's a property of the task. If the task itself is very difficult to generalize, then achieving a good score at it should count as higher intelligence, if all other things are equal. So this is going to be the basis, and today we're going to watch Chollet define these things into a number that can give us the intelligence of any system with respect to these things. So that's the program for today. If you like content like this, share it out and tell all your friends, and leave a like so that YouTube knows that you do like it. So the conceptualization of the entire system is like this: there is a task, and we're going to consider a series of tasks of course, but let's just look at one task in our scope. The task outputs these situations; in machine learning terms these are like your training examples. And on the other side there is the intelligent system. In a pure machine learning framing you would factor this as: the task gives the intelligent system something like a training sample, or in reinforcement learning something like an observation, and the intelligent system gives something back, like a response. Here we have a kind of in-between step. The intelligent system doesn't actually give back the response to the situation; the intelligent system generates a skill program. So the intelligent system will generate a program that can map the situation to a response, and that skill program should be able to run on its own. So in the classic supervised learning sense, the intelligent system would be something like a ResNet plus SGD. That is an intelligent system, and what it outputs, what it generates, is a skill program. So what happens during training?
During training, the intelligent system is able to intervene in the skill program at each step. So the situation comes in and the skill program does something, but the intelligent system can at any point kind of intervene, update the skill program, and generate a new skill program for the next step. So there's a situation, the skill program gives a response, and the task gives feedback in the form of a score. In machine learning terms this would be your training sample: your training sample comes in, your neural network gives a response, which are the logits of the classes, then the task gives a score to that, which in the supervised learning case is the label or the loss function, as a feedback to the intelligent system, and the intelligent system, using SGD, would update the skill program for the next step. So at each step the intelligent system can update the skill program. That's why the intelligent system in this case is the architecture of the neural network and the procedure to update the weights, not the weights themselves but the procedure to update the weights, and the skill program here would be the actual weights of the neural network, or the instantiation of the ResNet with these particular weights. Now at test time we sever this connection right here. At some point the training is done; the task says okay, training is done, then the intelligent system will produce one last skill program, this connection is cut, and the skill program must by itself answer to these situations. The intelligent system cannot intervene anymore, and in this loop here it's situation, response, situation, response; this goes on for a number of steps, and all the scores during that time are counted and tallied up, and at the end, the higher the score the better. So the intelligent system must at this end step produce a skill program that by itself can achieve a high score. So there's always this training phase first and then the test phase. The situations that we get in a training phase are called a curriculum in this world. In our world this would be something like a training data set; a curriculum is slightly more intricate, but the notion here makes sense: the intelligent system produces the skill program.
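To make this loop concrete, here is a minimal runnable sketch of the protocol as I understand it, instantiated with a toy task and a toy intelligent system. All of the names and the toy learning rule are my own illustrative assumptions; the paper defines this protocol abstractly and ships no code.

```python
import random

# A toy instance of the loop described above: the task emits situations and
# scores responses, the intelligent system emits skill programs, and only
# during training may the intelligent system intervene after each step.

class SignTask:
    """Toy task: the correct response is True exactly when x is non-negative."""
    def next_situation(self):
        return random.uniform(-1.0, 1.0)

    def score(self, situation, response):
        return 1.0 if response == (situation >= 0.0) else 0.0

def make_skill_program(threshold):
    # A skill program is a frozen, standalone situation -> response mapping.
    return lambda situation: situation >= threshold

class ToyIntelligentSystem:
    """Internal state: a single threshold, nudged whenever feedback is bad."""
    def __init__(self):
        self.threshold = 0.5

    def skill_program(self):
        return make_skill_program(self.threshold)

    def update(self, situation, response, feedback):
        if feedback == 0.0:  # wrong answer: move the threshold toward x
            self.threshold += 0.1 * (situation - self.threshold)
        return self.skill_program()

task, system = SignTask(), ToyIntelligentSystem()
program = system.skill_program()
for _ in range(500):  # training phase: the system may update the program
    s = task.next_situation()
    r = program(s)
    program = system.update(s, r, task.score(s, r))

# Test phase: the connection is severed; the frozen program answers alone.
test_score = sum(task.score(s, program(s))
                 for s in (task.next_situation() for _ in range(1000))) / 1000
print(f"test accuracy of the final skill program: {test_score:.2f}")
```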
So there are a lot of formalisms right here, like okay, the task has a situation generator that maps the task state to a situation, so the task can have a state, and the skill program can have a state, and the intelligent system can have a state. This is all a bit too formal, you don't really need to understand it, except, if François Chollet is watching this: I think I have found, I'm not sure if it's a mistake, but you say the intelligent system here consists of three objects. It generates the skill program according to its internal state, and when it learns, it updates the internal state according to, let me see if I can find it, right here, a self-update function. So this is how the intelligent system can update its own state: it takes the internal state of the intelligent system and outputs another internal state. And this is where, as I said, the intelligent system at each training step can observe what happens and basically react accordingly. So it takes the situation, the response, the feedback, and its own internal state as an input. Now what do we have here? It takes the situation, which in our case would be the training sample; the response, the logits that the neural network has produced; the feedback, which is the loss; and its internal state. What I argue is basically that it should also get the internal state of the skill program as an input right here, because the skill program can have an internal state, and all of this, like the response, can be a stochastic function of the skill program. And I guess it's not strictly necessary, because you can sort of infer it, but I think the framework would be more complete if the internal state of the skill program at that time were part of the intelligent system's update procedure. Okay, this is not relevant, this is just me bickering. Cool, let's jump over all of this, it's mostly definitions. All right: quantifying generalization difficulty, experience, and priors using algorithmic information theory. So these things that we said at the beginning we want to define intelligence with respect to, we are now going to quantify using algorithmic information theory. Algorithmic information theory, in the way it's used right here, is not very complicated. The main quantity is this H, the algorithmic complexity: H of s is the length of the shortest description of the string in a fixed universal language, so it's the length of the shortest program that outputs the string when running on a fixed universal Turing machine. So basically, if you have this string s right here, s is a bit string, we ask for the shortest program that can compute s. In the worst case that's the string itself, but if the string is like 0 1 0 1 0 1 0 1 all the way, you can just say "0 1 times 50", and that would be like the shortest program to produce that. It is an information-theoretic concept, but in essence you can just think of it as a measure of how long the program is that I would need to write to produce a given output. Okay, so that's the algorithmic complexity, and the second quantity right here is the relative algorithmic complexity, which is almost the same thing: it's how long the shortest program is that I have to write, we're always talking about the shortest program, that produces s1 but is allowed to take s2 as an input.
The program can always ignore s2, that's always a possibility, so if s1 is like a super easy string you can just output that. But let's say s2 here is 0 1 0 0 1, and s1 is 0 1 0 0 1 0 1 0 0 1, so it's just twice that. You could write a program that just outputs this string, or you could write a program that just says "two times s2". The length of s2 is not part of the program; the program is just "two times s2", because it's allowed to take s2 as an input. So the relative algorithmic complexity is how complex the program is to get from s2 to s1, and you can almost already see how that will relate to generalization. Okay, so a few quantities that we need to consider. There's a task called T here. Then Sol theta T is the shortest of all possible solutions of T: task T has a solution at threshold theta, where the threshold theta is just the minimum score we need to achieve in a task. We consider tasks according to thresholds, like, you know, you need to get, I don't know, a score of 9000 in Pong or so. So it's the shortest of all programs that will solve the task to a threshold theta, which is the shortest skill program that achieves at least theta during evaluation. And the other quantity is this TrainSol opt T C. There's a lot in this quantity: you can see the task, we want it to be optimal, but with respect to a curriculum C, and the curriculum is the training data. So this quantity is the shortest optimal training-time solution given a curriculum: the shortest skill program that achieves optimal training-time performance over the situations in the curriculum. So the first quantity is: if we had an oracle that told us how to solve the task in general, like the task of determining cats from dogs in images, this would be the program that does it, overall, over the entirety of the task, over all cat and dog images there are. That's the solution. And the second quantity means, sort of, the one neural network that is best at determining cats from dogs in this particular training data set, this curriculum C; the one neural network that is hyper-optimized to this particular training data set. And now we assess the generalization difficulty. The generalization difficulty is going to be a measure of how hard it is, in a particular task, to generalize to the whole task from the curriculum C, and that's going to be the relative algorithmic complexity to go from the second quantity to the first, both quantities we've just explained. So it basically means: if I had the perfect solution on the training data set, how much more complex is it to get from that to the perfect solution on the entirety of data, or, you can also say, on the test data set. So if this is really easy, if the training data set already perfectly captures all of the data there is, this quantity is zero: I don't need to write a program, I already have the solution. And you can see here we divide by the H of Sol theta T. However, if the training data has no information whatsoever about the solution to the general task, or if I just so horribly overfit on the training data that it doesn't help me at all for the general task, then this quantity is one.
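These complexities are uncomputable in general, but a lossless compressor gives a crude, computable upper bound, which makes the definitions easy to play with. To be clear, this compression proxy is my own illustration, not something the paper proposes:

```python
import zlib

def H(s: bytes) -> float:
    """Crude proxy for algorithmic complexity: compressed length of s."""
    return float(len(zlib.compress(s, 9)))

def H_rel(s1: bytes, s2: bytes) -> float:
    """Crude proxy for the relative complexity H(s1 | s2): the extra
    compressed length that s1 costs once s2 is already known."""
    return max(H(s2 + s1) - H(s2), 1.0)

def generalization_difficulty(solution: bytes, train_solution: bytes) -> float:
    """How much of the full solution is NOT already contained in the best
    training-time solution, as a fraction of the solution's complexity."""
    return min(H_rel(solution, train_solution) / H(solution), 1.0)

# The "two times s2" example from above (long strings, since compressors
# behave poorly on very short inputs):
s2 = b"01001" * 200
s1 = s2 * 2
print(H(s1), H_rel(s1, s2))               # relative complexity is tiny
print(generalization_difficulty(s1, s2))  # close to 0: s2 nearly determines s1
print(generalization_difficulty(s1, b"unrelated junk " * 100))  # much closer to 1
```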
To recap after that aside: the ratio lies between zero and one, and it is one in that case because the shortest program up here will be equal to just H of Sol theta T, since the training solution doesn't help me at all, and then the ratio will be one. So a generalization difficulty of one basically means that the training-data solution doesn't help me at all: this particular training curriculum is useless, because I'll just overfit so horribly that I will not learn anything about the task, or I can't learn anything at all. And a generalization difficulty of zero basically means that all of the solution is already contained in the training solution, and I require no work to get to the test-set solution. Okay, this train/test framing is all a bit more general as it is written here, but I think it's a good way to think about it. So the point he makes, he gives this example right here where he has these two data points, where x equals minus 0.75 has the label false and x equals 0.15 has the label true, and the shortest possible solution will not help you generalize to other things. The nearest-neighbor program would be better prepared for future uncertainty, but would take significantly more space to write down. So there's a direct trade-off between how much you optimize on the training data and how much generalization capability you have. Okay, so the next quantity we want to assess is developer-aware generalization difficulty, because so far we've only considered generalization difficulty with respect to the task itself and to the curriculum. But what you could do, you producing this intelligent system, is simply build the solution to the entire task into your intelligent system. That means it could completely ignore the training data and still perform pretty well on this thing, even though the training data itself, in terms of algorithmic complexity, tells you nothing about the solution. So the generalization difficulty would be very high in the measure up here, and you would think, wow, this intelligent system solves this task really well, but it's only because you've baked the solution to the task into the system and it just ignores the training data. The developer-aware generalization difficulty is going to capture that, and basically punish you for building the final solution directly into the system. So this is the intelligent system right here at time zero; this is basically whatever you pre-build into the intelligent system. It hasn't interacted with the training data yet; this is simply the state at the very beginning, so this is all the priors you build in. If you build a ResNet, it has convolutional filters and so on, that's a certain prior towards translational invariance. If you build an AlphaGo system, that certainly has the rules of Go built into the system, and it has this Monte Carlo tree search, which biases it towards a certain kind of learning, and so on. All of this is captured in this quantity right here. It basically means: if I am given the optimal training solution, as before, and also the initial state of the learning system, how much more work is it to get to the solution of the task? And here you can clearly see: if I have already built the solution into the system, so if I'm building a tic-tac-toe
learning system, I call it a learning system, but I build the optimal strategy into my system from the beginning and it just ignores the training data, then this thing here would be low: it takes me a lot of work if I only have the training data to get to the solution, but it takes me very little work if I also have the initial state of the system, because the solution is encoded in the initial state already. So any prior you put in there will be captured by this. Otherwise it's the same metric: zero means it's very easy to generalize to the entire solution, one means that even given the training-data solution and the system you give me, it is very hard for that system. And notice that this quantity actually depends on the system you put in. All right, then we need two more things, which are priors and experience. That was the difficulty, how difficult the task is as such for a given system and a curriculum; now we want to characterize priors and experience. Priors are pretty easy. What is a prior? A prior we can capture by simply looking at the difference between how complex the solution is, minus how complex the solution is if I'm given the initial state. This is almost the same as before, but it now only considers what you built into the system; there's no training data anymore. It simply says: if you give me the source code of your learning system and I can already read out the solution from it, then this quantity right here will be zero, there is zero complexity to get from your initial state of the learning system to the solution of the task, and therefore this entire quantity would be one. That means all the information is in the prior. However, if your learning system is a very general learning system, like a standard reinforcement learning algorithm with almost no assumptions about the data, then this quantity right here would be very high (he first says low and corrects himself), because the initial system doesn't tell you much: if you gave me the source code, well, it's very general, it doesn't tell me anything about the task, and it would require a lot of work to get to the solution of the task. Therefore the quantity up here would be very low, and the whole thing would be close to zero, which means there are no priors in this intelligent system for that given task. And the quantity is always about how to reach the threshold: the solution is always with respect to a threshold in skill, so you must reach, like, this many points. The important thing that Chollet notes here is that the priors capture not the amount of information in the program, in the intelligent system, but the amount of relevant information for that task. You can make a super duper complex intelligent system; the only thing that matters for this quantity is the amount of information that's relevant to get to the solution of the task T.
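Sticking with the same compression proxy as before (again, my illustration rather than the paper's computation), the developer-aware difficulty and the priors just described look roughly like this:

```python
import zlib

def H(s: bytes) -> float:
    return float(len(zlib.compress(s, 9)))

def H_rel(s1: bytes, s2: bytes) -> float:
    return max(H(s2 + s1) - H(s2), 1.0)

def developer_aware_gd(solution: bytes, train_solution: bytes,
                       initial_state: bytes) -> float:
    """Like generalization difficulty, but the system's initial state is also
    given, so anything baked in by the developer stops counting as hard."""
    return min(H_rel(solution, train_solution + initial_state) / H(solution), 1.0)

def priors(solution: bytes, initial_state: bytes) -> float:
    """Fraction of the solution's complexity already readable from the
    system's state at time zero, before it has seen any training data."""
    return max((H(solution) - H_rel(solution, initial_state)) / H(solution), 0.0)

solution = b"the full tic-tac-toe strategy " * 50
print(priors(solution, initial_state=solution))                  # near 1: baked in
print(priors(solution, initial_state=b"generic RL loop " * 50))  # near 0: generic
```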
The last thing we need to capture is experience. Experience basically means how much information the system gets during this learning phase. For the priors we just talked about the state at time zero, at the outset; now remember, we interact with the task for a number of steps in this training phase, and the question is of course: given a longer training phase, it is generally easier to generalize, more training data makes our life easier, makes it easier to generalize, and intelligence is inversely proportional to that. So a system that, all else being equal, has less training data but performs as well on a task as a system that had more training data, that system with less training data we consider to be the more intelligent system, because it can generalize more efficiently. So we need to quantify experience, and experience, in the same train of thought, is going to be the difference between two quantities. Here we consider each time step t. At each time step t we have the intelligent system, and we have this thing here called data. Data is everything that the intelligent system gets at that point in time: the intelligent system is here at time step t, it outputs the skill program, that skill program gets a situation and gives a response, and this produces a feedback; all of that is called data. You can basically think of it as one additional training example: you're at time step t and you're given one additional training example. The experience is going to quantify how much information is in that one additional example, and then we're going to sum this up over time, down here, which basically means over the entire course of training, which is this curriculum C: how much information did you get out of the training data at each step. That's going to be your experience over the course of training, the sum over the experience you got at each step. And the experience at each step assesses two things. First: how difficult is it at time t, so you've learned for t steps, how difficult is it to go from there to the solution? So you might have had some training data, and you score, say, 80% on the test set; that's basically how difficult it still is, you still make 20% error, that's your difficulty. And then you get one more training sample, this data here. Now you can ask again: knowing everything in my intelligent system, but also getting one more training data point, how easy is it now to arrive at the solution of the task? And now you can say, oh, with this training data point I can correct some of my mistakes and I only make 18% error. So the difference here would be like 2%, and that's going to be your experience: this step is worth 2% of errors. Now the important thing here is that this is different from what we could have written instead, namely minus H of Sol theta T given the intelligent system at time step t plus 1, because the intelligent system at time step t plus 1 has had that data point at time step t and incorporated it. That's not the same thing. In this step right here, when we say how difficult is it, we assume that, you know, God or Vapnik himself tells us the optimal way to use that information, whereas it is not a given that the intelligent system will actually use that information in the most optimal way. So this is basically the difference between how difficult it is to get from the intelligent system to the solution, and how difficult it is to get from the intelligent system plus the data point at time t, if you could make optimal use of that data point, to the solution. All right, so this is going to be an assessment of how much experience you've had, in the sense of: had you been able to incorporate the experience optimally at each time step.
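In the same hand-rolled compression-proxy style as before (my sketch, not the paper's), the experience term sums, over the curriculum, how much closer to the solution each data point would bring you if it were used optimally:

```python
import zlib

def H(s: bytes) -> float:
    return float(len(zlib.compress(s, 9)))

def H_rel(s1: bytes, s2: bytes) -> float:
    return max(H(s2 + s1) - H(s2), 1.0)

def experience(solution: bytes, states: list, data: list) -> float:
    """states[t]: the intelligent system's state at step t.
    data[t]: the situation/response/feedback bundle it receives at step t.
    Each term: distance to the solution from the state alone, minus the
    distance when the new data point is also available (used optimally)."""
    total = 0.0
    for state, datum in zip(states, data):
        total += H_rel(solution, state) - H_rel(solution, state + datum)
    return max(total, 0.0)

# Toy usage: each data point reveals another chunk of the solution.
solution = b"0123456789" * 100
states = [solution[: 200 * t] for t in range(5)]              # absorbed so far
data = [solution[200 * t: 200 * (t + 1)] for t in range(5)]   # the new chunk
print(experience(solution, states, data))  # > 0: the curriculum is informative
```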
And we need that optimal-use assumption because otherwise you couldn't compare the experience: if two systems have had the same experience in the same task, it should mean they have had the same data points in the same order, in a simplistic sense. All right, so this is all we need. Intelligence, boom, this is it. There's a lot of stuff here: the intelligence of an intelligent system with respect to a scope. And there are two definitions right here, one for optimal skill at each task and one for a threshold of skill. We're going to focus on the threshold: as we said, at each task we require something like "you must achieve 8000 points", and we're going to consider the shortest programs that will get to at least 8000 points. Now there's a bit of confusion in the notation here; I'm pretty sure this quantity right here should be called something different, because the T is here and then there's this here, this refers to this, and this shouldn't be out here; it should be a name, something like "thresh", just a name, like the name "opt" here. In any case: the intelligence is of an intelligent system with respect to a scope of tasks. The first thing we do is average over the tasks in the scope, so we consider all the different tasks, and each task has a weight associated with it. This is the threshold of skill that we want, and this is sort of a mapping, a conversion rate, because this might be 9000 points in Pong, while in another task you might need to achieve 0.2, and 0.2 is really good there. So this w for each task simply maps it into a uniform coordinate space of points, of skill level, for that particular task. You can, I guess, disregard this, it's just scaling; we're going to average over tasks. Now, within each task, we're going to consider all curricula that get you to this threshold, so all curricula that get you to the threshold theta T for task T, which sort of means all the possible permutations of training data sets for that task. It's more general than this, but we want to assess all the different ones, and as you can see there's P of C, so this is an expectation: this is the probability of that particular curriculum, this is the expectation over data, the expectation over the training data distribution, in the classical machine learning sense. So we take the average across all tasks of the expectation under the training data distribution. So far so good, and usually right here we would put something like the empirical risk, the minimum over theta of the loss function over my training data set. But not in this case, because we now want to consider the priors and the experience and discount them from the difficulty, and that's what's written here: this is the developer-aware generalization difficulty, this here is the amount of information that's already contained in the priors, and this here is the amount of information that's contained in the experience on that curriculum; as you can see, the experience depends on that curriculum. So basically, a system is more intelligent if the task is harder for that given system and that given curriculum. That's what makes up intelligence.
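Putting the pieces together as a schematic (the real quantities are uncomputable, so everything below stands in for proxies or estimates, and the structure is my paraphrase of the formula rather than its exact notation):

```python
def intelligence(tasks):
    """tasks: one dict per task in the scope, carrying a weight, the skill
    threshold, and per-curriculum (proxy) quantities for the sufficient
    curricula, i.e. those that reach the threshold."""
    total = 0.0
    for t in tasks:
        # Expectation over curricula of generalization difficulty,
        # discounted by total exposure to information (priors + experience).
        contrib = sum(c["p_curriculum"] * c["gen_difficulty"]
                      / (c["priors"] + c["experience"])
                      for c in t["curricula"])
        total += t["weight"] * t["threshold"] * contrib
    return total / len(tasks)

# Toy usage: two systems reach the same threshold on one task, but the second
# one needed more experience, so it comes out as less intelligent.
task = lambda exp: [{"weight": 1.0, "threshold": 1.0, "curricula": [
    {"p_curriculum": 1.0, "gen_difficulty": 0.8,
     "priors": 0.1, "experience": exp}]}]
print(intelligence(task(0.3)), intelligence(task(0.9)))  # first score is higher
```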
A system is more intelligent if it gets to a certain threshold with lower priors: if the priors are low, the whole quantity is high. And the system is also more intelligent if it gets to the threshold with less experience: if the experience here is lower, it counts as more intelligent in this quantity. This is all written out in the text here. It has some properties, for example, within the same curriculum, the same training data, if an actor learns faster, reaches the threshold earlier, it would assign more intelligence to that actor, and so on. Some of this is hidden in the definitions; for example, these curricula are not all the same: the curricula are specifically those that you need to reach the given threshold, so this probability here doesn't always sum up to one, which is why it's not exactly an expectation; let's call it an expectation in quotation marks. But in the general sense, that's it. So the intelligence of a system over a scope of tasks is the expectation, in quotation marks, under the training distribution, of the generalization difficulty, where we discount the prior knowledge of the system and the experience that the system has had. And that's it. He says: P plus E, priors plus experience, represents the total exposure of the system to information about the problem, including the information it starts with at the beginning of training. So if this is high, then the system is not very intelligent: a system that has more of this, but generalizes to the same level as another system, is considered less intelligent than the other system, because it has had more exposure to information about the problem. It makes a lot of sense, right? So, schematically: the contribution of each task is the expectation over skill times generalization, divided by priors plus experience; that's, in words, what we looked at. Then he goes over a number of key observations, and at last he goes over consequences, basically recommendations for what a benchmark should look like if we regard it in this light. Now, of course, these complexities and so on are not exactly computable, the length of the shortest program is not exactly computable, but it can inform our notion of how we should test intelligence. So, what to expect of an ideal intelligence benchmark: first of all, it should describe its scope of application and its own predictiveness with regard to this scope, that means the validity. It should be replicable, it should be reproducible. It should set out to measure broad abilities and developer-aware generalization. That means it should not be solely measuring skill or potential, and it should not feature in its evaluation set any tasks that are known in advance, either to the test-taking system itself or to the developers of the system; that of course refers directly to the developer-aware generalization. And it should seek to quantify the generalization difficulty it measures, or at least provide qualitative guidelines with regard to its generalization difficulty: it should at least be made clear whether the benchmark seeks to measure local generalization, broad generalization, or extreme generalization; we've seen those in part one. Taking into account generalization
difficulty minimizes the possibility that a given benchmark could be hacked by solvers that take undesired shortcuts that bypass broad abilities. He says it should control for the amount of experience leveraged by test-taking systems during training: it should not be possible to buy performance on the benchmark by sampling unlimited training data. This already rules out, let's say, most image recognition or NLP benchmarks, because there we can always just feed in more data, more unlabeled data from the internet, or even labeled data; if there's a benchmark on computer vision, I can just pay more humans to label more data and then I will be better at that benchmark. The benchmark should avoid tasks for which new data can be generated at will; it should be, in effect, a game for which it is not possible to practice in advance of the evaluation session. That's going to be hard, right? It should explicitly and exhaustively describe the set of priors it assumes. Any task is going to involve priors, but in many tasks used for evaluation today, priors stay implicit, and the existence of implicit hidden priors may often give an unfair advantage to either humans or machines. For example, if the test is a speed test, a lot of the time machines are going to be way faster than humans, because the hidden assumption in a speed test is that nerve conductivity is the same across all test takers. And the last one: it should work for both humans and machines fairly, by only assessing the same priors as possessed by humans, and here it refers to core knowledge, which we saw in the last part, and by only requiring a human-sized amount of practice time or training data. So if we want to compare humans and machines: machines can often incorporate way more data than humans, so the amount of data in the benchmark tasks should be such that a human could process it. Of course, that also sort of means that any task where you basically collect data over the course of your life is ruled out a bit, so the AI benchmark task can't be, like, "cook a pan of spaghetti" or something like this. And in the end he says: these recommendations for general AI evaluation wouldn't be complete without a concrete effort to implement them; in part three, we present our initial attempt, which is going to be the ARC data set and the ARC Kaggle challenge. But that's a story for next time. I hope you enjoyed this and at least got some bits of it. It's very abstract, this measure of intelligence, and of course it can never be computed exactly, but someone is trying to formalize it, and it's not the first time this has been tried, but this attempt I feel is quite understandable and makes sense, and I'm interested to see if people come up with actual approximations to this quantity that you could compute. All right, that was it, thank you for watching, and bye bye, see you next time.
[ { "end": 5.66, "start": 0, "text": " Hello and welcome to the third part on On the Measure of Intelligence by" }, { "end": 11.08, "start": 5.66, "text": " François Chollet. Now this is a multi-part series. If you haven't seen the" }, { "end": 14.86, "start": 11.08, "text": " first two parts I recommend to watch at least one of them. They're somewhat" }, { "end": 20.28, "start": 14.86, "text": " overlapping but we've basically gone over the history of intelligence" }, { "end": 26.32, "start": 20.28, "text": " measurement and the foundations of what a measurement for intelligence for an AI" }, { "end": 31.92, "start": 26.32, "text": " system should look like. Today we're going to get into the formal" }, { "end": 37.08, "start": 31.92, "text": " definition of the intelligence that Chollet proposes right here. So this" }, { "end": 42.64, "start": 37.08, "text": " sentence here pretty much sums up what we're interested in." }, { "end": 48.879999999999995, "start": 42.64, "text": " The intelligence of a system is a measure of its skill acquisition" }, { "end": 54.2, "start": 48.879999999999995, "text": " efficiency over a scope of tasks with respect to priors, experience and" }, { "end": 58.720000000000006, "start": 54.2, "text": " generalization difficulty. So these are the things that we've established so far" }, { "end": 63.160000000000004, "start": 58.720000000000006, "text": " basically. The intelligence of a system that's the thing we want to measure" }, { "end": 69.04, "start": 63.160000000000004, "text": " is a measure of its skill acquisition efficiency. So how fast does it" }, { "end": 74.32000000000001, "start": 69.04, "text": " acquire new skills? Important here is that we are measuring it over a scope of" }, { "end": 78.68, "start": 74.32000000000001, "text": " tasks. So it's not arbitrary skills it is a scope that we define and this is going" }, { "end": 84.64, "start": 78.68, "text": " to be mostly the human scope, the scope of tasks that humans" }, { "end": 92.60000000000001, "start": 84.64, "text": " can solve and are sort of different at. What we need to factor in are priors" }, { "end": 98.80000000000001, "start": 92.60000000000001, "text": " which is what is already built into a system because that doesn't count as" }, { "end": 103.92000000000002, "start": 98.80000000000001, "text": " intelligence that's already built in. If your ability to solve a problem is" }, { "end": 107.88000000000001, "start": 103.92000000000002, "text": " already built into you you don't have to use intelligence to solve the problem." }, { "end": 113.39999999999999, "start": 107.88, "text": " Second, experience. If you have had lots and lots and lots of experience at the" }, { "end": 117.39999999999999, "start": 113.39999999999999, "text": " particular task you're asked to solve you don't have to use intelligence you" }, { "end": 122.84, "start": 117.39999999999999, "text": " can simply rely on your experience. And the third is generalization difficulty" }, { "end": 128.64, "start": 122.84, "text": " and that's a property of the task. So if the task is very difficult to generalize" }, { "end": 135.68, "start": 128.64, "text": " so if it's very difficult if the task itself is very difficult then achieving" }, { "end": 140.56, "start": 135.68, "text": " good score at it should count as having higher intelligence if all other things" }, { "end": 145.92000000000002, "start": 140.56, "text": " are equal. 
So this is going to be the basis and today we're going to" }, { "end": 153.12, "start": 145.92000000000002, "text": " watch Shirley define these things into a number that can give us" }, { "end": 159.48000000000002, "start": 153.12, "text": " the intelligence of any system with respect to these things. So that's" }, { "end": 164.60000000000002, "start": 159.48000000000002, "text": " the program for today. If you like content like this share it out and tell" }, { "end": 169.76, "start": 164.6, "text": " all your friends and leave a like so that YouTube knows that you do like it." }, { "end": 176.6, "start": 169.76, "text": " So the conceptualization of the entire system is like this. There is a" }, { "end": 182.2, "start": 176.6, "text": " task and we're going to consider a series of tasks of course but if we just" }, { "end": 188.6, "start": 182.2, "text": " look at one task in our scope there is the task and the task outputs these" }, { "end": 193.64, "start": 188.6, "text": " situations. In a machine learning term these are like your training" }, { "end": 199, "start": 193.64, "text": " examples. And on the other side there is the intelligent system. Now the" }, { "end": 203.04, "start": 199, "text": " intelligent system in a pure machine learning side you would factor this as" }, { "end": 208.83999999999997, "start": 203.04, "text": " the task gives the intelligent system something like a training sample or in" }, { "end": 213, "start": 208.83999999999997, "text": " reinforcement learning it would be something like an observation and the" }, { "end": 218.27999999999997, "start": 213, "text": " intelligent system gives something back like a response. Here we have a" }, { "end": 224, "start": 218.28, "text": " kind of a in-between step. The intelligent system doesn't actually give back the" }, { "end": 228.52, "start": 224, "text": " response to the situation. The intelligent system generates a skill" }, { "end": 234.68, "start": 228.52, "text": " program. So the intelligent system will generate a program that can map the" }, { "end": 240.92000000000002, "start": 234.68, "text": " situation to a response and that skill program should be able to run on its own." }, { "end": 247.08, "start": 240.92000000000002, "text": " So in the classic machine learning sense if we look at supervised" }, { "end": 253.84, "start": 247.08, "text": " learning for example the intelligent system would be like a" }, { "end": 263.6, "start": 253.84, "text": " ResNet plus SGD. That is an intelligent system and if it is output" }, { "end": 269.76, "start": 263.6, "text": " it is able to generate a skill program. So during training what happens" }, { "end": 274.56, "start": 269.76, "text": " during training? During training the intelligent system is able to intervene" }, { "end": 280.92, "start": 274.56, "text": " in the skill program at each step. So the situation comes in and then the skill" }, { "end": 284.28000000000003, "start": 280.92, "text": " program does something but the intelligent system can at any point it" }, { "end": 289.44, "start": 284.28000000000003, "text": " can kind of intervene and update the skill program and generate a new skill" }, { "end": 294.92, "start": 289.44, "text": " program for the next step. So there's a situation the skill program" }, { "end": 300.96, "start": 294.92, "text": " gives a response and the task gives feedback in form of a score. In machine" }, { "end": 305.08, "start": 300.96, "text": " learning terms this would be your training sample. 
Your training sample" }, { "end": 309.35999999999996, "start": 305.08, "text": " comes in, your neural network gives a response which are the logits of the" }, { "end": 315.47999999999996, "start": 309.35999999999996, "text": " classes, then the task gives a score to that which in the supervised" }, { "end": 321.28, "start": 315.47999999999996, "text": " learning case is the label or the loss function as a feedback to the intelligent" }, { "end": 328.15999999999997, "start": 321.28, "text": " system and the intelligent system using SGD would update the skill program for" }, { "end": 331.96000000000004, "start": 328.16, "text": " the next step. So at each step the intelligent system can update the skill" }, { "end": 336.72, "start": 331.96000000000004, "text": " program. That's why the intelligent system in this case is the architecture" }, { "end": 340.92, "start": 336.72, "text": " of the neural network and the procedure to update the weights. Not the weights" }, { "end": 346.32000000000005, "start": 340.92, "text": " themselves but the procedure to update the weights and the skill program here" }, { "end": 351.52000000000004, "start": 346.32000000000005, "text": " those would be the actual weights of the neural network or like the" }, { "end": 357.56, "start": 351.52000000000004, "text": " instantiation of the ResNet with these particular weights. Now at test time" }, { "end": 363.8, "start": 357.56, "text": " we sever this connection right here. So this is now severed at test time. At" }, { "end": 368.52, "start": 363.8, "text": " some point the training is done. The task says okay now training is done and then" }, { "end": 373.4, "start": 368.52, "text": " the intelligent system will produce one last skill program and then this" }, { "end": 379.76, "start": 373.4, "text": " connection is cut and the skill program must by itself answer to these" }, { "end": 386.32, "start": 379.76, "text": " situations. The intelligent system cannot intervene anymore and in this" }, { "end": 391.59999999999997, "start": 386.32, "text": " loop here it's situation response situation response this goes over for a" }, { "end": 397.2, "start": 391.59999999999997, "text": " number of steps and all the scores during that time are counted and tallied" }, { "end": 401.92, "start": 397.2, "text": " up and at the end you know the higher the score the better. So the intelligent" }, { "end": 408.12, "start": 401.92, "text": " system must at this end step produce a skill program that by itself can achieve" }, { "end": 413.76, "start": 408.12, "text": " a high score. So there's always this training phase first and then there is" }, { "end": 419.88, "start": 413.76, "text": " the test phase. Now the training phase these situations that we get in a" }, { "end": 427.24, "start": 419.88, "text": " training phase they are called a curriculum in this in this world. In" }, { "end": 433.08, "start": 427.24, "text": " our world this would be something like a training data set but this is" }, { "end": 438.96, "start": 433.08, "text": " curriculum it's slightly more intricate but just the notion here makes sense" }, { "end": 446.79999999999995, "start": 438.96, "text": " right the intelligent system produces the skill program. 
So there's a lot" }, { "end": 452.64, "start": 446.79999999999995, "text": " of formalisms right here like okay the task has a situation generator" }, { "end": 456.82, "start": 452.64, "text": " and that maps the task state to a situation so the task can have a state" }, { "end": 459.84, "start": 456.82, "text": " and the skill program can have a state and the intelligent system can have a" }, { "end": 464.94, "start": 459.84, "text": " state and I don't like this is all a bit too formal you don't really need to" }, { "end": 471.18, "start": 464.94, "text": " understand it except if François Cholet is watching this I think I have" }, { "end": 478.72, "start": 471.18, "text": " found I'm not sure if it's a mistake but you say the intelligent system here" }, { "end": 483.8, "start": 478.72, "text": " consists of three objects so it generates the skill program according" }, { "end": 488.32, "start": 483.8, "text": " to its internal state okay and it generates the skill program and when it" }, { "end": 494.12, "start": 488.32, "text": " learns when it learns it updates the internal state internal state according" }, { "end": 504.2, "start": 494.12, "text": " to let me if I can find it right here a self update function so this is how the" }, { "end": 510.08, "start": 504.2, "text": " intelligent system can update itself so its own state so it takes the internal" }, { "end": 514.44, "start": 510.08, "text": " state of the intelligent system and outputs another internal state and this" }, { "end": 517.96, "start": 514.44, "text": " is the you know where I said the internal the intelligent system at each" }, { "end": 522.84, "start": 517.96, "text": " training step it can observe what happens and basically react accordingly" }, { "end": 528.4, "start": 522.84, "text": " so it takes the situation the response and the feedback and its own internal" }, { "end": 533.6, "start": 528.4, "text": " state as an input now what do we have here it takes the situation which would" }, { "end": 538.96, "start": 533.6, "text": " be in our case the training sample the response or the the logits that the" }, { "end": 544.4, "start": 538.96, "text": " neural network has produced the feedback which is the loss and its internal state" }, { "end": 552.24, "start": 544.4, "text": " okay now what I argue is basically that it should also get the internal state of" }, { "end": 557.12, "start": 552.24, "text": " the skill program as an input right here because the skill program can have an" }, { "end": 560.44, "start": 557.12, "text": " internal state all of this like the response can be a stochastic procedure" }, { "end": 566.6800000000001, "start": 560.44, "text": " of the skill program and I guess it's not necessary because you can sort of" }, { "end": 573.08, "start": 566.6800000000001, "text": " infer it but I think the framework would be more complete if the internal state" }, { "end": 578.3, "start": 573.08, "text": " of the skill program at that time were part of the intelligent system update" }, { "end": 585.4399999999999, "start": 578.3, "text": " procedure just you don't okay this is not relevant this is just me bickering" }, { "end": 595.12, "start": 585.4399999999999, "text": " cool let's actually jump all of this this is boring this is very boring okay" }, { "end": 600.4399999999999, "start": 595.12, "text": " blah blah blah blah blah lots of definitions all right quantifying" }, { "end": 604.04, "start": 600.4399999999999, "text": " generalization difficulty experience and priors using algorithmic 
information" }, { "end": 608.5999999999999, "start": 604.04, "text": " theory so these things that at the beginning we said that we want to define" }, { "end": 614.76, "start": 608.5999999999999, "text": " intelligence with respect to we are now going to quantify using algorithmic" }, { "end": 620.12, "start": 614.76, "text": " information theory algorithmic information theory in this case right" }, { "end": 624.7199999999999, "start": 620.12, "text": " here that we're using it's not very complicated the main quantity is this H" }, { "end": 632.7199999999999, "start": 624.7199999999999, "text": " the algorithmic complexity it the H of s is the length of the shortest" }, { "end": 638.96, "start": 632.72, "text": " description of the string in a fixed universal language okay so it's the" }, { "end": 643.6, "start": 638.96, "text": " length of the shortest program that outputs the string when running on a" }, { "end": 648.5600000000001, "start": 643.6, "text": " fixed universal Turing machine so basically if if you have this string s" }, { "end": 653.96, "start": 648.5600000000001, "text": " right here as is a bit string the shortest program that can compute s or" }, { "end": 660.28, "start": 653.96, "text": " you know so so in the worst case that's the the string itself but if the string" }, { "end": 666.52, "start": 660.28, "text": " is like 0 1 0 1 0 1 0 1 all the way you can just say 0 1 times 50 and that's" }, { "end": 670.12, "start": 666.52, "text": " that would be like the shortest program to produce that it isn't it is an" }, { "end": 676.8399999999999, "start": 670.12, "text": " information theoretic concept here but in essence you can just think of it as a" }, { "end": 682.4399999999999, "start": 676.8399999999999, "text": " measure of how long is the program that I would need to write to output a given" }, { "end": 687.64, "start": 682.4399999999999, "text": " to to produce a given output okay so that's the algorithmic complexity and" }, { "end": 692.4, "start": 687.64, "text": " then the second quantity right here is the relative algorithmic complexity" }, { "end": 699.88, "start": 692.4, "text": " which is almost the same thing it's how long is the program that I have to write" }, { "end": 703.64, "start": 699.88, "text": " so the shortest we're always talking about how long is the shortest program" }, { "end": 711.76, "start": 703.64, "text": " that I have to write that produces s1 but is allowed to take s2 as an input" }, { "end": 718.72, "start": 711.76, "text": " okay so it can never it it can always ignore s2 that's always a possibility so" }, { "end": 725.24, "start": 718.72, "text": " if s1 is like a super easy string you can just output that but if s1 let's say" }, { "end": 740.4399999999999, "start": 725.24, "text": " s2 here is 0 1 0 0 1 okay and s1 is 0 1 0 0 1 0 1 0 0 1 okay so it's just twice" }, { "end": 744.44, "start": 740.44, "text": " that so you could you could sort of output that string here we could write a" }, { "end": 748.24, "start": 744.44, "text": " program that just outputs this or you could write a program just that just" }, { "end": 757.8800000000001, "start": 748.24, "text": " says two times s2 okay so that the the length of this is not part of the" }, { "end": 762.6, "start": 757.8800000000001, "text": " program the program is just two times s2 because it's allowed to take s2 as an" }, { "end": 767.96, "start": 762.6, "text": " input okay so this is the algorithmic the relative algorithmic complexity is" }, { "end": 776.4000000000001, "start": 
767.96, "text": " how how much how long is the how complex is the program to get from s2 to s1 so" }, { "end": 783.9200000000001, "start": 776.4000000000001, "text": " you can almost already see how that will relate now to to generalization okay so" }, { "end": 788.9200000000001, "start": 783.9200000000001, "text": " a few quantities that we need to consider are a task called t here then" }, { "end": 796.6, "start": 788.9200000000001, "text": " solve t theta is the shortest of all possible solutions of t so task t has a" }, { "end": 802.0400000000001, "start": 796.6, "text": " solution of threshold theta which the threshold theta is just this is like the" }, { "end": 807.88, "start": 802.0400000000001, "text": " minimum score we need to achieve in a task we don't we consider tasks according" }, { "end": 814.76, "start": 807.88, "text": " to thresholds like you know you need to get a I don't know what a score of 9000" }, { "end": 824.96, "start": 814.76, "text": " in Pong or so so the shortest of all programs that will will optimize that" }, { "end": 832.9200000000001, "start": 824.96, "text": " will solve the task to a threshold t sorry theta which is the shortest scale" }, { "end": 838.08, "start": 832.9200000000001, "text": " program that achieves at least theta during evaluation and the other quantity" }, { "end": 844.12, "start": 838.08, "text": " is this train soul opt TC this has a lot of quantity right here you can see the" }, { "end": 852.32, "start": 844.12, "text": " task we want to be optimal but with respect to a curriculum okay the" }, { "end": 857.44, "start": 852.32, "text": " curriculum is the training data so this quantity is the shortest optimal" }, { "end": 862.5600000000001, "start": 857.44, "text": " training time solution given a curriculum so it's the shortest scale" }, { "end": 866.24, "start": 862.5600000000001, "text": " program that achieves optimal training time performance over the situation in" }, { "end": 877.24, "start": 866.24, "text": " the curriculum so this this right here is if we could if we had an oracle that" }, { "end": 883.92, "start": 877.24, "text": " told us here is how to solve the task in general like the task of the task of of" }, { "end": 889, "start": 883.92, "text": " determining cats from dogs and images this this is this is this would be the" }, { "end": 895.24, "start": 889, "text": " program that does it okay you know overall over the entire the entirety of" }, { "end": 900.64, "start": 895.24, "text": " the task all cats and dog images there there are that's the solution now this" }, { "end": 908.24, "start": 900.64, "text": " quantity right here means sort of the the one neural network that is best at" }, { "end": 914.28, "start": 908.24, "text": " disturb determining cats from dogs in this particular training data set this" }, { "end": 919.1999999999999, "start": 914.28, "text": " curriculum see okay so this is the the one neural network that is hyper" }, { "end": 925.52, "start": 919.1999999999999, "text": " optimized in this particular training data set and now we assess the" }, { "end": 930.08, "start": 925.52, "text": " generalization difficulty so the generalization difficulty is going to be" }, { "end": 937.5200000000001, "start": 930.08, "text": " a measure of how hard is it to in a particular task to generalize to the" }, { "end": 942.1600000000001, "start": 937.5200000000001, "text": " whole task from the curriculum see and that's going to be the relative" }, { "end": 947.48, "start": 942.1600000000001, "text": " 
algorithmic complexity to go from this quantity to this quantity both quantities" }, { "end": 953.24, "start": 947.48, "text": " we've just explained so it basically means if if I had the perfect solution" }, { "end": 960.84, "start": 953.24, "text": " on the training data set how much how much more complex is it to get from that" }, { "end": 966.76, "start": 960.84, "text": " to the perfect solution on the entirety of data or you can also guess on the" }, { "end": 975.36, "start": 966.76, "text": " test data set right so if if this is really easy so if the training data set" }, { "end": 980.6800000000001, "start": 975.36, "text": " already perfectly captures all of the data there is that this quantity is zero" }, { "end": 986.5999999999999, "start": 980.68, "text": " like the out the program I don't need to write a program I already have the" }, { "end": 992.92, "start": 986.5999999999999, "text": " solution right and you can see here we divide by the H of Sol T but however" }, { "end": 999.7199999999999, "start": 992.92, "text": " if the training data has no information whatsoever about about the about the" }, { "end": 1004.92, "start": 999.7199999999999, "text": " solution to the general task or if I just so horribly overfit on the training" }, { "end": 1011.64, "start": 1004.92, "text": " data such that it doesn't help me at all for the general task then this quantity" }, { "end": 1019.3199999999999, "start": 1011.64, "text": " is zero so this quantity is in zero one with with sorry the quantity is one of" }, { "end": 1026.92, "start": 1019.3199999999999, "text": " course yes because the shortest this thing up here will be equal to just H of" }, { "end": 1033.76, "start": 1026.92, "text": " Sol T because this doesn't help me in that case and then this ratio will be" }, { "end": 1039.08, "start": 1033.76, "text": " one so generalization difficulty of one basically means that the training data" }, { "end": 1046.4, "start": 1039.08, "text": " solution doesn't help me at all this this particular training curriculum is" }, { "end": 1052.36, "start": 1046.4, "text": " useless because I'll just overfit so horribly that I will not learn anything" }, { "end": 1057.8, "start": 1052.36, "text": " about the task or I can't learn anything at all and generalization difficulty of" }, { "end": 1064.84, "start": 1057.8, "text": " zero oh that's yeah no yes generalization difficulty of zero basically" }, { "end": 1068.6, "start": 1064.84, "text": " means that all of the solution is already contained in the training" }, { "end": 1077.2, "start": 1068.6, "text": " solution and I require no work to get to the to the test set solution okay this" }, { "end": 1081.72, "start": 1077.2, "text": " is I mean I do this train test that this is all a bit more general as it is" }, { "end": 1089, "start": 1081.72, "text": " written here but I think it's a good a good way to think about it okay so the" }, { "end": 1099.44, "start": 1089, "text": " point here he makes is that is that so yeah he makes this example right here" }, { "end": 1106.56, "start": 1099.44, "text": " where he has these two data points where x minus point seven five has a label" }, { "end": 1113.8, "start": 1106.56, "text": " false and x point one five has a label true and the shortest possible solution" }, { "end": 1121.52, "start": 1113.8, "text": " will not help you to generalize to the to the other things so the nearest" }, { "end": 1125.6799999999998, "start": 1121.52, "text": " neighbor program would be better prepared for future 
uncertainty but would" }, { "end": 1128.6399999999999, "start": 1125.6799999999998, "text": " take significantly more space to write down so there's there's a trade-off" }, { "end": 1132.8, "start": 1128.6399999999999, "text": " there's direct trade-off to how much you optimize on the training data and how" }, { "end": 1141.32, "start": 1132.8, "text": " much generalization capability you have okay so the next quantity we want to" }, { "end": 1146.6399999999999, "start": 1141.32, "text": " assess is developer aware generalization difficulty because so far" }, { "end": 1150.44, "start": 1146.6399999999999, "text": " we've only considered generalization difficulty with respect to the task" }, { "end": 1154.76, "start": 1150.44, "text": " itself and to the to the curriculum but what you could do is you could simply" }, { "end": 1159.96, "start": 1154.76, "text": " you know you producing this intelligent system you could simply build in the" }, { "end": 1165.52, "start": 1159.96, "text": " solution to the entire task into your intelligent system that means it could" }, { "end": 1170.28, "start": 1165.52, "text": " completely ignore the training data and still perform pretty well on this thing" }, { "end": 1175.68, "start": 1170.28, "text": " even though even though the training data itself the algorithmic complexity" }, { "end": 1181.68, "start": 1175.68, "text": " it tells you nothing about about this so the generalization difficulty would be" }, { "end": 1188.76, "start": 1181.68, "text": " very high in the measure up here but so you would think wow this intelligent" }, { "end": 1193.24, "start": 1188.76, "text": " system solves this task really well but it's because you've baked the solution" }, { "end": 1199.68, "start": 1193.24, "text": " to the task into the system and it just ignores the training data so the" }, { "end": 1204.6, "start": 1199.68, "text": " developer aware generalization difficulty is going to capture that and" }, { "end": 1210.24, "start": 1204.6, "text": " basically punish you for building the solution the final solution directly" }, { "end": 1214.72, "start": 1210.24, "text": " into the system so this is the intelligent system right here at time 0 this is" }, { "end": 1221.16, "start": 1214.72, "text": " basically whatever you pre build into the intelligent system this is it hasn't" }, { "end": 1224.88, "start": 1221.16, "text": " interacted with the training data yet this is simply the state at the very" }, { "end": 1230.3600000000001, "start": 1224.88, "text": " beginning so this is all the priors you build in if you build a ResNet that you" }, { "end": 1234.56, "start": 1230.3600000000001, "text": " know it has certain you know it has convolutional filters and so on that's a" }, { "end": 1240.48, "start": 1234.56, "text": " certain prior on the translational invariance if you build an AlphaGo" }, { "end": 1246.1200000000001, "start": 1240.48, "text": " system that certainly has the rules of Go built into the system and it has" }, { "end": 1249.68, "start": 1246.1200000000001, "text": " this Monte Carlo tree search which biases it towards a certain kind of" }, { "end": 1254.88, "start": 1249.68, "text": " learning and so on so all of this is captured in this quantity right here" }, { "end": 1263.56, "start": 1254.88, "text": " this basically means that how if I am given the optimal training solution as" }, { "end": 1270.32, "start": 1263.56, "text": " before and also the initial state of the learning system how much more work" }, { "end": 1276.6399999999999, 
"start": 1270.32, "text": " is it to get to the solution of the task and here you can clearly see if I have" }, { "end": 1282.76, "start": 1276.6399999999999, "text": " already built the solution into this system so if I'm building a tic-tac-toe" }, { "end": 1289.84, "start": 1283.56, "text": " learning system I call it the learning system but I like build in the optimal" }, { "end": 1294, "start": 1289.84, "text": " strategy from the beginning into my system and it just ignores the training" }, { "end": 1300.76, "start": 1294, "text": " data then this thing here would be low because it takes me it takes me a lot of" }, { "end": 1306.44, "start": 1300.76, "text": " work to own we finally have the training data to get to the solution but it takes" }, { "end": 1311.56, "start": 1306.44, "text": " me very little work if I also have the initial state of the system because the" }, { "end": 1318.16, "start": 1311.56, "text": " solution would be encoded into the initial state already right so any prior" }, { "end": 1323.48, "start": 1318.16, "text": " you put in there will be captured by this okay so otherwise it's the same the" }, { "end": 1329.32, "start": 1323.48, "text": " same metric zero means it's very easy to generalize to the entire solution one" }, { "end": 1335.76, "start": 1329.32, "text": " means it's even like it's given the training data solution and the system" }, { "end": 1340.52, "start": 1335.76, "text": " you give me that it is very hard for that system now consider here this" }, { "end": 1348.16, "start": 1340.52, "text": " quantity actually depends on the system you put in all right then we need two" }, { "end": 1352.28, "start": 1348.16, "text": " more things which are priors and experience so this was the difficulty" }, { "end": 1359.3999999999999, "start": 1352.28, "text": " this was how difficult is the task as such if for a given system and a" }, { "end": 1366.08, "start": 1359.3999999999999, "text": " curriculum now what we want we want to characterize priors and experience now" }, { "end": 1372.44, "start": 1366.08, "text": " priors are pretty easy what are what is a prior a prior we can capture by simply" }, { "end": 1380.32, "start": 1372.44, "text": " looking at the difference between how complex is the solution minus how" }, { "end": 1385.48, "start": 1380.32, "text": " complex is the solution if I'm given the initial state this is almost the same as" }, { "end": 1390.84, "start": 1385.48, "text": " before but it now only considers what you built into the system right there's" }, { "end": 1395.4399999999998, "start": 1390.84, "text": " no training data anymore it simply says if I have you know if you give me your" }, { "end": 1402.36, "start": 1395.4399999999998, "text": " the source code of your learning system can I if I can already read out the" }, { "end": 1410.28, "start": 1402.36, "text": " solution then this quantity right here will be zero there is zero complexity" }, { "end": 1416.8799999999999, "start": 1410.28, "text": " to get from your initial state of the learning system to the solution of the" }, { "end": 1422.04, "start": 1416.8799999999999, "text": " task and therefore this entire quantity would be one that means the prior all" }, { "end": 1427.98, "start": 1422.04, "text": " the information is in the prior however if your learning system is a very" }, { "end": 1432.52, "start": 1427.98, "text": " general learning system like it's a it's like a standard reinforcement learning" }, { "end": 1438.56, "start": 1432.52, "text": " algorithm with almost 
no assumption about the data then this quantity right" }, { "end": 1446.24, "start": 1438.56, "text": " here would be very low sorry it would be very high of course because the initial" }, { "end": 1451.36, "start": 1446.24, "text": " system doesn't tell you too much so it's still a lot of work if you if I gave you" }, { "end": 1454.8, "start": 1451.36, "text": " the source code it's that well this is very general this doesn't tell me" }, { "end": 1459.08, "start": 1454.8, "text": " anything about the task and it would require a lot of work to get the" }, { "end": 1463.1599999999999, "start": 1459.08, "text": " solution of the task and therefore the quantity up here would be very low" }, { "end": 1468.68, "start": 1463.16, "text": " and therefore this would be close to zero so that means there are no priors" }, { "end": 1475.72, "start": 1468.68, "text": " in this intelligent system for that given task okay and the quantity is" }, { "end": 1480.48, "start": 1475.72, "text": " always of how to reach the threshold the solution is always with respect to a" }, { "end": 1488.1200000000001, "start": 1480.48, "text": " threshold in skill so you must reach like this many points okay the the" }, { "end": 1493.7199999999998, "start": 1488.12, "text": " important thing that Chollet notes here is that the priors capture not the" }, { "end": 1498.04, "start": 1493.7199999999998, "text": " amount of information in the program in the intelligent system but the amount of" }, { "end": 1503.3999999999999, "start": 1498.04, "text": " relevant information for that task you can make a super duper complex" }, { "end": 1507.4799999999998, "start": 1503.3999999999999, "text": " intelligent system it the only thing that matters for this quantity is the" }, { "end": 1515.06, "start": 1507.4799999999998, "text": " amount of information that's relevant to get to the solution of the task T the" }, { "end": 1519.52, "start": 1515.06, "text": " last thing we need to capture is experience now experience basically" }, { "end": 1525.12, "start": 1519.52, "text": " means how how much during this learning phase from now we just talked about at" }, { "end": 1532.6, "start": 1525.12, "text": " the at the at the outset like the state at time zero for the priors now you" }, { "end": 1540.04, "start": 1532.6, "text": " remember we interact with the task for a number of time steps in this training phase" }, { "end": 1547.24, "start": 1540.04, "text": " right and the question is of course if we are given a longer training phase it" }, { "end": 1552.6, "start": 1547.24, "text": " is easier to generalize generally right in more training data makes it makes our" }, { "end": 1557.72, "start": 1552.6, "text": " life easier makes it easier to generalize and the intelligence is is" }, { "end": 1564, "start": 1557.72, "text": " inversely proportional to that so a system that had all else being equal that has" }, { "end": 1568.72, "start": 1564, "text": " less training data but is performing as well on a task as a system that had more" }, { "end": 1573.64, "start": 1568.72, "text": " training data that system that had less training data we consider to be a more" }, { "end": 1578.24, "start": 1573.64, "text": " intelligent system because it can generalize more efficiently so we need" }, { "end": 1584.6000000000001, "start": 1578.24, "text": " to quantify experience and experience now in the same kind of in the same train" }, { "end": 1589.56, "start": 1584.6000000000001, "text": " of thought is going to be the difference between two 
quantities so the first" }, { "end": 1598.3600000000001, "start": 1589.56, "text": " quantity is this so here we consider at each time step T okay so at each time" }, { "end": 1604.3999999999999, "start": 1598.36, "text": " step T we have the intelligent system and we have this thing here called data" }, { "end": 1611.9199999999998, "start": 1604.3999999999999, "text": " now data is everything that the intelligent system gets at that point" }, { "end": 1617.28, "start": 1611.9199999999998, "text": " in time so the intelligent system is here at time step T and then it outputs" }, { "end": 1622.3999999999999, "start": 1617.28, "text": " the skill program and that skill program gets a situation and gives a response and" }, { "end": 1627.8799999999999, "start": 1622.3999999999999, "text": " this gives a feedback and all of this data that's called data okay it's" }, { "end": 1633.24, "start": 1627.88, "text": " basically you can think of it as one additional training example right you're" }, { "end": 1640.1200000000001, "start": 1633.24, "text": " at time step T and you're given one additional training example the" }, { "end": 1645.1200000000001, "start": 1640.1200000000001, "text": " experience is going to quantify how much information is in that one additional" }, { "end": 1650.92, "start": 1645.1200000000001, "text": " example and that's and the and then we were going to sum this up over time down" }, { "end": 1655.24, "start": 1650.92, "text": " here which basically means over the entire course of training which is this" }, { "end": 1662.2, "start": 1655.24, "text": " curriculum see how much information did you get out of the training data at each" }, { "end": 1668.44, "start": 1662.2, "text": " step that's going to be your experience over the course of training which and" }, { "end": 1672.6, "start": 1668.44, "text": " this is the sum over the experience that you got at each step and the experience" }, { "end": 1678.36, "start": 1672.6, "text": " at each step is simply the following two things are going to assess is how" }, { "end": 1684.72, "start": 1678.36, "text": " difficult is it at time T so you've learned for T steps how difficult is it" }, { "end": 1691.02, "start": 1684.72, "text": " to go from that to the solution right so if you might have had some training" }, { "end": 1695.88, "start": 1691.02, "text": " data right and you you score a certain you score a certain you look score like" }, { "end": 1704.64, "start": 1695.88, "text": " 80% on the on the test set so that's basically how difficult it is it's it's" }, { "end": 1709.76, "start": 1704.64, "text": " like you makes you still make 20% of error that's your difficulty and then" }, { "end": 1714.96, "start": 1709.76, "text": " you get one more training sample this data here now you can ask again if I" }, { "end": 1719.32, "start": 1714.96, "text": " know everything I'm knowing my intelligence system but I also get one" }, { "end": 1728.28, "start": 1719.32, "text": " more training data point can I how easy is it now to arrive at the solution of" }, { "end": 1733.44, "start": 1728.28, "text": " the task and now you can say oh with this training data point I now can" }, { "end": 1738.28, "start": 1733.44, "text": " correct some of my mistakes and I only make like 18% of error okay so the" }, { "end": 1742.3999999999999, "start": 1738.28, "text": " difference here would be like 2% so that's going to be your experience is" }, { "end": 1749.84, "start": 1742.3999999999999, "text": " going to be worth of 2% of errors okay now the 
important thing here is that it" }, { "end": 1755.24, "start": 1749.84, "text": " is it is different if we we could have just written here minus H of you know" }, { "end": 1763.96, "start": 1755.24, "text": " Sol theta T given the intelligent system at time step T plus 1 right because the" }, { "end": 1768.1200000000001, "start": 1763.96, "text": " intelligent system at time step T plus 1 has had that data point at" }, { "end": 1773.08, "start": 1768.1200000000001, "text": " time step T and incorporated it but that's not that's not the same thing" }, { "end": 1779.76, "start": 1773.08, "text": " here we in in this step right here when we say how difficult is it we assume" }, { "end": 1789.3600000000001, "start": 1779.76, "text": " that you know God or Vapnik himself tells us how like the optimal way to use" }, { "end": 1795.12, "start": 1789.36, "text": " that information okay whereas this thing here the it's not a given that the" }, { "end": 1799.9599999999998, "start": 1795.12, "text": " intelligent system will use that information in the most optimal way so" }, { "end": 1804.08, "start": 1799.9599999999998, "text": " this is basically the difference between how difficult is it to get from the" }, { "end": 1809.12, "start": 1804.08, "text": " intelligent system to the solution and how difficult is it to get from the" }, { "end": 1815.04, "start": 1809.12, "text": " intelligent system and the data point at time T if you could make optimal use of" }, { "end": 1820.3999999999999, "start": 1815.04, "text": " that data point to the solution all right so this this is going to be an" }, { "end": 1827.32, "start": 1820.3999999999999, "text": " assessment of how much experience you've had in the in the sense of had you been" }, { "end": 1834.72, "start": 1827.32, "text": " able to incorporate the experience properly at each time step because yeah" }, { "end": 1840.24, "start": 1834.72, "text": " because otherwise you know you you couldn't compare the experience if two" }, { "end": 1844.62, "start": 1840.24, "text": " systems had had the the same experience in the same task it should mean they" }, { "end": 1850.08, "start": 1844.62, "text": " had had the same you know data points in the same order or in in a simplistic" }, { "end": 1857.32, "start": 1850.08, "text": " sense all right so this is all we need intelligence boom this is it so there's" }, { "end": 1864.7199999999998, "start": 1857.32, "text": " a lot of stuff here okay intelligence of an intelligent system with respect to a" }, { "end": 1870.2399999999998, "start": 1864.7199999999998, "text": " scope and there are two definitions right here one is for optimal skill at" }, { "end": 1874.92, "start": 1870.24, "text": " each task and one is for threshold of skill now we're going to focus on the" }, { "end": 1880.4, "start": 1874.92, "text": " threshold as we said we at each task we require something like you must achieve" }, { "end": 1886.16, "start": 1880.4, "text": " 8,000 points and we're going to consider the shortest programs that will get to" }, { "end": 1891.36, "start": 1886.16, "text": " at least 8,000 points now there's a bit of confusion in the notation here as" }, { "end": 1895.72, "start": 1891.36, "text": " this I'm pretty sure this quantity right here you know should be called something" }, { "end": 1900.28, "start": 1895.72, "text": " different because it's you know it's the T is here and then there's this here this" }, { "end": 1904.08, "start": 1900.28, "text": " refers to this and this shouldn't be out of here this 
should be meaning" }, { "end": 1908.9, "start": 1904.08, "text": " something like Thresh I'm pretty sure this is just a name like here the name" }, { "end": 1920.52, "start": 1908.9, "text": " opt so yeah in any case the the intelligence is of an intelligent system" }, { "end": 1927.08, "start": 1920.52, "text": " with respect to a scope of tasks okay and the first thing we do is we're going" }, { "end": 1931.8799999999999, "start": 1927.08, "text": " to average over the tasks in the scope so we consider all the different tasks" }, { "end": 1936.6399999999999, "start": 1931.8799999999999, "text": " and each task has a weight associated with it this this is the threshold and" }, { "end": 1942.32, "start": 1936.6399999999999, "text": " skill that we want and this is sort of a mapping this is a conversion rate" }, { "end": 1948.08, "start": 1942.32, "text": " because this might be you know 9,000 points at Pong and another task might be" }, { "end": 1953.6799999999998, "start": 1948.08, "text": " you need to achieve point 2 and that's really good a point 2 is really good so" }, { "end": 1958.1999999999998, "start": 1953.6799999999998, "text": " this W for each task is simply going to map it to a like a uniform coordinate" }, { "end": 1967.84, "start": 1958.1999999999998, "text": " space of of of points of skill level of that particular task okay so but we're" }, { "end": 1972.74, "start": 1967.84, "text": " going to average over tasks now you can I guess disregard this this is not super" }, { "end": 1978.04, "start": 1972.74, "text": " this is just scaling we're going to average over tasks now in each task we're" }, { "end": 1987.84, "start": 1978.04, "text": " going to consider all curriculums that get you to this threshold so all" }, { "end": 1993.96, "start": 1987.84, "text": " curriculums that get you to the threshold T for theta T for task T which" }, { "end": 1999.64, "start": 1993.96, "text": " means sort of means all the possible permutations of training data sets for" }, { "end": 2005.6000000000001, "start": 1999.64, "text": " that task right it's more general than this but we yeah we want to assess all" }, { "end": 2013.1000000000001, "start": 2005.6000000000001, "text": " the all the different ones and as you can see here there's P of C so this is" }, { "end": 2017.22, "start": 2013.1000000000001, "text": " an expectation this is the probability of that particular curriculum this is" }, { "end": 2022.68, "start": 2017.22, "text": " this is the expectation over data right here this is the expectation over the" }, { "end": 2027.94, "start": 2022.68, "text": " training data distribution okay in the classical machine learning sense so" }, { "end": 2033.72, "start": 2027.94, "text": " we're going to take the average across all tasks over the expectation under the" }, { "end": 2040.66, "start": 2033.72, "text": " training data distribution so we're good so far and usually right here we would" }, { "end": 2049, "start": 2040.66, "text": " put something like the empirical risk right the minimum minimum loss min loss" }, { "end": 2055.8, "start": 2049, "text": " function min theta loss function over my term over my see over my training data" }, { "end": 2064.7200000000003, "start": 2055.8, "text": " set okay but not in this case because we now want to consider the priors and the" }, { "end": 2068.28, "start": 2064.7200000000003, "text": " experience and discount that from the difficulty and that's what's written" }, { "end": 2075.52, "start": 2068.28, "text": " here so this is the developer 
aware generalization difficulty this here is" }, { "end": 2080.52, "start": 2075.52, "text": " the amount of information that's already contained in the priors and this here is" }, { "end": 2085.3, "start": 2080.52, "text": " the amount of information that's contained in the experience in that" }, { "end": 2089, "start": 2085.3, "text": " curriculum as you can see here the experience is in that curriculum so" }, { "end": 2097.52, "start": 2089, "text": " basically a system is more intelligent if the task is harder for that given" }, { "end": 2103.1200000000003, "start": 2097.52, "text": " system and that given curriculum okay so that makes up intelligence a system is" }, { "end": 2110.32, "start": 2103.1200000000003, "text": " more intelligent if it gets to a certain threshold with lower priors okay if the" }, { "end": 2115.04, "start": 2110.32, "text": " priors are low this the whole quantity is high and the system is also more" }, { "end": 2124.2400000000002, "start": 2115.04, "text": " intelligent if it gets to the threshold with less experience okay so if the" }, { "end": 2133.2400000000002, "start": 2124.2400000000002, "text": " experience here is lower it counts as more intelligent all right in this in" }, { "end": 2137.56, "start": 2133.2400000000002, "text": " this quantity and this is written all in the text here it has some properties in" }, { "end": 2145.22, "start": 2137.56, "text": " that it for example it it down values actors that in the same curriculum like" }, { "end": 2149.96, "start": 2145.22, "text": " in the same training data they if if an actor learns faster like it learns" }, { "end": 2154.96, "start": 2149.96, "text": " earlier to reach the threshold it would assign more intelligence to that actor" }, { "end": 2160.82, "start": 2154.96, "text": " and so on it's kind of sometimes it's hidden over the it's hidden in the" }, { "end": 2165.24, "start": 2160.82, "text": " definitions for example these curricula are not all the same at the curricula" }, { "end": 2169.64, "start": 2165.24, "text": " are specific the curricula that you need to reach this certain threshold so it's" }, { "end": 2173.9199999999996, "start": 2169.64, "text": " not always doesn't always sum up to one with this probability here that's why" }, { "end": 2178.3399999999997, "start": 2173.9199999999996, "text": " it's not exactly an expectation let's call it an expectation in quotation" }, { "end": 2185.2799999999997, "start": 2178.3399999999997, "text": " marks but in the general sense that's it so in essence the intelligence of a system" }, { "end": 2193.9399999999996, "start": 2185.2799999999997, "text": " is over a scope of tasks the expectation in quotation marks under the training" }, { "end": 2201.32, "start": 2193.94, "text": " distribution of the generalization difficulty account but accounted for" }, { "end": 2207.36, "start": 2201.32, "text": " discount we discount the prior knowledge of the system and the experience that" }, { "end": 2219.2000000000003, "start": 2207.36, "text": " the system has had okay and that's it he says P plus E priors plus experience" }, { "end": 2223.7200000000003, "start": 2219.2000000000003, "text": " represents the total exposure of the system to information about the problem" }, { "end": 2228.9199999999996, "start": 2223.72, "text": " including the information it starts with at the beginning of training okay so if" }, { "end": 2236.68, "start": 2228.9199999999996, "text": " this is high then the system is not very intelligent or is not if a system that" }, { 
"end": 2241.4399999999996, "start": 2236.68, "text": " has more of this but generalizes to the same level as another system is" }, { "end": 2245.48, "start": 2241.4399999999996, "text": " considered less intelligent than the other system because it has had more" }, { "end": 2253.3199999999997, "start": 2245.48, "text": " exposure to information about the problem like it it makes a lot of sense" }, { "end": 2260.32, "start": 2253.32, "text": " right so schematically the contribution of each task is the expectation over" }, { "end": 2266.6800000000003, "start": 2260.32, "text": " skill times generalization divided by priors plus experience that's kind of" }, { "end": 2275.8, "start": 2266.6800000000003, "text": " in words what we looked at so it goes over a number of key observations and at" }, { "end": 2282.4, "start": 2275.8, "text": " last he goes over consequences or basically a recommendation for what a" }, { "end": 2287.2400000000002, "start": 2282.4, "text": " benchmark should look like if we regard it in this light now of course these" }, { "end": 2293.2000000000003, "start": 2287.2400000000002, "text": " complexities and so on they're not exactly computable right so it's like" }, { "end": 2297.7200000000003, "start": 2293.2000000000003, "text": " how much exactly the shortest the length of the shortest program is is not exactly" }, { "end": 2305.12, "start": 2297.7200000000003, "text": " computable but it can inform our notion of how we should test intelligence okay" }, { "end": 2310.08, "start": 2305.12, "text": " so what to expect of an ideal intelligence benchmark first of all it" }, { "end": 2314.16, "start": 2310.08, "text": " should describe its scope of application its own predictiveness with regard to" }, { "end": 2319.44, "start": 2314.16, "text": " this scope so that means the validity it should be wreck replicable it should be" }, { "end": 2325.08, "start": 2319.44, "text": " reproducible it should measure broad abilities and developer aware" }, { "end": 2330.56, "start": 2325.08, "text": " generalization sorry it should it should set out to measure broad abilities and" }, { "end": 2336.56, "start": 2330.56, "text": " developer aware generalization okay so that means it should not be solely" }, { "end": 2343, "start": 2336.56, "text": " measuring skill or potential it should not feature in its evaluation set any" }, { "end": 2348.52, "start": 2343, "text": " tasks that are known in advance either to the test taking system itself or to" }, { "end": 2353.12, "start": 2348.52, "text": " the developers of the system and that of course refers directly to the developer" }, { "end": 2359.84, "start": 2353.12, "text": " aware generalization and it should seek to quantify the generalization difficulty" }, { "end": 2364.84, "start": 2359.84, "text": " it measures or at least provide qualitative guidelines with regards to" }, { "end": 2369.88, "start": 2364.84, "text": " its generalization difficulty it should at least be made clear whether the" }, { "end": 2374.8, "start": 2369.88, "text": " benchmark seeks to measure local generalization broad generalization or" }, { "end": 2382.44, "start": 2374.8, "text": " extreme generalization so we've we've seen this in part one taking into" }, { "end": 2386.1800000000003, "start": 2382.44, "text": " account generalization difficulty minimizes the possibility that a given" }, { "end": 2391.2200000000003, "start": 2386.1800000000003, "text": " benchmark could be hacked by solvers that take undesired shortcuts that" }, { "end": 
2398.9199999999996, "start": 2391.22, "text": " bypass broad abilities he says it should control for the amount of" }, { "end": 2403.16, "start": 2398.9199999999996, "text": " experience leveraged by test taking systems during training it should not" }, { "end": 2407.9599999999996, "start": 2403.16, "text": " be possible to buy performance on the benchmark by sampling unlimited training" }, { "end": 2413.8399999999997, "start": 2407.9599999999996, "text": " data so this this already rules out sort of any let's say image recognition or" }, { "end": 2418.9199999999996, "start": 2413.8399999999997, "text": " NLP benchmarks because there we can always just feed in more data the more" }, { "end": 2423.44, "start": 2418.92, "text": " unlabeled data from the internet or even labeled data like if there's a" }, { "end": 2429.36, "start": 2423.44, "text": " benchmark that you know is on computer vision I can just pay more humans to" }, { "end": 2434.96, "start": 2429.36, "text": " label more data and then I will be better at that benchmark the benchmark" }, { "end": 2439.44, "start": 2434.96, "text": " should avoid tasks for which new data can be generated at will it should be in" }, { "end": 2443.62, "start": 2439.44, "text": " effect a game for which it is not possible to practice in advance of the" }, { "end": 2448.2000000000003, "start": 2443.62, "text": " evaluation session that's going to be hard right it should be it should" }, { "end": 2453.72, "start": 2448.2, "text": " explicitly and exhaustively describe the set of priors it assumes any" }, { "end": 2459.3599999999997, "start": 2453.72, "text": " task is going to involve priors but in many tasks used for AI evaluation today" }, { "end": 2464.96, "start": 2459.3599999999997, "text": " priors stay implicit and the existence of implicit hidden priors may often give" }, { "end": 2469.9199999999996, "start": 2464.96, "text": " an unfair advantage to either humans or machines so this is for example if the" }, { "end": 2475.68, "start": 2469.9199999999996, "text": " test is like a speed test a lot of times machines are going to be way faster than" }, { "end": 2480.24, "start": 2475.68, "text": " humans because the hidden assumption in a speed test is that kind of your nerve" }, { "end": 2486.7999999999997, "start": 2480.24, "text": " conductivity is the same across all test takers and the last one it should work" }, { "end": 2492.16, "start": 2486.7999999999997, "text": " for both humans and machines fairly by only assessing the same priors as" }, { "end": 2497.16, "start": 2492.16, "text": " possessed by humans and it refers to core knowledge which we saw in the last" }, { "end": 2502.7999999999997, "start": 2497.16, "text": " part and only requiring a human sized amount of practice time or training" }, { "end": 2506.5600000000004, "start": 2502.8, "text": " data so this means if we want to compare humans and machines machines can often" }, { "end": 2513.7200000000003, "start": 2506.5600000000004, "text": " incorporate way more data than humans so the tasks in the benchmark should only" }, { "end": 2520.4, "start": 2513.7200000000003, "text": " like the amount of data should be such that a human could process that data now" }, { "end": 2526.8, "start": 2520.4, "text": " of course that that sort of also means that any task where basically you" }, { "end": 2532.6000000000004, "start": 2526.8, "text": " collect data during your life is also sort of ruled out a bit so that" }, { "end": 2537.92, "start": 2532.6000000000004, "text": " means the AI 
benchmark task can't be like cook a pan of spaghetti or" }, { "end": 2544.7999999999997, "start": 2537.92, "text": " something like this yeah and in the end he says these recommendations for" }, { "end": 2549.7, "start": 2544.7999999999997, "text": " general AI evaluation wouldn't be complete without a concrete effort to" }, { "end": 2554.48, "start": 2549.7, "text": " implement them in part three we present our initial attempt which is going to be" }, { "end": 2561.6, "start": 2554.48, "text": " the ARC dataset and the ARC Kaggle challenge but that's a story for next" }, { "end": 2568.4, "start": 2561.6, "text": " time I hope you enjoyed this and at least got some bits of it it's very" }, { "end": 2572.64, "start": 2568.4, "text": " abstract this measure of intelligence of course it can never be computed exactly" }, { "end": 2577.2799999999997, "start": 2572.64, "text": " but the fact that someone is trying to formalize and it's not the first time" }, { "end": 2582.7999999999997, "start": 2577.2799999999997, "text": " this has been trying to formalize but this I feel it's quite understandable and" }, { "end": 2590.88, "start": 2582.7999999999997, "text": " it makes sort of sense and I'm I'm interested to see if people come up with" }, { "end": 2596.48, "start": 2590.88, "text": " exp like actual approximations to this quantity that you could actually compute" }, { "end": 2624.12, "start": 2596.48, "text": " sort of all right that was it thank you for watching and bye bye see you next time" } ]
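As a concrete aside on the uncomputability point in the transcript above: the quantities in Chollet's formula can only ever be approximated, and a common stand-in for algorithmic complexity is compressed length. The sketch below is purely illustrative; the zlib proxy, the toy byte strings standing in for "shortest programs", and the normalization are all my assumptions, not anything from the paper.

```python
# Rough, hedged sketch: approximating Chollet's generalization-difficulty
# ratio with compression as a proxy for algorithmic complexity H
# (H itself is uncomputable). Entirely illustrative.
import zlib

def H(s: bytes) -> int:
    """Proxy for algorithmic complexity: compressed length of s."""
    return len(zlib.compress(s, 9))

def H_rel(s1: bytes, s2: bytes) -> int:
    """Proxy for relative complexity H(s1 | s2): extra bits needed for s1
    once s2 is available, approximated via the concatenation."""
    return max(H(s2 + s1) - H(s2), 0)

# Hypothetical stand-ins for 'shortest program solving the whole task' and
# 'shortest program optimal on the training curriculum'.
sol_task = b"def f(x): return 2*x\n" * 4
sol_train = b"def f(x): return 2*x\n"

# Generalization difficulty: extra work to go from the training-time
# solution to the full-task solution, normalized toward [0, 1].
gd = H_rel(sol_task, sol_train) / H(sol_task)
print(round(gd, 3))  # near 0: the training solution already contains most of it
```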
LMb5tvW-UoQ
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Discovering Symbolic Models from Deep Learning with Inductive Biases (Paper Explained)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "graph networks", "graph neural networks", "gnn", "physics", "newtonian", "hamiltonian", "dynamics", "cosmology", "dark matter", "symbolic regression", "edge", "vertex", "regularization" ]
Neural networks are very good at predicting systems' numerical outputs, but not very good at deriving the discrete symbolic equations that govern many physical systems. This paper combines Graph Networks with symbolic regression and shows that the strong inductive biases of these models can be used to derive accurate symbolic equations from observation data. OUTLINE: 0:00 - Intro & Outline 1:10 - Problem Statement 4:25 - Symbolic Regression 6:40 - Graph Neural Networks 12:05 - Inductive Biases for Physics 15:15 - How Graph Networks compute outputs 23:10 - Loss Backpropagation 24:30 - Graph Network Recap 26:10 - Analogies of GN to Newtonian Mechanics 28:40 - From Graph Network to Equation 33:50 - L1 Regularization of Edge Messages 40:10 - Newtonian Dynamics Example 43:10 - Cosmology Example 44:45 - Conclusions & Appendix Paper: https://arxiv.org/abs/2006.11287 Code: https://github.com/MilesCranmer/symbolic_deep_learning Abstract: We develop a general approach to distill symbolic representations of a learned deep model by introducing strong inductive biases. We focus on Graph Neural Networks (GNNs). The technique works as follows: we first encourage sparse latent representations when we train a GNN in a supervised setting, then we apply symbolic regression to components of the learned model to extract explicit physical relations. We find the correct known equations, including force laws and Hamiltonians, can be extracted from the neural network. We then apply our method to a non-trivial cosmology example-a detailed dark matter simulation-and discover a new analytic formula which can predict the concentration of dark matter from the mass distribution of nearby cosmic structures. The symbolic expressions extracted from the GNN using our technique also generalized to out-of-distribution data better than the GNN itself. Our approach offers alternative directions for interpreting neural networks and discovering novel physical principles from the representations they learn. Authors: Miles Cranmer, Alvaro Sanchez-Gonzalez, Peter Battaglia, Rui Xu, Kyle Cranmer, David Spergel, Shirley Ho Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://discord.gg/4H8xxDF BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Hi there, today we're looking at discovering symbolic models from deep learning with inductive biases by Miles Cranmer, Alvaro Sanchez-Gonzalez, Peter Battaglia, Rui Xu, Kyle Cranmer, David Spergel and Shirley Ho. So this paper on a high level, it uses graph neural networks to fit a dataset of observations of a physical system. And then it uses symbolic regression in order to parse out equations, symbolic equations from the graph neural network. And the symbolic equations that are found are such that they describe the physical system. And they do find, they do recover some known equations and they do find a new one in the field of cosmology. So we'll go through how they do it, what these two steps are and why this might work better than previous approaches. So yeah, join me. If you like content like this, as always, feel free to share it, subscribe if you haven't, if you want more content like this, and tell me what you think in the comments. All right, so they claim we develop a general approach to distill symbolic representation of a learned deep model by introducing strong inductive biases. And this, it doesn't really say a whole lot, but I think the abstract doesn't say a whole lot. So let me give you an example. If you have three different, let's say planets or stars, right? This is a, this three body problem is an unsolved problem, I think still. So if you have these three stars, and you just let the simulation run, they have gravity, they attract each other, right? So they are going to move around somehow. So this one's going to move here, this one's going to move like this, this one's going to move like this, and then it turns around and this one turns around and so on. So there are fairly complex motions already with three different things that are somehow in a physical system together. This is a bigger problem than just stars. So you have these systems, for example, when these are atoms and there is like an electromagnetic force between them or the strong force. There can be, these can be things where springs are attached to them and so on. So our goal is to derive equations that govern this behavior, right? In the case of gravity, we know that these objects sort of pull on each other with the force proportional to something like the mass of the first times the mass of the second divided by the radius that they are apart, squared. Something like this times like this gravitational constant. We know the equation that governs these interactions. We don't know the symbolic solution to the whole problem, but we know the equation that governs the interaction, right? Now imagine if we didn't know the equation, what do we have to do? Well, what did Newton do? Ultimately, he sat down and just came up with an equation that seemed okay to him and then found out that the equation actually does predict very accurately how the things move. So we're going to try to replicate that process in an AI system, the process of coming up with an equation that governs this behavior. So what we have is a data set. As I said, we let this stuff run. So we let it run for one time step and then this is here, maybe this is here and this is here, okay? And then we let it run for the next time step. This goes here, this goes here, this goes here and so on. So that will give us, basically it will give us frame by frame how this system evolves. Frame by frame. And that will give us a data set. 
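To make the data-collection step concrete, here is a minimal sketch of how such a frame-by-frame data set could be produced, assuming a toy 2D gravitational three-body system and simple Euler integration; all constants, shapes, and the integrator choice are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch: integrate a toy 2D three-body gravity system and record
# every frame, yielding the kind of data set described above.
import numpy as np

G, n, dt, steps = 1.0, 3, 1e-3, 1000
rng = np.random.RandomState(0)
pos = rng.randn(n, 2)          # x, y for each body
vel = rng.randn(n, 2) * 0.1    # dx, dy for each body
mass = rng.rand(n) + 0.5

frames = []
for _ in range(steps):
    acc = np.zeros((n, 2))
    for i in range(n):
        for j in range(n):
            if i != j:
                d = pos[j] - pos[i]                        # vector from i to j
                r = np.linalg.norm(d)
                acc[i] += G * mass[j] * d / (r**3 + 1e-6)  # pull of j on i, softened
    vel = vel + dt * acc
    pos = pos + dt * vel
    frames.append(np.concatenate([pos, vel, mass[:, None]], axis=1))

# frames[t] is a (3, 5) array: per-particle x, y, dx, dy, mass at time step t
```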
So this right here, if we let it run and maybe we restarted a couple of times with different initializations, we let it run, we get a data set. So now we have a data set, right? So our goal is to be to take that data set and come up with an equation like m1, m2 divided by r squared that governs this behavior. Now previous approaches have resorted to symbolic regression, I think they call this. And that is basically, it's pretty simple. Namely, what you do is you simply provide the system with a bunch of options. You tell it, I have a list and the list can include the mass of the first, it can include the mass of the second, it can include the x and the y position of the things, it can include the delta x and delta y, which basically means the speed of the objects. It can include any constant a and b that you want. It can include the symbols plus, minus, division, multiplication, square, maybe exponential function and so on. So you give it a bunch of options of what it could potentially use in an equation. And then you simply let it make equations and you see how well these equations describe the data set. And the way you do that is you can do it naively by just searching and trying out, or you can be a little bit smarter about it and use evolutionary methods. So you start with some equations like this, okay, I'm going to x plus delta x minus a squared. You see how that describes the data set, you'll find not very well. And then you go on and you say, okay, maybe I'll make a small mutation, I'll mutate this to a minus and so on. And if you do this with an entire population, as is common in these evolutionary methods, you'll end up with something better at the end. Now this works until a point. So whenever the space of things to explore, like this one here, gets larger, and it doesn't have to be super large to already exhaust the capabilities of these methods. So these methods are very limited in the space they can search and have proven not really effective so far for this type of problem. This paper right here goes a different route. It uses graph neural networks in order to describe the data set. So in between this step of collecting a data set and making the equation, it fits another step. So it says in between here, we fit another step and that other step is going to be we have a graph neural network and you don't know yet, you don't have to know yet what that exactly is. But it's technique. It's like a type of neural network. And we're going to have that neural network learn the data set. Now as you know from neural networks, they can't do symbolic regression, they can't give you an equation, they can simply predict numbers. So what the network will do is it will simply predict like the motions or the accelerations, whatever you're interested in, it will predict those things as numbers, not as equations as just you can plug in this situation right here, and it will tell you how the things will move. Neural networks are pretty good at that. And once you have a graph neural network that can describe the system in a numeric fashion, then you parse out the equations from this graph neural network. And we're going to go over why that is going to be much, much easier than parsing out the equations directly from the physical system. It's going to be because you engineer the graph neural network in a way that makes it very congruent with physical reality that makes it very adapt to parse out equations like this that makes the job of this evolutionary method much easier. 
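Here is a toy version of that evolutionary loop, just to make the idea concrete. The paper relies on an off-the-shelf symbolic-regression package rather than this exact procedure, so everything below (the operator set, the mutation scheme, the population sizes) is an illustrative assumption.

```python
# A minimal sketch of evolutionary symbolic regression: random expression
# trees over a fixed list of building blocks, scored against the data set,
# with the best survivors mutated each generation.
import random
import numpy as np

rng = random.Random(0)
data_rng = np.random.RandomState(0)
BINARY_OPS = {"+": np.add, "-": np.subtract, "*": np.multiply, "/": np.divide}

def random_expr(depth=0):
    """Build a random expression tree over the allowed symbols and constants."""
    if depth > 2 or rng.random() < 0.3:
        return rng.choice(["m1", "m2", "r", str(round(rng.uniform(0.1, 2.0), 2))])
    return (rng.choice(list(BINARY_OPS)), random_expr(depth + 1), random_expr(depth + 1))

def evaluate(expr, data):
    if isinstance(expr, str):
        return data[expr] if expr in data else float(expr) * np.ones_like(data["r"])
    op, left, right = expr
    with np.errstate(all="ignore"):
        return BINARY_OPS[op](evaluate(left, data), evaluate(right, data))

def fitness(expr, data, target):
    with np.errstate(all="ignore"):
        err = np.mean((evaluate(expr, data) - target) ** 2)
    return err if np.isfinite(err) else np.inf

def mutate(expr):
    """Replace a random subtree with a freshly sampled one (a small mutation)."""
    if isinstance(expr, str) or rng.random() < 0.3:
        return random_expr()
    op, left, right = expr
    return (op, mutate(left), right) if rng.random() < 0.5 else (op, left, mutate(right))

# Toy data set generated from the (hidden) true law m1 * m2 / r^2.
n = 200
data = {"m1": data_rng.rand(n) + 0.5, "m2": data_rng.rand(n) + 0.5,
        "r": data_rng.rand(n) + 0.5}
target = data["m1"] * data["m2"] / data["r"] ** 2

population = [random_expr() for _ in range(200)]
for generation in range(30):
    population.sort(key=lambda e: fitness(e, data, target))
    survivors = population[:50]
    population = survivors + [mutate(rng.choice(survivors)) for _ in range(150)]

print(population[0], fitness(population[0], data, target))
```

Even this toy loop shows the core limitation the transcript points out: the search space of expression trees grows explosively with the number of allowed symbols, which is why fitting a graph network first makes the symbolic search so much easier.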
All right, so that's basically the two-step process here. First step is to numerically regress a neural network to describe the system, and then second step is going to be from that neural network parse out the equations. So we have to talk about graph neural networks. So here you see the entire process as they describe it. So they have this data set right here of observations of these physical systems. This is like, it's like any data set that you have in machine learning. They predict the dynamics, which means in a numeric fashion with a graph neural network, and then from the graph neural network, they extract the symbolic equation, as you can see right here. And this here is going to be the equation that they figure out that was previously unknown. They even say unknown dark matter over density equation. Cool. So we have to talk about graph neural networks. We haven't really done this on this channel so far. And I'm not like a big expert on graph neural networks. But they come in all shapes and forms. In this particular paper, they use what they call a type of interaction network that's called a graph network. So graph network is something different than graph neural network. I think graph network is a type of graph neural network. And specifically here, they use a network that... So a graph neural network has these things called vertices, and then it has edges, and edges connect vertices, like in a graph. Now we're going to build this graph neural network such that the number of vertices is exactly equal to the number of particles in our system. So in this paper, they consider systems with, I believe, four or eight particles. That's already a lot for if you want to derive equations and things. But of course, the physical world is made of many more particles. In any case, they consider four, let's say four particles right here. So what they're going to do, they're going to build a graph neural network that has four vertices, one for each of the particles. And in a graph neural network, every vertex can have properties. So the properties of each vertex here are going to be the properties of that particle. That means the x coordinate, for example, the y coordinate, and we're going to, let's say we're in two dimensions, right? It's a two dimensional problem. The x coordinate, the y coordinate, the delta x, the delta y, the mass, the, I don't know what else can we put here. There's a lot of stuff that we can put here, the charge, right? So all of these things are properties of the vertex. Then the other component of a graph are, of course, the edges. So the edges connect each two of all of the, so each edge connects two vertices like this. And in this particular type of graph network, we're going to consider graphs where all the particles are connected to all the other particles like this. So it's not like a sparse, it's not a sparse graph, except I think in the cosmology example here you can see that always there is a node that's connected to all its neighbors. But in the Newtonian dynamics graph networks, you can see right here, everything is connected to everything, like this. And why does that represent a physical system really well? So the reason is going to be the following. 
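Before getting to that reason, here is a minimal sketch of the graph just described, assuming a four-particle system; the variable names and the six-feature layout are illustrative choices, not the paper's actual data structures.

```python
# Sketch: one vertex per particle with [x, y, dx, dy, mass, charge] features,
# and a fully connected directed edge list (every particle paired with every
# other particle, no self-loops).
import numpy as np

n_particles = 4
vertex_features = np.random.randn(n_particles, 6)  # x, y, dx, dy, mass, charge

senders, receivers = [], []
for i in range(n_particles):
    for j in range(n_particles):
        if i != j:                 # no self-interactions
            senders.append(i)      # edge goes from particle i ...
            receivers.append(j)    # ... to particle j
senders, receivers = np.array(senders), np.array(receivers)

print(len(senders))  # 12 directed edges for 4 fully connected particles
```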
So why does that represent a physical system really well? The reason is the following. In physical systems, if I consider this node up here and ask how it is pulled on by gravity by the other nodes, it's going to be pulled in this direction a little bit because of this particle, in this direction a little bit because of that one, and in this direction because of that one. Now note that these three contributions are independent. So if I want to describe the total force of gravity on particle j, I can do so as a sum over i = 1, 2, 3 of the force that particle i exerts on j. This is an independent sum across all of the neighbors of that particle. Now you might say: wait a minute, it's not independent, because the particle isn't strictly pulled in this one direction, it's also pulled in the other directions. Yes, but by independent we mean that this force right here depends only on this particle, and the force along the diagonal depends only on that particle. There is no part of the particle up here that modulates this force right here. So you can calculate the total force as an independent sum over the individual pairwise forces. That's the simplification. And that's part of why, they claim, current approaches that directly try to find an equation with evolutionary methods from the data set itself don't really work: the space of equations is just too large. But this right here is a massive constraint, and we're lucky twice. First, most physical systems, they say, actually obey that constraint: they can be described as an independent sum over contributions of interactions between just two things. We simply sum over pairwise interactions, and that's way simpler than considering everything at once. Second, we're lucky because graph networks describe exactly this. Each edge in the graph network connects exactly two things, and not more. The edges don't know about each other; no edge knows about any other edge. Each one only considers whatever particles are at its two ends. And that is exactly the same structure as the physical constraint on the system. That's why graph networks are so well suited to describing these systems.
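Here is that independent pairwise sum written out for 2D Newtonian gravity; the positions, the masses, and G = 1 are toy values, purely for illustration.

```python
import numpy as np

# Toy 2D system: positions and masses of four particles, G set to 1.
pos = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0], [3.0, 1.0]])
mass = np.array([1.0, 2.0, 1.5, 0.5])

def total_force_on(j):
    # Total gravity on particle j as an independent sum of pairwise terms.
    total = np.zeros(2)
    for i in range(len(pos)):
        if i == j:
            continue
        d = pos[i] - pos[j]
        r = np.linalg.norm(d)
        # This term depends on particles i and j only -- exactly what a
        # single edge of the graph network gets to see.
        total += mass[i] * mass[j] * d / r**3
    return total

print(total_force_on(0))
```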
So how does a graph network like this actually do anything? For that, you have to consider the task. If we frame describing a system like this as a machine learning problem, it could be: I'm going to give you these particles, okay, here it's five particles. For each one, I give you all its features, like the x, the y, the current velocity, and the mass, and you're going to tell me what the acceleration is in the next frame. Considering all the interactions between the particles, just tell me: where does each one go in the very next time step? That sounds like a machine learning problem, and the graph neural network can be made to predict this. So what we want is, for each vertex, an output, a number or a vector: the acceleration. How do we compute an output for each vertex? In this particular type of graph neural network, there are three steps. We said each vertex has these properties, like x, y, and so on; let's just follow one vertex, say the one on the bottom right. First, we go over the edges. For each edge, in parallel and independently of the others, let's consider this edge right here, we take the two nodes attached to it and combine their features. So this one has x, y, and that one also has x, y, and we want to combine these two to compute the edge. Now, in a physical system, what does the edge represent? The edge represents the force between the two particles, and that's a fairly complex function; it's not like we can just add the features or something. So the edge already needs to compute some sort of nonlinear, complicated function, and we know how to compute nonlinear, complicated functions with neural networks. We're in deep learning right here. So the edge computes what's called the edge function. This edge function takes in the two vertices v1 and v2, that is, their features, and it computes a so-called edge message, I think they call it e_k for edge k. This is supposed to represent the force that acts between the two particles, and we're going to approximate this function with a neural network, since we don't know the equation yet. We assume we don't know the gravitational equation, but we can learn it, because we have data. So the features of both vertices go in, we can concatenate them, and out comes the edge message. This edge message is simply a numerical vector describing some intermediate hidden state. It's going to end up describing the force, but for now it's just an intermediate hidden state. We do this for each edge, so maybe this is e1, this is e2, e3, e4: each edge, in parallel, aggregates the information of its two endpoints into an edge message. That's step one: compute the edge messages.
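A sketch of what such a shared edge model could look like in PyTorch; the hidden sizes and the 100-dimensional message are illustrative choices, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

node_dim, msg_dim = 6, 100  # illustrative sizes

# One shared MLP for all edges: (features of v1, features of v2) -> message.
edge_model = nn.Sequential(
    nn.Linear(2 * node_dim, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, msg_dim),
)

def edge_messages(node_feats, edge_index):
    # node_feats: (n, node_dim); edge_index: (num_edges, 2) of (src, dst).
    pairs = torch.cat([node_feats[edge_index[:, 0]], node_feats[edge_index[:, 1]]], dim=-1)
    return edge_model(pairs)  # (num_edges, msg_dim): one message per edge

# Tiny usage example with random inputs:
n = 4
feats = torch.randn(n, node_dim)
edges = torch.tensor([(i, j) for i in range(n) for j in range(n) if i != j])
print(edge_messages(feats, edges).shape)  # torch.Size([12, 100])
```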
Step two is to compute the vertex outputs. We said we're not actually interested in the edges; we're interested in each vertex ending up with an output, the acceleration. How do we do this? Consider again our graph. If we want to compute the output for this node right here, we simply aggregate all of the edge messages that connect to that vertex. So previously we computed the edge messages by combining the information from the attached endpoints; now we go backwards and distribute the information from the edges back to the vertices they're attached to. You can see already that this two-step process is a kind of message passing, if you've ever studied graphical models: a vertex aggregates information from the other vertices via these edges. So this vertex takes in all the edge messages right here and aggregates them in a function that computes the acceleration. Our estimate of the acceleration is going to be some function, let's call it nu, of the edges attached to it, so of e1, e2, and e3. And here is where we make our next physical assumption, namely the one from before: the way the edges influence the vertex is in the form of an independent sum. So this simplification means the function should not be a function of the individual edges, but of the sum of the edges: nu of the sum of the e_i. That sum is the simplification we make to keep the model in accordance with the physical system. With a general graph network we could do any sort of complicated thing here; we could put a transformer on these messages and compute twelve layers of interaction effects between the edges. We're not going to do that. We simply sum them up and then run the result through a function. And of course that function is still going to be a complex function, because just summing up the forces doesn't give you the acceleration yet. As you know, force is mass times acceleration, which means acceleration equals force divided by mass. So this function takes the sum over the edge messages, and it still needs to, for example, divide by the mass. And technically you could still do much more complicated things right here; we only demand that the edges come in as a sum. So since this function can be any complicated function of its input, it should also be a neural network. We take the sum of the edge messages, put it into a second neural network, and out comes our estimate of the acceleration. And now we can make use of the data set: we know the true acceleration, since we have observations and labels, the labels being the true accelerations of the system that we observed. So we can compute a loss function right here. If you've followed so far, everything we've done is differentiable. So from the loss that compares the network's output for that vertex to the true acceleration from the data set, we can backpropagate through the neural network that computes the vertex function, back through the sum to the edge messages, and back through the edge messages into the neural network that computed them from the features. Everything is differentiable, so with that loss at the end we can train this network end to end to predict the numerical accelerations of the system from the observations. That was a fairly lengthy way to get here, but it's important that you understand what's happening. You build the graph network according to the physical system, and in the graph network there are two kinds of things. There are deterministic parts, like the fact that we always aggregate with a sum. And there are parts you learn, namely the two neural networks: the first computes the edge messages from the features of the vertices, and the second computes the output of each vertex from the sum of the edge messages attached to it.
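Putting the pieces together, here is a minimal end-to-end training step under those assumptions; the observations and labels are random stand-in data, and all sizes and optimizer settings are illustrative.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
n, node_dim, msg_dim = 4, 6, 100

edge_model = nn.Sequential(nn.Linear(2 * node_dim, 128), nn.ReLU(), nn.Linear(128, msg_dim))
node_model = nn.Sequential(nn.Linear(msg_dim, 128), nn.ReLU(), nn.Linear(128, 2))

node_feats = torch.randn(n, node_dim)  # stand-in observed particle features
true_acc = torch.randn(n, 2)           # stand-in accelerations from the data set
edge_index = torch.tensor([(i, j) for i in range(n) for j in range(n) if i != j])

opt = torch.optim.Adam(list(edge_model.parameters()) + list(node_model.parameters()), lr=1e-3)

for step in range(100):
    # Step 1: edge messages from the concatenated endpoint features.
    pairs = torch.cat([node_feats[edge_index[:, 0]], node_feats[edge_index[:, 1]]], dim=-1)
    msgs = edge_model(pairs)
    # Step 2: independent sum of incoming messages per receiving vertex.
    agg = torch.zeros(n, msg_dim).index_add(0, edge_index[:, 1], msgs)
    # Step 3: node model maps the summed messages to a predicted acceleration.
    pred_acc = node_model(agg)
    # The MSE loss backpropagates through the node model, through the sum,
    # and into the edge model -- the whole pipeline is differentiable.
    loss = ((pred_acc - true_acc) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

print(loss.item())
```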
Now you might say: wait a minute, there are more than just two neural networks. Each edge technically has a neural network, this edge has one, that edge has one, and each vertex has a neural network. But these neural networks are shared. The neural network that computes the edge message for this edge is the same one that computes the edge message for any other edge. You can think of it as weight sharing, or you can think of it as actually being the same neural network; it's equivalent. And the same goes for the vertices: there is only one neural network that, in the same fashion, computes the output for each vertex. Of course the incoming edge messages differ, and that's why you get different outputs, but the neural network itself is the same. Okay, so we have a system, this graph neural network, that can describe this data set of physical observations really well, and we train it end to end. And here is a little analogy they give for how the neural network corresponds to a physical system. The nodes in the graph network correspond to the particles in Newtonian mechanics. Pairs of nodes correspond to two interacting particles. The edge model corresponds to the force between two particles. Then the pooling operation, the summing up of the edge messages that we found so important as a simplification, corresponds to the sum into the net force in the physical system: a sum of independent forces without interaction effects. Then, concatenate with node; I left this detail out before. When you compute the vertex outputs, you don't only input the aggregated edge messages: each vertex also has its own features, and those can be fairly important too. Technically that information is already contained in the edge messages, because they were computed from those features, but you can also just input it again into the vertex neural network together with the aggregated messages, and that makes its job a bit easier, since, for example, we have to divide by the mass in this function, and it's just easier if you provide that mass directly as a property. So you concatenate the aggregated edge messages with the node features, then you compute the node model, which in this physical analogy simply takes that sum and divides it by the mass. And then optionally you can update the nodes, that is, compute the next time step, which we don't do here because we simply want to output the acceleration. I guess it should be equivalent to output the next time step and then compare with what the next time step was in the data set; in any case, you need some kind of loss function. And all the black squares right here are going to be neural networks. So now we have learned a graph network that can describe a system.
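The concatenation detail from above might look like this; again, the dimensions are illustrative assumptions.

```python
import torch
import torch.nn as nn

node_dim, msg_dim = 6, 100  # illustrative sizes

# Node model that sees [own features, summed messages] instead of just the
# summed messages, so quantities like the mass are directly available to it.
node_model = nn.Sequential(
    nn.Linear(node_dim + msg_dim, 128), nn.ReLU(),
    nn.Linear(128, 2),
)

def node_output(node_feats, agg_msgs):
    return node_model(torch.cat([node_feats, agg_msgs], dim=-1))

print(node_output(torch.randn(4, node_dim), torch.randn(4, msg_dim)).shape)  # (4, 2)
```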
So how do we turn this into an equation? Again, this is where the independence assumptions about physical reality come in. In physics, the acceleration is going to be a function of a sum, and so on. So we don't need to find one equation for the entire system; we need an equation for each vertex: acceleration equals something, where that something involves a sum over the edges, and each edge in turn is some expression of its two endpoints. Since we had two neural networks, we technically need two symbolic equations: one that represents the first neural network, the one that computes the edge functions, and one that represents the second neural network, the one that computes the output from the sum of the edge functions. It's an exact correspondence. So we take the first neural network and do symbolic regression on that, and we take the second neural network and do symbolic regression on that. What does it mean to do symbolic regression on a neural network? It means we want to find the symbolic equation that describes the neural network as well as possible. And we do that in the exact same fashion as at the start: we give the system a bunch of building-block options and let it search for equations. We try out equations, and an equation gets a low error if, when we run the neural network on the data set and run the candidate equation on the same inputs, they output the same thing; then the equation describes the neural network well. We iterate until we find a good equation. The difference is that we no longer need an equation that governs the whole system, just two equations, one for the edge model and one for the vertex model, and that's way, way easier. And given our physical assumptions, once we have those two equations, we get the equation for the whole system by simply composing them. All right, that's the entire method. I believe I've now told you the whole paper without actually going into it, so let's skim the paper a bit to see that they tell us the same thing. They say graph networks are an ideal candidate for their approach due to inductive biases shared by many physics problems: (a) they're equivariant under particle permutations, (b) they are differentiable end to end and can be trained efficiently using gradient descent, and (c) they make use of three separate and interpretable internal functions, the edge, the node, and the global model, which are the targets for the symbolic regression. The global model isn't really used in the cases we're going to look at, so it's just two different neural networks. Graph networks can also be embedded with additional symmetries, as in their references 23 and 24, but they don't implement those here. Then, symbolic regression: they use the eureqa package to fit compact closed-form analytical expressions to these neural networks. Eureqa works by using a genetic algorithm to combine algebraic expressions stochastically; the technique is analogous to natural selection, where the fitness of each expression is defined in terms of simplicity and accuracy. The operations considered in the fitting process are plus, minus, times, if, as well as real constants.
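To make that "fit an equation to the network" step concrete, here is a toy sketch of scoring a single hand-written candidate expression against one output dimension of a stand-in edge network. The paper hands this search to eureqa's genetic algorithm instead, so the stand-in network, the probe distribution, and the candidate formula here are all assumptions for illustration.

```python
import numpy as np
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in for a *trained* edge network from stage one.
edge_model = nn.Sequential(nn.Linear(12, 128), nn.ReLU(), nn.Linear(128, 100))

# Probe the network on random vertex-pair inputs and record its outputs.
probe = torch.randn(1000, 12)
with torch.no_grad():
    msgs = edge_model(probe).numpy()

# Score one hand-written candidate expression against message dimension 0.
x = probe.numpy()
candidate = x[:, 0] * x[:, 6] / (x[:, 1] ** 2 + 1.0)  # made-up formula
mse = np.mean((candidate - msgs[:, 0]) ** 2)
print(mse)  # the genetic search mutates candidates to drive this error down
```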
If we look at the examples, they have three. First, Newtonian dynamics, which is, for example, the gravitational force we looked at. Second, Hamiltonian dynamics, which describes the same systems but in a different way, in terms of the Hamiltonian; I don't want to go into that too much, because I think the Newtonian dynamics already demonstrate really well what the system can do. And third, dark matter halos for cosmology, a problem where you have universe simulators and you try to predict where the dark matter is, depending on where the other dark matter is; that's where they find the new, unknown equation. Okay, here is the system in a nutshell. This is the path you know: you have the data set, you learn a graph network, and then you get out an equation. But in between, you can put even more constraints to make the network really learn a physical equation. As I said, you compute these edge functions, and the output of an edge function is the edge message, which is just a vector of some dimension. That vector can be pretty large; it's a hidden dimension you can choose as an implementer. All you need to make sure is that the output of the vertex model has the dimension your output should have; everything internal you can choose. Now, we know that in a 2D system the actual information content of that edge message should be two-dimensional: if it really describes a force in two dimensions, there's no reason for it to have a higher dimension, since all the relevant information fits in two. So one thing you can do is simply set the hidden dimension to two, and thereby force the neural network to use exactly two dimensions. This, they notice, doesn't work super well. I think it works, but not that well. They call this the bottleneck model. And the reason it doesn't work well is that neural networks under such a hard constraint don't tend to train very well; that's what they hypothesize in the paper as well. The network doesn't get along with having only two floating point numbers to learn anything with, and this is probably more a property of the optimization procedure than of the problem itself, a property of us training neural networks with SGD. So what they do instead is put an L1 penalty on the edge messages. They apply L1 regularization, which induces sparsity in whatever you apply it to: you constrain the sum of the absolute values of the edge message entries to be small, and you can just add that term to the loss function. So the network still has its, say, hundred latent dimensions, but it is encouraged to use as few of them as possible. That means it can use many of them early in training, when it really benefits from a lot of dimensions to learn the system, but as it gets better and better, it can shift the information into very, very few dimensions. And once we do that, we can run a check. If it is really the case that the graph network has learned the physical dynamics of the system, then we can look at the top two dimensions, sorted by standard deviation: whichever two dimensions are the least sparse, the ones with the largest standard deviation. We can look at those two and say, well, even though we didn't hard-constrain the model, those two should describe our force pretty well.
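A sketch of the L1 penalty and the standard-deviation-based selection; the penalty weight alpha is an illustrative choice, not the paper's value.

```python
import torch

def loss_with_l1(pred_acc, true_acc, msgs, alpha=1e-2):
    # Prediction error plus an L1 penalty that pushes most message
    # entries toward zero.
    mse = ((pred_acc - true_acc) ** 2).mean()
    return mse + alpha * msgs.abs().mean()

def top_two_dims(msgs):
    # After training, the least sparse dimensions -- largest standard
    # deviation across edges -- should be the ones carrying the force.
    return torch.topk(msgs.std(dim=0), k=2).indices

msgs = torch.randn(12, 100)  # stand-in edge messages
print(top_two_dims(msgs))
```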
And since in Newtonian dynamics we know what the force is, we can simply check whether that holds: whether we can read out the force from these two components. Now, you can't expect to recover the force in exactly one particular form, because there are many ways to state a physical equation. There are many symmetries in physics, and we cannot make the neural network describe the equation exactly as a human would; there is an infinite number of equivalent formulations. In this case, though, they are all related to each other by rotations. And that means, in these plots, if you look at the message elements against a linear combination of the true forces, a linear relationship basically means the information is there, whereas a nonlinear relationship would mean these numbers don't really encode the force as such. And here you can see pretty clearly that the relationship is linear, which means that the first two message dimensions really do encode the force, up to such a transformation. So that's when we know the equation: we can simply check, does this fit? And when we don't know the equation, we use the symbolic regression, and what comes out is exactly this expression right here. Now you might object that this isn't literally the force as we'd write it, but as I said, there are many, many symmetries. For example, this r-hat right here, and I'm not a big physics person, I believe this r-hat is the unit vector of the delta x, delta y displacement. So delta x and delta y already appear in there; this already looks about right, and if we go down, it gets even clearer. So here they have the outputs for the spring example, a system where the particles are connected by springs, with L1 regularization. What we expect, the equation we know holds in this spring system, is this one, and what the neural network combined with the symbolic regression gives us is this one. You can see there is this delta vector in a dot product with this a, which is a vector of numerical constants, so the expected form, a product with numerical constants, is there. And, for example, the constants in front of delta y come out as 1.36 and 1.37, which is basically the same number, and similarly around 0.66 here. But, you might say, the known equation has an r minus one, while here it's something minus something divided by r, which doesn't look the same. Again, because of the symmetries: if you take this expression and simply divide everything by r, you end up with this vector right here, a times (delta x, delta y) times (one minus one over r), plus b. And now you can see it already looks very similar; it's only a transformation away from what you want. That's why I said these equations can be stated in many equivalent ways, and we can't ask the neural network to figure out exactly the one we want. As long as it figures out one that is equivalent, we're happy, and I guess we're pretty happy here.
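Here is a toy version of that linearity check, on synthetic data where we plant a rotated copy of the force by construction; the rotation angle and noise level are arbitrary, purely to show what a near-1 fit looks like when the messages really are a linear transformation of the force.

```python
import numpy as np

rng = np.random.default_rng(0)
true_force = rng.normal(size=(1000, 2))

# Pretend the trained network stored a rotated copy of the force plus noise.
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
msgs_top2 = true_force @ R.T + 0.01 * rng.normal(size=(1000, 2))

# Least-squares linear fit from the true force to the message components.
W, residuals, *_ = np.linalg.lstsq(true_force, msgs_top2, rcond=None)
r2 = 1 - residuals.sum() / ((msgs_top2 - msgs_top2.mean(0)) ** 2).sum()
print(r2)  # close to 1: a linear relationship, so the force is recoverable
```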
Also in this case right here, you can see that it correctly recovers the relationship that the force should be divided by r to the third power, with a delta x, delta y, delta z, where delta z simply gets a factor of zero. It even handles this discontinuous problem, where the force cuts off after a certain distance: it can even parse out the if-condition right here. That, to me, is a pretty cool result: you can actually parse out these equations with just these graph networks and then the symbolic regression. They do the same thing for the cosmology example, where they have these simulators of the universe that they let run, which distribute the dark matter, and I guess the task is: if I give you a bunch of these points, tell me where the other dark matter is, something like that. I don't fully understand the physics, but in essence it's the same kind of problem: you want to figure out dark matter properties from the surrounding dark matter, or from the properties of other things. And again, here you can see pretty well the equation they get out. The output for node i is a sum over all the other nodes j, and then some function of that sum. This part is the equation that came out of the edge model, the edge neural network, and this part, the one that includes that sum, is the equation that came out of the vertex model. It's the same in the spring law: this came out of the edge model, this came out of the vertex model. Again, all of this rests on the fact that physical systems can often actually be described as these sums of independent pairwise interactions; that's why the whole thing works. They give very, very detailed instructions on how they did everything. I think the most unclear things in the paper are the physics parts that are assumed as known, which I didn't fully follow, but other than that it's pretty straightforward. The appendix is also pretty detailed in how they do all the representations and so on. They have other formulations besides the L1 regularization: as I said, there's the bottleneck model, and there's a KL formulation. They really describe how the graph neural network works, and so on. So all in all, I enjoyed reading this paper. Here is a bunch of examples of these particle systems, and here is a bunch of examples of linear versus nonlinear relationships: a linear relationship means you can say, look, this really describes that force, it's just a rotation of what you want, and that's fine because it's equivalent; a nonlinear relationship means you can make the claim that the network's messages don't really describe the force well. And I'm going to leave you with that. I absolutely invite you to check out the code and the video they made about it, and I'll see you next time. Bye bye.
[ { "end": 5.28, "start": 0, "text": " Hi there, today we're looking at discovering symbolic models from deep learning with inductive" }, { "end": 11.98, "start": 5.28, "text": " biases by Miles Cranmer, Alvaro Sanchez-Gonzalez, Peter Pitalia, Rui Xu, Kyle Cranmer, David" }, { "end": 14.5, "start": 11.98, "text": " Spurgill and Shirley Ho." }, { "end": 21.62, "start": 14.5, "text": " So this paper on a high level, it uses graph neural networks to fit a dataset of observations" }, { "end": 23.76, "start": 21.62, "text": " of a physical system." }, { "end": 30.64, "start": 23.76, "text": " And then it uses symbolic regression in order to parse out equations, symbolic equations" }, { "end": 32.620000000000005, "start": 30.64, "text": " from the graph neural network." }, { "end": 39.08, "start": 32.620000000000005, "text": " And the symbolic equations that will are found such they describe the physical system." }, { "end": 45.68000000000001, "start": 39.08, "text": " And they do find, they do recover some known equations and they do find a new one in the" }, { "end": 48.3, "start": 45.68000000000001, "text": " field of cosmology." }, { "end": 54.82, "start": 48.3, "text": " So we'll go through how they do it, what these two steps are and why this might work better" }, { "end": 56.72, "start": 54.82, "text": " than previous approaches." }, { "end": 59.4, "start": 56.72, "text": " So yeah, join me." }, { "end": 65.16, "start": 59.4, "text": " If you like content like this, as always, feel free to share it, subscribe if you haven't," }, { "end": 70.67999999999999, "start": 65.16, "text": " if you want more content like this, and tell me what you think in the comments." }, { "end": 78.4, "start": 70.68, "text": " All right, so they claim we develop a general approach to distill symbolic representation" }, { "end": 82.32000000000001, "start": 78.4, "text": " of a learned deep model by introducing strong inductive biases." }, { "end": 90.36000000000001, "start": 82.32000000000001, "text": " And this, it doesn't really say a whole lot, but I think the abstract doesn't say a whole" }, { "end": 91.36000000000001, "start": 90.36000000000001, "text": " lot." }, { "end": 93.04, "start": 91.36000000000001, "text": " So let me give you an example." }, { "end": 98.96000000000001, "start": 93.04, "text": " If you have three different, let's say planets or stars, right?" }, { "end": 105.39999999999999, "start": 98.96, "text": " This is a, this three body problem is a unsolved problem, I think still." }, { "end": 110.75999999999999, "start": 105.39999999999999, "text": " So if you have these three stars, and you just let the simulation run, they have gravity," }, { "end": 111.96, "start": 110.75999999999999, "text": " they attract each other, right?" }, { "end": 114.38, "start": 111.96, "text": " So they are going to move around somehow." }, { "end": 117.6, "start": 114.38, "text": " So this one's going to move here, this one's going to move like this, this one's going" }, { "end": 122.13999999999999, "start": 117.6, "text": " to move like this, and then it turns around and this one turns around and so on." }, { "end": 129.56, "start": 122.14, "text": " So there is a fairly complex motions in already three different things that are somehow in" }, { "end": 131.96, "start": 129.56, "text": " a physical system together." }, { "end": 134.66, "start": 131.96, "text": " This is a bigger problem than just stars." 
}, { "end": 140.42000000000002, "start": 134.66, "text": " So you have these systems, for example, when these are atoms and there is like an electromagnetic" }, { "end": 143.96, "start": 140.42000000000002, "text": " force between them or the strong force." }, { "end": 148.44, "start": 143.96, "text": " There can be, these can be things where springs are attached to them and so on." }, { "end": 154.66, "start": 148.44, "text": " So our goal is to derive equations that govern this behavior, right?" }, { "end": 161.32, "start": 154.66, "text": " In the case of gravity, we know that these objects sort of pull on each other with the" }, { "end": 166.44, "start": 161.32, "text": " force proportional to something like the mass of the first times the mass of the second" }, { "end": 172.32, "start": 166.44, "text": " divided by the radius that they are a part squared." }, { "end": 176.36, "start": 172.32, "text": " Something like this times like this gravitational constant." }, { "end": 178.88000000000002, "start": 176.36, "text": " We know the equation that governs these interactions." }, { "end": 182.92000000000002, "start": 178.88000000000002, "text": " We don't know the symbolic solution to the whole problem, but we know the equation that" }, { "end": 186.44000000000003, "start": 182.92000000000002, "text": " governs the interaction, right?" }, { "end": 189.96, "start": 186.44000000000003, "text": " Now imagine if we didn't know the equation, what do we have to do?" }, { "end": 191.8, "start": 189.96, "text": " Well, what did Newton do?" }, { "end": 199.22000000000003, "start": 191.8, "text": " Ultimately, he sat down and just came up with an equation that seemed okay to him and then" }, { "end": 207.84, "start": 199.22, "text": " found out that the equation actually does predict very accurately how the things move." }, { "end": 214.38, "start": 207.84, "text": " So we're going to try to replicate that process in an AI system, the process of coming up" }, { "end": 217.42, "start": 214.38, "text": " with an equation that governs this behavior." }, { "end": 219.68, "start": 217.42, "text": " So what we have is a data set." }, { "end": 221.52, "start": 219.68, "text": " As I said, we let this stuff run." }, { "end": 225.62, "start": 221.52, "text": " So we let it run for one time step and then this is here, maybe this is here and this" }, { "end": 227.24, "start": 225.62, "text": " is here, okay?" }, { "end": 229.42000000000002, "start": 227.24, "text": " And then we let it run for the next time step." }, { "end": 233.52, "start": 229.42000000000002, "text": " This goes here, this goes here, this goes here and so on." }, { "end": 240.68, "start": 233.52, "text": " So that will give us, basically it will give us frame by frame how this system evolves." }, { "end": 242.52, "start": 240.68, "text": " Frame by frame." }, { "end": 244.62, "start": 242.52, "text": " And that will give us a data set." }, { "end": 250, "start": 244.62, "text": " So this right here, if we let it run and maybe we restarted a couple of times with different" }, { "end": 253.68, "start": 250, "text": " initializations, we let it run, we get a data set." }, { "end": 255.78, "start": 253.68, "text": " So now we have a data set, right?" }, { "end": 263.08, "start": 255.78, "text": " So our goal is to be to take that data set and come up with an equation like m1, m2 divided" }, { "end": 267.92, "start": 263.08, "text": " by r squared that governs this behavior." 
}, { "end": 274.5, "start": 267.92, "text": " Now previous approaches have resorted to symbolic regression, I think they call this." }, { "end": 277.28, "start": 274.5, "text": " And that is basically, it's pretty simple." }, { "end": 282.3, "start": 277.28, "text": " Namely, what you do is you simply provide the system with a bunch of options." }, { "end": 287.72, "start": 282.3, "text": " You tell it, I have a list and the list can include the mass of the first, it can include" }, { "end": 293.12, "start": 287.72, "text": " the mass of the second, it can include the x and the y position of the things, it can" }, { "end": 299.88, "start": 293.12, "text": " include the delta x and delta y, which basically means the speed of the objects." }, { "end": 305.16, "start": 299.88, "text": " It can include any constant a and b that you want." }, { "end": 316.16, "start": 305.16, "text": " It can include the symbols plus, minus, division, multiplication, square, maybe exponential" }, { "end": 317.16, "start": 316.16, "text": " function and so on." }, { "end": 323, "start": 317.16, "text": " So you give it a bunch of options of what it could potentially use in an equation." }, { "end": 329.18, "start": 323, "text": " And then you simply let it make equations and you see how well these equations describe" }, { "end": 331, "start": 329.18, "text": " the data set." }, { "end": 335.8, "start": 331, "text": " And the way you do that is you can do it naively by just searching and trying out, or you can" }, { "end": 339.44, "start": 335.8, "text": " be a little bit smarter about it and use evolutionary methods." }, { "end": 349, "start": 339.44, "text": " So you start with some equations like this, okay, I'm going to x plus delta x minus a" }, { "end": 350.56, "start": 349, "text": " squared." }, { "end": 355, "start": 350.56, "text": " You see how that describes the data set, you'll find not very well." }, { "end": 360.34, "start": 355, "text": " And then you go on and you say, okay, maybe I'll make a small mutation, I'll mutate this" }, { "end": 362.11999999999995, "start": 360.34, "text": " to a minus and so on." }, { "end": 369.44, "start": 362.11999999999995, "text": " And if you do this with an entire population, as is common in these evolutionary methods," }, { "end": 373.2, "start": 369.44, "text": " you'll end up with something better at the end." }, { "end": 377.03999999999996, "start": 373.2, "text": " Now this works until a point." }, { "end": 382.91999999999996, "start": 377.03999999999996, "text": " So whenever the space of things to explore, like this one here, gets larger, and it doesn't" }, { "end": 389.08, "start": 382.91999999999996, "text": " have to be super large to already exhaust the capabilities of these methods." }, { "end": 394.46, "start": 389.08, "text": " So these methods are very limited in the space they can search and have proven not really" }, { "end": 399.02, "start": 394.46, "text": " effective so far for this type of problem." }, { "end": 402.4, "start": 399.02, "text": " This paper right here goes a different route." }, { "end": 407.76, "start": 402.4, "text": " It uses graph neural networks in order to describe the data set." }, { "end": 414.71999999999997, "start": 407.76, "text": " So in between this step of collecting a data set and making the equation, it fits another" }, { "end": 415.97999999999996, "start": 414.71999999999997, "text": " step." 
}, { "end": 422.24, "start": 415.98, "text": " So it says in between here, we fit another step and that other step is going to be we" }, { "end": 427.20000000000005, "start": 422.24, "text": " have a graph neural network and you don't know yet, you don't have to know yet what" }, { "end": 428.44, "start": 427.20000000000005, "text": " that exactly is." }, { "end": 429.44, "start": 428.44, "text": " But it's technique." }, { "end": 431.64000000000004, "start": 429.44, "text": " It's like a type of neural network." }, { "end": 436.6, "start": 431.64000000000004, "text": " And we're going to have that neural network learn the data set." }, { "end": 441.56, "start": 436.6, "text": " Now as you know from neural networks, they can't do symbolic regression, they can't give" }, { "end": 445.28000000000003, "start": 441.56, "text": " you an equation, they can simply predict numbers." }, { "end": 453, "start": 445.28, "text": " So what the network will do is it will simply predict like the motions or the accelerations," }, { "end": 458.15999999999997, "start": 453, "text": " whatever you're interested in, it will predict those things as numbers, not as equations" }, { "end": 463.88, "start": 458.15999999999997, "text": " as just you can plug in this situation right here, and it will tell you how the things" }, { "end": 465.53999999999996, "start": 463.88, "text": " will move." }, { "end": 468.91999999999996, "start": 465.53999999999996, "text": " Neural networks are pretty good at that." }, { "end": 474.88, "start": 468.91999999999996, "text": " And once you have a graph neural network that can describe the system in a numeric fashion," }, { "end": 480.12, "start": 474.88, "text": " then you parse out the equations from this graph neural network." }, { "end": 485.12, "start": 480.12, "text": " And we're going to go over why that is going to be much, much easier than parsing out the" }, { "end": 488, "start": 485.12, "text": " equations directly from the physical system." }, { "end": 493.44, "start": 488, "text": " It's going to be because you engineer the graph neural network in a way that makes it" }, { "end": 501.2, "start": 493.44, "text": " very congruent with physical reality that makes it very adapt to parse out equations" }, { "end": 505.47999999999996, "start": 501.2, "text": " like this that makes the job of this evolutionary method much easier." }, { "end": 510.2, "start": 505.47999999999996, "text": " All right, so that's basically the two-step process here." }, { "end": 516.08, "start": 510.2, "text": " First step is to numerically regress a neural network to describe the system, and then second" }, { "end": 521.4399999999999, "start": 516.08, "text": " step is going to be from that neural network parse out the equations." }, { "end": 524.46, "start": 521.4399999999999, "text": " So we have to talk about graph neural networks." }, { "end": 528.88, "start": 524.46, "text": " So here you see the entire process as they describe it." }, { "end": 535.2, "start": 528.88, "text": " So they have this data set right here of observations of these physical systems." }, { "end": 539.96, "start": 535.2, "text": " This is like, it's like any data set that you have in machine learning." 
}, { "end": 547.08, "start": 539.96, "text": " They predict the dynamics, which means in a numeric fashion with a graph neural network," }, { "end": 551.88, "start": 547.08, "text": " and then from the graph neural network, they extract the symbolic equation, as you can" }, { "end": 554.48, "start": 551.88, "text": " see right here." }, { "end": 561.6, "start": 554.48, "text": " And this here is going to be the equation that they figure out that was previously unknown." }, { "end": 566.12, "start": 561.6, "text": " They even say unknown dark matter over density equation." }, { "end": 567.44, "start": 566.12, "text": " Cool." }, { "end": 570, "start": 567.44, "text": " So we have to talk about graph neural networks." }, { "end": 572.46, "start": 570, "text": " We haven't really done this on this channel so far." }, { "end": 575.82, "start": 572.46, "text": " And I'm not like a big expert on graph neural networks." }, { "end": 579.16, "start": 575.82, "text": " But they come in all shapes and forms." }, { "end": 584.32, "start": 579.16, "text": " In this particular paper, they use what they call a type of interaction network that's" }, { "end": 585.84, "start": 584.32, "text": " called a graph network." }, { "end": 590.12, "start": 585.84, "text": " So graph network is something different than graph neural network." }, { "end": 593.7600000000001, "start": 590.12, "text": " I think graph network is a type of graph neural network." }, { "end": 599.32, "start": 593.7600000000001, "text": " And specifically here, they use a network that..." }, { "end": 604.0400000000001, "start": 599.32, "text": " So a graph neural network has these things called vertices, and then it has edges, and" }, { "end": 607.44, "start": 604.0400000000001, "text": " edges connect vertices, like in a graph." }, { "end": 613.2, "start": 607.44, "text": " Now we're going to build this graph neural network such that the number of vertices is" }, { "end": 617.0400000000001, "start": 613.2, "text": " exactly equal to the number of particles in our system." }, { "end": 623.0400000000001, "start": 617.0400000000001, "text": " So in this paper, they consider systems with, I believe, four or eight particles." }, { "end": 627.5200000000001, "start": 623.0400000000001, "text": " That's already a lot for if you want to derive equations and things." }, { "end": 632.86, "start": 627.5200000000001, "text": " But of course, the physical world is made of many more particles." }, { "end": 637.0200000000001, "start": 632.86, "text": " In any case, they consider four, let's say four particles right here." }, { "end": 640.6400000000001, "start": 637.0200000000001, "text": " So what they're going to do, they're going to build a graph neural network that has four" }, { "end": 645.04, "start": 640.64, "text": " vertices, one for each of the particles." }, { "end": 650.4399999999999, "start": 645.04, "text": " And in a graph neural network, every vertex can have properties." }, { "end": 655.72, "start": 650.4399999999999, "text": " So the properties of each vertex here are going to be the properties of that particle." }, { "end": 660.3199999999999, "start": 655.72, "text": " That means the x coordinate, for example, the y coordinate, and we're going to, let's" }, { "end": 663.16, "start": 660.3199999999999, "text": " say we're in two dimensions, right?" }, { "end": 664.84, "start": 663.16, "text": " It's a two dimensional problem." 
}, { "end": 671.2800000000001, "start": 664.84, "text": " The x coordinate, the y coordinate, the delta x, the delta y, the mass, the, I don't know" }, { "end": 673.36, "start": 671.2800000000001, "text": " what else can we put here." }, { "end": 676.6, "start": 673.36, "text": " There's a lot of stuff that we can put here, the charge, right?" }, { "end": 681.0400000000001, "start": 676.6, "text": " So all of these things are properties of the vertex." }, { "end": 684.44, "start": 681.0400000000001, "text": " Then the other component of a graph are, of course, the edges." }, { "end": 692.2, "start": 684.44, "text": " So the edges connect each two of all of the, so each edge connects two vertices like this." }, { "end": 698.32, "start": 692.2, "text": " And in this particular type of graph network, we're going to consider graphs where all the" }, { "end": 701.76, "start": 698.32, "text": " particles are connected to all the other particles like this." }, { "end": 709.0400000000001, "start": 701.76, "text": " So it's not like a sparse, it's not a sparse graph, except I think in the cosmology example" }, { "end": 715.1600000000001, "start": 709.0400000000001, "text": " here you can see that always there is a node that's connected to all its neighbors." }, { "end": 720.2, "start": 715.1600000000001, "text": " But in the Newtonian dynamics graph networks, you can see right here, everything is connected" }, { "end": 727.12, "start": 720.2, "text": " to everything, like this." }, { "end": 732.84, "start": 727.12, "text": " And why does that represent a physical system really well?" }, { "end": 737.76, "start": 732.84, "text": " So the reason is going to be the following." }, { "end": 743.48, "start": 737.76, "text": " What we're going to try to do is we're going to try to say that in physical systems, if" }, { "end": 750.16, "start": 743.48, "text": " I want to, for example, consider this node up here, and consider how it is pulled by" }, { "end": 756.48, "start": 750.16, "text": " gravity by the other nodes, it's going to be pulled in this direction a little bit because" }, { "end": 760.5600000000001, "start": 756.48, "text": " of this particle right here, it's going to be pulled in this direction a little bit because" }, { "end": 764.32, "start": 760.5600000000001, "text": " of that one, and in this direction because of that one." }, { "end": 768.2, "start": 764.32, "text": " Now note that these three things are independent." }, { "end": 775.0400000000001, "start": 768.2, "text": " So if I want to describe the total force of gravity, I can do so as a sum over i equals" }, { "end": 783.08, "start": 775.0400000000001, "text": " 1, 2, 1, 2, 3 of the force that the particle i pulls." }, { "end": 788.1600000000001, "start": 783.08, "text": " So if this is j right here, how i pulls on j, right?" }, { "end": 795.36, "start": 788.1600000000001, "text": " This is an independent sum across all of the neighbors of that particle." }, { "end": 800.8000000000001, "start": 795.36, "text": " Now you might say, wait a minute, it's not independent, because it's being, you know," }, { "end": 804.64, "start": 800.8000000000001, "text": " it's not being strictly pulled in this direction, it's also pulled in this direction." }, { "end": 814.08, "start": 804.64, "text": " Yes, but with independent, we mean that the force, this force right here, is only dependent" }, { "end": 819.0600000000001, "start": 814.08, "text": " on this particle, and the force diagonally is only dependent on that particle." 
}, { "end": 825.92, "start": 819.06, "text": " There is no part of the particle up here that modulates this force right here." }, { "end": 833.0999999999999, "start": 825.92, "text": " So you can calculate the total force as an independent sum across the individual forces." }, { "end": 835.1199999999999, "start": 833.0999999999999, "text": " And that's the simplification here." }, { "end": 841.52, "start": 835.1199999999999, "text": " And that's a part, they claim, why current approaches that directly try to go about finding" }, { "end": 848.16, "start": 841.52, "text": " an equation using evolutionary methods from the data set itself don't really work, because" }, { "end": 851.12, "start": 848.16, "text": " the space is just too high of equations." }, { "end": 856.1999999999999, "start": 851.12, "text": " But this right here, this is a massive constraint." }, { "end": 861.52, "start": 856.1999999999999, "text": " And it's lucky, first of all, that physical systems, they say, most physical systems actually" }, { "end": 863.28, "start": 861.52, "text": " obey that constraint." }, { "end": 869.6999999999999, "start": 863.28, "text": " Most physical systems can be described as an independent sum over contributions of interactions" }, { "end": 872.56, "start": 869.6999999999999, "text": " between just two things, right?" }, { "end": 878.16, "start": 872.56, "text": " So we simply can sum over interactions between two things." }, { "end": 883.0799999999999, "start": 878.16, "text": " And that's way simpler than considering everything at once." }, { "end": 888.8399999999999, "start": 883.0799999999999, "text": " And second of all, it's lucky because these graph networks describe exactly this." }, { "end": 895.0799999999999, "start": 888.8399999999999, "text": " So each edge in the graph network is coincidentally connecting two things, right?" }, { "end": 896.0799999999999, "start": 895.0799999999999, "text": " And not more." }, { "end": 898.64, "start": 896.0799999999999, "text": " So the edges, they don't know about each other." }, { "end": 900.5, "start": 898.64, "text": " No edge knows about the other edge." }, { "end": 906.52, "start": 900.5, "text": " They only consider whatever particles are at their respective ends." }, { "end": 911, "start": 906.52, "text": " And that is exactly the same as this physical constraint on the physical system." }, { "end": 918.56, "start": 911, "text": " And that's why the graph networks are so adapted or are so useful in describing these systems." }, { "end": 923.44, "start": 918.56, "text": " So how does a graph network like this do anything, basically?" }, { "end": 926.24, "start": 923.44, "text": " So for that, you have to consider the task." }, { "end": 933.32, "start": 926.24, "text": " If we want to describe a system like this, a task in that, if you frame it in a machine" }, { "end": 941.32, "start": 933.32, "text": " learning way, could be, I'm going to give you these particles right here." }, { "end": 943.5600000000001, "start": 941.32, "text": " OK, here it's five particles." }, { "end": 947.52, "start": 943.5600000000001, "text": " I'm going to give you, for each one, I'm going to give you all its features, like the x," }, { "end": 952.2, "start": 947.52, "text": " the y, the speed currently, and the mass." }, { "end": 957.9200000000001, "start": 952.2, "text": " And you're going to tell me what the acceleration is in the next frame." }, { "end": 959.0400000000001, "start": 957.9200000000001, "text": " OK?" 
}, { "end": 962, "start": 959.0400000000001, "text": " So like this, like this, like this." }, { "end": 967.7800000000001, "start": 962, "text": " OK, considering all the interactions between the particles, just tell me, where does it" }, { "end": 971, "start": 967.7800000000001, "text": " go in the very next time frame?" }, { "end": 973.48, "start": 971, "text": " That sounds like a machine learning problem, right?" }, { "end": 977, "start": 973.48, "text": " And the graph neural network can be made to predict this." }, { "end": 984.8, "start": 977, "text": " So what we want is, for each vertex here, an output of a number or a vector, the acceleration." }, { "end": 987.36, "start": 984.8, "text": " So we want to compute an output for each vertex." }, { "end": 988.48, "start": 987.36, "text": " How do we do this?" }, { "end": 992.84, "start": 988.48, "text": " In a graph neural network, there are three, or in this particular type, there are three" }, { "end": 993.84, "start": 992.84, "text": " steps." }, { "end": 999.7, "start": 993.84, "text": " We said each vertex, and we're just going to do it for one vertex, let's say the one" }, { "end": 1002.24, "start": 999.7, "text": " on the bottom right." }, { "end": 1008.16, "start": 1002.24, "text": " Let's say each vertex has these properties, like this x, y, and so on." }, { "end": 1012.88, "start": 1008.16, "text": " So first, what we do is we go over the edges." }, { "end": 1017.6800000000001, "start": 1012.88, "text": " So for each edge, in parallel and independent from each other, let's consider this edge" }, { "end": 1018.88, "start": 1017.6800000000001, "text": " right here." }, { "end": 1028.68, "start": 1018.88, "text": " What we'll do is we take the nodes that are attached to it, and we combine their features." }, { "end": 1033.0800000000002, "start": 1028.68, "text": " And we combine them, so x, y, this also has x, y." }, { "end": 1035.96, "start": 1033.0800000000002, "text": " So we want to combine these two." }, { "end": 1040.4, "start": 1035.96, "text": " We want to compute the edge right here." }, { "end": 1043.8, "start": 1040.4, "text": " Now, in a physical system, what does the edge represent?" }, { "end": 1048.72, "start": 1043.8, "text": " The edge represents the force between the two particles, right?" }, { "end": 1051.68, "start": 1048.72, "text": " And that's a fairly complex equation." }, { "end": 1055.16, "start": 1051.68, "text": " It's not like we can just add the features or something like this." }, { "end": 1063.28, "start": 1055.16, "text": " So the edge here already needs to compute some sort of nonlinear, complicated function." }, { "end": 1067.8400000000001, "start": 1063.28, "text": " And we know how to compute nonlinear, complicated functions with neural networks." }, { "end": 1069.4, "start": 1067.8400000000001, "text": " We're in deep learning right here." }, { "end": 1076.18, "start": 1069.4, "text": " So the edge here is going to compute what's called this edge function." }, { "end": 1081.26, "start": 1076.18, "text": " And this edge function takes in two vertices, v1 and v2 right here." }, { "end": 1083.72, "start": 1081.26, "text": " Maybe this is v2, this is v1." }, { "end": 1089.6000000000001, "start": 1083.72, "text": " It takes in the features, these features of the two vertices, and it will compute a so-called" }, { "end": 1090.96, "start": 1089.6000000000001, "text": " edge message." }, { "end": 1095.08, "start": 1090.96, "text": " I think they call this ek for the edge k." 
}, { "end": 1096.44, "start": 1095.08, "text": " It will compute an edge message." }, { "end": 1102.2, "start": 1096.44, "text": " And this is supposed to represent the force that pulls between these two particles." }, { "end": 1106.76, "start": 1102.2, "text": " And we're going to approximate this function right here using a neural network." }, { "end": 1108.6000000000001, "start": 1106.76, "text": " Since we don't know the equation yet, right?" }, { "end": 1115.12, "start": 1108.6, "text": " We assume we don't know the gravitational equation, but we can learn it, right?" }, { "end": 1116.6399999999999, "start": 1115.12, "text": " Because we have data." }, { "end": 1121.08, "start": 1116.6399999999999, "text": " So we take this and we simply make it into a neural network." }, { "end": 1123.08, "start": 1121.08, "text": " So the features go in here, both." }, { "end": 1125.04, "start": 1123.08, "text": " We can concatenate them." }, { "end": 1127.24, "start": 1125.04, "text": " And then out comes this edge message." }, { "end": 1133.28, "start": 1127.24, "text": " Now, this edge message here is simply going to be a vector, a numerical vector describing" }, { "end": 1135.76, "start": 1133.28, "text": " some intermediate hidden state, right?" }, { "end": 1140.72, "start": 1135.76, "text": " That is going to describe the force, but for now it's just describing intermediate hidden" }, { "end": 1141.72, "start": 1140.72, "text": " state." }, { "end": 1143.82, "start": 1141.72, "text": " OK, so we do this for each edge." }, { "end": 1150.04, "start": 1143.82, "text": " So each edge is going to be, maybe this is e1, this is e2, e3, e4." }, { "end": 1156.44, "start": 1150.04, "text": " Each edge in parallel is going to aggregate information of its endpoints into that edge." }, { "end": 1158.8, "start": 1156.44, "text": " And then that's step one." }, { "end": 1161.44, "start": 1158.8, "text": " So step one, compute the edge messages." }, { "end": 1170.3600000000001, "start": 1161.44, "text": " Step two is going to be to compute the vertex messages or the vertex outputs." }, { "end": 1173.4, "start": 1170.3600000000001, "text": " So we said we're not actually interested in the edges." }, { "end": 1178.6000000000001, "start": 1173.4, "text": " We're interested that each vertex ends up with an acceleration, with an output." }, { "end": 1179.92, "start": 1178.6000000000001, "text": " So how are we going to do this?" }, { "end": 1182.96, "start": 1179.92, "text": " So consider again our graph." }, { "end": 1188.96, "start": 1182.96, "text": " If we want to compute the output for this node right here, what we'll do is we'll simply" }, { "end": 1196.16, "start": 1188.96, "text": " aggregate all of the edges, all of the edge messages that connect to that vertex." }, { "end": 1202.96, "start": 1196.16, "text": " So we've computed previously the edge messages by integrating the information from all of" }, { "end": 1205.76, "start": 1202.96, "text": " the attached endpoints." }, { "end": 1212.72, "start": 1205.76, "text": " Now we're going to go backwards and distribute the information from the edges back to the" }, { "end": 1214.6000000000001, "start": 1212.72, "text": " vertices that are attached." }, { "end": 1219.6799999999998, "start": 1214.6, "text": " And you can see already by this two-step process, it's kind of a message passing process if" }, { "end": 1226.3999999999999, "start": 1219.6799999999998, "text": " you've ever studied graphical models." 
}, { "end": 1231.76, "start": 1226.3999999999999, "text": " This means that in the two-step process, this vertex here aggregates information from the" }, { "end": 1235.34, "start": 1231.76, "text": " other vertices, via these edges." }, { "end": 1243.36, "start": 1235.34, "text": " So in this case, this vertex here is going to take in all the edge messages right here," }, { "end": 1250.5, "start": 1243.36, "text": " and it is going to aggregate all these edge messages in a function that computes the acceleration." }, { "end": 1259.5, "start": 1250.5, "text": " So our estimate of the acceleration is going to be a function, let's call that nu, of the" }, { "end": 1261.3, "start": 1259.5, "text": " edges that are attached to it." }, { "end": 1265.04, "start": 1261.3, "text": " So e1, e2, and e3." }, { "end": 1270.84, "start": 1265.04, "text": " And here is where we're going to make our next physical assumption, namely the one we" }, { "end": 1278.56, "start": 1270.84, "text": " said before, that the way that these edges, the way that they influence the vertex, is" }, { "end": 1282.24, "start": 1278.56, "text": " going to be in a form of an independent sum." }, { "end": 1292.24, "start": 1282.24, "text": " So this simplification means that this function should somehow be not of the edges, but of" }, { "end": 1295.4399999999998, "start": 1292.24, "text": " the sum of the edges, right?" }, { "end": 1297.72, "start": 1295.4399999999998, "text": " Sum of ei." }, { "end": 1305.8, "start": 1297.72, "text": " Okay, so this sum here, this is the simplification that we make to make it in accordance with" }, { "end": 1307.64, "start": 1305.8, "text": " the physical system." }, { "end": 1312.1000000000001, "start": 1307.64, "text": " With this graph network, we could do any sort of complicated thing right here." }, { "end": 1317.98, "start": 1312.1000000000001, "text": " We could put a transformer on these things and compute 12 layers of interaction effects" }, { "end": 1319.64, "start": 1317.98, "text": " between these edges." }, { "end": 1320.64, "start": 1319.64, "text": " We're not going to do that." }, { "end": 1327.54, "start": 1320.64, "text": " We're simply going to sum them up and then come up and then run those through a function." }, { "end": 1329.28, "start": 1327.54, "text": " So we'll sum them up." }, { "end": 1334.2, "start": 1329.28, "text": " And of course, this function right here is still going to be a complex function because" }, { "end": 1341.24, "start": 1334.2, "text": " just because you sum up the forces, you don't have the acceleration yet." }, { "end": 1349.08, "start": 1341.24, "text": " So as you know that force is mass times acceleration, that means acceleration is equal to force" }, { "end": 1350.8, "start": 1349.08, "text": " divided by mass." }, { "end": 1356.08, "start": 1350.8, "text": " So this here is going to be this sum over the edges, I guess." }, { "end": 1357.2, "start": 1356.08, "text": " Yes." }, { "end": 1358.92, "start": 1357.2, "text": " So you still need to divide it by force." }, { "end": 1363.1200000000001, "start": 1358.92, "text": " And technically, you still can do much more complicated things right here." }, { "end": 1368.48, "start": 1363.1200000000001, "text": " We only say that the edges should only come in in form of a sum." 
}, { "end": 1376.6000000000001, "start": 1368.48, "text": " So of course, we're going to say that this function right here, since it can be any complicated" }, { "end": 1379.96, "start": 1376.6000000000001, "text": " function of its input, it should also be a neural network." }, { "end": 1384.04, "start": 1379.96, "text": " So we're going to take that sum of the edge messages and we're going to put that into" }, { "end": 1390.56, "start": 1384.04, "text": " a second neural network, and then out comes our estimate of the acceleration." }, { "end": 1398.2, "start": 1390.56, "text": " And now that we can use together from the data set, we know the true acceleration, right?" }, { "end": 1402.96, "start": 1398.2, "text": " Since we have a data set, we have the observations and the labels." }, { "end": 1409.04, "start": 1402.96, "text": " The labels are the true accelerations of that system that we observed." }, { "end": 1414.36, "start": 1409.04, "text": " And we can compute a loss function right here." }, { "end": 1419.04, "start": 1414.36, "text": " If you followed so far, everything we've done so far is differentiable." }, { "end": 1425.56, "start": 1419.04, "text": " So from this loss function that compares the output of the neural network for that vertex" }, { "end": 1431.72, "start": 1425.56, "text": " to the true acceleration that we observed in the data set, we can back propagate through" }, { "end": 1436.04, "start": 1431.72, "text": " this neural network that computes the vertex function." }, { "end": 1442.2, "start": 1436.04, "text": " We can back prop through the sum here to the edge messages, and we can back prop through" }, { "end": 1447.72, "start": 1442.2, "text": " the edge messages to that neural network that computed the edge messages from those features." }, { "end": 1449.84, "start": 1447.72, "text": " So everything is differentiable." }, { "end": 1456.48, "start": 1449.84, "text": " By having that loss at the end, we can train this neural network end to end to, from the" }, { "end": 1465.76, "start": 1456.48, "text": " observation right here, predict the numerical acceleration of the system right here." }, { "end": 1473.68, "start": 1465.76, "text": " It was a fairly lengthy way, but it's important that you kind of understand what's happening." }, { "end": 1477.68, "start": 1473.68, "text": " So you build the graph network according to the physical system." }, { "end": 1480.56, "start": 1477.68, "text": " In the graph network, there are two kinds of things." }, { "end": 1485.96, "start": 1480.56, "text": " First there are deterministic things, like we're always going to aggregate in a sum." }, { "end": 1487.96, "start": 1485.96, "text": " And then there are things that you learn." }, { "end": 1490.18, "start": 1487.96, "text": " Namely, there are two neural networks." }, { "end": 1495.08, "start": 1490.18, "text": " The first one computes the edge messages from the features of the vertices." }, { "end": 1503.3999999999999, "start": 1495.08, "text": " And the second one computes the output of each vertex according to the sum of the edge" }, { "end": 1506.52, "start": 1503.3999999999999, "text": " messages that are attached to that vertex." }, { "end": 1510.4399999999998, "start": 1506.52, "text": " Now you can say, wait a minute, there are more than just two neural networks." }, { "end": 1513.76, "start": 1510.4399999999998, "text": " Like each edge here has a neural network, technically, right?" 
}, { "end": 1517.8, "start": 1513.76, "text": " This edge has a neural network, this edge has a neural network, and each vertex has" }, { "end": 1519.24, "start": 1517.8, "text": " a neural network." }, { "end": 1522.4399999999998, "start": 1519.24, "text": " But in this case, these neural networks are shared." }, { "end": 1527.2, "start": 1522.44, "text": " So the neural network that computes the edge message for that edge is the same as the neural" }, { "end": 1531.4, "start": 1527.2, "text": " network that computes the edge message for any of the edges." }, { "end": 1535.6200000000001, "start": 1531.4, "text": " You can think of it like weight sharing, or you can think that it is actually the same" }, { "end": 1538.04, "start": 1535.6200000000001, "text": " neural network, it's equivalent." }, { "end": 1539.3600000000001, "start": 1538.04, "text": " And the same for the vertices." }, { "end": 1545.72, "start": 1539.3600000000001, "text": " There's only one neural network that in the same fashion computes the output for each" }, { "end": 1546.72, "start": 1545.72, "text": " vertex." }, { "end": 1550.16, "start": 1546.72, "text": " Of course, the incoming edge messages are going to be different, and that's why you" }, { "end": 1551.56, "start": 1550.16, "text": " have different outputs." }, { "end": 1556.1599999999999, "start": 1551.56, "text": " But the neural network itself is the same." }, { "end": 1565.44, "start": 1556.1599999999999, "text": " Okay, so we have a system that can describe this data set of physical observations really" }, { "end": 1566.44, "start": 1565.44, "text": " well." }, { "end": 1568.6399999999999, "start": 1566.44, "text": " It's this graph neural network." }, { "end": 1570.36, "start": 1568.6399999999999, "text": " So we train this end to end." }, { "end": 1578.52, "start": 1570.36, "text": " And here is a little bit of an analogy where they say, this is how you can analogize the" }, { "end": 1581.24, "start": 1578.52, "text": " neural network with a physical system." }, { "end": 1583.56, "start": 1581.24, "text": " So what are the analogies here?" }, { "end": 1590.36, "start": 1583.56, "text": " The nodes in the graph network correspond to the particles in Newtonian mechanics." }, { "end": 1594.48, "start": 1590.36, "text": " Pairs of nodes correspond to two interacting particles." }, { "end": 1599.68, "start": 1594.48, "text": " The edge model is the force between two particles." }, { "end": 1606.24, "start": 1599.68, "text": " Then the pooling operation, which is the summing up of the edge messages, right, that we found" }, { "end": 1608.44, "start": 1606.24, "text": " so important as a simplification." }, { "end": 1613.92, "start": 1608.44, "text": " This is the sum into the net force that is really given in the physical system." }, { "end": 1622.96, "start": 1613.92, "text": " So independent sum of, sorry, sum of independent forces without interaction effects." }, { "end": 1626.44, "start": 1622.96, "text": " Then concatenate with node, I guess this I left this out." }, { "end": 1637.1000000000001, "start": 1626.44, "text": " But whenever you compute, whenever you compute the vertex properties, right here, I guess," }, { "end": 1641.84, "start": 1637.1, "text": " what you want to do is not only input the edge messages, but you know, each vertex has" }, { "end": 1646.36, "start": 1641.84, "text": " these features that we said, and these could also be fairly important." 
}, { "end": 1651.9199999999998, "start": 1646.36, "text": " It's like you technically have that information in the edge messages because it started out" }, { "end": 1652.9199999999998, "start": 1651.9199999999998, "text": " from these." }, { "end": 1658.76, "start": 1652.9199999999998, "text": " But you can also just input that again into this neural network together with the edge" }, { "end": 1660.84, "start": 1658.76, "text": " properties." }, { "end": 1664.52, "start": 1660.84, "text": " And that will just make its job a bit easier since, for example, right here, we have to" }, { "end": 1667.62, "start": 1664.52, "text": " divide by the mass in this function." }, { "end": 1672.52, "start": 1667.62, "text": " And it's just easier if you provide that mass as a as the property." }, { "end": 1675.84, "start": 1672.52, "text": " So that's a little detail I left out before." }, { "end": 1681.36, "start": 1675.84, "text": " So that you concatenate the edge mess, the aggregated edge messages with the node, then" }, { "end": 1687.6399999999999, "start": 1681.36, "text": " you compute the node model, which in this case is the computation is simply the you" }, { "end": 1693.48, "start": 1687.6399999999999, "text": " take this sum right here, and you divide it by the mass." }, { "end": 1697.8, "start": 1693.48, "text": " And then optionally, you can update the nodes, which is compute the next time step, which" }, { "end": 1704.16, "start": 1697.8, "text": " we don't do right here, because we simply want to output the acceleration." }, { "end": 1711.04, "start": 1704.16, "text": " I guess I mean, it should be equivalent to output the next time step and then compare" }, { "end": 1714.28, "start": 1711.04, "text": " with the data set what the next time step was." }, { "end": 1717.16, "start": 1714.28, "text": " In any case, you have to have some kind of loss function." }, { "end": 1723.3, "start": 1717.16, "text": " And here you can see all the black squares right here are going to be neural networks." }, { "end": 1729.76, "start": 1723.3, "text": " So now we have learned a graph network that can describe a system." }, { "end": 1732.54, "start": 1729.76, "text": " How do we make this into an equation?" }, { "end": 1740.28, "start": 1732.54, "text": " And again, here, our our physical reality comes in that these of the like the independence" }, { "end": 1743.12, "start": 1740.28, "text": " assumptions of these realities comes in." }, { "end": 1749.56, "start": 1743.12, "text": " Because in physics, you know, the the acceleration here is going to be a function of the sum" }, { "end": 1750.56, "start": 1749.56, "text": " and so on." }, { "end": 1756.28, "start": 1750.56, "text": " So what we need to do is we don't need to develop an equation for the entire system," }, { "end": 1757.28, "start": 1756.28, "text": " right?" }, { "end": 1762, "start": 1757.28, "text": " What we need to do is simply we need to develop an equation for each vertex." }, { "end": 1768.08, "start": 1762, "text": " So each vertex, we need to have an equation acceleration equals something." }, { "end": 1776.74, "start": 1768.08, "text": " And that something should include some of the edges and then the edges again should" }, { "end": 1778.62, "start": 1776.74, "text": " be something right." 
}, { "end": 1785.32, "start": 1778.62, "text": " So we technically as we had two neural networks, we technically need two symbolic equations," }, { "end": 1790, "start": 1785.32, "text": " one that represents that first neural network that computes the edge functions and one that" }, { "end": 1796.32, "start": 1790, "text": " represents that second neural network that aggregates the sum of the edge functions or" }, { "end": 1800.4799999999998, "start": 1796.32, "text": " that computes the output from the sum of the edge functions." }, { "end": 1803.1399999999999, "start": 1800.4799999999998, "text": " And that you know, it's an exact correspondence." }, { "end": 1809.64, "start": 1803.14, "text": " So what we need to do is we need to take that first neural network up here and do symbolic" }, { "end": 1816.0800000000002, "start": 1809.64, "text": " regression on that and the second neural network do symbolic regression on that." }, { "end": 1819.22, "start": 1816.0800000000002, "text": " So what does it mean to do symbolic regression?" }, { "end": 1826.64, "start": 1819.22, "text": " It basically means that we want to find the symbolic equation that describes the neural" }, { "end": 1828.72, "start": 1826.64, "text": " network the best." }, { "end": 1832.5800000000002, "start": 1828.72, "text": " And we do that in the exact same fashion as we started right here." }, { "end": 1838.32, "start": 1832.58, "text": " So we give it a bunch of these options and then we let the system describe the neural" }, { "end": 1841.1599999999999, "start": 1838.32, "text": " network as best as possible." }, { "end": 1848.76, "start": 1841.1599999999999, "text": " The way we do that again is we try out equations and if they get a low error, right, so we" }, { "end": 1852.32, "start": 1848.76, "text": " let the neural network run on the data set and we let this run on the data set." }, { "end": 1856.28, "start": 1852.32, "text": " If it outputs the same thing, it describes the neural network well." }, { "end": 1859.3799999999999, "start": 1856.28, "text": " And we can iterate that until we find a good equation." }, { "end": 1863.3200000000002, "start": 1859.38, "text": " So the difference here is that we don't need to find an equation that governs the whole" }, { "end": 1864.3200000000002, "start": 1863.3200000000002, "text": " system." }, { "end": 1870.6000000000001, "start": 1864.3200000000002, "text": " We just need to find two equations, one governing the edge model and one governing the vertex" }, { "end": 1875.2800000000002, "start": 1870.6000000000001, "text": " model and that's way, way easier than the whole system." }, { "end": 1881.7600000000002, "start": 1875.2800000000002, "text": " And by finding those two equations, we and our given our physical assumptions, we can" }, { "end": 1887.1200000000001, "start": 1881.7600000000002, "text": " now find the equation to the whole system by simply composing them." }, { "end": 1890.12, "start": 1887.12, "text": " Alright, so that's the entire system." }, { "end": 1898.8, "start": 1890.12, "text": " I believe I've told you the entire paper right here without actually going into the paper." }, { "end": 1906.3999999999999, "start": 1898.8, "text": " Let's just skim the paper a bit to see that they actually tell us the same thing." 
}, { "end": 1912.6799999999998, "start": 1906.3999999999999, "text": " So, yeah, so the graph networks, they say, are ideal candidate for our approach due to" }, { "end": 1916.2399999999998, "start": 1912.6799999999998, "text": " their inductive biases shared by many physics problems." }, { "end": 1919.4, "start": 1916.24, "text": " A, they're equivalent under particle permutations." }, { "end": 1923.52, "start": 1919.4, "text": " B, they are differentiable end to end and can be trained efficiently using gradient" }, { "end": 1924.6, "start": 1923.52, "text": " descent." }, { "end": 1930.2, "start": 1924.6, "text": " And C, they make use of three separate and interpretable internal functions, the edge," }, { "end": 1932.36, "start": 1930.2, "text": " the node and the global model." }, { "end": 1937.32, "start": 1932.36, "text": " Now the global model here isn't really used in the cases we're going to look at." }, { "end": 1941.48, "start": 1937.32, "text": " So it's just going to be two different neural networks." }, { "end": 1944.34, "start": 1941.48, "text": " Which are targets for the symbolic regression?" }, { "end": 1950.56, "start": 1944.34, "text": " Graph networks can also be embedded with additional symmetries, as in 23, 24, but we don't implement" }, { "end": 1951.56, "start": 1950.56, "text": " these." }, { "end": 1953.9599999999998, "start": 1951.56, "text": " Okay, and then they say symbolic regression." }, { "end": 1958.76, "start": 1953.9599999999998, "text": " So they use this Eureka package to perform symbolic regressions and fit compact closed" }, { "end": 1963.6799999999998, "start": 1958.76, "text": " form analytical expressions to these neural networks." }, { "end": 1969.32, "start": 1963.6799999999998, "text": " Eureka works by using a genetic algorithm to combine algebraic expressions stochastically." }, { "end": 1974, "start": 1969.32, "text": " The technique is analogous to natural selection, where the fitness of each expression is defined" }, { "end": 1976.36, "start": 1974, "text": " in terms of simplicity and accuracy." }, { "end": 1982, "start": 1976.36, "text": " The operations considered in the fitting process are plus, minus, times, if, as well as real" }, { "end": 1983.72, "start": 1982, "text": " constants." }, { "end": 1991.92, "start": 1983.72, "text": " Alright, so if we look at the examples, they have three different examples." }, { "end": 1997.32, "start": 1991.92, "text": " First of all, they have Newtonian dynamics, which is, for example, this gravitational" }, { "end": 1999.24, "start": 1997.32, "text": " force we looked at." }, { "end": 2005.1200000000001, "start": 1999.24, "text": " They have Hamiltonian dynamics, which describes the same systems, but in a different way in" }, { "end": 2006.64, "start": 2005.1200000000001, "text": " terms of the Hamiltonian." }, { "end": 2012.1200000000001, "start": 2006.64, "text": " And I don't want to go into this too much, because I think that the Newtonian dynamics" }, { "end": 2015.88, "start": 2012.1200000000001, "text": " already demonstrate really well what the system can do." }, { "end": 2022.44, "start": 2015.88, "text": " And then they have dark matter halos for cosmology, which is a problem where you have universe" }, { "end": 2027.64, "start": 2022.44, "text": " simulators and you try to predict where the dark matter is, depending on where other dark" }, { "end": 2032.8000000000002, "start": 2027.64, "text": " matter is, and that's where they find a new unknown equation." 
}, { "end": 2039.5200000000002, "start": 2032.8000000000002, "text": " Okay, here is the system in a nutshell." }, { "end": 2041.16, "start": 2039.5200000000002, "text": " This is the path that you know." }, { "end": 2047.48, "start": 2041.16, "text": " You have the data set, you learn a graph network, and then you get out an equation." }, { "end": 2056.6, "start": 2047.48, "text": " But in between, you can put even more constraints to make the network really learn a physical" }, { "end": 2057.6, "start": 2056.6, "text": " equation." }, { "end": 2062.3199999999997, "start": 2057.6, "text": " So, as I said, you're going to compute these edge functions right here." }, { "end": 2068, "start": 2062.3199999999997, "text": " And the output of the edge functions is going to be this edge message, which is just going" }, { "end": 2071.2, "start": 2068, "text": " to be a vector of some sort." }, { "end": 2073.2, "start": 2071.2, "text": " And that vector can be pretty large." }, { "end": 2076.48, "start": 2073.2, "text": " You know, this is a hidden dimension that you can choose as an implementer." }, { "end": 2081.68, "start": 2076.48, "text": " All you need to make sure is that the output of the vertex is the same dimension as, you" }, { "end": 2083.52, "start": 2081.68, "text": " know, what your output should be." }, { "end": 2085.96, "start": 2083.52, "text": " Everything internal, you can choose." }, { "end": 2096, "start": 2085.96, "text": " Now, we know that, for example, in a 2D system, the actual informational content of that edge" }, { "end": 2098.64, "start": 2096, "text": " message should be two dimensional, right?" }, { "end": 2106.76, "start": 2098.64, "text": " If this really describes the force in two dimensions, it should be two dimensional." }, { "end": 2111.36, "start": 2106.76, "text": " There's really no reason why it should have a higher dimension since all the relevant" }, { "end": 2114.28, "start": 2111.36, "text": " information can be described in two dimensions." }, { "end": 2119.8, "start": 2114.28, "text": " So one thing you can do is you can simply say, all right, I will choose the hidden dimension" }, { "end": 2121.36, "start": 2119.8, "text": " to be two." }, { "end": 2127.88, "start": 2121.36, "text": " And therefore, I will force my neural network to just use two dimensions." }, { "end": 2131.1600000000003, "start": 2127.88, "text": " This however, they noticed doesn't work super well." }, { "end": 2133.1600000000003, "start": 2131.1600000000003, "text": " I think it works, but not that well." }, { "end": 2135.7200000000003, "start": 2133.1600000000003, "text": " They call this the bottleneck model." }, { "end": 2141.2000000000003, "start": 2135.7200000000003, "text": " And the reason why it doesn't work super well is that if you have like this constraint of" }, { "end": 2147.3999999999996, "start": 2141.2, "text": " neural networks, they don't tend to learn very well." }, { "end": 2149.3999999999996, "start": 2147.3999999999996, "text": " And that's what they hypothesize in the paper as well." }, { "end": 2154.7599999999998, "start": 2149.3999999999996, "text": " They don't tend to really come, you know, be good friends with the fact that they only" }, { "end": 2158.8799999999997, "start": 2154.7599999999998, "text": " have two floating point numbers to learn anything." }, { "end": 2164.4399999999996, "start": 2158.8799999999997, "text": " And this is probably more a property of the optimization procedure than the problem itself." 
}, { "end": 2169.7999999999997, "start": 2164.4399999999996, "text": " It's property of, you know, us training neural networks with SGD." }, { "end": 2177.1200000000003, "start": 2169.8, "text": " So what they do instead is they put an L1 penalty on these edge messages." }, { "end": 2180.0800000000004, "start": 2177.1200000000003, "text": " So they say we apply L1 regularization." }, { "end": 2184.88, "start": 2180.0800000000004, "text": " And what that will do is that will induce sparsity in whatever you apply it to." }, { "end": 2189, "start": 2184.88, "text": " So L1 regularization simply means that you constrain." }, { "end": 2194.7200000000003, "start": 2189, "text": " So the edge message, if you take the absolute value in each entry and the sum of that, that" }, { "end": 2196.2000000000003, "start": 2194.7200000000003, "text": " should be small." }, { "end": 2201.52, "start": 2196.2, "text": " So you can just add this to the loss function, and that will induce sparsity in these edge" }, { "end": 2203.22, "start": 2201.52, "text": " messages." }, { "end": 2209.7999999999997, "start": 2203.22, "text": " And so now the network still has these whatever 100 latent dimensions, but it is encouraged" }, { "end": 2212.66, "start": 2209.7999999999997, "text": " to use as few as possible." }, { "end": 2218.72, "start": 2212.66, "text": " That means it can use a lot during the beginning when it's really benefits from the lot of" }, { "end": 2221.06, "start": 2218.72, "text": " dimensions when it learns the system." }, { "end": 2227.16, "start": 2221.06, "text": " But then as it gets better and better, it might shift a lot of the information into" }, { "end": 2230.44, "start": 2227.16, "text": " very, very few dimensions." }, { "end": 2237.04, "start": 2230.44, "text": " Okay, so once we do, if we do that, we can then run a check, right?" }, { "end": 2244.08, "start": 2237.04, "text": " If it is really the case that this graph network has learned the physical dynamics of the system," }, { "end": 2252.3199999999997, "start": 2244.08, "text": " then we can simply look at the top two dimensions, and we start by largest standard deviation." }, { "end": 2258.9, "start": 2252.3199999999997, "text": " So whichever two dimensions are the least sparse, have the largest standard deviation," }, { "end": 2263.16, "start": 2258.9, "text": " we can look at those two and we say, well, even though we didn't constrain the model," }, { "end": 2267.56, "start": 2263.16, "text": " those two should describe our force pretty well." }, { "end": 2272.4, "start": 2267.56, "text": " And since in Newtonian dynamics, we know what the force is, so this is we know what the" }, { "end": 2278.6800000000003, "start": 2272.4, "text": " force is, we can simply check whether or not that holds, we can check whether we can read" }, { "end": 2282.28, "start": 2278.6800000000003, "text": " out the force from these two components." 
}, { "end": 2290.64, "start": 2282.28, "text": " And here it's made such that you can't guarantee that the force is, you know, this force right" }, { "end": 2298.08, "start": 2290.64, "text": " here is actually so there are many ways to state a physical equation, because there are" }, { "end": 2304.52, "start": 2298.08, "text": " many symmetries in physics, and we cannot really make the neural network describe the" }, { "end": 2310.7, "start": 2304.52, "text": " equation exactly as humans would, because there are infinite amount of equivalent formulations," }, { "end": 2314.8199999999997, "start": 2310.7, "text": " but in this case, they're all covered by rotations of each other." }, { "end": 2321.7599999999998, "start": 2314.8199999999997, "text": " And that means in these graphs, if you have these message elements right here, and the" }, { "end": 2327.7999999999997, "start": 2321.7599999999998, "text": " linear combination of forces right here, a linear relationship means basically that the" }, { "end": 2332.6400000000003, "start": 2327.8, "text": " information is there, whereas a nonlinear relationship would mean that these numbers" }, { "end": 2335.2000000000003, "start": 2332.6400000000003, "text": " don't really encode the force as is." }, { "end": 2339.44, "start": 2335.2000000000003, "text": " And here you can pretty clearly see that the linear relationship is given." }, { "end": 2346.52, "start": 2339.44, "text": " And that means that these first two dimensions right here really encode the force in the" }, { "end": 2350.78, "start": 2346.52, "text": " way that we know the equation is." }, { "end": 2352.88, "start": 2350.78, "text": " So that's when we know the equation, right?" }, { "end": 2356.6800000000003, "start": 2352.88, "text": " When we know the equation, we can simply say, okay, does this fit?" }, { "end": 2360.9199999999996, "start": 2356.68, "text": " And when we don't know the equation, we can use this symbolic regression." }, { "end": 2366, "start": 2360.9199999999996, "text": " And what turns out is exactly this thing right here." }, { "end": 2373.62, "start": 2366, "text": " Now you might you might object that this isn't really that force right here." }, { "end": 2376.64, "start": 2373.62, "text": " But as I said, there are many, many symmetries." }, { "end": 2385.3999999999996, "start": 2376.64, "text": " So for example, this, this R hat right here, I believe, and this is I've I'm not a big" }, { "end": 2392.88, "start": 2385.4, "text": " physics person, this R hat, I think this is the vector of the delta x delta y, right?" }, { "end": 2395.84, "start": 2392.88, "text": " So delta x delta y is in this R hat." }, { "end": 2404, "start": 2395.84, "text": " So we already see that delta x and delta y here, this already looks like some sort of" }, { "end": 2405.8, "start": 2404, "text": " this already looks okay." }, { "end": 2410.9, "start": 2405.8, "text": " No, actually, if we go down, it gets even clearer." }, { "end": 2414.56, "start": 2410.9, "text": " So here they have the outputs of that." }, { "end": 2423.96, "start": 2414.56, "text": " Alright, so in this first case, this is the same example right here." 
}, { "end": 2429.38, "start": 2423.96, "text": " So they say you in this spring example, so this is a system where the particles are connected" }, { "end": 2434.64, "start": 2429.38, "text": " by springs, and we do l one regularization, what we expect is this equation, this is we" }, { "end": 2437.96, "start": 2434.64, "text": " know that this equation holds in this spring system." }, { "end": 2443.88, "start": 2437.96, "text": " And what the neural network combined with the symbolic regression gives us is this equation." }, { "end": 2449.96, "start": 2443.88, "text": " So right here, you can see there's this delta vector, and it's a product, it's an inner" }, { "end": 2455.1800000000003, "start": 2449.96, "text": " product dot product with this a, which is a numerical constants." }, { "end": 2462.48, "start": 2455.1800000000003, "text": " And you can see that there is this form of product with numerical constants." }, { "end": 2468.6400000000003, "start": 2462.48, "text": " What you can also see, so for example, here, the delta y here is 1.36 and 1.37." }, { "end": 2472.32, "start": 2468.6400000000003, "text": " That's, you know, the same number and here it's point 6.6." }, { "end": 2479.7200000000003, "start": 2472.32, "text": " Okay, but here you see, for example, r minus one, and here it's something like this minus" }, { "end": 2482.9, "start": 2479.7200000000003, "text": " something divided by r doesn't seem the same." }, { "end": 2491.6400000000003, "start": 2482.9, "text": " But again, due to the due to the symmetries, you can, if you take this and you simply divide" }, { "end": 2502.1200000000003, "start": 2491.6400000000003, "text": " everything by r, you'll end up with this vector right here, a times delta x, delta y, times" }, { "end": 2509.24, "start": 2502.12, "text": " one over r, no, times one minus one over r plus b." }, { "end": 2516.04, "start": 2509.24, "text": " Right, and now you can see it already looks very much similar." }, { "end": 2521.24, "start": 2516.04, "text": " And it's only off by like, it's only a transformation away from what you want." }, { "end": 2526, "start": 2521.24, "text": " So that's why I said you can describe these equations in many different sort of equivalent" }, { "end": 2527, "start": 2526, "text": " ways." }, { "end": 2532.44, "start": 2527, "text": " And ask the neural network to really figure out, you know, the exact one we want." }, { "end": 2537.24, "start": 2532.44, "text": " As long as it figures out a one that is equivalent, we're happy." }, { "end": 2541.06, "start": 2537.24, "text": " And we're, I guess we're pretty happy here." }, { "end": 2547.72, "start": 2541.06, "text": " So also in this case right here, you can see that it correctly predicts this relationship" }, { "end": 2553.4, "start": 2547.72, "text": " that it should be divided by r to the third power." }, { "end": 2560.1600000000003, "start": 2553.4, "text": " And there is a delta x, delta y, delta z, if you simply consider, so delta z here, I" }, { "end": 2565.96, "start": 2560.1600000000003, "text": " guess is, has simply a factor of zero." }, { "end": 2572.4, "start": 2565.96, "text": " And it even has this discontinuous problem where the force breaks after a certain while," }, { "end": 2577.12, "start": 2572.4, "text": " it can even parse out this if condition right here." }, { "end": 2580.44, "start": 2577.12, "text": " So that's, that's fairly cool, right?" 
}, { "end": 2587.4, "start": 2580.44, "text": " But to me that is pretty, pretty cool result that you can actually parse out these equations" }, { "end": 2591.44, "start": 2587.4, "text": " with just these graph networks and then the symbolic regression." }, { "end": 2598.3, "start": 2591.44, "text": " So they do the same thing for this cosmology example, where they have these simulators" }, { "end": 2605.38, "start": 2598.3, "text": " of the universe and they let them run and these kind of distribute this dark matter." }, { "end": 2611.6800000000003, "start": 2605.38, "text": " And I guess your task is, if I give you a bunch of these points right here, tell me" }, { "end": 2614.6800000000003, "start": 2611.6800000000003, "text": " where the other dark matter is, something like that." }, { "end": 2619.84, "start": 2614.6800000000003, "text": " I don't understand this, but in essence, it is the same kind of problem, right?" }, { "end": 2626.48, "start": 2619.84, "text": " You want to figure out the dark matter properties from the surrounding dark matter or properties" }, { "end": 2627.98, "start": 2626.48, "text": " of other things." }, { "end": 2633.76, "start": 2627.98, "text": " And again, here you can see pretty well that this is the equation they get out." }, { "end": 2640.0200000000004, "start": 2633.76, "text": " So the equation they get out is going to be a sum right here over, so here the output" }, { "end": 2650.1600000000003, "start": 2640.0200000000004, "text": " for node i is going to be a sum over all the other nodes j and then some function of that" }, { "end": 2651.5600000000004, "start": 2650.1600000000003, "text": " sum." }, { "end": 2657.36, "start": 2651.5600000000004, "text": " So this right here is the equation that came out of our edge model, of our edge neural" }, { "end": 2658.36, "start": 2657.36, "text": " network." }, { "end": 2666.1600000000003, "start": 2658.36, "text": " And this here that includes this one, it was the equation that came out of our vertex model." }, { "end": 2670.36, "start": 2666.1600000000003, "text": " As you know, the same here in this spring law, this came out of our edge model, this" }, { "end": 2672.92, "start": 2670.36, "text": " came out of our vertex model." }, { "end": 2679.4, "start": 2672.92, "text": " Again, this rests on the fact that physical systems can actually be described often as" }, { "end": 2682, "start": 2679.4, "text": " these sums of independent interactions." }, { "end": 2684.5, "start": 2682, "text": " And that's why all of this works." }, { "end": 2689.84, "start": 2684.5, "text": " So they do give very, very detailed instructions on how they did everything." }, { "end": 2695.52, "start": 2689.84, "text": " I think the most unclear things in this paper are the physics things that are assumed sort" }, { "end": 2697.44, "start": 2695.52, "text": " of that you know." }, { "end": 2699.36, "start": 2697.44, "text": " I don't, I didn't." }, { "end": 2703.08, "start": 2699.36, "text": " Yeah, but other than that, it's pretty straightforward." }, { "end": 2707.8, "start": 2703.08, "text": " Their appendix is also pretty detailed in how they do all the representations and so" }, { "end": 2708.8, "start": 2707.8, "text": " on." }, { "end": 2711.52, "start": 2708.8, "text": " They have different formulations other than this L1 regularization." }, { "end": 2714.86, "start": 2711.52, "text": " As I said, they have bottleneck, they have like a KL formulation." 
}, { "end": 2719.7599999999998, "start": 2714.86, "text": " They really describe how the graph neural network works here and so on." }, { "end": 2722.74, "start": 2719.7599999999998, "text": " So all in all, I enjoyed reading this paper." }, { "end": 2726.24, "start": 2722.74, "text": " Here is a bunch of examples of these particle systems." }, { "end": 2733.56, "start": 2726.24, "text": " And yeah, and here is a bunch of examples of where you'd have a linear relationship" }, { "end": 2739.7599999999998, "start": 2733.56, "text": " that where you can say, oh, look, this really describes that force or a nonlinear relationship" }, { "end": 2744.76, "start": 2739.76, "text": " where you can make the claim this doesn't really describe the force well, because it's" }, { "end": 2750.6000000000004, "start": 2744.76, "text": " not linear relationship indicates that what the network found is a rotation of what you" }, { "end": 2751.6000000000004, "start": 2750.6000000000004, "text": " really want." }, { "end": 2757.5600000000004, "start": 2751.6000000000004, "text": " And that's good because it's equivalent nonlinear basically means that you can't really it doesn't" }, { "end": 2761.0400000000004, "start": 2757.5600000000004, "text": " really describe what you want really well." }, { "end": 2764, "start": 2761.0400000000004, "text": " Yeah, and I'm going to leave you with that." }, { "end": 2770.04, "start": 2764, "text": " I absolutely invite you to check out the code and the video they made about it and I'll" }, { "end": 2771.04, "start": 2770.04, "text": " see you next time." }, { "end": 2795.12, "start": 2771.04, "text": " Bye bye." } ]
Uumd2zOOz60
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
How I Read a Paper: Facebook's DETR (Video Tutorial)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "ml", "reading", "papers", "understanding", "quickly", "quick", "fast", "ultralearning", "research", "facebook", "detr", "object detection", "transformers", "how to" ]
I retrace my first reading of Facebook AI's DETR paper and explain my process of understanding it. OUTLINE: 0:00 - Introduction 1:25 - Title 4:10 - Authors 5:55 - Affiliation 7:40 - Abstract 13:50 - Pictures 20:30 - Introduction 22:00 - Related Work 24:00 - Model 30:00 - Experiments 41:50 - Conclusions & Abstract 42:40 - Final Remarks Original Video about DETR: https://youtu.be/T35ba_VXkMY Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://discord.gg/4H8xxDF BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Hi there people! So a lot of you have asked me how I read papers, and honestly I don't think there is any super special method to it, but you know, people have asked me to make a video on it, so I'll make a video on it and I'll try to share my method of reading papers. And hopefully this is going to be somewhat of a mini series, or a series where I every now and then discuss how I read one of the papers that I make videos about, and I'll try to select them such that different things are highlighted. Now I've selected this one right here really for no particular reason other than I sort of remembered it, and I'm going to try to go with you through how I read this and how I encountered this, and kind of try to honestly share what I thought the first time when I read it, and I hope this helps some of you. If it does help you and if you like content like this, of course feel free to share this out and subscribe. If you haven't seen my original video on this paper, it might be worth going to watch it. I'll link it, and with that let's dive in. So again, this might be not really something new, but I'll just go through it. So first thing I do is of course read the title. So the title has three parts: end to end object detection with transformers. So what I notice that I do myself is, people say, like, read the paper with an open mind. I don't do that. I almost immediately form an opinion and a hypothesis of what's going on. So I see transformers, so I know what transformers are. If you don't, I've made a video, I've made lots of videos on transformers. Attention Is All You Need is the base paper for that. So I know what a transformer is. And I know that transformers are usually in NLP. They are usually used in NLP. There are other things with transformers, but it's usually an NLP model. Then I read object detection, and I know object detection is a computer vision task. So immediately this here is sort of a difference, and I immediately try to assess what's the new idea in this paper. And in this case it might be applying transformers to object detection, but then I also see end to end. And the only reason to put that in a title is because that's the novelty. Because usually in deep learning we're sort of used to systems being end to end, and even if a system is end to end, a lot of people don't put that in the title. It's like end to end image classification on ImageNet. Thanks. So I was guessing that the reason they put end to end into the title was because that's actually something that's special about the model. So now I have like two competing hypotheses of why this paper matters. First of all because it does it with transformers, and second because it does it end to end. And of course the truth is that the combination, end to end, transformers, all of that, is what makes this model. And I already form like a hypothesis of whether I like this or not. I have to be honest, I have very quick judgment of papers, of whether I like them or not, and then I sort of catch myself each time and I still try to... So for most papers actually that I have sort of a negative opinion of at the beginning, where I... Well, negative. There are papers where I think like there is no way this is going to, you know, work or something like this. I'm actually positively convinced throughout the paper. So for most papers that I read, I'm trying to find the positive things in there. But I do form an opinion pretty quickly usually. Alright, so the second thing. This part right here I don't even see.
This is like advertisements on Twitter. I have always had issues with author names. People will come to me and be like, oh, have you seen the new Vinyals paper? And I have no clue. And then when they say, like, oh, that's where they use this character level model to do that, I'm like, oh, that paper. So I do not care who the authors are of a paper. I can't remember the papers by their author names. I've gotten better at it, I have to say, but I've always had trouble with this. Now that's not to say that a name doesn't pop out to me. If this would be like Yoshua Bengio or someone really famous, then of course that would catch my eye. But I also know that, you know, Yoshua Bengio's lab is huge. So just because a big name is on the paper doesn't mean that the paper is going to be of any good or bad quality. Sometimes the authors give you an indication of what kind of research is there. Like if you see Jeff Clune or Kenneth O. Stanley, you know that there's going to be this certain type of learning to explore and kind of a bit more out-of-the-box thinking in their papers, which I really like. But it doesn't immediately give you a clue. Maybe if you go by first authors, it's much more indicative if you have already read some of their papers. But most often I just ignore authors and go on. The affiliation sometimes matters, in that it's a bit of a vicious cycle. If there's a big name affiliation like Facebook AI, Google AI and so on, these papers also get more exposure in the press and so on. So whenever Google publishes a paper, all of these pop-sci magazines like The Verge and Lifehacker and Hacker News and whatnot, they like write a blurb about it. So often they get much more scrutinized for these papers. They get much more public attention, but they also get much more scrutiny, which in turn means that there is a bit more pressure on them to do good experiments. So that biases me a little bit into the direction of believing their experimental evidence more. Now usually this is also backed up by the fact that I am actually convinced by their experiments. Usually these big name papers, often I find myself, even without or disregarding the affiliation, to be convinced more than of regular papers. My most frequent issue with papers is that I don't believe the experiments. I make no distinction. Even if it's Facebook, my prior is the experiments are crap and I don't believe them, and they have to convince me of the opposite. But I can't say that it doesn't affect me, that it's like a big-name affiliation. Okay, so then the second thing is, sometimes I see the paper on arXiv and I skim the abstract. Sometimes the abstract is informative and sometimes not. So here it's like blah blah blah. A new method that views object detection as a direct set prediction problem. I'm like, oh yeah, okay. It streamlines the detection, effectively removing the need for many hand-designed components like non-maximum suppression, yada yada yada. The main ingredients, called detection transformer: a set-based global loss that forces unique prediction via bipartite matching, and the transformer encoder decoder architecture. So they make it clear here why it matters, and that's what I want to get at, sort of what's the new thing in this paper. Most papers, even though they're all very long and have lots of math and so on, they often have like one or maybe two new core things that they really tell you. Sometimes zero.
But a lot of times it's like one thing that they really do, and you sort of have to... But they're trying to cloak it often, because they need to make their research as impactful as possible, right? But you need to sort of figure out what it is they're doing. Here they make it fairly easy for us, in that they say, okay, they remove the need for many hand-designed components like non-maximum suppression, which tells me that they are building something that's easier than what came before them. And that already tells me it's not necessarily going to be better. Their argument is more that it's going to be easier, right? There are sort of two kinds of experimental results: the ones where you try to beat what came before you, and the ones where you're trying to say, look, our thing works just as well as this other thing while being more advantageous in some other metric. So I would place this already in the sort of second category. And then they say what are the actual ingredients? It's a set-based global loss that forces unique predictions via bipartite matching. Now, at this point I know what these terms mean, but at this point I actually don't have to know what the terms mean. What I need to recognize is that I simply have to go later and figure out what that is. And a transformer-based encoder decoder architecture, okay? So there are two things right here that I remember I need to pay attention to later. There's this loss, which seems to be special, and there is the transformer architecture, which they say, okay, the model basically consists of those two things. And then they have a short description of what it does. Given a fixed small set of learned object queries, DETR reasons about the relations of the objects and the global image context to directly output the final set of predictions in parallel. That almost tells me nothing. Yeah, okay, the model reasons. Maybe this in parallel is something, but... The model is conceptually simple and does not require a specialized library, unlike many other modern detectors. This sort of repeats, this reinforces my hypothesis that they're going with the "hey, this is a much easier way of doing things" approach. DETR demonstrates accuracy and runtime performance on par with well-established... That further confirms my hypothesis that this is on par, right? The runtime performance on par with the current state of the art. And at the end they say, moreover, DETR can easily be generalized to produce panoptic segmentation in a unified manner. We show that it significantly outperforms competitive baselines. Training code and pretrained models are available. Okay. Now this last part, when I first read it, is like, okay, it can easily be generalized to produce this panoptic segmentation. I didn't know yet whether this is like a central claim of their paper, that it can do this segmentation, or whether this is like an added benefit to their paper. Because you can read it in both ways, and I'm just ready to find this out in the paper. Now, after I've read the abstract, I've sort of already formed the hypothesis of what's going on. So here, in my mind, I already sort of have a model of how would I do that, right? How would I do that? And then what would I do? So right now what I might be thinking is, if I have a transformer over images that directly outputs the predictions in parallel, I'm imagining like an image, and the image somehow needs to go into a transformer. So maybe there's like an encoder, like a CNN encoder that gives me image features.
And then, so maybe you sample this down, this image. This is just me hypothesizing what could be going on, right? And then I might be unrolling that, right? This image into a vector of these lower pixels. And then, so in my mind, what I would do right here without knowing anything more would be to do something like BERT span prediction. So I would have BERT right here, and so I would input the sequence right here, and then to detect an object, I would sort of think that maybe the BERT, you know, BERT has an output that is the same length as the input, right? So it's very good at sequence tagging and things like this. So maybe how it detects an object is going to be that it sort of like tags the center location in the pixel of an object right here, or it tags somehow the corners of the bounding box. But then I don't know how this is going to be in parallel. Maybe BERT outputs like a score for each location and then you do some kind of matching right here. So this is my initial hypothesis of what's going on. And then I scroll through, and honestly the first thing I do is I go and find the pictures. And no different at all, like, since the first book you read, that's what you do. I go and find the pictures, because usually if someone proposes anything new, they're gonna try to make a picture of it. Luckily I don't do like super theoretical, whatnot, you know, Bayesian generalization bounds, I don't know. So most often papers I read have some sort of picture, and that's very helpful to me. I know, I know, but yeah. So I find this picture, and here I see, okay, you have image, you have CNN, okay, gives you set of image features, so, so far so good. Then transformer encoder decoder, then set of box predictions, so all of them come out here, and I already read they're in parallel, and then bipartite matching loss. So here, I can see they color these in different ways, and these colors appear to match with these colors right here, right, the green here, and these, they also, this is a very good graphic, right. From this I can already read that these here go to the no object. A lot of times the graphics aren't very good. So, this is not saying that in every paper you can learn by looking at the graphics. Like, sometimes the graphics are terrible and you're like, what's going on here? I don't, this makes no sense.
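The "some kind of matching" hypothesized here is essentially what DETR's bipartite matching does. As a minimal sketch of the idea (assuming a plain L1 box cost; DETR's real matching cost also mixes in class probabilities and a generalized IoU term), scipy's linear_sum_assignment is the Hungarian step:

import numpy as np
from scipy.optimize import linear_sum_assignment

def match(pred_boxes, gt_boxes):
    # pred_boxes: [N, 4] predicted boxes; gt_boxes: [M, 4] ground truth, M <= N.
    # Each ground-truth box is claimed by exactly one prediction, so two
    # predictions can never be trained toward the same object.
    cost = np.abs(pred_boxes[:, None, :] - gt_boxes[None, :, :]).sum(-1)  # [N, M]
    return linear_sum_assignment(cost)  # (prediction indices, ground-truth indices)

# toy usage: 5 fixed prediction slots (like DETR's object queries), 2 real objects
preds = np.random.rand(5, 4)
gts = np.random.rand(2, 4)
p, g = match(preds, gts)
no_object = sorted(set(range(len(preds))) - set(p))  # slots supervised as "no object"
print(p, g, no_object)

The unmatched slots are exactly the ones pushed toward the no-object prediction in the figure walkthrough here.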
This happens a lot. In this paper right here, these happen to be very, very good explanatory graphics, so I'll take advantage of that, and I do the same thing in the other papers, right? But then later, when it doesn't match what I read in the text, I'll have to, you know, update my belief and so on. But here I see that these go to no object and this goes to no object. So, I don't know yet what this is; at the point where I read this I was sort of confused by this, but I recognized that each of these boxes right here is going to be either resulting in a bounding box or in the no object prediction. So from that I could conclude that these things here are maybe some sort of a fixed set, right? But I still thought that, you know, this would actually be the output of these image features, so that in this case you'd have like six sets of image features, and then you'd have like BERT here, even though that's not an encoder decoder. This was still my running hypothesis, that somehow you'd map these image features to these boxes right here. So, and I didn't know what to make of this thing right here. So then I went through some more and looked for more pictures, and there are none. Sometimes I also kind of glance at the formulas, but okay, whenever I see this, this is just, I mean, this is kind of useless. Like, okay, cool, you minimize the loss, thanks. This, okay, didn't really pay attention to that. Ah, new picture, cool. So this picture is much more informative than the other picture. Now, I believe with the other picture they were trying to showcase this loss, how they do the matching, and even though I could read a lot from that picture, I did not get that part. And therefore I felt, when I saw this and I just glanced at it, I'm like, wait, what's different than up here? It seems like the same, but okay, let's look at this. So again we see, okay, you have set of image features that comes out of the CNN, so that conforms with my belief. But then this here goes into a transformer encoder and this comes out. So immediately I see, oh, this is not the same as these boxes here, right? That was my hypothesis, that these things here would be the colored boxes. So I say, okay, obviously that's not what happens. This thing here, it seems to be sort of the encoded image information, then that's somehow fed into here, and then there are these object query things, and they seem to correspond to this. So I'm a bit more confused right now. What I can see is that these then will result in these boxes. Okay, so being confused by that, I look for more pictures. So I go look for more pictures, and this here seems to be like a visualization; a lot of these papers have some sort of ablation experiments or so and so on. This I just find a really cool picture; for now I don't know yet what it means. This I don't know yet what it means, and I go down, skip all of this, and then back here in the appendix I find this here, which I immediately mapped to the previous, where this is the encoder and this is a decoder. And I've already read the Attention Is All You Need paper, and at that point it clicked in me, it's like, ah, this is not a BERT transformer, this is one of these transformers that has an encoder and a decoder, even though they told me like 50 billion times already; I was too stupid until this point. So now I know, okay, okay, I see what's going on. So the image goes through here, and then this goes as a side input, like as an attention from the decoder to the encoder, like I know in NLP, right? So in NLP, this here would be a source sequence, like maybe if
you do translation, and this here would be a target sequence. So now, whenever I see a transformer like this and it outputs something, I look at it as: okay, this here is sort of the input that goes as like a side input over here, and usually here you have the target sequence, but that's not the case right here, right? You have these object queries. So this is how far I get from the pictures. Now I go up. So I have, sort of, I have questions now. I have questions, and that's when I start reading the paper. Only now do I start reading the paper, after I've looked through all the images, formed the hypothesis, and sort of have questions on how this works. And we'll go a bit faster from now on, to just not bore you with all the things. So the introduction is often very important, even though it's called introduction, and maybe, you know, if you read a book, like if there's an introduction or prologue or something like this, it's often kind of pointless. The introduction in these research papers is one of the most important points, because all of these papers, they try, basically all of them try, to convince a reviewer to accept them. And in order to do that, they will set up their main points and their main story immediately in the introduction. So what you'll usually have is a problem statement, which is here, like, why, what's wrong right now, and then you have like a story of how their paper addresses the issue. Okay, and that's here: we streamline the training pipeline by viewing object prediction, the other yada yada. This often formulates in words what the paper is about and what contribution the paper makes, right? This is like a longer abstract. The abstract is often very, very cryptic, very dense; this here is often much more informative of what the paper does. So for understanding the paper at a high level, the introduction is the best place. But given that I've already looked at the images and so on, I don't actually draw much new information from this thing. Then there's related work, and honestly, I skip it. Like, unless I'm the actual reviewer of a paper, like when I'm the reviewer of a paper I read the related work, but often the related work is just like: you first of all cite a bunch of your friends, and then you cite the mandatory papers, and then you cite every single person that you think could be a reviewer. Because, or, you've actually been rejected from a conference with a reviewer claiming that you haven't compared to or you haven't cited this or that paper. You can pretty much be sure that, even if it's not a glaring omission, if it's like a niche paper and you haven't cited it, then you're like, okay, I'm gonna cite it, just because at the next conference you could be my reviewer again. So I'm not sure that these related work sections are necessary. Like, if someone wants to write their thesis and they go and read this paper and they want references, oftentimes this is a good place, but a lot of it is just blah blah blah blah blah. Okay, I know, I know, disagree with me if you want. Oh yeah, maybe on to reading quality: I tend to, at this point, I tend to not skim. So at first I skim, but at this point I tend to read every sentence and read it closely and understand it. And when I realize, like, I'm tired or something, I don't just skim the paper. I've tried to skim papers and it doesn't work. Try to read every sentence, understand every sentence, and okay, if you don't understand it, don't stop reading because of that, but try to not skim and be like, oh yeah, yeah, yeah,
Cool. Then, a lot of times, the next section in the paper is the model, and this is the section I'm actually interested in. So I read very closely here, and I find out what their loss is all about. And again, I stress: read these things and understand them. Sometimes it's hard, but if you're confused, that means either they've done a bad job, or they made a mistake, or you haven't understood something. If you can't understand a sentence, try to read on; maybe it's clarified later, and then you can go back. But again, do not just start skipping. A lot of times, when I read papers previously, I wouldn't understand something quite well yet, and I'd go "oh yeah, yeah, yeah", and then I'd notice that I'd start skipping and skimming more and more, because that thing would pop up again and again and I wouldn't understand it again and again, and at the end I'd just be glancing at the paper. I don't want to do that right here, so I want to read every sentence and understand it.

Okay, so here I find out about the loss, and if I don't know something, I'll go and look it up, maybe on Wikipedia or somewhere like that. Actually, I don't need to understand every single part of it; maybe I should correct myself there. For example, this bounding box loss: they talk about how the second part of the matching cost and of the Hungarian loss is this box loss that scores the bounding boxes; unlike many detectors that do box prediction relative to some initial guesses, yada yada yada; they say the most commonly used L1 loss will have different scales for small and large boxes. So here they basically talk about how they mix the losses: "overall, our box loss is defined as this and this". Now, I don't know what these exact losses are; I just assume they're some bounding box losses. So it's not quite true when I say "understand everything": understand the things that are integral to the story of the paper. How exactly they compute the bounding box loss, at this point I don't care; I just assume there's some loss that I can backpropagate. What is important is that they do this Hungarian matching thing. As soon as I get that, I'm like: ah, that was this thing, the one up here with the matching. Now I get it. Now I know there are always the same number of box slots here and the same number of labels there, and all we need to do is somehow match them. And I immediately think: why is that relevant? Oh, because when something is already matched to an object, some other thing cannot be matched to the same object, and that's how you prevent all the outputs from predicting the same thing. That immediately becomes clear. And as I said, there are usually one or two ideas in a paper; I don't care what their exact loss function is, because I've already gotten the idea up here of what the loss is about.
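To make that matching idea concrete, here is a minimal sketch of the bipartite matching step, using SciPy's Hungarian solver. The cost I use (negative class probability plus an L1 distance between boxes) is my own simplification; the paper's actual matching cost also includes a generalized-IoU term.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_predictions_to_targets(pred_probs, pred_boxes, tgt_labels, tgt_boxes):
    """Toy bipartite matching between N predictions and M ground-truth objects.

    pred_probs:  (N, C) class probabilities for the N predicted boxes
    pred_boxes:  (N, 4) predicted boxes
    tgt_labels:  (M,)   ground-truth class indices, with M <= N
    tgt_boxes:   (M, 4) ground-truth boxes
    """
    # cost of assigning prediction i to target j: low when prediction i puts
    # high probability on target j's class and the two boxes are close
    class_cost = -pred_probs[:, tgt_labels]                            # (N, M)
    box_cost = np.abs(pred_boxes[:, None] - tgt_boxes[None]).sum(-1)   # (N, M)
    # Hungarian algorithm: a globally optimal one-to-one assignment, so two
    # predictions can never be matched to the same ground-truth object
    pred_idx, tgt_idx = linear_sum_assignment(class_cost + box_cost)
    return pred_idx, tgt_idx  # unmatched predictions are trained to say "no object"
```

The one-to-one constraint of the assignment is exactly what rules out duplicate predictions for the same object.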
All right, so I hope that's clear: read very closely, and understand the things that are necessary for the story. If you think something is not necessary for the story and then later end up not understanding it, maybe come back and read it again. In any case, I would rather skip something, assuming it's not necessary if that's what I think, and come back later, than try to understand absolutely everything. But the things I do read, I try to understand thoroughly.

Okay, then there's the architecture, and that again I read closely: backbone, okay; transformer encoder, okay, and now I understand it much more closely; decoder, okay. And here, finally, I get what this is about: it decodes the objects in parallel, yada yada yada. "These input embeddings are learned positional encodings that we refer to as object queries, and similarly to the encoder, we add them to the input of each attention layer." So now they name them. I've already seen these object queries, and the only word I actually need from this sentence is "learned". The fact that they're positional encodings I just kind of ignore; as soon as they say "learned", I know: aha, these things here are learned, and they're always the same for each of the images, just learned overall. Okay, so now I feel I understand the entire model.

Then they mention auxiliary decoding losses, and you sometimes have to pay attention to auxiliary things, because those are the parts where, as here, they say explicitly "we found it helpful to use auxiliary losses". Sometimes they won't say why they did something; they'll just say "our loss consists of three things", and if you look at the three things, only one of them is really part of their story so far. You should immediately conclude that they put in the other things because they tried it without them and it didn't work. So you can also get an estimate of the brittleness of the system by seeing how many extra things are in there, how many things are not straightforward, how many things deviate from the easiest thing you would do if you went about doing what they did.
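As an aside, the auxiliary-loss idea itself fits in a few lines. This is a hedged sketch of my reading of it: supervise the output of every decoder layer with the same prediction heads and the same matching loss, not just the last layer, and sum everything up. The names `heads` and `matching_loss` are placeholders for whatever heads and set loss you actually use.

```python
def total_loss(decoder_outputs, targets, heads, matching_loss):
    """decoder_outputs: list of per-layer decoder states, each (B, num_queries, d).

    heads:         callable mapping decoder states to (class_logits, boxes)
    matching_loss: callable computing the matched set loss against the targets
    """
    loss = 0.0
    for hs in decoder_outputs:            # every decoder layer gets supervised
        class_logits, boxes = heads(hs)
        loss = loss + matching_loss(class_logits, boxes, targets)
    return loss
```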
Okay, so that concludes the model; usually this section is called "method" or "model" or something like that, and then you go to the experiments. Now, I might still have some questions about the model itself that I haven't been able to pick up from this section, which is not the case here, but I simply keep those questions in mind and see whether they are resolved later; I keep an awareness of what I don't understand. From here on, though, my main issue is: are they demonstrating that their story works? They're proposing a loss and a model, and in my mind, they now need to convince me that it works. And it's not as easy as simply showing me some numbers saying they're good at some benchmark: they need to show me that they get those numbers because of what they claim. Here they claim, okay, we propose a new architecture, so what they need to convince me of is that the architecture itself makes sense. In other papers, when you propose something, say you build an attention mechanism into an LSTM and you claim that the attention mechanism can look back at the whole source sequence in one step, then you need to convince me that that actually happens. Not only do you need to perform well, you need to convince me that you perform well because of what you claim your model does. And that's often difficult.

So in the experiments, the question I specifically look out for is: where are they trying to bullshit me? Are they trying to cover up the fact that something doesn't work? All the experiments are always presented in the best possible light, of course, and you have to keep that in mind. But a lot of times you can already see from the experiments whether they're doing something weird, or whether they're not showing some obvious experiment. And a lot of the time the question is: is there an easier explanation for why they get the results they get, other than their explanation? It is their job to convince you that their explanation is the correct one for these numbers, especially if there is an easier one that they haven't excluded. If there is an easier explanation for the effect, I don't believe the experiments; I'm very skeptical.

Some papers have an easier job here than others. In this paper, they basically show results on a task, and since their paper is about "hey, our pipeline is just easier than other pipelines", what they first of all need to do is simply match the numbers of other pipelines. And here I see: in these results you often have a table or something, with their model and other models, and their model is the best model in a lot of cases. Now, the best case, of course, is if their model is the best throughout. The worst case is if it's scattered: even if their model is the best, if in every single benchmark a different configuration of their model is the best, that's sort of a bad sign, unless they can explicitly explain why. It's also not that good a sign if the results are spread out, with this baseline sometimes good and their model sometimes better, and so on. So pay attention to that. Now, in this paper it doesn't matter so much; it's actually fine, because what they're trying to show is that their model is on par and way easier, and they've already made the case in what way it is easier: it's easier in terms of architecture. If they were to say it's much faster, then after that I would expect an experiment on speed while these numbers are matched. But since they say it's easier, and I've already seen the architecture, I'm convinced of that; now that they show their numbers match (actually, I'm surprised they even outperform a lot of the time), I'm quite happy with these experiments.

Also look at differences between numbers and the spread of numbers. It's not easy to say whether, like, 0.1 is a big or a small difference; that depends on the task. But pay attention to these things, and to the fact that these results are noisy, and that oftentimes there is a lot more hyperparameter tuning going into the model of the paper than into the baseline models; you want to make your stuff look as good as possible. And here is a little bit where the institutional credibility of someone like Facebook comes in: I tend to believe their results a bit more than other results, not hugely, but a bit more. Also look at patterns that they don't point out in the text. If there is a pattern, say an interaction between the number of parameters and the score or something like this, try to be on the lookout for that, spot it, and think about whether it makes sense or not given what your hypothesis would be.

So we go on, and then they go into ablations. A lot of these papers do ablations, and I generally appreciate that.
So here they visualize that the attention mechanism in their model actually attends to different instances: encoder self-attention for a set of reference points, where the encoder is able to separate individual instances. And you can see that pretty clearly right here, even here with the overlapping cows. This is the sort of experiment that actually convinces me that their architecture does what it says it does: something like this, where you see totally overlapping things with the attention for the individual objects visualized. Especially this one right here, the foot of the back elephant actually being focused on by the attention of the bounding box of the back elephant; that's the sort of experiment that convinces me that their numbers really come from what they claim they come from.

So at the end of the experimental section, you should always ask yourself: have they really convinced me that their story is true, that whenever they get an improvement, it is due to the story they want to sell me? Or could there be an easier explanation? Does something not fit? Are the experiments different from what you would expect? These are my main questions: are they convincing me of their story? It's not: do they have state-of-the-art numbers? I don't care. Though there is a bit of a catch. I don't care about state-of-the-art numbers, but let's say you have a table like this with computer vision models, and the models are evaluated on the CIFAR-10 dataset. If the baseline model has, like, 91 or 92 percent accuracy on CIFAR-10, and I know the state of the art is 96, I don't care: I've done CIFAR-10, and I know that with a five- or six-layer CNN you can reach 91, 92, 93 percent accuracy, while to get to 96 or 97 you'd actually be in the region of a Wide ResNet and whatnot. So even though they're a few points behind the state of the art, I know this is still valid, and I don't care. But if they were at 80 percent accuracy on CIFAR-10, then I get a bit suspicious, because it's pretty easy to get to 90-plus percent with a standard CNN, and I immediately start to wonder why; is there an explanation? It could be a theoretical paper that says "we investigate MLPs, and that's why we only get that number"; that would be fine. If something is out of the ordinary like this, I pay attention, but never just because something isn't the latest-and-greatest state of the art. That's just dumb.
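For reference, this is roughly the kind of five- or six-layer CNN I have in mind; a rough sketch, not a tuned recipe. With standard augmentation and a sensible training schedule, something along these lines typically lands in the low nineties on CIFAR-10, though the exact number depends entirely on the training setup.

```python
import torch.nn as nn

# A small CIFAR-10 CNN: five conv layers plus a linear classifier.
small_cnn = nn.Sequential(
    nn.Conv2d(3, 64, 3, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
    nn.Conv2d(64, 64, 3, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
    nn.MaxPool2d(2),                                   # 32x32 -> 16x16
    nn.Conv2d(64, 128, 3, padding=1), nn.BatchNorm2d(128), nn.ReLU(),
    nn.Conv2d(128, 128, 3, padding=1), nn.BatchNorm2d(128), nn.ReLU(),
    nn.MaxPool2d(2),                                   # 16x16 -> 8x8
    nn.Conv2d(128, 256, 3, padding=1), nn.BatchNorm2d(256), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(256, 10),                                # 10 CIFAR-10 classes
)
```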
Also, only evaluate what the paper claims it does. If the paper says "we want to show that we are on par with current models", then don't be mad if the paper doesn't outperform those models; they didn't claim that.

So after these ablations, I'm actually pretty happy with the results. And this figure right here: when I saw it, I didn't expect that, but I read the experiment description, which says these are the different learned object queries and what they do, and that gave me an increased understanding of how these object queries actually work. At that point I still had only a vague idea; I knew that they are learned, but reading this and looking at it, studying it a bit, I understood even better what they are. So again, when I say "understand everything in the method section": you can still have questions, you just have to keep them in mind for later.

Then I go on, and there's this DETR for panoptic segmentation, where they propose a new model. I first look at it and I'm like: okay, they propose a new model, they can do stuff like this; but this is not object detection. And again, I'm not sure: is this an add-on to the method, or was everything up there just an intermediate step towards this? Honestly, after reading it, I still wasn't sure; it seems like something in between. Of course, the paper is also a bit longer than other papers; it seems too long for this to be just a side note, but too short for it to be its own thing. So that was just a bit weird, and I treated it as "oh, we can also do this with our model", without paying too much attention to it.

Okay, so at the end I look at the conclusions. Now, the conclusions of a paper are often not nearly as informative as the introduction. They tend to be very generic and to hedge a bit against criticism, saying what would be up for future work, which is again hedging against criticism, because you simply say "well, we didn't do this, that's future work". So again, I read it, but I don't really pay attention to it. Then I gloss over the abstract: I just kind of scroll through it, and if something catches my eye I look at it, and if not, then not. And then I basically go back to the start, and wherever I didn't understand something, I look at it again and ask myself: are all my questions answered, and have they sufficiently convinced me that their story is the thing that really causes the effect here?

And then, if I were now to make a video about this, I've often found it useful to just put the paper away for a while. I usually get the best results when I read the paper the day before and then make the video the day after. If not, I'll just put it away, do something else, respond to emails, program, go outside, eat lunch; just some kind of break between the first read, or the first couple of reads, and the video. I don't even consciously think about the paper; it just brews in the subconscious. I happen to think about the paper every now and then, but I don't make a conscious effort like "oh, how am I going to explain this". I've just discovered that the worst videos are the ones where I make the video immediately after reading the paper, and that if I take a break and then look at it again (I don't read it fully again if I have the feeling I've understood it, but I go through the story once more), it comes out better. Even if you want to talk about a paper in a reading group, or explain it to your friends or whatnot, this is often very useful: just put it away for a while, let it mellow, and I find that helps a lot.

Okay, that was my process of reading this particular paper. Again, this is a high-quality paper, so I find it's a pretty easy read, in that I simply need to understand what they did, and I'm pretty happy with their experiments. Maybe next time I can find a paper where I'm initially more skeptical and not as happy with what I find. But yeah, let me know if you enjoyed this, or if you would like to see any other explanation. I don't exactly know if this is what you expected from a video like this, so let me know.
Maybe I have misunderstood you completely, or it's way too long, way too detailed, or way too undetailed. Leave me a comment, and I'll see you next time. Bye bye.
[ { "end": 7.74, "start": 0, "text": " Hi there people! So a lot of you have asked me how I read papers and honestly" }, { "end": 12.96, "start": 7.74, "text": " I don't think there is any super special method to it but you know I thought" }, { "end": 17.56, "start": 12.96, "text": " because people have asked me to make a video on it so I'll make a video on it" }, { "end": 23.76, "start": 17.56, "text": " and I'll try to share my method of reading papers and hopefully this is" }, { "end": 28.64, "start": 23.76, "text": " going to be somewhat of a mini series or a series where I every now and then" }, { "end": 33.8, "start": 28.64, "text": " discuss how I read one of the papers that I make videos about and I'll try to" }, { "end": 38.84, "start": 33.8, "text": " select them such that different things are highlighted. Now I've selected this" }, { "end": 43.56, "start": 38.84, "text": " one right here for really for no particular reason other than I sort of" }, { "end": 51.040000000000006, "start": 43.56, "text": " remembered it and I'm going to try to go with you through how I read this and how" }, { "end": 56.400000000000006, "start": 51.040000000000006, "text": " I encountered this and kind of try to honestly share what I thought at the" }, { "end": 62.879999999999995, "start": 56.4, "text": " first time when I read it and I hope this helps some of you. If it does help" }, { "end": 67.2, "start": 62.879999999999995, "text": " you and if you like content like this of course feel free to share this out and" }, { "end": 73.36, "start": 67.2, "text": " subscribe. If you haven't seen my original video on this paper it might" }, { "end": 81.2, "start": 73.36, "text": " be worth to go watch it. I'll link it and with that let's dive in. So again this" }, { "end": 87.04, "start": 81.2, "text": " might be not really something new but I'll just go through it. So first" }, { "end": 93.16, "start": 87.04, "text": " thing I do is of course read the title. So the title has three parts end to end" }, { "end": 99.04, "start": 93.16, "text": " object detection with transformers. So what I notice that I do myself is I like" }, { "end": 104.2, "start": 99.04, "text": " through reading a paper it's like read the paper with an open mind. I don't do" }, { "end": 109.24000000000001, "start": 104.2, "text": " that. I almost immediately form an opinion and a hypothesis of what's going" }, { "end": 114.52, "start": 109.24, "text": " on. So I see transformers so I know what transformers are. If you don't I've" }, { "end": 118, "start": 114.52, "text": " made a video, I've made lots of videos on transformers. Attention is all you need" }, { "end": 122.67999999999999, "start": 118, "text": " is the base paper for that. So I know what a transformer is. And I know" }, { "end": 128.95999999999998, "start": 122.67999999999999, "text": " that transformers are usually in NLP. They are usually used in NLP. There are" }, { "end": 135.04, "start": 128.95999999999998, "text": " things like other things with transformers but it's usually an NLP" }, { "end": 139.48, "start": 135.04, "text": " model. Then I read object detection and I know object detection is a computer" }, { "end": 145.95999999999998, "start": 139.48, "text": " vision task. So immediately this here is sort of a difference and I immediately" }, { "end": 151, "start": 145.95999999999998, "text": " try to assess what's the new idea in this paper. 
And in this case it might be" }, { "end": 154.23999999999998, "start": 151, "text": " applying transformers to object" }, { "end": 158.72, "start": 154.23999999999998, "text": " detection but then I also see end to end. And the only reason to put that in a" }, { "end": 163.07999999999998, "start": 158.72, "text": " title is because that's the novelty. Because usually in deep learning we're" }, { "end": 169.20000000000002, "start": 163.08, "text": " sort of used to systems being end to end. And even if most systems" }, { "end": 174.08, "start": 169.20000000000002, "text": " aren't end to end, a lot of people don't. It's like end to end image classification" }, { "end": 180.72000000000003, "start": 174.08, "text": " on ImageNet. Thanks. So I was guessing that the reason they put in end to" }, { "end": 185, "start": 180.72000000000003, "text": " end into the title was because that's actually something that's special about" }, { "end": 190.32000000000002, "start": 185, "text": " the model. So now I have like two competing hypotheses of why this paper" }, { "end": 194.4, "start": 190.32, "text": " matters. First of all because it does it with transformers and second because it" }, { "end": 199.76, "start": 194.4, "text": " does it end to end. And of course the true fear is that the combination of" }, { "end": 206.68, "start": 199.76, "text": " end to end transformers, all of that, is what makes this model. And I already" }, { "end": 211.28, "start": 206.68, "text": " form like a hypothesis of whether I like this or not. I have to be honest." }, { "end": 216.68, "start": 211.28, "text": " I have very quick judgment of papers of whether I like them or not and then I" }, { "end": 224.20000000000002, "start": 216.68, "text": " sort of catch myself each time and I still try to... So for most papers" }, { "end": 228.48000000000002, "start": 224.20000000000002, "text": " actually that I have sort of a negative opinion at the beginning where I..." }, { "end": 233.84, "start": 228.48000000000002, "text": " Well, negative. There are papers where I think like there is no way this is going to" }, { "end": 238.08, "start": 233.84, "text": " you know work or something like this. I'm actually positively convinced" }, { "end": 246.04000000000002, "start": 238.08, "text": " throughout the paper. So for most papers that I read, I'm trying" }, { "end": 250.6, "start": 246.04, "text": " to find the positive things in there. But I do form an opinion pretty quickly" }, { "end": 256.24, "start": 250.6, "text": " usually. Alright, so the second thing. This part right here I don't even" }, { "end": 263.24, "start": 256.24, "text": " see. This is like advertisements on Twitter. I have always had" }, { "end": 267.8, "start": 263.24, "text": " issues with author names. People will come to me and be like, oh have you seen" }, { "end": 274.28, "start": 267.8, "text": " the new Vignoles paper? And I have no clue. And then when they say like, oh that's" }, { "end": 277.35999999999996, "start": 274.28, "text": " where they use this character level model to do that. And I'm like, oh that" }, { "end": 283.59999999999997, "start": 277.35999999999996, "text": " paper. So I do not care who the authors are of a paper. I" }, { "end": 288.11999999999995, "start": 283.59999999999997, "text": " can't remember the papers by their author names. I've gotten better at it I" }, { "end": 292.79999999999995, "start": 288.11999999999995, "text": " have to say. But I've always had trouble with this. 
Now that's not to say that a" }, { "end": 298.47999999999996, "start": 292.79999999999995, "text": " name doesn't pop out to me. If this would be like a like Joshua Benj or" }, { "end": 304.72, "start": 298.48, "text": " someone like really famous, then of course that would catch my eye. But I" }, { "end": 310.96000000000004, "start": 304.72, "text": " also know that you know, Joshua Benjo's paper, Joshua Benjo's lab is huge. So just" }, { "end": 315.72, "start": 310.96000000000004, "text": " because a big name is on the paper doesn't mean that the paper is going to" }, { "end": 319.96000000000004, "start": 315.72, "text": " be of any good or bad quality. Sometimes the authors give you an indication of" }, { "end": 326.12, "start": 319.96000000000004, "text": " what kind of research is there. Like if you see Jeff Klune or Kenneth O" }, { "end": 331.8, "start": 326.12, "text": " Stanley, you know that there's going to be this certain type of" }, { "end": 338.44, "start": 331.8, "text": " learning to explore and kind of a bit more out-of-the-box thinking in" }, { "end": 343.52, "start": 338.44, "text": " their papers, which I really like. But it doesn't immediately give you clue. Maybe" }, { "end": 349.68, "start": 343.52, "text": " if you go by first authors, it's much more indicative if you have already read" }, { "end": 355.36, "start": 349.68, "text": " some of their papers. But most often I just ignore authors and go on. The" }, { "end": 361.72, "start": 355.36, "text": " affiliation sometimes matters in that it's a bit of a vicious cycle. If" }, { "end": 367.12, "start": 361.72, "text": " there's a big name affiliation like Facebook AI, Google AI and so on, these" }, { "end": 371.92, "start": 367.12, "text": " papers also get more exposure in the press and so on. So whenever" }, { "end": 376.32, "start": 371.92, "text": " Google publishes a paper, all of these all these pop-sci magazines like" }, { "end": 382.2, "start": 376.32, "text": " Diverge and This and Lifehacker and Hacker News and whatnot, they like" }, { "end": 389.24, "start": 382.2, "text": " write a blurb about it. So often they get much more scrutinized for these papers." }, { "end": 393.71999999999997, "start": 389.24, "text": " They get much more the public attention, but they also get" }, { "end": 398.88, "start": 393.71999999999997, "text": " much more scrutiny, which in turn means that there is a bit more pressure on" }, { "end": 405.48, "start": 398.88, "text": " them to do good experiments. So that biases me a little bit into the" }, { "end": 410.71999999999997, "start": 405.48, "text": " direction of believing their experimental evidence more. Now usually" }, { "end": 415.84000000000003, "start": 410.72, "text": " this is also backed up by the fact that I am actually convinced by their" }, { "end": 421.88000000000005, "start": 415.84000000000003, "text": " experiments. Usually these big name papers, often I find myself" }, { "end": 427.6, "start": 421.88000000000005, "text": " even without or disregarding the affiliation to be convinced more than of" }, { "end": 433.48, "start": 427.6, "text": " regular papers. My most often issue with papers is that I don't believe" }, { "end": 438.76000000000005, "start": 433.48, "text": " the experiments. I make no difference. Even if it's Facebook, my" }, { "end": 444.03999999999996, "start": 438.76, "text": " prior is the experiments are crap and I don't believe them and they have to" }, { "end": 449.52, "start": 444.03999999999996, "text": " convince me of the opposite. 
But I can't say that it doesn't affect me," }, { "end": 455.88, "start": 449.52, "text": " that it's like a big-name affiliation. Okay, so then the second thing is I" }, { "end": 462.8, "start": 455.88, "text": " sometimes I see the paper on archive and I skim the abstract. Sometimes the" }, { "end": 468.03999999999996, "start": 462.8, "text": " abstract is informative and sometimes not. So here it's like blah blah blah. A" }, { "end": 472, "start": 468.04, "text": " new method that views object detection as a direct set prediction problem. I'm" }, { "end": 477.32, "start": 472, "text": " like oh yeah okay. It streamlines the detection, effectively removing the need" }, { "end": 481.52000000000004, "start": 477.32, "text": " for many hand-designed components like non-maximum suppression, yada yada yada." }, { "end": 487.88, "start": 481.52000000000004, "text": " The main ingredients called detection transformer, a set-based global loss that" }, { "end": 491.48, "start": 487.88, "text": " forces unique prediction via bipartite matching, and the transformer encoder" }, { "end": 495.96000000000004, "start": 491.48, "text": " decoder architecture. So they make it clear here why it matters and that's" }, { "end": 499.71999999999997, "start": 495.96, "text": " what I want to get at is sort of what's the new thing in this" }, { "end": 505.71999999999997, "start": 499.71999999999997, "text": " paper. Most papers are, even though they're all very long and have lots of" }, { "end": 513.3, "start": 505.71999999999997, "text": " math and so on, they often have like one or maybe two new core things that they" }, { "end": 519.92, "start": 513.3, "text": " really tell you. Sometimes zero. But a lot of times it's like one thing that they" }, { "end": 524.76, "start": 519.92, "text": " really do and you sort of have to... But they're trying to cloak it often" }, { "end": 530.36, "start": 524.76, "text": " because they need to make their research as impactful as possible, right? But you" }, { "end": 534.8, "start": 530.36, "text": " need to sort of figure out what it is they're doing. Here they make it fairly" }, { "end": 540.76, "start": 534.8, "text": " easy for us in that they say okay. They remove the need for many hand-designed" }, { "end": 544.4, "start": 540.76, "text": " components like non-maximum suppression, which tells me that they are building" }, { "end": 548.6, "start": 544.4, "text": " something that's easier than what came before them. And that already tells me" }, { "end": 553.64, "start": 548.6, "text": " it's not necessarily going to be better. Their argument is more that it's going" }, { "end": 559.76, "start": 553.64, "text": " to be easier, right? There are sort of two kinds of experimental results. The" }, { "end": 563.6, "start": 559.76, "text": " ones where you try to beat what came before you and the ones where you're" }, { "end": 568.28, "start": 563.6, "text": " trying to say look our thing works just as well as this other thing while being" }, { "end": 573.88, "start": 568.28, "text": " more advantageous in some other metric. So I would place this already in the" }, { "end": 578.68, "start": 573.88, "text": " sort of second category. And then they say what are the actual ingredients? It's" }, { "end": 583.14, "start": 578.68, "text": " a set-based global loss that forces unique predictions via bipartite" }, { "end": 588.04, "start": 583.14, "text": " matching. 
Now I at this point I know what these terms mean but at this point I" }, { "end": 592.14, "start": 588.04, "text": " actually don't have to know what the terms mean. What I need to recognize is" }, { "end": 597.96, "start": 592.14, "text": " that I simply have to go later and figure out what that is. And a" }, { "end": 604.24, "start": 597.96, "text": " transformer-based encoder decoder architecture, okay? So there are two" }, { "end": 609.04, "start": 604.24, "text": " things right here that I remember I need to pay attention to later. There's this" }, { "end": 614.28, "start": 609.04, "text": " loss which seems to be special and there is the transformer architecture which" }, { "end": 618.4, "start": 614.28, "text": " they say, okay, the model basically consists of" }, { "end": 623.8399999999999, "start": 618.4, "text": " those two things. And then they have a short description of what it does. Given" }, { "end": 629.66, "start": 623.8399999999999, "text": " a fixed small set of learned object queries, there are reasons about the relations" }, { "end": 633.3399999999999, "start": 629.66, "text": " of the objects and the global image context to directly output the final set" }, { "end": 639.1600000000001, "start": 633.34, "text": " of predicted in parallel. That almost tells me nothing. Yeah, okay, the model" }, { "end": 645.4, "start": 639.1600000000001, "text": " reasons. Maybe this in parallel is something but... The model is conceptually" }, { "end": 649.2800000000001, "start": 645.4, "text": " simple and does not require specialized library unlike many other modern" }, { "end": 653.08, "start": 649.2800000000001, "text": " detectors. This sort of repeats, this enforces my hypothesis that they're" }, { "end": 658.2, "start": 653.08, "text": " going with the hey this is a much easier way of doing things approach. Dettor" }, { "end": 662.44, "start": 658.2, "text": " demonstrates accuracy and runtime performance on par with well-established" }, { "end": 669.36, "start": 662.44, "text": " that further confirms my hypothesis that this is on par, right? The runtime" }, { "end": 675.0400000000001, "start": 669.36, "text": " performance on par with the current state of the art. And at the end they say" }, { "end": 678.96, "start": 675.0400000000001, "text": " moreover, Dettor can easily be generalized to produce panoptic" }, { "end": 683.7600000000001, "start": 678.96, "text": " segmentation in a unified manner. We show that it significantly outperforms" }, { "end": 688.44, "start": 683.7600000000001, "text": " competitive baselines. Training code and preterm models are available. Okay. Now" }, { "end": 692.72, "start": 688.44, "text": " this last part when I first read it is like, okay, can easily be generalized to" }, { "end": 698.3800000000001, "start": 692.72, "text": " produce this panoptic segmentation. I didn't know yet whether this is" }, { "end": 702.32, "start": 698.3800000000001, "text": " like a central claim of their paper that it can do this segmentation or whether" }, { "end": 706.6800000000001, "start": 702.32, "text": " this is like an added benefit to their paper. Because you can read it in both" }, { "end": 712.74, "start": 706.6800000000001, "text": " ways and I'm just ready to find this out in the paper. Now after I've read the" }, { "end": 717.0400000000001, "start": 712.74, "text": " abstract and sort of already formed the hypothesis of what's going on. 
So here I" }, { "end": 722.8, "start": 717.04, "text": " already in my mind I already sort of have a model of how would I do that, right?" }, { "end": 730.56, "start": 722.8, "text": " How would I do that? And then what would I do? So right now what I" }, { "end": 735.0799999999999, "start": 730.56, "text": " might be thinking is if I have a transformer over images that directly" }, { "end": 744.4, "start": 735.0799999999999, "text": " outputs the predictions in parallel, I'm imagining like an image and the image" }, { "end": 748.8, "start": 744.4, "text": " somehow needs to go into a transformer. So maybe there's like an encoder, like a" }, { "end": 756.84, "start": 748.8, "text": " CNN encoder that gives me image features. And then it's so maybe you sample" }, { "end": 761.28, "start": 756.84, "text": " this down, this image. This is just me hypothesizing what could be going on," }, { "end": 767.12, "start": 761.28, "text": " right? And then I might be unrolling that, right? This image into a vector of these" }, { "end": 773.56, "start": 767.12, "text": " lower pixels. And then so in my mind what I would do right here without knowing" }, { "end": 778.28, "start": 773.56, "text": " anything more would be to do something like BERT span prediction. So I would" }, { "end": 784.28, "start": 778.28, "text": " have BERT right here and I so for I would input the sequence right here and" }, { "end": 792.1199999999999, "start": 784.28, "text": " then to detect an object I would sort of think that maybe the BERT, you know, BERT" }, { "end": 797.1199999999999, "start": 792.1199999999999, "text": " has an output that is the same length as the input, right? So it's it's very good" }, { "end": 803.84, "start": 797.12, "text": " at sequence tagging and things like this. So maybe how it detects an object is" }, { "end": 809.16, "start": 803.84, "text": " going to be that it sort of like tags the center location in the pixel of" }, { "end": 813.68, "start": 809.16, "text": " an object right here or it tags somehow the corners of the of the bounding box." }, { "end": 817.68, "start": 813.68, "text": " But then I don't know how this is going to be in parallel. Maybe BERT outputs" }, { "end": 823.04, "start": 817.68, "text": " like a score for each location and then you do some kind of matching right here." }, { "end": 829.04, "start": 823.04, "text": " So this is my initial hypothesis of what's going on. And then I scroll" }, { "end": 835.52, "start": 829.04, "text": " through and honestly the first thing I do is I go and find the pictures. And no" }, { "end": 840.24, "start": 835.52, "text": " no different in all like since since your first book you read that's what you do. I" }, { "end": 844.88, "start": 840.24, "text": " go and find the pictures because usually if someone proposes anything new that" }, { "end": 850.28, "start": 844.88, "text": " they're gonna try to make a picture of it. Luckily I don't do like super" }, { "end": 855.1999999999999, "start": 850.28, "text": " theoretical what not your Bayesian generalization bounds and I don't know." }, { "end": 862.36, "start": 855.1999999999999, "text": " So most often papers I read have some sort of picture and that's very awful to" }, { "end": 870.3199999999999, "start": 862.36, "text": " me. I know, I know, but yeah. So I find this picture and here I see okay you have" }, { "end": 877.12, "start": 870.3199999999999, "text": " image, you have CNN okay gives you set of image features so so far so good. 
Then" }, { "end": 882.28, "start": 877.12, "text": " transformer encoder decoder then set of box predictions so all of them come out" }, { "end": 886.64, "start": 882.28, "text": " here and I already read they're in parallel and then bipartite matching" }, { "end": 891.4, "start": 886.64, "text": " loss. So here they I can see they color these in different ways and these color" }, { "end": 896.4, "start": 891.4, "text": " appear to match with these colors right here right in the green here and these" }, { "end": 900.52, "start": 896.4, "text": " they they also this is a very good graphic right from this I can already" }, { "end": 906, "start": 900.52, "text": " read that these here go to the no object. A lot of times the graphics aren't very" }, { "end": 911.08, "start": 906, "text": " good so this this is what I'm not saying in every paper you can learn by looking" }, { "end": 915.28, "start": 911.08, "text": " at the graphics like sometimes the graphics are terrible and you're like" }, { "end": 920.36, "start": 915.28, "text": " what's going on here I like I don't this this makes no sense. This happens a lot" }, { "end": 925.44, "start": 920.36, "text": " in this paper right here this happens to be very very good explanatory graphics so" }, { "end": 931.32, "start": 925.44, "text": " I'll take advantage of that and I do the same thing in the other papers right but" }, { "end": 936.36, "start": 931.32, "text": " then later when it doesn't match what I read in the text I'll have to you know" }, { "end": 943.2800000000001, "start": 936.36, "text": " update my belief and so on but here I see that these go to no object and this" }, { "end": 949.5600000000001, "start": 943.2800000000001, "text": " goes to no object so I don't know yet that this is the test set at the point" }, { "end": 956.46, "start": 949.5600000000001, "text": " where I read this I was sort of confused by this but I recognized that each of" }, { "end": 961.96, "start": 956.46, "text": " these boxes right here is going to be either resulting in a bounding box or in" }, { "end": 967.6800000000001, "start": 961.96, "text": " the no object prediction so from that I could conclude that these things here are" }, { "end": 975.24, "start": 967.6800000000001, "text": " maybe some sort of a fixed set right but I still thought that you know these that" }, { "end": 978.96, "start": 975.24, "text": " this would actually be the output of these image features so that in this" }, { "end": 983.36, "start": 978.96, "text": " case you'd have like six set of image features and then you'd have like BERT" }, { "end": 988.84, "start": 983.36, "text": " here even though that's not an encoder decoder I still this was still my" }, { "end": 993.88, "start": 988.84, "text": " running hypothesis that somehow you'd map these image features to these boxes" }, { "end": 999.88, "start": 993.88, "text": " right here so and I didn't know what to what to make of this this thing right" }, { "end": 1007.48, "start": 999.88, "text": " here so then I went through some more and look for more pictures and there are" }, { "end": 1011.84, "start": 1007.48, "text": " not sometimes I also kind of glance at the formulas but okay when I ever I see" }, { "end": 1015.9200000000001, "start": 1011.84, "text": " this this is just I mean this is kind of useless like okay cool you minimize the" }, { "end": 1023.84, "start": 1015.9200000000001, "text": " loss thanks this okay didn't really pay attention to that ah new picture cool so" }, { "end": 1028.6000000000001, "start": 
1023.84, "text": " this picture is much more informative than the other picture now I believe" }, { "end": 1033.3400000000001, "start": 1028.6000000000001, "text": " with the other picture they were trying to showcase this loss how they do the" }, { "end": 1039.08, "start": 1033.3400000000001, "text": " matching and even though I could read a lot from that picture I did not get that" }, { "end": 1044, "start": 1039.08, "text": " part and therefore I felt when I saw this and I just glanced at it I'm like" }, { "end": 1048.3999999999999, "start": 1044, "text": " wait what's what's different than up here it seems like the same but okay" }, { "end": 1053.4399999999998, "start": 1048.3999999999999, "text": " let's look at this so again we see okay you have set of image features that" }, { "end": 1058.6399999999999, "start": 1053.4399999999998, "text": " comes out of the CNN so that conforms with my belief but then this here goes" }, { "end": 1067.34, "start": 1058.6399999999999, "text": " into a transformer encoder and this comes out so immediately I see oh this" }, { "end": 1071.6799999999998, "start": 1067.34, "text": " is not the same as these boxes here right that was my hypothesis that these" }, { "end": 1079.6, "start": 1071.6799999999998, "text": " things here would be the colored boxes so I I say okay obviously that's not what" }, { "end": 1086.1999999999998, "start": 1079.6, "text": " happens this thing here it seems to be sort of the encoded image information" }, { "end": 1093.9199999999998, "start": 1086.1999999999998, "text": " then that's somehow fed into here and that then there are these object query" }, { "end": 1101.3200000000002, "start": 1093.92, "text": " things and they seem to correspond to this so I'm a bit more confused right" }, { "end": 1107.8000000000002, "start": 1101.3200000000002, "text": " now what I can see is that these then will result in these in these boxes okay" }, { "end": 1114.72, "start": 1107.8000000000002, "text": " so being confused by that I look for more pictures so I go look for more" }, { "end": 1119.1200000000001, "start": 1114.72, "text": " pictures and this here seems to be like of a visualization a lot of these papers" }, { "end": 1124.36, "start": 1119.12, "text": " have some sort of ablation experiments or so and so on this I just find really" }, { "end": 1127.9599999999998, "start": 1124.36, "text": " cool picture for now I don't know yet what it means this I don't know yet what" }, { "end": 1135.32, "start": 1127.9599999999998, "text": " it means and I go down skip all of this and then back here in the appendix I" }, { "end": 1142.4799999999998, "start": 1135.32, "text": " find this here which I immediately mapped to the previous where this is the" }, { "end": 1145.3999999999999, "start": 1142.4799999999998, "text": " end and this is a decoder and I've already read the attention is all you" }, { "end": 1148.36, "start": 1145.3999999999999, "text": " need paper and that that point it clicked in me is like ah this is not a" }, { "end": 1152.52, "start": 1148.36, "text": " BERT transformer this is one of these transformers that has an encoder in the" }, { "end": 1156.3999999999999, "start": 1152.52, "text": " decoder even though they told me like 50 billion times already I was too stupid" }, { "end": 1162.6399999999999, "start": 1156.3999999999999, "text": " until this point so now I know okay okay I see what's going on so the image goes" }, { "end": 1169.32, "start": 1162.6399999999999, "text": " through here and then this goes as a side 
input like as an attention from the" }, { "end": 1174.8, "start": 1169.32, "text": " decoder to the encoder like I know in NLP right so in NLP this here would be a" }, { "end": 1178.96, "start": 1174.8, "text": " source sequence like maybe if you do translation and this here would be a" }, { "end": 1185.12, "start": 1178.96, "text": " target sequence so now whenever I see a transformer like this and it outputs" }, { "end": 1193.1599999999999, "start": 1185.12, "text": " something at this I I look at it as okay this here is sort of the input that goes" }, { "end": 1200.08, "start": 1193.1599999999999, "text": " as like a side input over here and usually here you have the target" }, { "end": 1203.68, "start": 1200.08, "text": " sequence but that's not the case right here right you have these these object" }, { "end": 1211.3200000000002, "start": 1203.68, "text": " queries so this is how far I get from the pictures now I go up so I have a" }, { "end": 1216.6000000000001, "start": 1211.3200000000002, "text": " sort of I have questions now I have questions and that's when I start" }, { "end": 1220.24, "start": 1216.6000000000001, "text": " reading the paper only now do I start reading the paper after I've looked" }, { "end": 1224.72, "start": 1220.24, "text": " through all the images form the hypothesis and sort of have questions on" }, { "end": 1230.6000000000001, "start": 1224.72, "text": " how this works and we'll go a bit faster from now on to just not bore you with" }, { "end": 1235.3999999999999, "start": 1230.6, "text": " all the things so the introduction is often very important even though it's" }, { "end": 1239.7199999999998, "start": 1235.3999999999999, "text": " called introduction and maybe you know if you read a book like if there's like" }, { "end": 1245.32, "start": 1239.7199999999998, "text": " introduction or prologue or something like this it's often kind of pointless" }, { "end": 1250.1999999999998, "start": 1245.32, "text": " introduction in these research papers is one of the most important points" }, { "end": 1255.52, "start": 1250.1999999999998, "text": " because all of these papers they try basically all of them try to convince a" }, { "end": 1260.8, "start": 1255.52, "text": " reviewer to accept them and in order to do that they will set up their main" }, { "end": 1265.24, "start": 1260.8, "text": " points and their main story immediately in the introduction so what you'll" }, { "end": 1270.7, "start": 1265.24, "text": " usually have is a problem statement which is here like why what's what's" }, { "end": 1277.4, "start": 1270.7, "text": " wrong right now and then you have like a story of how their paper addresses the" }, { "end": 1285.76, "start": 1277.4, "text": " issue okay and that's that's here we streamline the training pipeline by" }, { "end": 1290.76, "start": 1285.76, "text": " viewing object prediction the other yada yada this is often formulates in words" }, { "end": 1296.2, "start": 1290.76, "text": " what the paper is about and what contribution the paper makes right this" }, { "end": 1301.0800000000002, "start": 1296.2, "text": " is like a this is like a longer abstract the abstract is often very very cryptic" }, { "end": 1306.3200000000002, "start": 1301.0800000000002, "text": " very dense this here is often much more informative of what the paper does so" }, { "end": 1312.84, "start": 1306.32, "text": " for understanding the paper and a high level the introduction is the best place" }, { "end": 1317.6799999999998, "start": 1312.84, "text": " but given 
that I've already looked at the images and so on I don't actually" }, { "end": 1325.6, "start": 1317.6799999999998, "text": " draw many new much new information from this thing then there's related work and" }, { "end": 1331.4199999999998, "start": 1325.6, "text": " honestly I I skip it like unless I'm the actual reviewer of a paper like when" }, { "end": 1335.9199999999998, "start": 1331.4199999999998, "text": " I'm the reviewer of a paper I read the related work but often the related work" }, { "end": 1340.48, "start": 1335.92, "text": " is just like you first of all you cite a bunch of your friends and then you cite" }, { "end": 1345.3600000000001, "start": 1340.48, "text": " the mandatory papers and then you cite every single person that you think" }, { "end": 1350, "start": 1345.3600000000001, "text": " could be a reviewer because or you've actually been rejected from a conference" }, { "end": 1353.44, "start": 1350, "text": " with a reviewer claiming that you're you haven't compared or you haven't cited" }, { "end": 1358.44, "start": 1353.44, "text": " that or that paper you can pretty much be sure that that's the if if it's not a" }, { "end": 1362.96, "start": 1358.44, "text": " glaring of omission if it's like a niche paper and you haven't cited it then" }, { "end": 1367.48, "start": 1362.96, "text": " you're like okay I'm gonna cite it just because the next conference you could be" }, { "end": 1374.4, "start": 1367.48, "text": " my reviewer again so I'm not I'm not sure that these related work sections" }, { "end": 1379.28, "start": 1374.4, "text": " they're necessary like if someone wants to write their thesis and they go and" }, { "end": 1384.08, "start": 1379.28, "text": " read this paper and they want references oftentimes this is a good place but a" }, { "end": 1390.2, "start": 1384.08, "text": " lot of it is just blah blah blah blah blah okay I know I know disagree with me" }, { "end": 1396.88, "start": 1390.2, "text": " if you want oh yeah to maybe to reading quality so I tend to at this point I" }, { "end": 1403.68, "start": 1396.88, "text": " tend to not skim so at first I skim but at this point I tend to read every" }, { "end": 1409.4, "start": 1403.68, "text": " sentence and read it closely and understand it and when I realized like" }, { "end": 1414.96, "start": 1409.4, "text": " I'm tired or something I don't just skim the paper I've tried to skim papers and" }, { "end": 1420, "start": 1414.96, "text": " it doesn't doesn't work try to read every sentence understand every sentence" }, { "end": 1423.92, "start": 1420, "text": " and okay if you don't understand it don't stop reading because of that but" }, { "end": 1428.96, "start": 1423.92, "text": " try to not skim and be like oh yeah yeah yeah okay I gotta go to go to go to go" }, { "end": 1438.4, "start": 1428.96, "text": " that's is not helpful except related work skip completely cool then a lot of" }, { "end": 1442.56, "start": 1438.4, "text": " times in this paper now is the the model and this is the section I'm actually" }, { "end": 1448.56, "start": 1442.56, "text": " interested in right so I read very very closely here and then I find out what" }, { "end": 1455.04, "start": 1448.56, "text": " their their loss is all about and again I stress read these things and" }, { "end": 1463.84, "start": 1455.04, "text": " understand them right sometimes it's hard but if you're if you're confused" }, { "end": 1469.32, "start": 1463.84, "text": " that means you either they've done a bad job or they made a mistake or that 
you" }, { "end": 1473.44, "start": 1469.32, "text": " haven't understood something if you can't understand the sentence try to" }, { "end": 1479.24, "start": 1473.44, "text": " read on maybe it's clarified later and then you know go back but again do not" }, { "end": 1485.96, "start": 1479.24, "text": " do not like just start a lot of times when I read paper previously like I" }, { "end": 1490.2, "start": 1485.96, "text": " wouldn't understand something quite well yet and then I would be like oh yeah" }, { "end": 1494.96, "start": 1490.2, "text": " yeah yeah and then I noticed that I start skipping and skimming more and" }, { "end": 1499.4, "start": 1494.96, "text": " more because that would you know pop up again and again and I wouldn't understand" }, { "end": 1503.6000000000001, "start": 1499.4, "text": " it again and again and then at the end I would just be kind of glancing at the" }, { "end": 1507.64, "start": 1503.6000000000001, "text": " paper and I don't want to do that right here so I want to read every sentence" }, { "end": 1515.3200000000002, "start": 1507.64, "text": " and understand it okay so here then I find out about the loss and then I if I" }, { "end": 1521.1200000000001, "start": 1515.3200000000002, "text": " don't know something here then I'll go and look it up on maybe on Wikipedia or" }, { "end": 1525.4, "start": 1521.1200000000001, "text": " something like this now I don't need to on actually I don't need to understand" }, { "end": 1530.72, "start": 1525.4, "text": " every single part of it right that's maybe I should correct myself so for" }, { "end": 1536.64, "start": 1530.72, "text": " example this bounding box loss here they talk about the second part of the max" }, { "end": 1540.16, "start": 1536.64, "text": " and question Hungarian possible is this box loss that scores bounding boxes" }, { "end": 1543.96, "start": 1540.16, "text": " unlike many detectors that do box prediction with some initiality yada yada" }, { "end": 1548.4, "start": 1543.96, "text": " yada they say the most commonly used L1 loss will have different scales for a" }, { "end": 1553.24, "start": 1548.4, "text": " small so here they basically talk about how they mix the losses they see overall" }, { "end": 1558.76, "start": 1553.24, "text": " our box losses that defined as this and this now I haven't I don't know what" }, { "end": 1563.76, "start": 1558.76, "text": " these losses are I just assume there's some bounding box losses so when I it's" }, { "end": 1568.4, "start": 1563.76, "text": " not true when I say understand everything understand the things that" }, { "end": 1574.48, "start": 1568.4, "text": " are integral to the story of the paper right how exactly they compute bounding" }, { "end": 1578.8, "start": 1574.48, "text": " box losses at this point I don't care I just assume that there's some loss that" }, { "end": 1584.8, "start": 1578.8, "text": " I can back propagate right I what is important is that they do this" }, { "end": 1589.56, "start": 1584.8, "text": " Hungarian matching thing right as soon as I get that I'm like ah that was this" }, { "end": 1597.28, "start": 1589.56, "text": " you know this this thing no this thing up here this thing this with the matching" }, { "end": 1602.68, "start": 1597.28, "text": " thing now I get it now I know there are always the same amount of boxes here and" }, { "end": 1607.6, "start": 1602.68, "text": " there are always the same amount of labels here and all we need to do is" }, { "end": 1612.84, "start": 1607.6, "text": " somehow match them 
and I immediately think why is that relevant oh because" }, { "end": 1617.08, "start": 1612.84, "text": " when something is already matched to an object some other thing cannot be" }, { "end": 1621.6399999999999, "start": 1617.08, "text": " matched to the same object and that's how we you know prevent the fact that" }, { "end": 1628.12, "start": 1621.6399999999999, "text": " all the things predict the same thing right and so that immediately becomes" }, { "end": 1633.6, "start": 1628.12, "text": " clear and as I said there is usually like one or two ideas in a paper I don't" }, { "end": 1638.52, "start": 1633.6, "text": " assume or I don't care what their exact loss function is because I've sort of" }, { "end": 1644.28, "start": 1638.52, "text": " gotten the idea up here of what the loss is about all right so I hope that's" }, { "end": 1649.4399999999998, "start": 1644.28, "text": " clear under very closely read the things and understand the things that are" }, { "end": 1655.84, "start": 1649.4399999999998, "text": " necessary for the story if you find if you think something is not necessary for" }, { "end": 1659.12, "start": 1655.84, "text": " the story and then later end up not understanding that maybe come back and" }, { "end": 1666.12, "start": 1659.12, "text": " you know read it again in any case I would I would rather I would rather skip" }, { "end": 1671.12, "start": 1666.12, "text": " something and assume it's not necessary if I think so and then come back then" }, { "end": 1676.8799999999999, "start": 1671.12, "text": " trying to understand every everything but the things I do read I try to" }, { "end": 1685.4799999999998, "start": 1676.8799999999999, "text": " understand thoroughly okay then there's the architecture okay and that again I" }, { "end": 1691.08, "start": 1685.48, "text": " read closely and get backbone okay transformer encoder okay and now I" }, { "end": 1698.32, "start": 1691.08, "text": " understand much more closely decoder okay and here I get now finally I get" }, { "end": 1705.3600000000001, "start": 1698.32, "text": " what this is about decodes and objects in parallel yada yada yada these input" }, { "end": 1708.76, "start": 1705.3600000000001, "text": " embeddings are learned positional encodings that we refer to as object" }, { "end": 1713, "start": 1708.76, "text": " queries and similarly to the encoder we add them to the input at each attention" }, { "end": 1718.28, "start": 1713, "text": " layer so now they name I've already seen these object queries here and the only" }, { "end": 1723.16, "start": 1718.28, "text": " word I actually need from this sentence are learned the fact that they're" }, { "end": 1727.88, "start": 1723.16, "text": " positional encodings I just kind of ignore as soon as they say learned I know" }, { "end": 1733.44, "start": 1727.88, "text": " aha these things here are learned they have actually they're always the same" }, { "end": 1738.84, "start": 1733.44, "text": " for each of the images they're just overall learned okay so now I feel I" }, { "end": 1748.24, "start": 1738.84, "text": " understand the entire model and yeah so they then they say auxiliary decoding" }, { "end": 1752.84, "start": 1748.24, "text": " losses and this sometimes you have to pay attention to like auxiliary auxiliary" }, { "end": 1759.12, "start": 1752.84, "text": " things because those are the the things that here they say explicitly we found" }, { "end": 1765.6399999999999, "start": 1759.12, "text": " helpful to use auxiliary losses sometimes they they 
won't say why they" }, { "end": 1770.96, "start": 1765.64, "text": " did it they'll just say our loss consists of three things and you know if" }, { "end": 1774.2800000000002, "start": 1770.96, "text": " you look at the three things only one of the things is really a part of their" }, { "end": 1778.92, "start": 1774.2800000000002, "text": " story so far and that you should immediately conclude that they've put in" }, { "end": 1783.96, "start": 1778.92, "text": " the other things because they tried it and it didn't work right so you can also" }, { "end": 1787.96, "start": 1783.96, "text": " kind of get an estimate of the brittleness and so on of the system in" }, { "end": 1793, "start": 1787.96, "text": " that you see how many unnecessary things are there or how many things are not" }, { "end": 1798.12, "start": 1793, "text": " straightforward how many things are the easiest thing that you would do when you" }, { "end": 1805.32, "start": 1798.12, "text": " would go about and do what they did okay so then you this concludes this model or" }, { "end": 1809.24, "start": 1805.32, "text": " method usually this section is called like method or model or something like" }, { "end": 1815.6, "start": 1809.24, "text": " this and you go to experiments now the main question I have so far or I have" }, { "end": 1819.92, "start": 1815.6, "text": " maybe I have some more questions about the model itself that I haven't been" }, { "end": 1826.4, "start": 1819.92, "text": " able to pick up from this section which is not the case here but I simply keep" }, { "end": 1833.28, "start": 1826.4, "text": " those questions in mind and see whether they are resolved later right so I keep" }, { "end": 1838.96, "start": 1833.28, "text": " an awareness of what I don't understand but from here on my main issue is are" }, { "end": 1845.3200000000002, "start": 1838.96, "text": " they demonstrating that their story works right so they're here they're" }, { "end": 1852.72, "start": 1845.32, "text": " they're proposing a loss and a model and in my mind they now need to convince me" }, { "end": 1859.54, "start": 1852.72, "text": " that that works and that's that's it's not as easy as simply to show me some" }, { "end": 1864.82, "start": 1859.54, "text": " numbers that they are good at some benchmark they need to show me that they" }, { "end": 1872.48, "start": 1864.82, "text": " get those numbers because of what they claim so here they claim well okay they" }, { "end": 1876.32, "start": 1872.48, "text": " propose a new they propose a new architecture so what they need to" }, { "end": 1882.08, "start": 1876.32, "text": " convince me of is that the architecture itself makes sense right but in other" }, { "end": 1888.56, "start": 1882.08, "text": " papers when when you propose like and when you say like oh we for example in" }, { "end": 1894.3600000000001, "start": 1888.56, "text": " an LSTM when you build in an attention mechanism and you claim oh we you know" }, { "end": 1900.76, "start": 1894.3600000000001, "text": " the attention mechanism can look back at the source sequence in one step then you" }, { "end": 1905.36, "start": 1900.76, "text": " need to convince me that that actually happens right so you need to not only do" }, { "end": 1909.84, "start": 1905.36, "text": " you need to perform well you need to convince me that you perform well because" }, { "end": 1917, "start": 1909.84, "text": " of what you claim your model does right so and that's often difficult and I" }, { "end": 1922.36, "start": 1917, "text": " 
specifically look out in the experiments for usually the question is like where" }, { "end": 1928.96, "start": 1922.36, "text": " are they trying to bullshit me right where are they trying to are or are they" }, { "end": 1933.88, "start": 1928.96, "text": " trying to bullshit me are they trying to cover up the fact that something doesn't" }, { "end": 1938.3600000000001, "start": 1933.88, "text": " work and all the experiments are always in the best light possible of course and" }, { "end": 1943.44, "start": 1938.3600000000001, "text": " you have to keep that in mind but a lot of times you can also already see from" }, { "end": 1951.08, "start": 1943.44, "text": " the experiments that okay are they doing something weird are they not showing me" }, { "end": 1956.52, "start": 1951.08, "text": " some obvious experiment or and that's a lot of times I guess is there an easier" }, { "end": 1961.96, "start": 1956.52, "text": " explanation for why they get the results that they get other than their" }, { "end": 1967.08, "start": 1961.96, "text": " explanation right and it is it is their job to convince you that their" }, { "end": 1972.6, "start": 1967.08, "text": " explanation is the correct one for these numbers and especially if there is an" }, { "end": 1978.12, "start": 1972.6, "text": " easier one that they haven't excluded and then I don't believe the experiments" }, { "end": 1982.92, "start": 1978.12, "text": " if that's the case right if there is an easier explanation for the effect I'm" }, { "end": 1989.0800000000002, "start": 1982.92, "text": " I'm very skeptical but some papers have an easier job here than other papers so" }, { "end": 1995.92, "start": 1989.0800000000002, "text": " in this paper they basically show results on a on a on a task and since" }, { "end": 2001.72, "start": 1995.92, "text": " their paper is about hey our pipeline is just easier than other pipelines what" }, { "end": 2005.16, "start": 2001.72, "text": " they first of all need to do is they just need to like match the numbers of" }, { "end": 2010.5600000000002, "start": 2005.16, "text": " other pipelines and here I see that okay in these results often you have maybe a" }, { "end": 2015.8, "start": 2010.56, "text": " table or something here you see like this their model other models and their" }, { "end": 2022.44, "start": 2015.8, "text": " model is the best model in a lot of cases now if the best thing is of course" }, { "end": 2026.8, "start": 2022.44, "text": " if their model throughout is the best the worst thing is if it's like" }, { "end": 2032.04, "start": 2026.8, "text": " scattered like this even if their model is the best but in every single" }, { "end": 2037.48, "start": 2032.04, "text": " benchmark a different configuration of their model is the best that's that's" }, { "end": 2042.88, "start": 2037.48, "text": " sort of a bad sign unless they can explicitly explain why that is and it's" }, { "end": 2048.96, "start": 2042.88, "text": " also not that good of a sign if these things are spread out like this like" }, { "end": 2053.72, "start": 2048.96, "text": " sometimes this baseline is good sometimes their model is better and so on" }, { "end": 2057.68, "start": 2053.72, "text": " so pay attention to that now in this paper it doesn't matter so much that's" }, { "end": 2062.04, "start": 2057.68, "text": " actually fine because what they're trying to show is that their model is on" }, { "end": 2068.24, "start": 2062.04, "text": " par and way easier and they've already made the case in what way it is easier" 
}, { "end": 2072.64, "start": 2068.24, "text": " it's easier in terms of architecture if they were to say it's much faster then" }, { "end": 2078.92, "start": 2072.64, "text": " after that I would expect you know an experiment in speed while these numbers" }, { "end": 2082.96, "start": 2078.92, "text": " are matched so but since they say it's it's easier I've already seen the" }, { "end": 2087.7799999999997, "start": 2082.96, "text": " architecture I'm convinced of that now that they show okay our numbers match" }, { "end": 2093.6400000000003, "start": 2087.78, "text": " or actually I'm surprised they even outperform a lot of times then I'm quite" }, { "end": 2098.5600000000004, "start": 2093.6400000000003, "text": " happy with these experiments so also look for differences between numbers and" }, { "end": 2104.6400000000003, "start": 2098.5600000000004, "text": " the spread of numbers now it's not easy to say what if like point one is a big" }, { "end": 2108.7200000000003, "start": 2104.6400000000003, "text": " or a small difference that depends on the task but if you know pay attention" }, { "end": 2113.1200000000003, "start": 2108.7200000000003, "text": " to these things pay attention to the fact that these results are noisy and" }, { "end": 2117.8399999999997, "start": 2113.12, "text": " oftentimes there is a lot more hyper parameter tuning going into the model of" }, { "end": 2122.7599999999998, "start": 2117.8399999999997, "text": " the paper then into the baseline models right you want to make your look your" }, { "end": 2128.44, "start": 2122.7599999999998, "text": " stuff look as good as possible and here is a little bit where the institutional" }, { "end": 2133.8399999999997, "start": 2128.44, "text": " credibility of someone like Facebook comes in in that I tend to believe their" }, { "end": 2141.52, "start": 2133.8399999999997, "text": " results a bit more than other results not mega but a bit more yeah also look at" }, { "end": 2145.92, "start": 2141.52, "text": " patterns that they don't point out in the text so if there is like a pattern" }, { "end": 2149.7599999999998, "start": 2145.92, "text": " if you see like an interaction between the number of parameters and the score" }, { "end": 2155.68, "start": 2149.7599999999998, "text": " or something like this just try to be on the lookout of that and see if you can" }, { "end": 2161.88, "start": 2155.68, "text": " spot something that you think or think about whether that makes sense or not in" }, { "end": 2171.6400000000003, "start": 2161.88, "text": " what your hypothesis would be so here we go on and okay then they go into" }, { "end": 2176.6400000000003, "start": 2171.6400000000003, "text": " ablations and a lot of a lot of these papers do ablations and I generally" }, { "end": 2181.76, "start": 2176.6400000000003, "text": " appreciate that so here they visualize that the attention mechanism in their" }, { "end": 2187.56, "start": 2181.76, "text": " model actually refers to different instances right encoder self-attentions" }, { "end": 2191.4, "start": 2187.56, "text": " for a set of reference points the encoder is able to separate individual" }, { "end": 2198.08, "start": 2191.4, "text": " instances and you can see that pretty clearly right here where and even here" }, { "end": 2203.32, "start": 2198.08, "text": " with the overlapping cows and this is the sort of experiment that I would" }, { "end": 2207.36, "start": 2203.32, "text": " expect that actually convinces me that their architecture does what it says" }, { "end": 
2213.6, "start": 2207.36, "text": " that it does right and something like this where you see like totally" }, { "end": 2218.28, "start": 2213.6, "text": " overlapping things with the attention of the individual things visualized so" }, { "end": 2223.1200000000003, "start": 2218.28, "text": " telling me like especially this one right here the the foot of the back" }, { "end": 2227.36, "start": 2223.1200000000003, "text": " elephant actually being focused by the attention of the bounding box of the" }, { "end": 2232.32, "start": 2227.36, "text": " back elephant that's the sort of experiment that convinces me that their" }, { "end": 2238.96, "start": 2232.32, "text": " claims like that their numbers really come from what they claim it comes from" }, { "end": 2244.96, "start": 2238.96, "text": " okay so at the end of the experimental section you should always ask yourself" }, { "end": 2251.28, "start": 2244.96, "text": " have they really convinced me that their story is true right that the" }, { "end": 2255.36, "start": 2251.28, "text": " improvement or when egg whenever they get an improvement or whatever they get" }, { "end": 2262.68, "start": 2255.36, "text": " what is is due to the story that they want to sell me or could there be like" }, { "end": 2268.48, "start": 2262.68, "text": " an easier explanation or does something not fit is like are there are the" }, { "end": 2273.7200000000003, "start": 2268.48, "text": " experiments different than from what you would expect here okay so these are" }, { "end": 2278.2799999999997, "start": 2273.72, "text": " these are my main questions are they are they convincing me of their story it's" }, { "end": 2284.3599999999997, "start": 2278.2799999999997, "text": " not do they have state-of-the-art numbers I don't care I don't care even" }, { "end": 2290.9599999999996, "start": 2284.3599999999997, "text": " though like sometimes so there is a bit of a catch I I don't care about state" }, { "end": 2297.2799999999997, "start": 2290.9599999999996, "text": " of the art numbers now let's say you have a table like this and you have a" }, { "end": 2301.6, "start": 2297.2799999999997, "text": " computer vision model and one of the models is like on the C for 10 data set" }, { "end": 2308.68, "start": 2301.6, "text": " now if your baseline model has like a 91 92 percent accuracy on C for 10 when I" }, { "end": 2314.56, "start": 2308.68, "text": " know the state-of-the-art is 96 I don't care right I know like I've done C for 10" }, { "end": 2321.4, "start": 2314.56, "text": " I know with like I don't know five six layers CNN you can reach these 91 92 93" }, { "end": 2326.48, "start": 2321.4, "text": " percent accuracy and to get to the 96 97 you'd actually be like in the region of" }, { "end": 2334.12, "start": 2326.48, "text": " a wide resinette and whatnot so it I I know that even though you're a few" }, { "end": 2340.2400000000002, "start": 2334.12, "text": " points behind state-of-the-art I know you know this this is valid still so I" }, { "end": 2348.4, "start": 2340.2400000000002, "text": " don't care but if you were to be like at 80 percent accuracy on C for 10 then I" }, { "end": 2354.16, "start": 2348.4, "text": " then I get a bit like hmm I like it's pretty easy to get to 90 percent plus" }, { "end": 2361.56, "start": 2354.16, "text": " with like a standard CNN so there I immediately start to wonder why is there" }, { "end": 2365.8399999999997, "start": 2361.56, "text": " an explanation now this could be like a theoretical paper that says oh we" }, { 
"end": 2371.52, "start": 2365.8399999999997, "text": " investigate MLPs and that's why we only get that number so that's that would be" }, { "end": 2377.12, "start": 2371.52, "text": " fine but if something is out of the ordinary like this then I pay attention" }, { "end": 2381.8799999999997, "start": 2377.12, "text": " but never because something isn't like the latest and greatest state-of-the-art" }, { "end": 2388.84, "start": 2381.88, "text": " that's just dumb okay and also if only evaluate what the paper claims it does" }, { "end": 2394.04, "start": 2388.84, "text": " right if the paper says we want to show that we are on par with current models" }, { "end": 2400.48, "start": 2394.04, "text": " then don't be mad if the paper doesn't outperform these models they didn't" }, { "end": 2406.48, "start": 2400.48, "text": " claim that right so yeah so after these ablations I'm actually pretty happy" }, { "end": 2413.28, "start": 2406.48, "text": " right here with the results and this right here when I saw this I didn't I" }, { "end": 2419, "start": 2413.28, "text": " didn't expect that but I read the experiment description that these are" }, { "end": 2423.44, "start": 2419, "text": " these different learned object queries and what they do and that gave me an" }, { "end": 2428.96, "start": 2423.44, "text": " increased understanding of how these object queries actually work right so at" }, { "end": 2433.44, "start": 2428.96, "text": " that point I still had like a vague I knew that these are learned but reading" }, { "end": 2438.08, "start": 2433.44, "text": " this and sort of looking at it studying it a bit I was like oh okay then I" }, { "end": 2443.64, "start": 2438.08, "text": " understood even better what they are so again when I say understand everything" }, { "end": 2450.2000000000003, "start": 2443.64, "text": " in the method section you can still have questions and but you just have to keep" }, { "end": 2456.88, "start": 2450.2000000000003, "text": " it in mind for later and then here I go on and there's this DETR for panoptic" }, { "end": 2463.2000000000003, "start": 2456.88, "text": " segmentation and they here they propose like a new model so I first look at it" }, { "end": 2467.2799999999997, "start": 2463.2, "text": " and I'm like okay they propose a new model they can do stuff like this now" }, { "end": 2472.56, "start": 2467.2799999999997, "text": " this is not object detection and again I'm not sure is this like a is this like" }, { "end": 2479.12, "start": 2472.56, "text": " a an add-on to the method or is was was this up here just an intermediate step" }, { "end": 2485.48, "start": 2479.12, "text": " to this and honestly after reading that I still wasn't sure it seems like" }, { "end": 2489.8399999999997, "start": 2485.48, "text": " something in between of course the paper is also a bit longer than other papers" }, { "end": 2496.92, "start": 2489.84, "text": " it just it seems it's too long for just being a side note but it's too short for" }, { "end": 2501.8, "start": 2496.92, "text": " being its own thing so that was just a bit weird and I treated is as just like" }, { "end": 2507.6400000000003, "start": 2501.8, "text": " a oh we can also do this with our model but I didn't pay like too much attention" }, { "end": 2516.36, "start": 2507.6400000000003, "text": " to that okay so at the end I you know look at conclusions now the conclusions" }, { "end": 2523.88, "start": 2516.36, "text": " of a paper are much much often they are not nearly as informative as the" }, { "end": 
2529.28, "start": 2523.88, "text": " introduction the conclusions they all often tend to be very generic and kind" }, { "end": 2534.76, "start": 2529.28, "text": " of hedging a bit against criticisms saying what would be up for future work" }, { "end": 2541, "start": 2534.76, "text": " which is again hedging against criticism because you simply say well we didn't do" }, { "end": 2547.08, "start": 2541, "text": " this that's future work yes so again I read it but I don't really pay attention" }, { "end": 2553.32, "start": 2547.08, "text": " to it and then I gloss over the abstract I just would kind of scroll through the" }, { "end": 2558.8, "start": 2553.32, "text": " abstract if there's something that catches my eye I would look at it and if" }, { "end": 2565.32, "start": 2558.8, "text": " not then not and then I basically go to the start and whenever I didn't" }, { "end": 2570.68, "start": 2565.32, "text": " understand something I go back I look at it again and I try to think are all my" }, { "end": 2575, "start": 2570.68, "text": " questions answered and have they sufficiently convinced me that their" }, { "end": 2582.44, "start": 2575, "text": " story is the thing that really has the effect right here and then if I now were" }, { "end": 2588.8399999999997, "start": 2582.44, "text": " to make a video of this I've often found it useful to just put the paper away for" }, { "end": 2593.48, "start": 2588.8399999999997, "text": " a while and it's I usually get the best results when I read the paper the day" }, { "end": 2598.72, "start": 2593.48, "text": " before and then make a video the day after or if not I'll just you know put" }, { "end": 2603.7599999999998, "start": 2598.72, "text": " it away do something else do some email responding programming going outside" }, { "end": 2610.6, "start": 2603.7599999999998, "text": " eating lunch just some kind of a break between first read or between your" }, { "end": 2616.8399999999997, "start": 2610.6, "text": " first couple of reads and riff just I don't even think about the paper I just" }, { "end": 2623.4399999999996, "start": 2616.8399999999997, "text": " kind of it's just in the subconscious it kind of brews right and I happen to" }, { "end": 2626.52, "start": 2623.4399999999996, "text": " think about the paper every now and then but I don't make a conscious effort to be" }, { "end": 2630.7599999999998, "start": 2626.52, "text": " like oh how am I gonna explain this and so on but I just found the the worst" }, { "end": 2635.64, "start": 2630.7599999999998, "text": " videos are the ones where I immediately make the video after reading a paper" }, { "end": 2642.08, "start": 2635.64, "text": " and I've just discovered that if I kind of take a break and then I look at it" }, { "end": 2646.8, "start": 2642.08, "text": " again right I look I don't read it fully again but I if I have if I have the" }, { "end": 2650.16, "start": 2646.8, "text": " feeling I've understood it I don't read it fully again but I just kind of look" }, { "end": 2655.56, "start": 2650.16, "text": " at it and go again through the story and I think that's even if you you know want" }, { "end": 2660.16, "start": 2655.56, "text": " to if you want to talk about a paper in a reading group or tell you know explain" }, { "end": 2666.32, "start": 2660.16, "text": " it to your friends or whatnot this is often very useful just put it away for" }, { "end": 2673.68, "start": 2666.32, "text": " a while let it mellow and I find that helps a lot okay that was my process of" }, { "end": 2680, 
"start": 2673.68, "text": " reading this particular paper now we again this this is a high quality paper" }, { "end": 2685.2, "start": 2680, "text": " so it's I find it's a pretty easy read in that I simply need to understand what" }, { "end": 2690.8799999999997, "start": 2685.2, "text": " they did and I'm pretty happy with their experiments I maybe next time I can find" }, { "end": 2696.48, "start": 2690.8799999999997, "text": " an experiment or a paper where I'm initially more skeptical and not as happy" }, { "end": 2703.16, "start": 2696.48, "text": " with what I find but yeah let me know if you enjoyed this or if you would like to" }, { "end": 2708.3599999999997, "start": 2703.16, "text": " see any other explanation I don't exactly know if this is what you expected" }, { "end": 2714.18, "start": 2708.3599999999997, "text": " from a video like this so let me know maybe I have misunderstood you" }, { "end": 2720.08, "start": 2714.18, "text": " completely or it's way too long way too detailed or way too undetailed yeah" }, { "end": 2749.2, "start": 2720.08, "text": " leave me a comment and I'll see you next time bye bye" } ]
qSArFEIoSbo
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
RepNet: Counting Out Time - Class Agnostic Video Repetition Counting in the Wild (Paper Explained)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "vision", "counting", "self-similarity", "temporal", "frames", "video", "repeating", "lines", "transformer", "attention", "cnn", "convolutional neural network", "repetitions", "periodicity", "period", "repeat", "actions", "kinetics", "countix" ]
Counting repeated actions in a video is one of the easiest tasks for humans, yet remains incredibly hard for machines. RepNet achieves state-of-the-art by creating an information bottleneck in the form of a temporal self-similarity matrix, relating video frames to each other in a way that forces the model to surface the information relevant for counting. Along with that, the authors produce a new dataset for evaluating counting models. OUTLINE: 0:00 - Intro & Overview 2:30 - Problem Statement 5:15 - Output & Loss 6:25 - Per-Frame Embeddings 11:20 - Temporal Self-Similarity Matrix 19:00 - Periodicity Predictor 25:50 - Architecture Recap 27:00 - Synthetic Dataset 30:15 - Countix Dataset 31:10 - Experiments 33:35 - Applications 35:30 - Conclusion & Comments Paper Website: https://sites.google.com/view/repnet Colab: https://colab.research.google.com/github/google-research/google-research/blob/master/repnet/repnet_colab.ipynb Abstract: We present an approach for estimating the period with which an action is repeated in a video. The crux of the approach lies in constraining the period prediction module to use temporal self-similarity as an intermediate representation bottleneck that allows generalization to unseen repetitions in videos in the wild. We train this model, called RepNet, with a synthetic dataset that is generated from a large unlabeled video collection by sampling short clips of varying lengths and repeating them with different periods and counts. This combination of synthetic data and a powerful yet constrained model, allows us to predict periods in a class-agnostic fashion. Our model substantially exceeds the state of the art performance on existing periodicity (PERTUBE) and repetition counting (QUVA) benchmarks. We also collect a new challenging dataset called Countix (~90 times larger than existing datasets) which captures the challenges of repetition counting in real-world videos. Authors: Debidatta Dwibedi, Yusuf Aytar, Jonathan Tompson, Pierre Sermanet, Andrew Zisserman Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://discord.gg/4H8xxDF BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Hi there! Check out these videos on the top. Each one kind of contains a repeating action. So on the left you see someone doing jumping jacks in a fairly regular pattern. In the middle it gets a bit more difficult because what you see is a tennis ball bouncing and it bounces faster and faster and faster as time goes on. On the right you see that there is a short intro sequence before the repeating action, the person shoveling the cement, is displayed. So the goal here is to build an AI that can detect that a repeating action is happening and if it detects so that it can count how often this repeating action is happening. You can already see the difficulties here: it's not only the recognition itself but the fact that the repeating actions can be different and can be of different length and cannot always look the same and so on. So this paper uses these temporal self-similarity matrices that you see at the bottom here to achieve this and we're going to explore how that's happening. So the paper we'll look at is called Counting Out Time: Class Agnostic Video Repetition Counting in the Wild by Debidatta Dwibedi, Yusuf Aytar, Jonathan Tompson, Pierre Sermanet and Andrew Zisserman of Google Research and DeepMind. So as I already said this paper detects repeating actions and is able to count them and on a high level what they do is they encode the video using convolutional networks then they build these temporal self-similarity matrices between the frames in order to detect the repetitions and they decode that into the predictions using another neural network. This is all trained end to end and they also make a new data set for this task. So that's the high level. If you want to find out how exactly that's done I invite you to stick around because we'll go into the paper. If you like content like this don't forget to share it out, leave a like and tell me what you think of the paper in the comments. I do read the comments so yeah I'm very happy to read what you all think about it. Okay so as we already said they say we present an approach for estimating the period with which an action is repeated in a video and that's actually understating what the problem is here. The problem is manyfold. As you can see on the right here even if you don't get what this self-similarity matrix is yet the outputs that you want are at the bottom. So what you want is first of all a per frame periodicity prediction. So that means that for each frame you want to know is there even a repeating action happening or not in that particular frame. You can see here at the beginning at the end of the video there is no repeating action and then in the middle there is a repeating action. So that's the first thing you want to know. The second thing is this per frame period length prediction. So for each frame that is part of a repeating action you want to know what is the period length of that repeating action and that can change throughout the video. So you need it per frame and once you have it per frame you can actually count the number of repetitions. So those are two problems already. The third problem that this paper solves is that there is no adequate data set to train a model like this it seems. For example this model right here this QUVA data set I believe has 100 data points and these are meant for testing. So you would build your system somewhere and then you would test it on those 100 data points. But they claim not even those are large and diverse enough for these systems to be evaluated. So they build.
We also collect a new challenging data set called Countix which is 90 times larger than existing data sets which captures the challenges of repetition counting in real world videos and that actually consists of a train and test split. So you can also train a system using it. Let's dive into the architecture. I think this paper is a very very very cool example of even though we're in this deep learning paradigm where you just throw neural networks at a problem it's a very cool example that you can still achieve a lot by smartly constructing this because we used to achieve a lot by smartly constructing features and so on. In this case the goal is achieved by smartly constructing the architecture of the neural network itself to give you back a good performance on the particular task right here. Okay so if that tablet lets me actually scroll around let's go to the architecture. So figure two shows the architecture in more detail. So we'll go through it from the beginning to the end. Actually let's go let's go to the end so you know what is supposed to happen. So for each frame in this video what we'll need is a period length and a periodicity. These are two so the bottom is a binary variable and the top is a it's actually a number but it is predicted kind of binned as a classification task. It doesn't really matter we need two outputs that we can compare with the labels right. In this case the videos are of length 64 so there's 64 frames per video and for each of those frames we want a period length and for each of those frames we want a periodicity binary prediction. And that as I said we compare it with our labels and then we can calculate a loss. So this is the loss the loss is at the end comparing these two labels and then everything else is trained using back propagation on that loss. Okay so now with that in mind let's go to the beginning. So the video is taken and fed through an encoder in order to produce these per frame embeddings. So we want an embedding for each frame that means for each of the 64 frames we want to obtain one vector of length 512 that describes that particular frame in terms that the model can understand. And we do that using an encoder. Now the encoder it has a bunch of parts to it. It's not just a blob as you see right here. So the encoder consists of three things. First of all there's this convolutional feature extractor which is a ResNet 50 architecture. It's simply a convolution and you let the convolutional neural network run on each frame independently. This is simply a feature extractor from images right like you know it from any other image processing task. But of course here we have a video. So it would be nice if the frames knew something about the other frames right. Especially if you think of something like a jumping jack. If you are in this position right here. It doesn't tell you everything about that video frame. If you consider the frame before it and maybe the frame before it is this and maybe the frame after it is that, you can clearly see that the hands or the arms are in an upward motion. So the next step of the encoder tries to integrate that temporal information into the embeddings and that is achieved via a 3D convolution. So once we process each frame individually then we feed it into one layer through a layer of 3D convolution to add local temporal information to the per frame features. So if you don't know what a 3D convolution is, I've already drawn it here.
So in a 2D convolution what you want to do is you want to have this filter right here which is a 2D filter for each of the channels and you slide it across the image like this. And you can have multiple channels of the input image right here and you can actually do this multiple times which corresponds to multiple output channels of the filters but the actual convolution is happening in two dimensions. So the sliding is across the width and the height of the image. In contrast to that if you have a 3D convolution and you have the same input stack of images and now this we have to you have to pay attention here. This right this stack right here is the individual channels of one image so of one video frame. This stack right here that I'm drawing these are the video frames stacked. Now each of these stacked video frames can have multiple channels resulting from its 2D convolution or even just RGB. So I can't really draw in four dimensions but now we stack the video frames and now our kernel our filter will be also in the direction of the video frame. So I don't know if this is really recognizable if I draw it like this but as you can see the kernel is not only 2D but 3D and now the sliding so if we have actually more than that the sliding is not only done into the direction of height and width but also into the direction of depth so that each of the frames each of the video frames right here can incorporate information from its immediate neighbors in case of a 3x3x3 filter or even more neighbors but here we use a 3x3x3 filter. Okay so that's that's how we obtain these embeddings right here and at the end there is a dimension reduction like a max pooling or something but ultimately what you'll end up with is for each of the 64 video frames you get one vector and that vector mainly describes that particular video frame like if you consider the green one here but also contains information from the video frame before it and from the video frame after it. Okay so that's sort of and and that's that's the so the temporal convolution is not to detect the periodic actions because it's just one frame into the future and into the past it is more like what we said here in order to give you extra information of what's happening in a particular frame because especially for periodicity it's actually important if the arms are going up or down. Cool so then comes the heart of the architecture. The heart of the architecture is this temporal self-similarity matrix. Now what does this do? This is relating the frames to each other and important to note here this is just a single channel image so there is no other ones of these. For the entire video frame sequence you have one 64x64 matrix and all the signal has to go through this matrix right everything from here is only through this matrix there's no residual connections there is no skip connections that's all that's your information bottleneck and by having a bottleneck like this these the authors here force the model to basically do a good job at making this temporal self-similarity happen if it wants to achieve a low loss and that's what I mean by you can achieve a lot by having a smart architecture. 
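To make the step just described concrete, here is a minimal, hypothetical PyTorch sketch of the encoder and the self-similarity computation. This is not the authors' code: the per-frame ResNet-50 features, the single 3D convolution, the 512-dimensional per-frame embeddings and the 64x64 matrix follow the video, while the spatial pooling and the exact layer shapes are assumptions; the similarity metric (negative squared distance followed by a row-wise softmax) is the one the video discusses just below.

```python
# Hedged sketch of the encoder and the temporal self-similarity step
# (not the authors' code; shapes follow the video: T = 64 frames, 512 dims).
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import resnet50

class EncoderSketch(nn.Module):
    def __init__(self, emb_dim=512):
        super().__init__()
        cnn = resnet50()
        # per-frame 2D feature extractor: everything up to the last conv block
        self.frame_features = nn.Sequential(*list(cnn.children())[:-2])
        # one 3x3x3 convolution so each frame mixes with its immediate neighbours
        self.temporal = nn.Conv3d(2048, emb_dim, kernel_size=3, padding=1)

    def forward(self, video):                   # video: (B, T, 3, H, W)
        B, T = video.shape[:2]
        x = self.frame_features(video.flatten(0, 1))        # (B*T, 2048, h, w)
        x = x.unflatten(0, (B, T)).permute(0, 2, 1, 3, 4)   # (B, 2048, T, h, w)
        x = self.temporal(x).mean(dim=(3, 4))               # pool space: (B, 512, T)
        return x.transpose(1, 2)                # one 512-dim embedding per frame

def self_similarity(emb):
    # negative squared pairwise distances, then a row-wise softmax,
    # giving one single-channel T x T map per video: emb (B, T, 512) -> (B, T, T)
    return F.softmax(-torch.cdist(emb, emb) ** 2, dim=-1)
```

Because everything downstream sees only this one 64 by 64 map, training end to end pushes the encoder to surface exactly the features that make repetitions show up as the line patterns discussed next.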
This temporal self-similarity matrix is actually not learned this can be computed deterministically from the embeddings right here so what you do is you each row here corresponds to one frame so you take each frame that corresponds to a row and you calculate simply the distance to or the similarity with each of the other frames so this is as you can see it's 64x64 so frame i here is simply compared to each of the other frames j and depending on whether that embedding is very similar to the embedding of frame j this number is going to be high or low. Now after that you do a softmax across here such that this is going to be like a distribution and not just raw numbers but that's ultimately not that important so what you can see is that the diagonal is very prominent and that makes sense because technically the diagonal is always one right if for example if you use the inner product as a similarity the diagonal should always be one but here we have the softmax so it's not but ultimately we can say that any frame should be very similar to its is going to be very similar to itself so that's why but then you can see right here there's a pattern emerging and that pattern is these diagonal lines in this direction as you can see and what what does that mean they actually have a larger version of this down here so what does the diagonal pattern mean it means that so here the diagonal that's okay frame i is very similar to frame i cool but the other lines what do they mean so if i look at frame i right here it is also very similar to this frame j now this alone wouldn't mean much you know frames are similar but the line means that if i have if i look 10 frames later i plus 10 that's very similar to j plus 10 and if i now look at i plus 20 frame that's very similar to j plus 20 so this is the this is here why the the pattern is emerging of a line because if i go 10 frames into the future it's similar to the other one 10 frames into the future and so on and that means the line indicates that this entire sequence starting from i 20 frames into the future is repeated starting from j actually j is earlier here so but you get the point and if i have a bunch of these lines that means that this subsequence is repeated again and again and again throughout the video so each of these lines is basically one repetition of the sequence from the middle here at some other point in the video and that's pretty fascinating and that's these these self-similarity matrices that's what they're sort of showing now they don't use the inner product here as a self-similarity metric they actually use as you can see right here they use the negative square distance but the effect is the same so negative square distance followed by a row wise softmax operation so you could say hey we are basically done having this self-similarity matrix what we could do let's let's say we could train it we don't worry about how to train it we could simply take each row right here and we plot the intensities across that row and that's maybe you know like something like this and then there's the diagonal is a bit higher okay we could just use a heuristic to detect these bumps here basically calculate the length count the bumps and calculate the period length right that that should be pretty easy with like a simple heuristic but the authors here they want more they want to solve more problems so what are some of the problems we already saw some of the problems namely for example here is the hammer throw so the hammer throw starts out slow
and gets faster and faster and faster and you can see this pretty clearly at the lines right here namely if you go through time so you start off here and you go through time you can see that the distance to the next line here is fairly large but you go through time further the distance gets shorter you go through time further the distance gets even shorter so these pattern of lines here that's kind of converging towards that it indicates that this repeating action gets faster and faster and faster this is nice to see here at the bouncy ball example where you can see it starts out pretty slow but it gets faster and faster and here you if you if you have this full thing right here that basically means all the frames are self similar to each other which basically means if you stop the video right that's if you have 10 frames in a row the same thing the ball is just lying on the ground all of these frames will be self similar so there's probably no bouncing happening down here you can see pretty well from the pattern what happens and here in this mixing concrete example that we saw at the beginning you can see that at the beginning and at the end there's this intro and outro sequence and only in the middle is there a repeating action and that's indicated by this line pattern is only in the middle of the videos only between here and here so it's it's going to be pretty difficult to just have a heuristic that reads out the periodicity of these periodic actions and in a true deep learning fashion the the authors here oh sorry maybe you can't see that i've shifted my recording window so maybe sometimes something's out of frame and you have to yell at me if i do that please so i hope you you saw this that you have the ever the speeding up here and here they're visible in the pattern and then here you have the beginning sequence the end sequence that have no repeating pattern and the repeating pattern only emerged in the middle so the authors want to do this through of course a deep learning network they want to read out the periodicities not through a heuristic but using a deep network you know respectable that's the times we live in so what do they do first of all you have to see right here everything that happens from here as i understand it is per frame so they simply take a row of this matrix right here like this red line and that is independently pulled through to the end so there is no interaction happening anymore between the individual frame data the only interaction that happens is a little bit here at the temporal convolutions but the only real interaction between the frames is happening through the self-similarity matrix and again this is the information bottleneck that the authors force the information through everything happening from here no that's actually not right there is this convolution right here but still this is the information bottleneck you have to go through so right here we process this image using a convolution so this is an image right and we can process it using a convolutional neural network so what we do is we have a 64 by 64 image in one channel we simply up sample that not up sample but we expand the channels to 32 channels now as i said it's pretty easy to think we can just go to the end here use a conv net to produce our final 512 by so 512 embeddings we have here again 64 by 64 that we then use to predict the final result but the authors here do something different they do transformer layers in the middle but only per frame so what does it mean so here you up sample to
32 channels and then that means that one of these blocks right here one of these blocks corresponds to one row in the self-similarity matrix which corresponds to one frame and from now on so from now on i want to say what i said before from now on it's all just this one block they are independent of each other okay so you take this one block and you feed it through a transformer to arrive at your final embedding of 512 and it's probably best if we read what they say about it okay so we're given this self-similarity matrix each row is the per frame self-similarity representation and it generates two outputs the per frame period length estimation and the per frame binary periodicity classification note that both l and p are vectors and their elements are per frame predictions okay the architecture of the period predictor module can be viewed in figure two note that the predictors share a common architecture and weights until the last classification phase the shared processing pipeline starts with starts with 32 2d convolutional filters of size three by three followed by a transformer layer which uses a multi-headed attention with trainable positional embeddings in the form of a 64 length variable that is learned by training okay it's i guess the transformers learned by training and the positional embeddings are also learned by training that's fairly common we use four heads with 512 dimension in the transformer by the way if you don't know what a transformer is watch the video on attention is all you need i made one it's very popular yeah so with each head being 128 dimensions in size after the shared pipeline we have two classifiers period length classifier and periodicity classifier tau sorry this is fine this is tau each of them consists of two fully connected layers of size 512 so i guess the the the pipeline here is pretty simple the question could be why do they use a transformer and not simply another convolutional network so here they up sample the image as we saw into 32 channels and then they simply want to take one of these one of these blocks here and that corresponds a little bit so we have for one frame right what does it mean we have basically we have 64 by 32 things so the 64 things it's this one frame's temporal connection to each other frame given that you know comes from this self-similarity matrix so it kind of relates this frame that we're considering to each of the other frames and each of this each of these entries is a 32 size vector this is sort of a this is you can consider like a sequence of 64 things 64 embeddings so to use a transformer here it's pretty natural if you think of this as like a sequence transformation task i i would guess so the transformer can if there are these peaks right here like we saw right here the transformer can make very good sense of that because of course the attention mechanism from one peak it can attend to all the other peaks and can sort of relate the different peaks to each other and then determine the periodicity length whereas with a convolutional network i guess that's going to be a lot harder because of the sort of invariance built into the convolution i'm not sure maybe they also it just worked better but that's how i think about it it's that for a given frame you basically have a sequence classification or a set classification task and the attention mechanism allows you to in one single step connect each peak with each other peak or each information with each other information in this sequence all
right so at the end you have just fully connected layers again only on a per frame basis and that will give you the output and again you compare this to the label and you back prop through everything everything here is differentiable so all of this is trained to achieve minimum possible loss and because you train everything to achieve minimal possible loss you make this encoder right here which is the crucial part because the encoder must give you good embeddings which must give you a sensible self-similarity matrix right you train the encoder to encode things that are relevant for the task and that's what makes the whole thing work okay so we've gone through the architecture now the problem right here is the the data set so they also go into how they do inference they can actually do a bunch of things like play the video at different speeds and then look at what each of the predictions are so if at double speed it predicts half the period length then you can be more sure and so on so that's pretty cool but they go into another point right here and that's the data set so they produce this countix data set but also on the other hand which is something I also find very cool is they produce a synthetic data set so here they say we train with synthetic repetitions and that can be sort of I didn't know what to think of it at first I was just like huh but then it's pretty cool so if you have a video with these these are the frames of the video right so the video goes in this temporal direction what you can do is simply go here go through these frames and just repeat these frames and repeat them and repeat them and at the end you have these frames right and then you have a data set and if you if you assume that most videos do not naturally contain repeating actions right most videos are just videos they're not videos of something repeating then you can safely assume that these parts here are non repeating so and these parts here are repeating this is one of the labels that you need right the problem with synthetic data sets is always to have the labels and also you know how many there are because you can simply count the number of times that you go through it you can even make it faster slower and so on so this synthetic approach is pretty cool and especially the bottom right here because this might be kind of hacky because each time each time you jump from the end of one of those arrows to the beginning right you have kind of a hack in the video because you know it's not continuous so what you can do and this is the the bottom here you can do this reversal technique where you go to the end and then you play the frames backwards and then you play the frames forwards again backwards again forwards again and then you go out here and that gives you one continuous motion right if someone if it's simply a video of someone lifting their hand like it starts out down here and it goes here and it goes here and then if you do this technique it would go down again down again up again up again and so on so that's you know i think it's a fairly smart technique honestly now they tried this and it doesn't work super well so what they also have to do is they have to do manual camera motion augmentation so that's so camera motion augmentation it basically means that if you just do a repeating action like this it's sort of i guess it's too monotonic it doesn't really cover real videos with repeating actions so what they do is they kind of simulate a moving camera and you simulate that much like you would do
image augmentation so you can rotate the camera over time you can translate it you can scale it differently and through if you do that throughout the video and you change it around how the camera moves then that appears to work fairly well so if they now compare this on their data set they perform pretty well so for their data set they take this Kinetics data set and they crowdsource the labels and the tasks in the data set they're pretty diverse as you can see right here so you have sports like rope training mountain climbers but you have also things like playing ukulele exercising arms slicing an onion and so on and you can see that the repetition count is fairly diverse as well so from one or two repetitions per video it goes to 50 or so and the period length is also between one and five seconds though as you as i already said you don't have to you don't have to count on that because you can always play the video slower or faster and then determine other periodicities so in their experiments first of all they perform pretty well and they show that if they train on their data set and on the synthetic data set they perform better than if they just train on the synthetic or they just train on their data set they also show pretty clearly that the addition of this temporal self-similarity matrix helps tremendously you can see right here in each of these boxes is the comparison and this OBO I think is the off by one error so it kind of forgives you if you're off by one count but otherwise you get a zero if you're wrong and you can see that the self-similarity matrix helps tremendously they also compare with some other architectural choices instead of the transformer I guess yeah so I guess they just take it because it performs pretty well and they do a lot of lot of ablations but what I particularly appreciate is that they do something like this so what they do at the end once they've trained the architectures they do a 1d PCA projection of the encoder features over time now the encoder features they were 512 dimensional right this is the thing before it goes into the self-similarity matrix so those we said the encoder is the crucial part here because it needs to take the video and encode things that make them accessible to calculating the self-similarity now they do a 1d PCA so a projection into one dimension of these features and you can already see at this one dimensional projection that the periodicity here is clearly clearly visible namely for example right here every time up here is when the legs are up and every time down here is when the legs are down right here so that is very very impressive and that kind of that really shows that the model is doing what you claim what you claim that it's doing like I'm almost more interested in experiments like this than in and in these numbers right here because the numbers could always be because you've just thrown more stuff at it right so they go over a bunch of possible applications of their model so first of all you can do something like as we can see repetition counting from videos you can do periodicity detection those were the things that the model is trained to do but there's also a bunch of things that the model can now implicitly do namely something like change inspection where they say look if someone's chopping this pineapple right here then at the end of each of the repetitions there is something that changed namely the number of slices of pineapple is it bread is it I can't I think it's pineapple okay so the number of slices or
pieces right here changes so in essence this could be the base for another model estimating whatever changed or training to recognize numbers of pieces and so on also you can detect the speed so the speed of a repeating action if you perform something slow or fast this model can implicitly do it and this they call cross-period retrieval so if you know when the repetitions are you know that okay maybe the first frame so always on the upswing right here these should all these should all be fairly similar visually right as with respect to the repeating action so you can see that even though this whenever the kid in the swing here is close it looks fairly different in in a purely visual sense in a pixel sense but it is at the same point in the repeating action and that's you know that's that's pretty cool so you can technically retrieve related things even though they visually they don't look similar that much yeah that that's the the kind of applications here are probably manyfold and I also think that so in this measure of intelligence paper by François Chollet he basically claims that this is one of the innate abilities of humans they can count you know they can count things this is something you're basically born with and maybe this thing right here will become sort of a staple component for many other things that we build AI on I would not be surprised but maybe it will just fade into history I think it's a pretty cool project especially you know the the architectural choice here to pull everything through this self-similarity matrix and the you know just just looking at this matrix already makes you kind of know that this thing works alright this was it from me let me know in the comments what you think about the paper check out the website the website has a lot of video demo examples of what they're doing I think the data set as well and yeah I'll see you next time bye bye
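As a companion to the walkthrough above, here is a hedged PyTorch sketch of the per-frame period predictor as the video describes it: 32 convolutional filters of size 3x3 over the self-similarity matrix, a learned positional embedding of length 64, one 4-head transformer layer, and two classifier heads of two fully connected layers each. The 512-dimensional projection and the number of period-length bins are assumptions; this is not the authors' code.

```python
# Hypothetical sketch of the period predictor described above (not the
# authors' code): per-frame rows of the 64x64 similarity matrix go through
# a conv, a transformer layer, and two small per-frame classifier heads.
import torch
import torch.nn as nn

class PeriodPredictorSketch(nn.Module):
    def __init__(self, num_frames=64, emb_dim=512, num_period_bins=32):
        super().__init__()
        self.conv = nn.Conv2d(1, 32, kernel_size=3, padding=1)
        self.project = nn.Linear(32 * num_frames, emb_dim)  # assumed projection
        self.pos = nn.Parameter(torch.zeros(1, num_frames, emb_dim))  # learned, length 64
        self.transformer = nn.TransformerEncoderLayer(
            d_model=emb_dim, nhead=4, batch_first=True)     # 4 heads of 128 dims each
        self.period_length = nn.Sequential(                 # two FC layers of size 512
            nn.Linear(emb_dim, emb_dim), nn.ReLU(),
            nn.Linear(emb_dim, num_period_bins))            # binned classification
        self.periodicity = nn.Sequential(
            nn.Linear(emb_dim, emb_dim), nn.ReLU(),
            nn.Linear(emb_dim, 1))                          # binary per-frame output

    def forward(self, sim):                   # sim: (B, 64, 64) similarity matrix
        x = self.conv(sim.unsqueeze(1))       # (B, 32, 64, 64)
        x = x.permute(0, 2, 1, 3).flatten(2)  # one 64x32-sized block per frame
        x = self.transformer(self.project(x) + self.pos)    # (B, 64, 512)
        return self.period_length(x), self.periodicity(x)   # per-frame predictions
```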
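And a small NumPy sketch of the synthetic-repetition idea described above: splice a clip out of an ordinary (assumed non-repeating) video, optionally append its reversal so the loop point stays continuous, tile it, and the per-frame labels come for free. This is illustrative rather than the authors' pipeline; the camera-motion augmentation the video mentions (time-varying rotation, translation and scaling) would then be applied on top of the result.

```python
# Hypothetical sketch (not the authors' pipeline) of synthetic repetition
# data: because we control the repeats, the labels require no annotation.
import numpy as np

def make_synthetic_repetition(video, start, length, count, use_reversal=True):
    # video: (T, H, W, 3) frames from an ordinary, non-repeating video
    clip = video[start:start + length]
    if use_reversal:
        # play the clip forward then backward so motion stays continuous
        clip = np.concatenate([clip, clip[::-1]], axis=0)
    repeated = np.tile(clip, (count, 1, 1, 1))       # repeat `count` times
    frames = np.concatenate(
        [video[:start], repeated, video[start + length:]], axis=0)
    periodicity = np.zeros(len(frames), dtype=np.int64)
    periodicity[start:start + len(repeated)] = 1     # 1 only inside the repeats
    period_length = np.zeros(len(frames), dtype=np.int64)
    period_length[start:start + len(repeated)] = len(clip)  # period = clip length here
    return frames, periodicity, period_length, count
```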
[ { "end": 7.68, "start": 0, "text": " Hi there! Check out these videos on the top. Each one kind of contains a repeating action." }, { "end": 12.92, "start": 7.68, "text": " So on the left you see someone doing jumping jacks in a fairly regular pattern. In the" }, { "end": 18.64, "start": 12.92, "text": " middle it gets a bit more difficult because what you see is a tennis ball bouncing and" }, { "end": 24.560000000000002, "start": 18.64, "text": " it bounces faster and faster and faster as time goes on. On the right you see that there" }, { "end": 31.279999999999998, "start": 24.56, "text": " is a short intro sequence before the repeating action, the person shoveling the cement, is" }, { "end": 38.72, "start": 31.279999999999998, "text": " displayed. So the goal here is to build an AI that can detect that a repeating action" }, { "end": 44.8, "start": 38.72, "text": " is happening and if it detects so that it can count how often this repeating action" }, { "end": 51.36, "start": 44.8, "text": " is happening. You can already see the difficulties here is not only the recognition itself but" }, { "end": 56.2, "start": 51.36, "text": " the fact that the repeating actions can be different and can be of different length and" }, { "end": 62.3, "start": 56.2, "text": " cannot always look the same and so on. So this paper uses these temporal self-similarity" }, { "end": 68.14, "start": 62.3, "text": " matrices that you see at the bottom here to achieve this and we're going to explore how" }, { "end": 74.7, "start": 68.14, "text": " that's happening. So the paper we'll look at is called Counting Out Time Class Agnostic" }, { "end": 84.08, "start": 74.7, "text": " Video Repetition Counting in the Wild by Debidada Dwebidi, Yusuf Aitar, Jonathan Thompson, Pierre" }, { "end": 92.32000000000001, "start": 84.08, "text": " Sermonet and Andrew Zizerman of Google Research and DeepMind. So as I already said this paper" }, { "end": 98.64, "start": 92.32000000000001, "text": " detects repeating actions and is able to count them and on a high level what they do is they" }, { "end": 104.82, "start": 98.64, "text": " encode the video using convolutional networks then they build these temporal self-similarity" }, { "end": 111.52, "start": 104.82, "text": " matrices between the frames in order to detect the repetitions and they decode that into" }, { "end": 118.56, "start": 111.52, "text": " the predictions using another neural network. This is all trained end to end and they also" }, { "end": 124.2, "start": 118.56, "text": " make a new data set for this task. So that's the high level. If you want to find out how" }, { "end": 130.28, "start": 124.2, "text": " exactly that's done I invite you to stick around because we'll go into the paper. If" }, { "end": 135.3, "start": 130.28, "text": " you like content like this don't forget to share it out, leave a like and tell me what" }, { "end": 141.66, "start": 135.3, "text": " you think of the paper in the comments. I do read the comments so yeah I'm very happy" }, { "end": 149.96, "start": 141.66, "text": " to read what you all think about it. Okay so as we already said they say we present" }, { "end": 155, "start": 149.96, "text": " an approach for estimating the period with which an action is repeated in a video and" }, { "end": 161.32, "start": 155, "text": " that's actually understating what the problem is here. The problem is many manyfold. 
As" }, { "end": 166.96, "start": 161.32, "text": " you can see on the right here even if you don't get what this self-similarity matrix" }, { "end": 174.64000000000001, "start": 166.96, "text": " is yet the outputs that you want are at the bottom. So what you want is first of all a" }, { "end": 181.39999999999998, "start": 174.64, "text": " per frame periodicity prediction. So that means that for each frame you want to know" }, { "end": 186.23999999999998, "start": 181.39999999999998, "text": " is there even a repeating action happening or not in that particular frame. You can see" }, { "end": 190.72, "start": 186.23999999999998, "text": " here at the beginning at the end of the video there is no repeating action and then in the" }, { "end": 195.44, "start": 190.72, "text": " middle there is a repeating action. So that's the first thing you want to know. The second" }, { "end": 202.72, "start": 195.44, "text": " thing is this per frame period length prediction. So for each frame that is part of a repeating" }, { "end": 208.52, "start": 202.72, "text": " action you want to know what is the period length of that repeating action and that can" }, { "end": 213.2, "start": 208.52, "text": " change throughout the video. So you need it per frame and once you have it per frame you" }, { "end": 221, "start": 213.2, "text": " can actually count the number of repetitions. So those are two problems already. The third" }, { "end": 227.32, "start": 221, "text": " problem that this paper solves is that there is no adequate data set to train a model like" }, { "end": 236, "start": 227.32, "text": " this it seems. For example this model right here this QUVA data set I believe has 100" }, { "end": 241.28, "start": 236, "text": " data points and these are meant for testing. So you would build your system somewhere and" }, { "end": 247.35999999999999, "start": 241.28, "text": " then you would test it on those 100 data points. But they claim not even those are large and" }, { "end": 254.26, "start": 247.35999999999999, "text": " diverse enough for these systems to be evaluated. So they build. We also collect a new challenging" }, { "end": 260.32, "start": 254.26, "text": " data set called Countix which is 90 times larger than existing data sets which captures" }, { "end": 265.36, "start": 260.32, "text": " the challenges of repetition counting in real world videos and that actually consists of" }, { "end": 273.64, "start": 265.36, "text": " a train and test split. So you can also train a system using it. Let's dive into the architecture." }, { "end": 280.68, "start": 273.64, "text": " I think this paper is a very very very cool example of even though we're in this deep" }, { "end": 286.28000000000003, "start": 280.68, "text": " learning paradigm where you just throw neural networks at a problem it's a very cool example" }, { "end": 294.16, "start": 286.28000000000003, "text": " that you can still achieve a lot by smartly constructing this because we used to achieve" }, { "end": 300.04, "start": 294.16, "text": " a lot by smartly constructing features and so on. In this case the goal is achieved by" }, { "end": 305.92, "start": 300.04, "text": " smartly constructing the architecture of the neural network itself to give you back a good" }, { "end": 313.44, "start": 305.92, "text": " performance on the particular task right here. Okay so if that tablet lets me actually scroll" }, { "end": 319.56, "start": 313.44, "text": " around let's go to the architecture. So figure two shows the architecture in more detail." 
}, { "end": 324.12, "start": 319.56, "text": " So we'll go through it from the beginning to the end. Actually let's go let's go to" }, { "end": 330.20000000000005, "start": 324.12, "text": " the end so you know what is supposed to happen. So for each frame in this video what we'll" }, { "end": 336.15999999999997, "start": 330.2, "text": " need is a period length and a periodicity. These are two so the bottom is a binary variable" }, { "end": 344.15999999999997, "start": 336.15999999999997, "text": " and the top is a it's actually a number but it is predicted in kind of binned as a classification" }, { "end": 349.15999999999997, "start": 344.15999999999997, "text": " task. It doesn't really matter we need two outputs that we can compare with the labels" }, { "end": 355.82, "start": 349.15999999999997, "text": " right. In this case the videos are of length 64 so there's 64 frames per video and for" }, { "end": 361.92, "start": 355.82, "text": " each of those frames we want a period length and for each of those frames we want a periodicity" }, { "end": 369.2, "start": 361.92, "text": " binary prediction. And that as I said we compare it with our labels and then we can calculate" }, { "end": 374.8, "start": 369.2, "text": " a loss. So this is the loss the loss is at the end comparing these two labels and then" }, { "end": 382.48, "start": 374.8, "text": " everything else is trained using back propagation on that loss. Okay so now with that in mind" }, { "end": 388.46000000000004, "start": 382.48, "text": " let's go to the beginning. So the video is taken and fed through an encoder in order" }, { "end": 394.6, "start": 388.46000000000004, "text": " to produce these per frame embeddings. So we want an embedding for each frame that means" }, { "end": 402.88, "start": 394.6, "text": " for each of the 64 frames we want to obtain one vector of length 512 that describes that" }, { "end": 407.94, "start": 402.88, "text": " particular frame in terms that the model can understand. And we do that using an encoder." }, { "end": 414.44, "start": 407.94, "text": " Now the encoder it has a bunch of parts to it. It's not just a blob as you see right" }, { "end": 421.64, "start": 414.44, "text": " here. So the encoder consists of three things. First of all there's this convolutional feature" }, { "end": 427.62, "start": 421.64, "text": " extractor which is a ResNet 50 architecture. It's simply a convolution and you let the" }, { "end": 433.88, "start": 427.62, "text": " convolutional neural network run on each frame independently. This is simply a feature extractor" }, { "end": 441.96, "start": 433.88, "text": " from images right like you know it from any other image processing task. But of course" }, { "end": 452.6, "start": 441.96, "text": " here we have a video. So it would be nice if the frames knew something about the other" }, { "end": 460.64, "start": 452.6, "text": " frames right. Especially if you think of something like a jumping jack. If you are in this position" }, { "end": 466.91999999999996, "start": 460.64, "text": " right here. It doesn't tell you everything about that video frame. If you consider the" }, { "end": 473, "start": 466.91999999999996, "text": " frame before it and maybe the frame before it is this and maybe the frame after it is" }, { "end": 482.91999999999996, "start": 473, "text": " that, you can clearly see that the hands or the arms are in an upward motion. 
So the next" }, { "end": 489.2, "start": 482.91999999999996, "text": " step of the encoder tries to integrate that temporal information into the embeddings and" }, { "end": 498.03999999999996, "start": 489.2, "text": " that is achieved via a 3D convolution. So once we process each frame individually then" }, { "end": 505.52, "start": 498.03999999999996, "text": " we feed it into one layer through a layer of 3D convolution to add local temporal information" }, { "end": 510.96, "start": 505.52, "text": " to the per frame features. So if you don't know what a 3D convolution is, this already" }, { "end": 517.68, "start": 510.96, "text": " drew. So in a 2D convolution what you want to do is you want to have this filter right" }, { "end": 524.3599999999999, "start": 517.68, "text": " here which is a 2D filter for each of the channels and you slide it across the image" }, { "end": 530.2399999999999, "start": 524.3599999999999, "text": " like this. And you can have multiple channels of the input image right here and you can" }, { "end": 535.4399999999999, "start": 530.2399999999999, "text": " actually do this multiple times which corresponds to multiple output channels of the filters" }, { "end": 540.8399999999999, "start": 535.4399999999999, "text": " but the actual convolution is happening in two dimensions. So the sliding is across the" }, { "end": 546.7199999999999, "start": 540.8399999999999, "text": " width and the height of the image. In contrast to that if you have a 3D convolution and you" }, { "end": 553, "start": 546.72, "text": " have the same input stack of images and now this we have to you have to pay attention" }, { "end": 560.36, "start": 553, "text": " here. This right this stack right here is the individual channels of one image so of" }, { "end": 568.6800000000001, "start": 560.36, "text": " one video frame. This stack right here that I'm drawing these are the video frames stacked." }, { "end": 573.32, "start": 568.6800000000001, "text": " Now each of these stacked video frames can have multiple channels resulting from its" }, { "end": 580.6400000000001, "start": 573.32, "text": " 2D convolution or even just RGB. So I can't really draw in four dimensions but now we" }, { "end": 588.96, "start": 580.6400000000001, "text": " stack the video frames and now our kernel our filter will be also in the direction of" }, { "end": 595.6800000000001, "start": 588.96, "text": " the video frame. So I don't know if this is really recognizable if I draw it like this" }, { "end": 602.5200000000001, "start": 595.6800000000001, "text": " but as you can see the kernel is not only 2D but 3D and now the sliding so if we have" }, { "end": 607.4, "start": 602.52, "text": " actually more than that the sliding is not only done into the direction of height and" }, { "end": 613.64, "start": 607.4, "text": " width but also into the direction of depth so that each of the frames each of the video" }, { "end": 619, "start": 613.64, "text": " frames right here can incorporate information from its immediate neighbors in case of a" }, { "end": 628.1999999999999, "start": 619, "text": " 3x3x3 filter or even more neighbors but here we use a 3x3x3 filter. 
Okay so that's that's" }, { "end": 632.5600000000001, "start": 628.2, "text": " how we obtain these embeddings right here and at the end there is a dimension reduction" }, { "end": 637.6400000000001, "start": 632.5600000000001, "text": " like a max pooling or something but ultimately what you'll end up with is for each of the" }, { "end": 644.96, "start": 637.6400000000001, "text": " 64 video frames you get one vector and that vector mainly describes that particular video" }, { "end": 651.5200000000001, "start": 644.96, "text": " frame like if you consider the green one here but also contains information from the video" }, { "end": 658.52, "start": 651.52, "text": " frame before it and from the video frame after it. Okay so that's sort of and and that's" }, { "end": 664, "start": 658.52, "text": " that's the so the temporal convolution is not to detect the periodic actions because" }, { "end": 668.8, "start": 664, "text": " it's just one frame into the future and into the past it is more like what we said here" }, { "end": 674.3, "start": 668.8, "text": " in order to give you extra information of what's happening in a particular frame because" }, { "end": 681, "start": 674.3, "text": " especially for periodicity it's actually important if the arms are going up or down. Cool so" }, { "end": 685.68, "start": 681, "text": " then comes the heart of the architecture. The heart of the architecture is this temporal" }, { "end": 692.88, "start": 685.68, "text": " self-similarity matrix. Now what does this do? This is relating the frames to each other" }, { "end": 699.96, "start": 692.88, "text": " and important to note here this is just a single channel image so there is no other" }, { "end": 708.32, "start": 699.96, "text": " ones of these. For the entire video frame sequence you have one 64x64 matrix and all" }, { "end": 714.8000000000001, "start": 708.32, "text": " the signal has to go through this matrix right everything from here is only through this" }, { "end": 720.2, "start": 714.8000000000001, "text": " matrix there's no residual connections there is no skip connections that's all that's" }, { "end": 725.72, "start": 720.2, "text": " your information bottleneck and by having a bottleneck like this these the authors here" }, { "end": 732.2, "start": 725.72, "text": " force the model to basically do a good job at making this temporal self-similarity happen" }, { "end": 738.3000000000001, "start": 732.2, "text": " if it wants to achieve a low loss and that's what I mean by you can achieve a lot by having" }, { "end": 744.24, "start": 738.3, "text": " a smart architecture. This temporal self-similarity matrix is actually not learned this can be" }, { "end": 750.88, "start": 744.24, "text": " computed deterministically from the embeddings right here so what you do is you each row" }, { "end": 757, "start": 750.88, "text": " here corresponds to one frame so you take each frame that corresponds to a row and you" }, { "end": 766.52, "start": 757, "text": " calculate simply the distance to or the similarity with each of the other frames so this is as" }, { "end": 773.68, "start": 766.52, "text": " you can see it's 64x64 so frame i here is simply compared to each of the other frames" }, { "end": 780.3199999999999, "start": 773.68, "text": " j and depending on whether that embedding is very similar to the embedding of frame" }, { "end": 787.92, "start": 780.3199999999999, "text": " j this number is going to be high or low. 
Now there's also a after that you do a softmax" }, { "end": 794.3199999999999, "start": 787.92, "text": " across here such that this is going to be like a distribution and not just raw numbers" }, { "end": 799.38, "start": 794.32, "text": " but that's ultimately not that important so what you can see is that the diagonal is very" }, { "end": 807.2800000000001, "start": 799.38, "text": " prominent and that makes sense because technically the diagonal is always one right if for example" }, { "end": 811.0600000000001, "start": 807.2800000000001, "text": " if you use the inner product as a similarity the diagonal should always be one but here" }, { "end": 816.12, "start": 811.0600000000001, "text": " we have the softmax so it's not but ultimately we can say that any frame should be very similar" }, { "end": 821.6800000000001, "start": 816.12, "text": " to its is going to be very similar to itself so that's why but then you can see right here" }, { "end": 827.8, "start": 821.68, "text": " there's a pattern emerging and that pattern is these diagonal lines in this direction" }, { "end": 834.26, "start": 827.8, "text": " as you can see and what what does that mean they actually have a larger version of this" }, { "end": 843.1999999999999, "start": 834.26, "text": " down here so what does the diagonal pattern mean it means that so here the diagonal that's" }, { "end": 851.9200000000001, "start": 843.2, "text": " okay frame i is very similar to frame i cool but the other lines what do they mean so if" }, { "end": 857.5600000000001, "start": 851.9200000000001, "text": " i look at frame i right here it is also very similar to this frame j now this wouldn't" }, { "end": 865.48, "start": 857.5600000000001, "text": " be further you know frames are similar but the line means that if i have if i look 10" }, { "end": 874.76, "start": 865.48, "text": " frames later i plus 10 that's very similar to j plus 10 and if i now look at i plus 20" }, { "end": 882.96, "start": 874.76, "text": " frame that's very similar to j plus 20 so this is the this is here why the the pattern" }, { "end": 888.16, "start": 882.96, "text": " is emerging of a line because if i go 10 frames into the future it's similar to the other" }, { "end": 895.32, "start": 888.16, "text": " one 10 frames into the future and so on and that means the line indicates that this entire" }, { "end": 902.88, "start": 895.32, "text": " sequence starting from i 20 frames into the future is repeated starting from j actually" }, { "end": 908.7600000000001, "start": 902.88, "text": " j is earlier here so but you get the point and if i have a bunch of these lines that" }, { "end": 914.5600000000001, "start": 908.7600000000001, "text": " means that this subsequence is repeated again and again and again throughout the video so" }, { "end": 920.96, "start": 914.5600000000001, "text": " each of these lines is basically one repetition of the sequence from the middle here at some" }, { "end": 929.9200000000001, "start": 920.96, "text": " other point in the video and that's pretty fascinating and that's these these self-similarity" }, { "end": 936.32, "start": 929.9200000000001, "text": " matrices that's what they're sort of showing now they don't use the inner product here" }, { "end": 941.76, "start": 936.32, "text": " as a self-similarity metric they actually use as you can see right here they use the" }, { "end": 947.08, "start": 941.76, "text": " negative square distance but the effect is the same so negative square distance followed" }, { "end": 
955.1600000000001, "start": 947.08, "text": " by a row wise softmax operation so you could say hi we are basically done having this self-similarity" }, { "end": 959.2800000000001, "start": 955.1600000000001, "text": " matrix what we could do let's let's say we could train it we don't worry about how to" }, { "end": 966.6800000000001, "start": 959.2800000000001, "text": " train it we could simply take each row right here and we plot the intensities across that" }, { "end": 970.76, "start": 966.6800000000001, "text": " row and that's maybe you know like something like this and then there's the diagonal is" }, { "end": 978.36, "start": 970.76, "text": " a bit higher okay we could just use a heuristic to detect these bumps here basically calculate" }, { "end": 983.6, "start": 978.36, "text": " the length count the bumps and calculate the period length right that that should be pretty" }, { "end": 990.72, "start": 983.6, "text": " easy with like a simple heuristic but the authors here they want more they want to solve" }, { "end": 996.08, "start": 990.72, "text": " more problems so what are some of the problems we already saw some of the problems namely" }, { "end": 1002.2, "start": 996.08, "text": " for example here is the hammer throw so the hammer throw starts out slow and gets faster" }, { "end": 1007.64, "start": 1002.2, "text": " and faster and faster and you can see this pretty clearly at the lines right here namely" }, { "end": 1014.48, "start": 1007.64, "text": " if you go through time so you start off here and you go through time you can see that the" }, { "end": 1021.6, "start": 1014.48, "text": " distance to the next line here is fairly large but you go through time further the distance" }, { "end": 1027, "start": 1021.6, "text": " gets shorter you go through time further the distance gets even shorter so these pattern" }, { "end": 1033.44, "start": 1027, "text": " of lines here that's kind of converging towards that it indicates that this repeating action" }, { "end": 1039.68, "start": 1033.44, "text": " gets faster and faster and faster this is nice to see here at the bouncy ball example" }, { "end": 1047.8, "start": 1039.68, "text": " where you can see it starts out pretty slow but it gets faster and faster and here you" }, { "end": 1052.9199999999998, "start": 1047.8, "text": " if you if you have this full thing right here that basically means all the frames are self" }, { "end": 1059.12, "start": 1052.9199999999998, "text": " similar to each other which basically means if you stop the video right that's if you" }, { "end": 1063.08, "start": 1059.12, "text": " have 10 frames in a row the same thing the ball is just lying on the ground all of these" }, { "end": 1069.72, "start": 1063.08, "text": " frames will be self similar so there's probably no bouncing happening down here you can see" }, { "end": 1075.68, "start": 1069.72, "text": " pretty well from the pattern what happens and here in this mixing concrete example that" }, { "end": 1081.1200000000001, "start": 1075.68, "text": " we saw at the beginning you can see that at the beginning at the end there's this intro" }, { "end": 1087.16, "start": 1081.1200000000001, "text": " and outro sequence and only in the middle is there a repeating action and that's indicated" }, { "end": 1095.0800000000002, "start": 1087.16, "text": " by this line pattern is only at in the middle of the videos only between here and here so" }, { "end": 1101.42, "start": 1095.0800000000002, "text": " it's it's going to be pretty difficult to just have a 
heuristic that reads out these" }, { "end": 1107.52, "start": 1101.42, "text": " periodic action periodicity and in a true deep learning fashion the the authors here" }, { "end": 1113.96, "start": 1107.52, "text": " oh sorry maybe you can't see that i've shifted my recording window so maybe sometimes something's" }, { "end": 1119.96, "start": 1113.96, "text": " out of frame and you have to yell at me if i do that please so i hope you you saw this" }, { "end": 1126.3600000000001, "start": 1119.96, "text": " that you have the ever the speeding up here and here they're visible in the pattern and" }, { "end": 1131.4799999999998, "start": 1126.36, "text": " then here you have the beginning sequence the end sequence that have no repeating pattern" }, { "end": 1137.4399999999998, "start": 1131.4799999999998, "text": " and the repeating pattern only merged in the middle so the authors want to do this through" }, { "end": 1143.28, "start": 1137.4399999999998, "text": " of course a deep learning network they want to read out the periodicities not through" }, { "end": 1148.9599999999998, "start": 1143.28, "text": " a heuristic but using a deep network you know respectable that's at the times we live in" }, { "end": 1156.8400000000001, "start": 1148.96, "text": " so what do they do first of all you have to see right here everything that happens from" }, { "end": 1164.68, "start": 1156.8400000000001, "text": " here as i understand it is per frame so they simply take a row of this matrix right here" }, { "end": 1173.28, "start": 1164.68, "text": " like this red line and that is independently pulled through to the end so there is no interaction" }, { "end": 1179.48, "start": 1173.28, "text": " happening anymore between the individual frame data the only interaction that happens is" }, { "end": 1184.8799999999999, "start": 1179.48, "text": " a little bit here at the temporal convolutions but the only real interaction between the" }, { "end": 1190.76, "start": 1184.8799999999999, "text": " frames is happening through the self-similarity matrix and again this is the information bottleneck" }, { "end": 1198.1399999999999, "start": 1190.76, "text": " that the authors force the information through everything happening from here no that's actually" }, { "end": 1203.3200000000002, "start": 1198.14, "text": " not right there is this convolution right here but still this is the information bottleneck" }, { "end": 1210.5600000000002, "start": 1203.3200000000002, "text": " you have to go through so right here we process this image using a convolution so this is" }, { "end": 1219.64, "start": 1210.5600000000002, "text": " an image right and we can process it using a convolutional neural network so what we" }, { "end": 1226.3200000000002, "start": 1219.64, "text": " do is we have a 64 by 64 image in one channel we simply up sample that not up sample but" }, { "end": 1232.2, "start": 1226.32, "text": " we expand the channels to 32 channels now as i said it's pretty easy to think we can" }, { "end": 1240.2, "start": 1232.2, "text": " just go to the end here use a conv net to produce our final 512 by so 512 embeddings" }, { "end": 1248.6, "start": 1240.2, "text": " we have here again 64 by 64 that we then use to predict the final result but the authors" }, { "end": 1255.8799999999999, "start": 1248.6, "text": " here do something different they do transformer layers in the middle but only per frame so" }, { "end": 1267.5600000000002, "start": 1255.88, "text": " what does it mean so here you up sample to 32 channels and 
then that means that one of" }, { "end": 1274.2800000000002, "start": 1267.5600000000002, "text": " these blocks right here one of these blocks corresponds to one row in the self-similarity" }, { "end": 1280.3600000000001, "start": 1274.2800000000002, "text": " matrix which corresponds to one frame and from now on so from now on i want to say what" }, { "end": 1288.6, "start": 1280.36, "text": " i said before from now on it's all just this one block they are independent of each other" }, { "end": 1295.6399999999999, "start": 1288.6, "text": " okay so you take this one block and you feed it through a transformer to achieve at your" }, { "end": 1307.08, "start": 1295.6399999999999, "text": " final embedding of 512 and it's probably best if we read what they say about it okay so" }, { "end": 1314.8799999999999, "start": 1307.08, "text": " if we're given this self-similarity matrices matrix they consist of row each row is the" }, { "end": 1320.4399999999998, "start": 1314.8799999999999, "text": " per frame self-similarity representation and generates two outputs the per frame period" }, { "end": 1325.98, "start": 1320.4399999999998, "text": " length estimation and the per frame binary periodicity classification note that both" }, { "end": 1334.9199999999998, "start": 1325.98, "text": " l and p are vectors and their elements are per frame predictions okay the architecture" }, { "end": 1339.2, "start": 1334.92, "text": " of the period predictor module can be viewed in figure two note that the predictors share" }, { "end": 1345.76, "start": 1339.2, "text": " a common architecture and waits until the last classification phase the shared processing" }, { "end": 1354.3600000000001, "start": 1345.76, "text": " pipeline starts with starts with 32 2d convolutional filters of size three by three followed by" }, { "end": 1360, "start": 1354.3600000000001, "text": " a transformer layer which uses a multi-headed attention with trainable positional embeddings" }, { "end": 1367.44, "start": 1360, "text": " in the form of a 64 length variable that is learned by training okay it's i guess the" }, { "end": 1373.18, "start": 1367.44, "text": " transformers learned by training and the positional embeddings are also learned by training that's" }, { "end": 1379.66, "start": 1373.18, "text": " fairly common we use four heads with 512 dimension in the transformer by the way if you don't" }, { "end": 1384.66, "start": 1379.66, "text": " know what a transformer is watch the video on attention is all you need i made one it's" }, { "end": 1392.8400000000001, "start": 1384.66, "text": " very popular yeah so with each head being 128 dimensions in size after the shared pipeline" }, { "end": 1400, "start": 1392.8400000000001, "text": " we have two classifiers period length classifier and periodicity classifier tau sorry this" }, { "end": 1404.8400000000001, "start": 1400, "text": " is fine this is tau each of them consists of two fully connected layers of size 512" }, { "end": 1410.88, "start": 1404.8400000000001, "text": " so i guess the the the pipeline here is pretty simple the question could be why do they use" }, { "end": 1418.4, "start": 1410.88, "text": " a transformer and not simply another convolutional network so here they up sample the image as" }, { "end": 1426.22, "start": 1418.4, "text": " we saw into 32 channels and then they simply want to take one of these one of these blocks" }, { "end": 1433.2600000000002, "start": 1426.22, "text": " here and that corresponds a little bit so we have for one frame right 
what does it mean" }, { "end": 1445.64, "start": 1433.26, "text": " we have basically we have 64 by 32 things so the 64 things it's this one frames temporal" }, { "end": 1451.16, "start": 1445.64, "text": " connection to each other frame given that you know comes from this self-similarity matrix" }, { "end": 1457.36, "start": 1451.16, "text": " so it kind of relates this frame that we're considering to each of the other frames and" }, { "end": 1464.28, "start": 1457.36, "text": " each of this each of these entries is a 32 size vector this is sort of a this is you" }, { "end": 1472.3999999999999, "start": 1464.28, "text": " can consider like a sequence of 64 things 64 embeddings so to use a transformer here" }, { "end": 1479.6799999999998, "start": 1472.3999999999999, "text": " it's pretty natural if you think of this as like a sequence transformation task i i would" }, { "end": 1488.5600000000002, "start": 1479.68, "text": " guess so the transformer can if there are these peaks right here like we saw right here" }, { "end": 1494.48, "start": 1488.5600000000002, "text": " the transformer can make very good sense of that because of course the attention mechanism" }, { "end": 1502.76, "start": 1494.48, "text": " from a one peak it can attend to all the other peaks and can sort of relate the different" }, { "end": 1507.72, "start": 1502.76, "text": " peaks to each other and then determine the periodicity length whereas with a convolutional" }, { "end": 1514.76, "start": 1507.72, "text": " network i guess that's going to be a lot harder because of the sort of invariance built into" }, { "end": 1520.08, "start": 1514.76, "text": " the convolution i'm not sure maybe they also it just worked better but that's how i think" }, { "end": 1527.24, "start": 1520.08, "text": " about it it's that for a given frame you basically have a sequence classification or a set classification" }, { "end": 1534.88, "start": 1527.24, "text": " task and the attention mechanism allows you to in one single step connect each peak with" }, { "end": 1541.96, "start": 1534.88, "text": " each other peak or each information with each other information in this sequence all right" }, { "end": 1547.6000000000001, "start": 1541.96, "text": " so at the end you have just fully connected layers again only on a per frame basis and" }, { "end": 1553.3600000000001, "start": 1547.6000000000001, "text": " that will give you the output and again you compare this to the label and you back prop" }, { "end": 1559.72, "start": 1553.3600000000001, "text": " through everything everything here is differentiable so all of this is trained to achieve minimum" }, { "end": 1565.88, "start": 1559.72, "text": " possible loss and because you train everything to achieve minimal possible loss you make" }, { "end": 1571.08, "start": 1565.88, "text": " this encoder right here which is the crucial part because the encoder is must give you" }, { "end": 1576.76, "start": 1571.08, "text": " good embeddings which must give you a sensible self-similarity matrix right you train the" }, { "end": 1583.84, "start": 1576.76, "text": " encoder to encode things that are relevant for the task and that's what makes the whole" }, { "end": 1594.9199999999998, "start": 1583.84, "text": " thing work okay so we've gone through the architecture now the problem right here is" }, { "end": 1603.9199999999998, "start": 1594.9199999999998, "text": " the the data set so they also go into how they do inference they can actually do a bunch" }, { "end": 1608.9399999999998, "start": 
1603.9199999999998, "text": " of things like play the video at different speeds and then look at what each of the predictions" }, { "end": 1614, "start": 1608.94, "text": " so if a double speed it predicts half the period length then you can be more sure and" }, { "end": 1621.8400000000001, "start": 1614, "text": " so on so that's pretty cool but they go into another point right here and that's the data" }, { "end": 1629.04, "start": 1621.8400000000001, "text": " set so they produce this countix data set but also on the other hand which is something" }, { "end": 1636.8200000000002, "start": 1629.04, "text": " I also find very cool is they produce a synthetic data set so here they say we train with synthetic" }, { "end": 1645.3999999999999, "start": 1636.82, "text": " repetitions and that can be sort of I didn't know what to think of it at first I was just" }, { "end": 1651.8799999999999, "start": 1645.3999999999999, "text": " like huh but then it's pretty cool so if you have a video with these these are the frames" }, { "end": 1657.84, "start": 1651.8799999999999, "text": " of the video right so the video goes in this temporal direction what you can do is simply" }, { "end": 1664.8, "start": 1657.84, "text": " go here go through these frames and just repeat these frames and repeat them and repeat them" }, { "end": 1669.94, "start": 1664.8, "text": " and at the end you have these frames right and then you have a data set and if you if" }, { "end": 1676.1599999999999, "start": 1669.94, "text": " you assume that most videos do not naturally contain repeating actions right most videos" }, { "end": 1682.32, "start": 1676.1599999999999, "text": " are just videos they're not videos of something repeating then you can safely assume that" }, { "end": 1687.52, "start": 1682.32, "text": " these parts here are non repeating so and these parts here are repeating this is one" }, { "end": 1692, "start": 1687.52, "text": " of the labels that you need right the problem with synthetic data set is always to have" }, { "end": 1699.72, "start": 1692, "text": " the labels and also you know how many there are because you can simply count the number" }, { "end": 1705.8, "start": 1699.72, "text": " of times that you go through it you can even make it faster slower and so on so this synthetic" }, { "end": 1710.3, "start": 1705.8, "text": " approach is pretty cool and especially the bottom right here because this might be kind" }, { "end": 1715.7, "start": 1710.3, "text": " of hacky because each time each time you jump from the end of one of those arrows to the" }, { "end": 1722.1200000000001, "start": 1715.7, "text": " beginning right you have kind of a hack in the indie video because you know it's not" }, { "end": 1727.8600000000001, "start": 1722.1200000000001, "text": " continuous so what you can do and this is the the bottom here you can do this reversal" }, { "end": 1732.54, "start": 1727.8600000000001, "text": " technique where you go to the end and then you play the frames backwards and then you" }, { "end": 1737.88, "start": 1732.54, "text": " play the frames forwards again backwards again forwards again and then you go out here and" }, { "end": 1743.48, "start": 1737.88, "text": " that gives you one continuous motion right if someone if it's simply a video of someone" }, { "end": 1748.88, "start": 1743.48, "text": " lifting their hand like it starts out down here and it goes here and it goes here and" }, { "end": 1755.56, "start": 1748.88, "text": " then if you do this technique it would go down again down 
again up again up again and" }, { "end": 1763.96, "start": 1755.56, "text": " so on so that's you know i think it's a fairly smart technique honestly now they tried this" }, { "end": 1771.16, "start": 1763.96, "text": " and it doesn't work super well so what they also have to do is they have to do manual" }, { "end": 1777.4, "start": 1771.16, "text": " camera motion augmentation so that's so camera motion augmentation it basically means that" }, { "end": 1782.72, "start": 1777.4, "text": " if you just do a repeating action like this it's sort of i guess it's too monotonic it" }, { "end": 1791.64, "start": 1782.72, "text": " doesn't really cover real videos with repeating actions so what they do is they kind of simulate" }, { "end": 1798.0800000000002, "start": 1791.64, "text": " a moving camera and you simulate that much like you would do image augmentation so you" }, { "end": 1804.52, "start": 1798.08, "text": " can rotate the camera over time you can translate it you can scale it differently and through" }, { "end": 1809.04, "start": 1804.52, "text": " if you do that throughout the video and you change it around how the camera moves then" }, { "end": 1819.24, "start": 1809.04, "text": " that appears to work fairly well so if they now compare this and their data set they perform" }, { "end": 1824.72, "start": 1819.24, "text": " pretty well so in their data set they take this kinetics data set and they crowdsource" }, { "end": 1831.16, "start": 1824.72, "text": " the label and the tasks in the data set they're pretty diverse as you can see right here so" }, { "end": 1836.84, "start": 1831.16, "text": " you have sports like rope training mountain climbers but you have also things like playing" }, { "end": 1843.7, "start": 1836.84, "text": " ukulele exercising arms slicing an onion and so on and you can see that the repetition" }, { "end": 1849.04, "start": 1843.7, "text": " count is fairly diverse as well so from one or two repetitions per video it goes to 50" }, { "end": 1856.2, "start": 1849.04, "text": " or so and the period length is also between one and five seconds though as you as i already" }, { "end": 1862, "start": 1856.2, "text": " said you don't have to you don't have to count on that because you can always play the video" }, { "end": 1870.12, "start": 1862, "text": " slower or faster and then determine other periodicities so in their experiment first" }, { "end": 1878.24, "start": 1870.12, "text": " of all they perform pretty well and they show that if they train on their data set and on" }, { "end": 1884.84, "start": 1878.24, "text": " the synthetic data set they perform better than if they just train on the synthetic or" }, { "end": 1890.96, "start": 1884.84, "text": " they just train on their data set they also show pretty clearly that the addition of this" }, { "end": 1896.6, "start": 1890.96, "text": " temporal self-similarity matrix helps tremendously you can see right here in each of these boxes" }, { "end": 1903.6, "start": 1896.6, "text": " is the comparison and this obi I think is the off by one error so it kind of forgives" }, { "end": 1909.24, "start": 1903.6, "text": " you if you're off by one count but otherwise you get a zero if you're wrong and you can" }, { "end": 1914.84, "start": 1909.24, "text": " see that the self-similarity matrix helps tremendously they also compare with some other" }, { "end": 1920.4399999999998, "start": 1914.84, "text": " architectural choices instead of the transformer I guess yeah so I guess they just take it" }, { "end": 
1929.6399999999999, "start": 1920.4399999999998, "text": " because it performs pretty well and they do a lot of lot of ablations but what I particularly" }, { "end": 1935.76, "start": 1929.64, "text": " appreciate is that they do something like this so what they do at the end went once" }, { "end": 1941.26, "start": 1935.76, "text": " they've trained the architectures they do a 1d PCA protection of the encoder features" }, { "end": 1948.76, "start": 1941.26, "text": " over time now the encoder features they were 512 dimensional right this is the thing before" }, { "end": 1955.5200000000002, "start": 1948.76, "text": " it goes into the self-similarity matrix so those we said the encoder is the crucial part" }, { "end": 1962.36, "start": 1955.52, "text": " here because it needs to take the video and encode things that make them accessible to" }, { "end": 1969.56, "start": 1962.36, "text": " calculating the self-similarity now they do a 1d PCA so a projection into one dimension" }, { "end": 1977.4, "start": 1969.56, "text": " of these features and you can already see at this one dimensional projection that the" }, { "end": 1985.5600000000002, "start": 1977.4, "text": " periodicity here is clearly clearly visible namely for example right here every time up" }, { "end": 1991.0400000000002, "start": 1985.5600000000002, "text": " here is when the legs are up and every time down here is when the legs are down right" }, { "end": 1999.0600000000002, "start": 1991.0400000000002, "text": " here so that is very very impressive and that kind of that really shows that the model is" }, { "end": 2004.48, "start": 1999.0600000000002, "text": " doing what you claim what you claim that it's doing like I'm almost more interested in experiments" }, { "end": 2009.16, "start": 2004.48, "text": " like this than in and in these numbers right here because the numbers could always be because" }, { "end": 2018.92, "start": 2009.16, "text": " you've just thrown more stuff at it right so they go over a bunch of possible applications" }, { "end": 2026.52, "start": 2018.92, "text": " of their model so first of all you can do something like as we can see repetition counting" }, { "end": 2032.14, "start": 2026.52, "text": " from videos you can do periodicity detection those were the things that the model is trained" }, { "end": 2037.8400000000001, "start": 2032.14, "text": " to do but there's also a bunch of things that the model can now implicitly do namely something" }, { "end": 2042.8400000000001, "start": 2037.8400000000001, "text": " like change inspection where they say look if someone's chopping this pineapple right" }, { "end": 2048.7400000000002, "start": 2042.8400000000001, "text": " here then at the end of each of the repetitions there is something that changed namely the" }, { "end": 2055.48, "start": 2048.7400000000002, "text": " number of slices of pineapple is it bread is it I can't I think it's pineapple okay" }, { "end": 2063.08, "start": 2055.48, "text": " so the number of slices or pieces right here changes so in essence this could be the base" }, { "end": 2070.44, "start": 2063.08, "text": " for another model estimating whatever changed or training to recognize numbers of pieces" }, { "end": 2078.36, "start": 2070.44, "text": " and so on also you can detect the speed so the speed of a repeating action if you perform" }, { "end": 2087, "start": 2078.36, "text": " something slow or fast this model can implicitly do it and this they call cross-period retrieval" }, { "end": 2094.84, "start": 2087, "text": " 
so if you know when the repetitions are you know that okay maybe the first frame so always" }, { "end": 2100.6400000000003, "start": 2094.84, "text": " on the upswing right here these should all these should all be fairly similar visually" }, { "end": 2109.3599999999997, "start": 2100.64, "text": " right as with respect to the repeating action so you can see that even though this whenever" }, { "end": 2115.08, "start": 2109.3599999999997, "text": " the kid in the swing here is close it looks fairly different in in a purely visual sense" }, { "end": 2121.3599999999997, "start": 2115.08, "text": " in a pixel sense but it is at the same point in the repeating action and that's you know" }, { "end": 2127.04, "start": 2121.3599999999997, "text": " that's that's pretty cool so you can technically retrieve related things even though they visually" }, { "end": 2134.88, "start": 2127.04, "text": " they don't look similar that much yeah that that's the the kind of applications here are" }, { "end": 2141.32, "start": 2134.88, "text": " probably many many fold and I also think that so in this measure of intelligence paper by" }, { "end": 2147.4, "start": 2141.32, "text": " françois choulet he basically claims that this is one of the innate abilities of humans" }, { "end": 2152.92, "start": 2147.4, "text": " they can count you know they can count things this is something you're basically born with" }, { "end": 2161.08, "start": 2152.92, "text": " and maybe this thing right here will become sort of a staple staple component for many" }, { "end": 2167.44, "start": 2161.08, "text": " other things that we build AI on I would not be surprised but maybe it will just fade into" }, { "end": 2173.4, "start": 2167.44, "text": " history I think it's pretty cool project especially you know the the architectural choice here" }, { "end": 2180.04, "start": 2173.4, "text": " to pull everything through this self-similarity matrix and the you know just just looking" }, { "end": 2187.44, "start": 2180.04, "text": " at this matrix already makes you kind of know that this thing works alright this was it" }, { "end": 2192.48, "start": 2187.44, "text": " from me let me know in the comments what you think about the paper check out the website" }, { "end": 2198.08, "start": 2192.48, "text": " the website has a lot of video demo examples of what they're doing I think the data set" }, { "end": 2210.56, "start": 2198.08, "text": " as well and yeah I'll see you next time bye bye" } ]
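The synthetic training recipe described above is also easy to sketch: splice a repeated clip into an ordinary video, optionally playing it forward and backward so the motion stays continuous at each repetition boundary, and label only the repeating stretch as periodic. A rough sketch of that idea follows; the parameter choices are illustrative, not the paper's.

import numpy as np

def make_synthetic_repetition(video, start, length, count, reversal=True):
    # video: array of shape (num_frames, H, W, C) from any ordinary, non-repeating video.
    # The clip video[start:start+length] is repeated `count` times; with `reversal`,
    # every other copy is played backwards so there is no jump-cut at the boundaries.
    clip = video[start:start + length]
    reps = []
    for i in range(count):
        if reversal and i % 2 == 1:
            reps.append(clip[::-1])  # play the clip backwards
        else:
            reps.append(clip)
    middle = np.concatenate(reps, axis=0)
    frames = np.concatenate([video[:start], middle, video[start + length:]], axis=0)
    # Per-frame periodicity labels: 1 inside the repeating stretch, 0 outside.
    labels = np.zeros(len(frames), dtype=np.int64)
    labels[start:start + len(middle)] = 1
    return frames, labels

On top of this, the authors additionally simulate camera motion (rotation, translation, scaling over time) as augmentation, which is not shown in this sketch.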
n1SXlK5rhR8
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
[Drama] Yann LeCun against Twitter on Dataset Bias
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "ylc", "yann", "lecun", "convnet", "face", "pulse", "github", "colab", "jeff dean", "hardmaru", "charles sutton", "soumith", "meredith", "timnit", "bias", "noise", "dataset", "systems", "twitter", "mob" ]
Yann LeCun points out an instance of dataset bias and proposes a sensible solution. People are not happy about it. Original Tweet: https://twitter.com/ylecun/status/1274782757907030016 ERRATA: - My specific example of the L1 regularizer wrt to Porsches and Ferraris does not actually work in this particular case. What I mean is a general sparsity-inducing regularizer. - When I claim that an L1 regularizer would make the problem worse, this only holds in certain circumstances, for example when the data is Gaussian iid. Thumbnail: https://commons.wikimedia.org/wiki/File:Yann_LeCun_-_2018_(cropped).jpg by Jérémy Barande / Ecole polytechnique Université Paris-Saclay / CC BY-SA 2.0 Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://discord.gg/4H8xxDF BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Hi there! So you may have seen this already. There's a CVPR paper called PULSE. And what it does is it's a method to upsample a pixelated image in a way that makes it look realistic, but also such that the again downsampled variant matches the original downsampled image. So it's kind of a cycle consistency loss together with a GAN, and all in all, it's a method to demonstrate how you could do this. Now this has been trained on this face data set, among others. There was a user, Bomzy, who made this into a Colab so people could try it out, and tweeted it out. And as you can see, it works pretty nicely; it gives pretty nice results on this particular data set. But of course, people started playing around with it and got fairly funny results like this, or that. That gets more into the horrible category. These. These ones I particularly like: Trump being made into a little child. So you can see, as soon as you get away from the original data set modality, you are going to get results that are off. And people started to notice that. So here you input Barack Obama, and what comes out is a fairly standard Caucasian person. Someone tweeted out saying this image speaks volumes about the dangers of bias in AI, and I guess here is where the entire story starts. So Yann LeCun weighs in and says: ML systems are biased when data is biased. This face upsampling system makes everyone look white because the network was pretrained on Flickr-Faces-HQ, which mainly contains white people pics. Train the exact same system on a data set from Senegal, and everyone will look African. So this is pointing out why this happens, namely because the data set is mainly Caucasian people, so the results of upsampling are going to be mainly Caucasian people. And this is a straightforward explanation of why we're seeing what we're seeing. But of course, this was not okay, and here is where the pile-on starts. As an interjection, we have to talk about bias in machine learning. Technically, there's a statistical notion of bias, which has a very rigorous definition, and there is the societal definition of bias. And these two things, even though they're the same word, are totally different. A machine learning system mainly consists of four different parts: there is a data set, the model, the loss function, and the optimization procedure. Statistical bias means whenever the model, the loss, or the optimization procedure lead to a situation where the outcome doesn't reflect the distribution of the data that you input. This, for example, is achieved when you regularize your model, which means that you put some prior knowledge onto the model: you introduce bias, and therefore you choose to not accurately represent your data distribution, regularizing it towards a more biased distribution that in turn has lower variance. We know this as the bias-variance trade-off. It's actually very simple, right? You have the Ferraris and the Lamborghinis, and you want to make a model that predicts the accident probability. Now it just so happens that the Ferrari drivers are a bit more reckless, and they have slightly more accidents. And now I train my logistic regression, and it tells me, okay, 60-40. Cool. But now I train my logistic regression with an L1 penalty, and I say: I want my model to be, you know, explainable, so I want it to be sparse, I want the least amount of variables to be contributing to it. What's the model going to say? The model is going to say: Ferrari drivers bad, Lamborghini drivers good.
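To make that toy story concrete, here is a hedged scikit-learn sketch of the Ferrari/Lamborghini example. Note the errata in the video description: the claimed sparsity effect of the L1 penalty does not necessarily materialize in this exact setup, so treat this purely as a way to poke at the penalty yourself; all numbers are made up.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy data: one-hot brand features [is_ferrari, is_lamborghini]; Ferrari
# drivers have a slightly higher (made-up) accident rate, 60% vs 40%.
n = 1000
is_ferrari = rng.integers(0, 2, size=n)
X = np.stack([is_ferrari, 1 - is_ferrari], axis=1).astype(float)
y = rng.random(n) < np.where(is_ferrari == 1, 0.6, 0.4)

# Essentially unregularized fit vs. a strong L1 (sparsity-inducing) penalty.
plain = LogisticRegression(C=1e6).fit(X, y)
sparse = LogisticRegression(penalty="l1", C=0.05, solver="liblinear").fit(X, y)

print("weak penalty coefficients:", plain.coef_)
print("strong L1 coefficients:  ", sparse.coef_)  # may zero out one brand entirely

The point is only that the regularizer is a deliberate statistical bias: it trades faithfulness to the data distribution for lower variance and sparser, more explainable coefficients.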
Societal bias in machine learning is way different. An example of this is when face detection systems work well on Caucasian people, but don't work so well when faced with people of other heritages. And these societal biases are in the data set. As Yann LeCun points out here, if you change the data set, you'll change these biases. Notably, these societal biases can only be in the data set; otherwise, you'd have to argue something like: logistic regression itself has a preference for white people, or something like this. Now there is a considerable interaction effect between the two, but as Yann LeCun points out, the actual societal bias of the final system is a direct result of the bias in the data set. And he is very correct: if you train that system on a different data set, it will exhibit different biases. Societal bias cannot be in the other parts of the machine learning pipeline. They can serve to exaggerate or mitigate the bias in the data set, but they themselves can only be statistically biased, not societally biased. But Yann LeCun made the terrible mistake of pinpointing the exact root cause of this problem and not addressing the, I guess, wider-ranging problems in the field as some people perceive it. And he shouldn't have to, right? He pretty clearly says: this is why it happens, and we can solve it by swapping the data set. He doesn't say anything about anything else. Namely, he doesn't say that general bias in the field is not a problem. He doesn't say that this doesn't harm anyone. None of that. He simply suggests a solution. Jonathan Peck says: well, yes, that's the point. ML researchers need to be more careful selecting their data so that they don't encode biases like this. And LeCun responds with: not so much ML researchers, but ML engineers. The consequences of bias are considerably more dire in a deployed product than in an academic paper. Which is also correct; this paper was about showing that the method works on this data set. Now, Soumith here makes an interesting point, which I agree with, saying that today ML researchers are inadvertently powering the products of a lot of non-AI companies, who ignorantly start with a pre-trained BERT or ResNet or YOLO from the internet, probably ignoring the license, README, and so on. Which is a valid point, right? There are going to be people that take this and think: oh, this is a face upsampler, cool, I can use that, without noting that this is simply an example implementation on an example data set. So you can argue that there might be some responsibility of the researchers right here. That doesn't make Yann LeCun not correct, but I'd still consider this to be a fruitful discussion between individuals. But now we go on. This person says: train it on the whole American population with an L2 loss, and almost everyone will look white; or train it on the whole American population with an L1 loss, and more people might look black. Stop pretending that bias does not also come from algorithmic choices. Yann LeCun never says it doesn't, right? LeCun responds, saying: the most efficient way to do it, though, is to equalize the frequencies of the categories of samples during training. This forces the network to pay attention to all the relevant features for all the sample categories. And training with an L1 instead of an L2 will not even begin to solve the problem. I would pretty much argue training with an L1 loss here would exacerbate the problem, because the L2 loss is much more sensitive to outliers.
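LeCun's proposed fix, equalizing the frequencies of the categories of samples during training, is straightforward to sketch with plain oversampling. The label array and the choice of categories below are placeholders; which categories to balance is, of course, exactly what the thread is arguing about.

import numpy as np

def balanced_indices(labels, rng=None):
    # labels: integer category label per training example. Returns shuffled
    # indices in which every category is drawn up to the size of the largest
    # one. A sketch of "equalizing the frequencies of the categories of
    # samples during training", not LeCun's exact recipe.
    if rng is None:
        rng = np.random.default_rng()
    labels = np.asarray(labels)
    classes, counts = np.unique(labels, return_counts=True)
    target = counts.max()
    idx = []
    for c in classes:
        members = np.flatnonzero(labels == c)
        # Sample with replacement so minority categories reach the target count.
        idx.append(rng.choice(members, size=target, replace=True))
    idx = np.concatenate(idx)
    rng.shuffle(idx)
    return idx

Training on batches drawn from these indices counters the data set imbalance at the sampling stage rather than by reweighting the loss, which is the distinction LeCun draws below.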
Charles Sutton says: serious question, why do you feel that it's important to make this point? Are you worried that people are going to start suing CycleGAN? And LeCun says: because people should be aware of this problem and know its cause, so they can fix it. How terrible, Yann, how terrible that you dare pinpoint the exact cause of the problem so that people can fix it; the correct thing to do is to point out that everything is problematic. So Timnit Gebru says: Yann, I suggest you watch me and Emily's tutorial, or a number of scholars who are experts in this area. You can't just reduce harms to data set bias. For once, listen to us people from marginalized communities and what we tell you. If not now, during worldwide protests, not sure when. So again, I feel the argument here is that you can't simply point out that it's the data set bias; you must point out the bigger problems. Which Yann LeCun does not ever deny. He simply says this particular problem can be solved by switching the data set. Nicolas Le Roux says: Yann was in my PhD jury, I am indebted to him for everything he taught me, but this constant dismissal of the harms caused directly or indirectly by the ML community is highly problematic. LeCun replies: where or when have I dismissed the harm caused by the ML community? I'm pointing out the cause of the harm so it can be fixed. You can't fix the harm unless you know what causes it. Le Roux says: no, causes of the biases are numerous; only pointing out data set bias deflects the attention away from the other, more pervasive ones that make up the whole field of bias in ML. Many people try to get your attention about these issues, but you kept focus on the data set. Because the data set is the problem right here! He doesn't dismiss any of the other things, he simply says: here, the data set is the problem, if your problem is that it doesn't work as well for non-Caucasian people. Which was never the intent of this; the intent was to showcase the method. I mean, ImageNet is like 60% dog species, and still people train on it to showcase their image recognition techniques. No one training on ImageNet makes the claim that they have solved computer vision for all the classes in the world in a fair manner. Timnit Gebru goes on, saying: I'm sick of this framing, tired of it. Many people have tried to explain, many scholars. Listen to us. You can't just reduce the harms caused by ML to data set bias. Doesn't do that, doesn't do it. So someone asks her: is he engaging in any way with you? It's appalling to see that he answers to everybody but you. Yet maybe there is a conversation going on in private, and I don't want to jeopardize it. Note that Yann LeCun's tweet has 500 retweets, 1.9k likes, and comments as far as you can scroll. To which she responds: yep, but I'm used to white men refusing to engage with black and brown women even on issues of bias that mostly affect us. I mean, he has literally ignored a whole body of work by people from that demographic, hence the statement, so not surprised. I mean, even in absence of the fact that an argument should be independent of the person making it, that is a low blow. Hardmaru says: I respectfully disagree with Yann here. As long as progress is benchmarked on biased data, such biases will also be reflected in the inductive biases of ML systems. Advancing ML with biased benchmarks and asking engineers to simply retrain models with unbiased data is not helpful. To which LeCun replies: I don't disagree with you here; I don't think my tweet contradicts your statement. Which it doesn't.
People are reading into this: because he doesn't conform to the orthodoxy of pointing out that anything and everything is problematic, and simply pinpoints a particular problem, he must be thinking all the wrong things. Jeff Dean says: this is a clear illustration that seemingly minor choices in learning algorithms or losses can have significant effects, so bias in ML systems is about much more than just avoiding data bias. ML researchers and practitioners must pay attention to these issues. And I think they are, and LeCun doesn't say anything against that. He says: as I point out in my comment to this tweet, it is much more efficient to correct this kind of bias (note that Yann LeCun actually differentiates between the different kinds of biases) by equalizing the frequencies of the categories of samples during training than by hacking the loss function. Correct, because if you hack the loss function, you're trying to counter one kind of bias with another kind of bias. Meredith Whittaker says: this is very racist, and even if it recognized non-white people, it would be very racist. This is cop tech. It's designed to allow those with power to surveil and control those with less power. Diverse training sets aren't going to fix it. So she's advocating that we should never build these systems, and that's a discussion to be had, but let me break this to you: this isn't going to help the cops. This isn't actually giving you the face of the person that was downsampled; it's simply going to give you the most likely face associated with that downsampled picture, given the data set the algorithm was trained on. I don't get it: whenever any machine learning algorithm does anything with faces at all, people jump up going, this is cop technology. Well, in line with all the broader impact statement advice: can't it also be used to find lost children from very, very bad security camera footage? And as I already mentioned, this doesn't actually give you back the person in the downsampled image; it will give you back the most likely person given the data set. So with that, I want to conclude this section. Please stop the witch hunting. Yann LeCun made a completely fine tweet here, and there's no reason why people should pile on him this hard. He doesn't dismiss any of the other problems just because he doesn't mention them, and while we all enjoy a good discussion where people genuinely disagree, it's not helpful to accuse him of things he never said or meant. I mean, where does this all lead? The result of this is going to be that small labs, which don't have the resources to collect their own data sets or to check for all the possible biases in their models, and which are reliant on the data sets that we do have, even if those are biased and flawed, will just be disincentivized from publishing their code or actually doing research at all. So this, like every other additional constraint on research, is going to help the large corporations with lots of money. And maybe that's just my opinion, but we should be able to just talk about a problem and the solution to it without always having to make sure that we rattle off all the different things that are and might be wrong according to the canon. And big props to Yann LeCun here for holding his own. 90% of people by now would probably be like: oh yes, I'm so sorry, I made a not-so-thoughtful comment, blah blah blah. Props to you, Yann, keep going. And with that, I conclude this section. Let me know what you think in the comments, and I'll see you next time. Bye bye.
Q5g3p9Zwjrk
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
SIREN: Implicit Neural Representations with Periodic Activation Functions (Paper Explained)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "implicit", "nerf", "neural processes", "optimization", "curve fitting", "audio", "signal processing", "surfaces", "point clouds", "oriented", "signed distance function", "mlp", "layers", "hypernetworks", "representation", "function", "sin", "sinus", "sinusoid", "fourier", "initialization", "relu", "nonlinearity", "derivative", "gradient", "laplacian", "wave" ]
Implicit neural representations are created when a neural network is used to represent a signal as a function. SIRENs are a particular type of INR that can be applied to a variety of signals, such as images, sound, or 3D shapes. This is an interesting departure from regular machine learning and required me to think differently. OUTLINE: 0:00 - Intro & Overview 2:15 - Implicit Neural Representations 9:40 - Representing Images 14:30 - SIRENs 18:05 - Initialization 20:15 - Derivatives of SIRENs 23:05 - Poisson Image Reconstruction 28:20 - Poisson Image Editing 31:35 - Shapes with Signed Distance Functions 45:55 - Paper Website 48:55 - Other Applications 50:45 - Hypernetworks over SIRENs 54:30 - Broader Impact Paper: https://arxiv.org/abs/2006.09661 Website: https://vsitzmann.github.io/siren/ Abstract: Implicitly defined, continuous, differentiable signal representations parameterized by neural networks have emerged as a powerful paradigm, offering many possible benefits over conventional representations. However, current network architectures for such implicit neural representations are incapable of modeling signals with fine detail, and fail to represent a signal's spatial and temporal derivatives, despite the fact that these are essential to many physical signals defined implicitly as the solution to partial differential equations. We propose to leverage periodic activation functions for implicit neural representations and demonstrate that these networks, dubbed sinusoidal representation networks or Sirens, are ideally suited for representing complex natural signals and their derivatives. We analyze Siren activation statistics to propose a principled initialization scheme and demonstrate the representation of images, wavefields, video, sound, and their derivatives. Further, we show how Sirens can be leveraged to solve challenging boundary value problems, such as particular Eikonal equations (yielding signed distance functions), the Poisson equation, and the Helmholtz and wave equations. Lastly, we combine Sirens with hypernetworks to learn priors over the space of Siren functions. Authors: Vincent Sitzmann, Julien N. P. Martel, Alexander W. Bergman, David B. Lindell, Gordon Wetzstein Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://discord.gg/4H8xxDF BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Hi there! Today we're looking at Implicit Neural Representations with Periodic Activation Functions by Vincent Sitzmann, Julien N. P. Martel, Alexander W. Bergman, David B. Lindell and Gordon Wetzstein. So this paper is a bit of a special paper. If you're like me, coming from classic machine learning or deep learning and things like this, this paper requires you to bend your notion of what it means to handle data a bit and to think about data points differently. Essentially what they're doing is they are representing signals such as images or sound or generally waves or point clouds. They're representing these signals as functions mapping, for example, from their coordinates to their values. We'll see what that entails. They're not the first ones to do this, but they manage to do this very well using these new models called sirens, which are basically neural networks that have sine waves as their nonlinearities instead of ReLU or hyperbolic tangent and so on. It turns out that if you initialize these very carefully, they can be made to capture these signals very, very well. That's the high-level overview, and we'll go through the paper in the fashion of someone that is not deep in this particular literature. This is not going to be as in-depth or technical as usual, because I myself am not super familiar with this kind of literature on neural representations and so on. If you come at this paper from a machine learning perspective, you're going to be super confused at the beginning, so I'm going to try to clear up and retrace the steps of my confusion. I love that this paper starts out with: we're interested in a class of functions phi that satisfy equations of the form given right here. We are interested in a class of functions. I've never particularly had many dreams about functions like this. How you can look at this: we're interested in the relation between inputs and outputs. This here is the function, as you can see; it maps input to output. We're also interested in its derivatives; here you go, first, second, third derivative and so on. This function right here is what we're going to call a neural representation or an implicit representation; it's called a neural representation if it's a neural network. So far so good. You've seen this, right? This could be a data point, and then you could map it to a label or something like this. Since we're going to represent images, you may already know GANs, generative adversarial networks, where this here is the latent vector, and then you have a neural network mapping this latent vector to an image. This is going to produce an image. This here is quite similar, but not quite. Again, I guess this here would count as the representation, the continuous representation of this picture. However, in this case right here, the function itself is the representation. So in a GAN, what we do is we learn this function phi. We learn it from data, such that if I plug in one particular vector I get one particular image, and if I plug in another vector I get another image, and the function always stays the same. Here, it's going to be one function per image. So for each image, the function is the image. So how is a function an image? If I have an image, it's made of pixels; each pixel has an x and a y coordinate, let's call that x1 and x2, and each pixel also has a color value, which is three-dimensional. So each pixel has a three-dimensional RGB color value.
Technically, an image is a function from coordinates to pixel values. If this is my image, represented by a function, then if I input any coordinates, like (3, 4), that function should return the RGB values at that location. Maybe it's like 0.5, 0.7 and 0.1; those are the RGB values there. Now the goal is to have this right here be a neural network, a multi-layer perceptron, and I think they always use five-layer MLPs, so really simple neural networks. You simply input, so here you have two input neurons, one gets the three, one gets the four, then this travels through the network, and at the end the network should have three output nodes, and these should be the 0.5, 0.7, 0.1. Now they train this network to map input to output, to map coordinates to values, and this of course is for one particular image, so you're going to have one neural network per image. Now you might reasonably ask, why do we do it like this? Why don't we just save the image as the pixel values? Why do we need a function mapping the coordinates to the pixels? That's a valid question, I guess, and the image is just one example of this, but one advantage that you immediately get is that now you have a continuous representation. Because if you store an image in the usual way, you only know its value at each of the pixel locations. However, if you store an image like this, you know its value at any continuous in-between location. So you can ask the network, what's the pixel value at (3.2, 4.1)? It will give you an answer, and if the network is trained well, it will give you an answer that makes sense, that is, the exact color at this sub-pixel location right here. Now, so far so good. Essentially this boils down not really to a machine learning problem in the classic sense, but to an optimization problem, because all you have to do is make the neural network match all inputs to all outputs. There's not really a training and a test set right here; namely, your data set is going to be all the pixels in the image. Each pixel in the image is going to be one data point: one (x, y) to RGB mapping. And the way they train these networks on these example pixels is that they simply sample a mini-batch of pixels, like this one, this one, this one, this one, use that mini-batch to do one training step, and then sample another mini-batch, and so on (a small sketch of this procedure follows below). You might sample the same pixels multiple times, but ultimately what you want is a continuous representation of the image. This is not a new idea, and they cite a lot of literature where this has been done before. Their new thing is that they say about these other representations: if you use a neural network in a classic sense like this and you do your training with the mini-batches like this, what you'll end up with is a bad image. So once you've trained the network, you can take it and simply query each pixel location. You say, okay, now I'm going to reproduce this image using my network, because if it's trained well, it should certainly give me back the values at the pixel positions. So you ask it, what's at (0, 0), what's at (0, 1), what's at (0, 2), what's at (0, 3), and you can fill in the picture. And that usually gives you very bad outcomes, or so they claim.
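To make the fitting procedure described above concrete, here is a minimal sketch, assuming PyTorch and a hypothetical image tensor `image` of shape (H, W, 3) with values in [0, 1]; the layer sizes and hyperparameters are placeholders, not the paper's.

```python
import torch

# image: (H, W, 3) float tensor in [0, 1] (assumed given)
H, W = image.shape[:2]
ys, xs = torch.meshgrid(torch.linspace(-1, 1, H), torch.linspace(-1, 1, W),
                        indexing="ij")
coords = torch.stack([xs, ys], dim=-1).reshape(-1, 2)   # one (x, y) per pixel
rgb = image.reshape(-1, 3)                              # the matching RGB targets

model = torch.nn.Sequential(                            # plain five-layer MLP
    torch.nn.Linear(2, 256), torch.nn.ReLU(),
    torch.nn.Linear(256, 256), torch.nn.ReLU(),
    torch.nn.Linear(256, 256), torch.nn.ReLU(),
    torch.nn.Linear(256, 3),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

for step in range(2000):                                # the image IS the data set
    idx = torch.randint(0, coords.shape[0], (4096,))    # mini-batch of pixels
    loss = ((model(coords[idx]) - rgb[idx]) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

After training, querying `model` on a denser coordinate grid gives the continuous, sub-pixel view of the image that the transcript describes.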
I mean, I haven't checked it particularly, but you can see right here: this is the ground truth, and here you have a network that is parameterized with ReLU nonlinearities, and as you can see, the ReLU network misses a lot of the sort of higher-definition things in the image. So it depends on the architecture that you use, how well you can make a neural network represent those things. Again, you kind of need to forget what you know about machine learning in the classic sense, because I still see people saying, just use a GAN or something like this. Yes, valid point, but we are in the business right now of solving this particular problem, and as we'll go on to see, it's not just about images; images are just a nice example of a natural signal. The tanh networks, you also see, I think they fail even harder; they have these artifacts back here even. And it gets better when you do ReLU networks with what is called a positional encoding. So not only do you have your x and your y coordinates go through a ReLU network, but you also have them go through a positional encoding, and that's very much like you would have in a transformer. If you watched my video about Attention Is All You Need, I explained how the positional encodings work there, but basically what you do is you map these things to cosine and sine waves, so you're going to have the sine of x times 10, and then the sine of x times 100, and so on, and you do the same for y, and that leaves you with more features that the function can then use to represent positions way better than just the given x and y coordinates (a small sketch of such an encoding follows below). If you do that, you kind of recover some of the image, but they also analyze the derivatives. So this is the ground truth, and this is the gradient of the ground truth, which is basically a Sobel filter, if you know that, basically an edge detector, color gradient thing, and then this here is the second derivative, the Laplacian of the image. Ideally, if your implicit representation models the signal very well, it should also model the derivatives of the signal very well. So now we're kind of connecting it to what we saw at the beginning: these siren networks are specifically designed to not only match the signal right here, but also match its derivatives. Now for an image, it's maybe not that important to match the derivatives, even though it is a bit; there are small things, like you can see right here the grass isn't as well represented, and here you mostly get some artifacts that you see in the gradient. That might not be as important for images in terms of human vision, but for many signals it is also important to match the derivatives. And here with the siren, even though it's trained on the image itself, you can see that its derivatives are very much in line with the original signal. So simply by matching the signal, this architecture manages to also capture the derivatives of the signal and therefore have a more faithful representation.
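For reference, here is a hedged sketch of such a positional encoding, assuming PyTorch; the exact frequency schedule, powers of two times pi in the transformer style, is an assumption, not necessarily what the paper's baseline uses.

```python
import math
import torch

def positional_encoding(coords: torch.Tensor, num_freqs: int = 10) -> torch.Tensor:
    """Map (N, 2) coordinates to (N, 4 * num_freqs) sine/cosine features."""
    feats = []
    for k in range(num_freqs):
        freq = (2.0 ** k) * math.pi           # geometrically increasing frequencies
        feats.append(torch.sin(freq * coords))
        feats.append(torch.cos(freq * coords))
    return torch.cat(feats, dim=-1)            # richer position features for the MLP
```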
Okay, so that was the positional encoding; the ReLUs are simply the ReLU network, and I think somewhere in here there is an RBF kernel. If you young kids don't know what an RBF kernel is, then, yeah, I don't want to dunk on anyone; it's basically, how do I explain it, you map your input into an infinite-dimensional space using Gaussian kernels. Maybe Wikipedia is better at that than I am. So, sirens: what do they do in order to be able to capture a signal very well, and how is a siren different from, like, an RBF network? The answer is pretty, pretty simple. So the architecture of a siren, and the N, does it already stand for network? I'm not sure, honestly, maybe we'll find out. Yes, it's sinusoidal representation networks, so the N is network, so we don't say siren network, we say siren. A siren is simply made of, what is it here, a multi-layer perceptron, basically. So this here is the network; this is the final layer of the network, which is a linear layer, and before that you have all these layers, not concatenated, but following each other. So it's a multi-layer perceptron, pretty regular, and each of the layers in the multi-layer perceptron is made up like this: you have an input, you multiply it by a weight matrix, you add a bias, and then you put it through a sine wave. So the sine wave here is really the only change from an MLP; otherwise, where you usually have something like a sigmoid or a ReLU function, you now have a sine wave. And I mean, it's a bit weird, right? Because a ReLU function is like this: it has this center point where it kind of switches, here it's linear and monotonic, and here it's constant. And even a sigmoid, the sigmoid is, don't you remember, like this, so it's kind of constant here, constant here, and monotonic in between. We're used to monotonic activation functions, whereas a sine wave is really different; the sine wave, of course, is something like this, where it's not monotonic at all. If you want to increase your function value at any point, and you're here and you go up the hill and you do a step that's too large, you end up down the hill again. But it turns out that these networks have some particularly good properties if you want to capture natural signals, and they have some bad properties, namely the fact that they are periodic and go down again. And the reason why they get around the bad properties is, or so they claim, that they initialize the network in a very particular fashion. Because I think at least I, when I started in deep learning, had this idea, so a lot of other people must have had this idea too: hey, what if I just replaced the nonlinearity with a sine function, could I do something with this? And then I tried it out, and it didn't really work, so I scrapped that. Now this here, of course, isn't simply replacing the nonlinearity in a standard network; it's also using the neural network for something completely different than I would, namely to learn these implicit representations, and not, like I would, simply for learning from a data set. But still, it seems like you need to initialize them with very careful consideration, and we'll go onto that right now. So they describe it; it's not very interesting, but you need to sample the weights uniformly from this uniform distribution, and they have a proof in the supplementary material where they sort of show why that is. Here: we propose to draw weights with c = 6 such that w is drawn from this uniform distribution right here. This ensures that the input to each of the sine activations is normal distributed with a standard deviation of one. Since only a few weights have a magnitude larger than pi, the frequency throughout the sine network grows only slowly. Finally, we propose to initialize the first layer of the sine network with weights so that the sine function spans multiple periods over [-1, 1]. We found w0 = 30 to work well for all the applications in this work. The proposed initialization scheme yielded fast and robust convergence using the Adam optimizer for all experiments in this work. (A small code sketch of this initialization follows below.) So the initialization here takes a fairly prominent place in the paper, which tells me that maybe they have spent a lot of time working on this, and if that is the case, it is to their credit, because I guess most people like me would try out something like this and after a while realize it doesn't work; to be so convinced as to go and really figure out how we need to initialize these to make them work, when of course there's still like a 99% chance that it's not going to work once you've done that, is quite respectable, I find. It might have been really different, this might have been the first thing they thought about and it just worked out, but yeah. Okay, so what is the deal with all these derivatives? Since this network right here has these sine waves in it, it's a neural network with sine waves as nonlinearities, what now is the first derivative of that neural network with respect to its input? And the cool thing about this is: what's the first derivative of a sine wave? It's of course a cosine, which is a sine wave that's simply phase shifted, and then the next derivative again is a shifted sine wave, and so on. So the derivative of a siren is a siren, and that does not hold for any of these other nonlinearities. For ReLUs, the derivative of a ReLU network is, if I take the derivative of this, a constant zero right here and then a constant one right here, and if I then take the derivative again, it's simply the constant zero function. And all these other nonlinearities, their derivatives are different from themselves. Since we want to not only match a signal, but also the signal's derivatives, this property of the siren becomes very, very handy. So how do you train a siren? We've already alluded to how you would do that in the idea of matching an image, where you simply train the pixel coordinates against the RGB values, but there's more you can do with sirens, given that their derivatives are also sirens. With the image part, we simply said we want to find a relationship between the input x and the output. What we can also do is say: no, no, no, we want to find a relationship between the input and its first derivative, and not even have the output itself as part of the loss function, and then we can see what comes out. So that's what they do right here.
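Here is a minimal sketch of a sine layer with the initialization just quoted, assuming PyTorch; the way w0 is applied to the hidden layers and the bias handling follow common SIREN implementations and are assumptions, not quotes from the paper.

```python
import math
import torch

class SineLayer(torch.nn.Module):
    def __init__(self, in_features: int, out_features: int,
                 is_first: bool = False, w0: float = 30.0):
        super().__init__()
        self.w0 = w0
        self.linear = torch.nn.Linear(in_features, out_features)
        with torch.no_grad():
            if is_first:
                bound = 1.0 / in_features                  # first layer spans several periods
            else:
                bound = math.sqrt(6.0 / in_features) / w0  # keeps sine inputs roughly N(0, 1)
            self.linear.weight.uniform_(-bound, bound)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.sin(self.w0 * self.linear(x))

siren = torch.nn.Sequential(                               # five layers, final one linear
    SineLayer(2, 256, is_first=True),
    SineLayer(256, 256), SineLayer(256, 256), SineLayer(256, 256),
    torch.nn.Linear(256, 3),
)
```

Dividing the hidden-layer bound by w0 compensates for the w0 factor in the forward pass, which is what keeps the pre-activation statistics stable across depth.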
Okay, so here you see the ground truth image, this is its gradient, and this is its Laplacian. We've already seen that we can fit the image itself, but what if we just fit the first derivative? So we input this thing right here into the siren and do the same thing: the siren still maps x and y to RGB, but our loss function isn't going to compare RGB values; our loss function is going to depend on the gradient. So our loss function is going to be something like the gradient of the image, let's call the image I, minus the gradient of the function that we fit, this function right here. Because we have these auto-differentiation tools right now, we can easily make this into a loss function; here we are looking for the function whose gradient matches the gradient of the image (a small code sketch of this follows at the end of this passage). Now, again, you can say, why can't we just match the image itself? Valid point, but it's not about why can't we just; it's about demonstrating the power of these networks. So if you only match the gradients, you still train the function, you still train the weights of the function itself, but the loss function depends on the gradient of that function. If you do that and you then look at the function, you can ask it to produce the image by simply cycling over each of the coordinates, and you'll find that, look at that, just by matching the gradient, you match the image itself pretty, pretty well. And that's pretty cool. Now, of course, you're not going to match the RGB values; this is a grayscale image, and there's kind of a reason for that: the gradient loses the constant bias information. If you matched an RGB image, I'm going to guess you're going to have very much color distortions; here, in the grayscale case, what you're going to have is just distortions in luminosity. If you have the derivative of a function and you want to find the function itself, you integrate, and the solution is always an entire space of functions, because when you integrate, you have to add a constant, and you don't know what the constant was in the original function, because when you take the derivative, the constant drops away. So similarly here, what we'd expect is that the image that we're getting back will be faithful with respect to its edges, since we're matching the gradient, and the gradient is basically an edge detector, so we'll match the sort of edge information of the picture, which you can clearly see; but we would expect some difference in overall luminosity. And I don't even know how exactly they did this, because they now have to choose a constant to add; maybe they just chose it in some way, or maybe they just let the network do it. But this is still pretty, pretty impressive; you can see there's some detail missing, but not much. And the exact same thing you can do for matching the second derivative: now you match the Laplacian of the image, and remember, the ReLU networks don't even have a Laplacian, it's a constant, so this is something you could never do with them. And you can see that the resulting image is still pretty good; this is now missing the constant information in the zeroth and the first derivative, and still the reconstruction is pretty good. All right, so this demonstrates the power of these networks. Again, our entire data set is just this image; if we fit something, then this thing right here is our entire data set, there's no big data set, and this is the data set and the test sample at the same time. I guess you can consider the Laplacian here the data set and then the actual image the test sample, like the label or something like this. So what does that buy you? Here is a thing you can do: if you want to mix two images, what do you do? If you want to mix this and this, what you could do is linearly interpolate, but that would not be very cool, because right here you have a lot of very bright pixels, which probably have values close to one, and here you have dark pixels, which probably have values closer to zero, and if you simply add them together and divide by two, then you kind of get a wash of the two; similarly here, you'd kind of wash out the bear, because some pixel values from the other image would come over. Generally, not a good idea to mix images like this. Now, with GANs we can do this, but we'd have to have a training data set and so on. Here what we'll do is we'll simply take the gradient of this, and the gradient of this, and then add the two gradient maps. On the left is the composite gradient, and what this does is: right here in the sky there is no gradient information in this image, because it's just a flat patch of sky, and maybe down here there's not that much gradient information either, there is a bit, but not here, and that's where this bear head is. If you want to mix images, it can be a good idea to mix their gradients, because generally the information in an image is where the gradients are. So we would expect the composite gradient to carry over this portion, maybe a bit of this portion, this portion and this portion, so everything where the signal is not flat. So here you can see the composite gradient, and if we again fit our function such that the gradient of the function matches this mixed gradient right here, then this is the gradient of the function that we fit, and this is the actual function. And you can see, pretty, pretty good: it basically mixed everywhere where there was gradient, and this is now just reconstructed from this gradient. There is, at least as I understand it, no pixel information carried over from either of those images; their gradients are simply added into this composite gradient, the gradient is fit, and then the function is asked to output a pixel value at each location, and that's that. Okay, so this is just a simple thing that you can play around with, but they do other, more interesting things right here, for example representing shapes with signed distance functions. So let's go over the actual formulation of their loss function, which we haven't quite done yet. It's here, it's very complicatedly stated, but ultimately what this means is: a component right here are these C_m, which are constraints, so this loss function operates on these constraints.
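Here is a hedged sketch of the gradient-matching fit described above, assuming PyTorch, the `coords` tensor from the earlier sketch, a single-channel siren `phi`, and a hypothetical target gradient map `target_grad` of shape (N, 2), for example from a Sobel filter on the grayscale image.

```python
import torch

coords_g = coords.clone().requires_grad_(True)
out = phi(coords_g)                                    # (N, 1) predicted intensity

# d(out)/d(coords): summing works because each output row depends only on its
# own coordinate row; create_graph=True keeps this differentiable, so the loss
# can still backpropagate into the network weights
grad = torch.autograd.grad(out.sum(), coords_g, create_graph=True)[0]  # (N, 2)

loss = ((grad - target_grad) ** 2).mean()              # match gradients, not pixels
loss.backward()
```

For the image-mixing experiment, `target_grad` would simply be the sum of the two images' gradient maps.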
The constraints get as arguments this a(x), which is basically just x, kind of anything depending on the input itself, then the output of the function, the gradient of the output of the function, the second derivative, third derivative, and so on. So these sirens can fit anything that you can formulate as a set of constraints that relate the input of the function to its output or any of its derivatives. And we've already seen that: if we fit an image, our only constraint is that these things match the original image, that the coordinates are mapped to the RGB values; when we match the gradients, we don't care about the values themselves, we only care about the relation between the input and the gradient, and so on. So the loss function is literally just over the entire signal space, which in our case was the entire image: we want these constraints to hold, or to be as small as possible; the constraints are always formulated such that if they are fulfilled, they equal zero. For example, the L2 loss between the RGB values of the true image and the RGB values that you fit would be a constraint like this. And of course, the more differentiable you make the constraint, the easier the network has it fitting it; that's why there is this norm right here. But it's not that complicated; it simply says: whatever you can formulate as a constraint relating the inputs to the outputs or any of the derivatives of this implicit representation, that is the loss function. All right, so the next interesting thing we can do, as I said, is representing shapes with signed distance functions, and we're going to go slowly. It's not that hard: inspired by recent work on shape representation with differentiable signed distance functions, SDFs, we fit SDFs directly on oriented point clouds, using both ReLU-based implicit neural representations and sirens. Okay, so what is an SDF, a signed distance function? That's pretty easy: a signed distance function is simply a distance function with a sign, like, wow. If you have a boundary somewhere between things, then of course any point has a distance to the boundary, but if you have a signed distance function, each point also has a sign in front of that distance, and that means all the things on one side of the boundary maybe have a plus, and all the things on the other side maybe have a minus. So even though two points could be the same distance from the boundary, one is like plus five away and one is negative five away. This is useful, for example, when you fit point clouds, as they do in this example. So when they have point clouds, and that's usually in 3D space, you basically have points right here, and you know that the points should represent some kind of shape, maybe a wall. They have these room interiors, as you can see right here; this is a 3D scene, but you only have a point cloud of it. What that means is: maybe you were in this room and you put up a laser scanner right here, I don't know how a laser scanner looks, and the laser scanner shoots lasers at random locations and always measures the distance. That's how you end up with a point cloud: in 3D space, you know where the laser hit something. And a reasonable assumption to make, if you have a dense sampling of this, is that you should be able to connect those points in some way to obtain the actual continuous shape of the thing that you measured. This is what we're going to try to do with these sirens: to go from point clouds to shape by training an implicit representation. So we're going to train a neural network that represents this shape, basically by mapping coordinates to signed distance values. Whenever we ask the neural network, at this location here, what's the signed distance, it's going to tell us, oh, it's plus five; or at this location here, what's the signed distance, it's going to tell us, it's zero. So we're going to train a neural network to do that. Okay, this is a bit more complicated, and since we have the awesome power of these sirens, we can also add more constraints. This amounts to solving a particular eikonal boundary value problem that constrains the norm of spatial gradients to be one almost everywhere. So this eikonal boundary value problem, this is a property of signed distance functions: the norm of the gradient with respect to the input is one almost everywhere. Almost everywhere means everywhere, I guess, except at the boundary itself, where the distance is zero, though I could be wrong. Note that ReLU networks are seemingly ideal for representing SDFs, as their gradients are locally constant and their second derivatives are zero. Adequate training procedures for working directly with point clouds were described in prior work. We fit a siren to an oriented point cloud using a loss of the form given here, and now we look at the loss (a small code sketch of this loss follows after this passage). The first thing you observe in the loss is that it is made of three different integrals, and that simply means they partition the space into two different regions, so to say. Maybe I can zoom here. The first region is going to be whatever is on the boundary itself, and that's basically wherever a point of the point cloud hit: whenever you have a point on the boundary itself, that's going to be your Omega_0, and then all the other points right here are going to be part of your Omega without the Omega_0. So you're going to have different constraints for each of these, and I have to pay attention that I don't say anything wrong. You'll have this constraint on the gradient. My tablet, maybe I'll start monetizing just so I can get a new tablet. Okay, so this condition right here says that the norm of the gradient should be one, and that's actually everywhere, so I was wrong that the gradient is only one outside the boundary. Then you can see right here: the last part is over all the points that are not on the boundary. Since our network maps any point in 3D space to a signed distance value, most of these points aren't going to be on the boundary itself, even though in the mini-batches they train on, they sample points on and off the boundary at equal rates, just to have the network train more stably. So this is a condition on all the points off the boundary, and they say here, this function is this exponential function with alpha larger than one; it penalizes off-surface points for creating SDF values close to zero. So this is simply a regularizer that says: whenever I input coordinates that are far away from the boundary, from the surface, then the signed distance function should be large; it should not be close to zero, because the point is away from the boundary. And in practice, how you're going to train this is: if you have a point cloud and your coordinates are far away from the next point, then this should be a high value; otherwise, the network is penalized. So we have this condition right here on the gradients, which we know a signed distance function should fulfill; we have this thing right here, which is a regularizer, basically telling points far away from our data that they should have a high distance value; and then we have this last thing right here, which is for all the points on the surface itself. Here's what we require: first of all, we require their value to be zero, or close to zero, since this is the loss function and we want to minimize it. This is simply the output value, so the signed distance function of points on the surface, the things we actually measure, should be zero, because the signed distance function measures how far away from the surface you are. So this is pretty intuitive. But then also this right here: it says that the gradient of the signed distance function and the normal vector of that point should align. And that, I think, is because we have an oriented point cloud: what we can do is kind of connect points next to each other and then calculate the normal vectors from that. So if we ask the network, hey, what do you think about this position right here, the network should tell us: first of all, the signed distance function should be zero, because it's on the boundary; second of all, the norm of the gradient of the signed distance function at that point should be one, because that's a property of signed distance functions; and third, and that's the new thing right now, the gradient of the signed distance function should align with this normal vector. And that's pretty intuitive, because you want the signed distance function to increase in value as you move off the surface; the gradient basically tells you where the highest increase in value of the function is, and you want it to increase along the normal direction and not along any other direction. So that's a pretty good constraint to have. So you can see right here, and I mean, you don't really have to understand everything exactly about signed distance functions and so on, but these sirens are pretty good at capturing all of these different constraints: this was a point on the surface; for points off the surface, you additionally say, hey, you should have a pretty high value, and actually not a zero value but a pretty high value. And again, we only ever fit one scene with an entire network, so the entire neural network, this whole structure right here, everything is captured by this one neural network that we train on the point cloud. And you can see that if you use a ReLU, what you'll get is super, super wobbly, because even if you train the ReLU with the same loss function, these constraints on the gradients are just not going to work out with the ReLU, because the gradients are constant and discontinuous, whereas the siren can fulfill all of these constraints on the different parts, on the values and on the gradients of the loss function. And they have another example right here, where they fit this shape. You see, all the details are preserved way better, whereas the ReLUs simply kind of flatten over everything and make it wobbly. All right, so I hope this sort of made sense, and we'll go to the last thing right now. My tablet is restarting.
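Here is a hedged sketch of the point-cloud loss just described, assuming PyTorch, a siren `phi` mapping (N, 3) points to signed distances, hypothetical tensors `surface_pts` with matching `normals`, and random `free_pts` sampled off the surface; the per-term weights are omitted and the exact alpha value is an assumption.

```python
import torch
import torch.nn.functional as F

def sdf_and_grad(model, pts):
    pts = pts.clone().requires_grad_(True)
    sdf = model(pts)                                    # (N, 1) signed distances
    grad = torch.autograd.grad(sdf.sum(), pts, create_graph=True)[0]  # (N, 3)
    return sdf, grad

sdf_on, grad_on = sdf_and_grad(phi, surface_pts)
sdf_off, grad_off = sdf_and_grad(phi, free_pts)

eikonal = ((grad_on.norm(dim=-1) - 1).abs().mean()      # |grad| = 1 everywhere
           + (grad_off.norm(dim=-1) - 1).abs().mean())
on_surface = sdf_on.abs().mean()                        # SDF = 0 on the surface
normal_align = (1 - F.cosine_similarity(grad_on, normals, dim=-1)).mean()
off_surface = torch.exp(-100.0 * sdf_off.abs()).mean()  # push |SDF| up off the surface

loss = eikonal + on_surface + normal_align + off_surface  # per-term weights omitted
```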
All right, so I hope this sort of made sense; we'll go to the last thing now. My tablet is restarting, so in the meantime I want to show you the website they have for this. It's a pretty cool website to go along with the paper, and as you can see right here, they have all the samples from the paper, but in animated form. This is the fitting process, the learning process of how you represent these images: as I said, you want to fit these functions to the ground truth, and that happens in steps, very much like you would train any deep learning model. I think they use the Adam optimizer; it's just that the data set now comes entirely from this one ground-truth image. And you can see that the siren network on the right zeroes in on the image pretty quickly and then fills in the details subsequently. They also represent audio with this, and you can watch them represent video and compare that to ReLU representations. Then here, solving the Poisson equation, is where you only fit the gradients or the Laplacian of an image and still recover a good image, which is pretty cool. And here you can see that you can actually play around with these things, so you can click on them: on the left you can see what the siren network learned, and, scrolling down a bit, on the right is a ReLU representation of the same thing. So this is the same network with the same objective, it just has ReLU instead of sine waves as activation functions, and you can see how much of a difference that makes. The middle one is a ReLU with positional encodings, still not good. The one thing you have to keep in mind: if you look at how big these sirens are, how many parameters they have, they're about at the order of magnitude of the number of pixels in the image. So it's certainly a cool method, but it's not like the implicit representation here generalizes particularly well beyond the signal it was fit to. Though it would be very cool to see what happens outside the image: since you can input any x-y coordinates, you could technically continue the picture to the bottom and just see what the siren thinks should be there. All of these things would be pretty cool to experiment with, and they have the code available to do that. You can also see the fitting process of the Helmholtz equation right here, and related projects. Pretty cool website, I definitely invite you to check it out.
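If you want to reproduce that image-fitting demo yourself, here is a minimal sketch of a siren and its training loop, assuming PyTorch. The layer sizes, learning rate, and step count are arbitrary choices of mine; the initialization bounds and the w0 = 30 frequency factor are the ones described in the paper.

```python
import numpy as np
import torch
from torch import nn

class SineLayer(nn.Module):
    # linear layer followed by sin(w0 * (Wx + b)), initialized as in the paper:
    # first layer uniform(-1/n, 1/n), later layers uniform(-sqrt(6/n)/w0, sqrt(6/n)/w0)
    def __init__(self, n_in, n_out, w0=30.0, first=False):
        super().__init__()
        self.w0 = w0
        self.linear = nn.Linear(n_in, n_out)
        bound = 1.0 / n_in if first else np.sqrt(6.0 / n_in) / w0
        nn.init.uniform_(self.linear.weight, -bound, bound)

    def forward(self, x):
        return torch.sin(self.w0 * self.linear(x))

# a five-layer siren mapping (x, y) coordinates to RGB values
siren = nn.Sequential(
    SineLayer(2, 256, first=True),
    SineLayer(256, 256), SineLayer(256, 256), SineLayer(256, 256),
    nn.Linear(256, 3),
)

def fit_image(image):                     # image: (H, W, 3) tensor in [0, 1]
    h, w, _ = image.shape
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, h),
                            torch.linspace(-1, 1, w), indexing="ij")
    coords = torch.stack([xs, ys], dim=-1).reshape(-1, 2)
    rgb = image.reshape(-1, 3)
    opt = torch.optim.Adam(siren.parameters(), lr=1e-4)
    for _ in range(2000):
        idx = torch.randint(0, coords.shape[0], (4096,))   # mini-batch of pixels
        loss = ((siren(coords[idx]) - rgb[idx]) ** 2).mean()
        opt.zero_grad(); loss.backward(); opt.step()
```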
And we're back; my tablet crashed, so let's continue. They now go on to use sirens to solve PDEs. In physics you often have these problems where you are given an equation, but the equation doesn't necessarily involve the function itself; it relates derivatives of the function to each other, or to the function. One example here is this Helmholtz equation, given as this, where I think f is a known source function, and the wave field is what you want to figure out, which is unknown, and this H includes, for example, this right here, which is the Laplace operator. So you're given the relation between the wave field and its Laplacian, and your task is to recover the wave field. I don't want to go very much into this right here, but what you can do is, you can have a room and measurements of the wave or of its derivatives, and then you calculate backwards from the measurements to what the actual wave was. These sirens turn out to be very, very good at things like this, at these solving-for-the-wave-field problems; essentially this amounts to a numerical solution of partial differential equations in physics using sirens, and that's pretty cool. The last thing they do, and this gets back more to a machine learning context, is what they call learning a space of implicit functions. They go ahead and say, okay, we can represent images in terms of these functions, but each image is basically its own function, its own optimization or fitting problem; can we somehow learn functions of functions? So this comes back to a more classic machine learning setting where you have a network that gives you the parameters of the siren. Let's go to the example: you'll have an image like this one where a few pixels are given and most of the pixels are masked, and you want to put this into a CNN, and the CNN should output the parameters of the siren network. Why the parameters? Because the siren network, given its parameters, is the image itself; the siren is the image, if you know its parameters. So here you train a CNN to give you the parameters of the siren. That's almost the same as training a CNN to give you the image directly, but again, we don't want the explicit representation of the image, we want the implicit representation, such that it's continuous and we can manipulate it and so on. The CNN is now trained on a data set: you take CIFAR-10 and you construct a whole bunch of images with only about a hundred pixels remaining, and then you train the CNN to output the parameters of the siren that would reconstruct the ground truth. Then you can test that on held-out images, and you can see right here the results are pretty good. These are test samples, images that were not seen during training of the CNN, and therefore the resulting siren also hasn't seen the image; the siren is simply parameterized by the CNN. You can see this works pretty well: even if you only have 10 pixels, you already get something out of it, and with a hundred pixels you already get fairly close to the ground truth. Now, these are not GAN-quality images, of course, but it's pretty impressive to see that an implicit parameterization, an implicit representation, of the images can be so powerful. So this is a pretty cool thing, and it's kind of more back to the machine learning framework that you're used to, because there's a train and a test data set; the only difference is that the output is a function, given by its parameters, and not the actual pixel values.
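Here is a minimal sketch of that hypernetwork idea, again assuming PyTorch: a CNN encoder emits one flat parameter vector, which is then sliced into the weights of a small siren and applied functionally. All architecture sizes are my own illustrative choices, and the paper's actual encoder and hypernetwork design differ; this only shows the mechanism.

```python
import math
import torch
from torch import nn

H = 64                                                # siren hidden width (illustrative)
SHAPES = [(H, 2), (H,), (H, H), (H,), (3, H), (3,)]   # (x, y) -> RGB, two hidden layers
N_PARAMS = sum(math.prod(s) for s in SHAPES)

class HyperCNN(nn.Module):
    # maps a (masked) 32x32 image to one flat siren parameter vector
    def __init__(self):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(), nn.Linear(64 * 8 * 8, N_PARAMS),
        )

    def forward(self, img):                           # img: (3, 32, 32)
        return self.encode(img.unsqueeze(0)).squeeze(0)

def siren_apply(params, coords, w0=30.0):
    # run the siren "functionally" with the CNN-produced parameters
    tensors, i = [], 0
    for shape in SHAPES:
        n = math.prod(shape)
        tensors.append(params[i:i + n].reshape(shape))
        i += n
    w1, b1, w2, b2, w3, b3 = tensors
    h = torch.sin(w0 * (coords @ w1.T + b1))
    h = torch.sin(w0 * (h @ w2.T + b2))
    return h @ w3.T + b3                              # RGB prediction per coordinate

# one hypothetical training step: reconstruct the full image from the masked one
# loss = ((siren_apply(HyperCNN()(masked_img), coords) - rgb) ** 2).mean()
```

The design point here is that the CNN never outputs pixels; it outputs a function, and the reconstruction loss is computed by evaluating that function at the ground-truth coordinates.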
Okay, so let's look at the broader impact statement: "The proposed SIREN representation enables accurate representations of natural signals, such as images, audio, and video in a deep learning framework. This may be an enabler for downstream tasks involving such signals, such as classification for images or speech-to-text systems for audio. Such applications may be leveraged for both positive and negative ends. SIREN may in the future further enable novel approaches to the generation of such signals. This has potential for misuse in impersonating actors without their consent. For an in-depth discussion of so-called deepfakes, we refer the reader to a recent review article on neural rendering." This has, like, no perplexity at all; is anyone benefited by this? Seriously? Okay, but at least we made the authors think about the consequences of their research. So, I invite you to check out this paper; maybe with this video you can now follow a bit better what happens here. This is a different paradigm of research, a cool paradigm, away from your usual machine learning framework, and I'm excited to see what happens next in this area. I also invite you to check out the website; they have lots of videos and goodies and so on. And with that, bye bye.
[ { "end": 5.8, "start": 0, "text": " Hi there! Today we're looking at implicit neural representations with periodic" }, { "end": 11.52, "start": 5.8, "text": " activation functions by Vincent Sitzman, Julian N. P. Martel, Alexander W. Bergman," }, { "end": 18.240000000000002, "start": 11.52, "text": " David B. Landell and Gordon Wettstein. So this paper is a bit of a special paper." }, { "end": 21.92, "start": 18.240000000000002, "text": " If you're like me coming from like classic machine learning or deep" }, { "end": 29.78, "start": 21.92, "text": " learning, things like this, this paper requires you to think around your notion" }, { "end": 34.4, "start": 29.78, "text": " of what it means to handle data and so on a bit and to think about data points" }, { "end": 40.52, "start": 34.4, "text": " and so on. Essentially what they're doing is they are representing signals such as" }, { "end": 46.84, "start": 40.52, "text": " images or sound or generally waves or point clouds. They're representing these" }, { "end": 53.56, "start": 46.84, "text": " signals as functions mapping, for example, from their coordinates to their values." }, { "end": 60.92, "start": 53.56, "text": " We'll see what that entails. They're not the first ones to do this, but" }, { "end": 66.52000000000001, "start": 60.92, "text": " they managed to do this very well using these new models called sirens, which" }, { "end": 75.68, "start": 66.52000000000001, "text": " are basically neural networks that have sine waves as" }, { "end": 81.96000000000001, "start": 75.68, "text": " their nonlinearities instead of like relu or hyperbolic tangents and so on." }, { "end": 87.83999999999999, "start": 81.96, "text": " It turns out that if you initialize these very carefully, those can be made to" }, { "end": 94.39999999999999, "start": 87.83999999999999, "text": " capture these signals very, very well. That's the kind of high-level overview" }, { "end": 101.24, "start": 94.39999999999999, "text": " and we'll go through the paper in a bit of a fashion of someone that is not in" }, { "end": 105.61999999999999, "start": 101.24, "text": " this particular literature. This is not going to be like as in-depth or" }, { "end": 114.2, "start": 105.62, "text": " technical as usually because I myself am not super familiar with this kind" }, { "end": 119.80000000000001, "start": 114.2, "text": " of literature, with the neural representations and so on. If you go" }, { "end": 123.80000000000001, "start": 119.80000000000001, "text": " at this paper from a machine learning perspective, you're going to" }, { "end": 129.56, "start": 123.80000000000001, "text": " be ultimately super confused at the beginning. I'm going to try to" }, { "end": 137.4, "start": 129.56, "text": " clear up and retrace my steps of my confusion. I love that this" }, { "end": 142.96, "start": 137.4, "text": " paper starts out at, we're interested in a class of functions phi that satisfy" }, { "end": 150.44, "start": 142.96, "text": " equations of the form this right here. We are interested in a class of" }, { "end": 157.84, "start": 150.44, "text": " functions. I've never particularly had many dreams about functions like" }, { "end": 164.20000000000002, "start": 157.84, "text": " this. How can you look at this? We're interested in the" }, { "end": 171.04, "start": 164.20000000000002, "text": " relation between inputs and outputs. This here is the function as you can see." }, { "end": 179.36, "start": 171.04, "text": " This maps input to output. 
We're also interested in its derivatives." }, { "end": 184, "start": 179.36, "text": " Here you go first, second, third derivative and so on. This function" }, { "end": 189.4, "start": 184, "text": " right here is what we're going to call a neural representation or an implicit" }, { "end": 195.04, "start": 189.4, "text": " representation. It's called a neural representation if it's a neural" }, { "end": 201.16, "start": 195.04, "text": " network. So far so good. You've seen this, right? You've seen this" }, { "end": 206.72, "start": 201.16, "text": " could be a data point and then could map it to a label or something like this." }, { "end": 212.52, "start": 206.72, "text": " Since we're going to represent images, you already know maybe a GANs, a" }, { "end": 217.28, "start": 212.52, "text": " generative adversarial network, where this here is the latent vector and then" }, { "end": 223.48000000000002, "start": 217.28, "text": " you have a neural network mapping this latent vector to an image. This" }, { "end": 231.28, "start": 223.48000000000002, "text": " is going to produce an image. This here is quite similar but not quite." }, { "end": 237.32000000000002, "start": 231.28, "text": " Again I guess this here would count as the representation, the continuous" }, { "end": 242.24, "start": 237.32000000000002, "text": " representation of this picture. However in this case right here the function" }, { "end": 250.16, "start": 242.24, "text": " itself is the representation. So in a GAN what we do, we learn this right here," }, { "end": 254.48000000000002, "start": 250.16, "text": " this function phi. We learn this from data such that if I plug in one" }, { "end": 258.92, "start": 254.48000000000002, "text": " particular vector I get one particular image and if I plug in another vector I" }, { "end": 263.24, "start": 258.92, "text": " get another image and the function always stays the same. Here it's going to" }, { "end": 270.04, "start": 263.24, "text": " be one function per image. So each image, the function is the image. So how is a" }, { "end": 276.48, "start": 270.04, "text": " function an image? If I have an image and it's made of pixels," }, { "end": 285.88, "start": 276.48, "text": " each pixel has an X and the Y coordinate. Let's call that X1 and X2" }, { "end": 292.88, "start": 285.88, "text": " the coordinate of that and each pixel also has a color value, which is" }, { "end": 299.16, "start": 292.88, "text": " three-dimensional. So each pixel has a three-dimensional RGB color value." }, { "end": 306.76000000000005, "start": 299.16, "text": " Technically an image is a function from coordinates to pixel values." }, { "end": 314.20000000000005, "start": 306.76000000000005, "text": " If this is my image, it is represented by a function, then if I input any" }, { "end": 321.24, "start": 314.20000000000005, "text": " coordinates like 3, 4, that function should return what are the RGB values at" }, { "end": 329.64, "start": 321.24, "text": " that. Maybe it's like 0.5, 0.7 and 0.1. Those are the RGB values at that." }, { "end": 337.48, "start": 329.64, "text": " Now the goal is to have this right here be a neural network where I have a" }, { "end": 342.24, "start": 337.48, "text": " multi-layer perceptron and I think they always use a five layer MLPs, so" }, { "end": 349, "start": 342.24, "text": " really simple neural networks. 
You simply input, so here you have two input" }, { "end": 354.48, "start": 349, "text": " neurons where this here goes, so one gets the three, one gets the four, then this" }, { "end": 359.8, "start": 354.48, "text": " travels through the network and at the end the network should output three" }, { "end": 366.76, "start": 359.8, "text": " output nodes and this should be like the 0.5, 0.7, 0.1." }, { "end": 376.6, "start": 366.76, "text": " Now again this network here is, they now train this network to map input to output." }, { "end": 384.16, "start": 376.6, "text": " To map coordinates to values and this of course is one particular image, so" }, { "end": 389.48, "start": 384.16, "text": " you're going to have one neural network per image. Now you might reasonably ask" }, { "end": 394.52000000000004, "start": 389.48, "text": " why do we do it like this? Why don't we just save the image as the pixel" }, { "end": 399.44, "start": 394.52000000000004, "text": " values? Why do we need a function mapping the coordinates to the pixels?" }, { "end": 405.40000000000003, "start": 399.44, "text": " That's a valid question I guess and the image is just one example of this, but" }, { "end": 410.15999999999997, "start": 405.4, "text": " one advantage that you immediately get is that now you have a continuous" }, { "end": 415.67999999999995, "start": 410.15999999999997, "text": " representation. So now you cannot, not only do you know, because if you store an" }, { "end": 422.03999999999996, "start": 415.67999999999995, "text": " image like this, you only know its value at each of the pixel locations. However" }, { "end": 427.35999999999996, "start": 422.03999999999996, "text": " if you store an image like this you know its value at any continuous in-between" }, { "end": 432.79999999999995, "start": 427.35999999999996, "text": " location, right? So you can ask the network what's the pixel value at 3.2" }, { "end": 438.68, "start": 432.8, "text": " and 4.1, right? It will give you an answer and if the network is trained well it" }, { "end": 443.36, "start": 438.68, "text": " will give you sort of an answer that makes sense. That is, what's the exact" }, { "end": 452.88, "start": 443.36, "text": " color at this sub pixel location right here? Now so far so good, right? So" }, { "end": 457.44, "start": 452.88, "text": " essentially this boils down to not really a machine learning problem in the" }, { "end": 463.4, "start": 457.44, "text": " classic sense, but an optimization problem. Because all you have to do is" }, { "end": 468.52, "start": 463.4, "text": " you have to make the neural network match all input to all output. There's" }, { "end": 472.92, "start": 468.52, "text": " not really a training and a test set right here. Namely your data set is going" }, { "end": 477.52, "start": 472.92, "text": " to be all the pixels in the image. So each pixel in the image is going to be" }, { "end": 486.52, "start": 477.52, "text": " one data point because it's one, so each pixel is x, y, 2 RGB. And the way they" }, { "end": 490.76, "start": 486.52, "text": " train these networks, now at the examples of pixels, the way they train it they" }, { "end": 496.96, "start": 490.76, "text": " simply sample a mini batch of pixels like this one, this one, this one, this one," }, { "end": 504.59999999999997, "start": 496.96, "text": " this one. 
They use that mini batch to train the network to do one step to" }, { "end": 507.84, "start": 504.59999999999997, "text": " train the network and then they sample another mini batch and so on. You might" }, { "end": 512.1999999999999, "start": 507.84, "text": " sample the same pixels multiple times, but ultimately what you want is sort of" }, { "end": 517.9200000000001, "start": 512.2, "text": " a continuous representation of the image. This is not a new idea and" }, { "end": 522.88, "start": 517.9200000000001, "text": " this has been around and they cite a lot of literature where this has been around" }, { "end": 531.4000000000001, "start": 522.88, "text": " before. So what their new thing is is that they say these other representations," }, { "end": 536.48, "start": 531.4000000000001, "text": " so if you use a neural network in a classic sense like this and you do your" }, { "end": 542.76, "start": 536.48, "text": " training with the mini batches like this, what you'll end up with is a bad image." }, { "end": 547.6, "start": 542.76, "text": " So if you then simply go, once you've trained the network, you" }, { "end": 552.8000000000001, "start": 547.6, "text": " can take it, take your network and you can simply output each pixel location. So" }, { "end": 559.16, "start": 552.8000000000001, "text": " you say okay now I'm going to reproduce this image using my network because if" }, { "end": 563.6, "start": 559.16, "text": " it's trained well it could certainly give me back the positions at the" }, { "end": 571.84, "start": 563.6, "text": " pixels. So you ask it what's the 0, 0, what's a 0, 1, what's a 0, 2, what's 0 at 0, 3" }, { "end": 577.12, "start": 571.84, "text": " and you can fill in the picture and that usually gives you very bad outcomes or" }, { "end": 581.72, "start": 577.12, "text": " so they claim. I mean I haven't checked it particularly, but you can see right" }, { "end": 588.0400000000001, "start": 581.72, "text": " here this is the ground truth and here you have a network that is" }, { "end": 593.88, "start": 588.04, "text": " parameterized with ReLU functions like with ReLU nonlinearities and as you can" }, { "end": 601.88, "start": 593.88, "text": " see the ReLU network misses a lot of the sort of higher definition things in the" }, { "end": 608.7199999999999, "start": 601.88, "text": " image and so it depends on the architecture that you use how well you" }, { "end": 613.42, "start": 608.7199999999999, "text": " can make a neural network represent those things. Again you kind of need to" }, { "end": 618.36, "start": 613.42, "text": " forget what you know about machine learning in the classic sense" }, { "end": 624.28, "start": 618.36, "text": " because I'd still see people who just use a GAN or something like this. So" }, { "end": 630.16, "start": 624.28, "text": " yes valid point but we are in the business right now of solving this" }, { "end": 636.4, "start": 630.16, "text": " particular problem and as we'll go on to see it's not just about images but" }, { "end": 641.36, "start": 636.4, "text": " images are a nice example of a natural signal. So the 10H networks you also see" }, { "end": 647.6, "start": 641.36, "text": " they I think they fail even harder they have these artifacts back here even and" }, { "end": 654.84, "start": 647.6, "text": " this here it gets better when you do ReLU networks with what is called a" }, { "end": 660.24, "start": 654.84, "text": " positional encoding. 
So not only do you have your X and your Y coordinates go" }, { "end": 664.72, "start": 660.24, "text": " through a ReLU network but you also have them go through a positional encoding" }, { "end": 669.28, "start": 664.72, "text": " and that's very much like you would have in a transformer. So if you" }, { "end": 673.8399999999999, "start": 669.28, "text": " watched my video about attention is all you need I explained how the positional" }, { "end": 680.64, "start": 673.8399999999999, "text": " encodings work there but basically what you do is you map these things to cosine" }, { "end": 688.64, "start": 680.64, "text": " and sine waves so you're going to be like the sine of X times 10 and then" }, { "end": 695.88, "start": 688.64, "text": " the sine wave of X times 100 and so on so which you'll end up and you do the" }, { "end": 702.36, "start": 695.88, "text": " same for Y and that ends you up with more features that sort of then the" }, { "end": 707.52, "start": 702.36, "text": " function can use to represent positions way better than just given the X and Y" }, { "end": 713.76, "start": 707.52, "text": " coordinates. If you do that you kind of recover some of the image but you see" }, { "end": 718.24, "start": 713.76, "text": " here they also analyze how so this is the ground truth and this is the" }, { "end": 723.08, "start": 718.24, "text": " gradient of the ground truth which is basically a a Sobel filter if you know" }, { "end": 728.32, "start": 723.08, "text": " that it's basically an edge detector color gradient thing and then this here" }, { "end": 735.36, "start": 728.32, "text": " is the second derivative the Laplacian of the image and ideally if your" }, { "end": 744.12, "start": 735.36, "text": " implicit representation models the signal very well it should also model" }, { "end": 748.24, "start": 744.12, "text": " the derivatives of the signal very well. So now we're kind of connecting it to" }, { "end": 754.16, "start": 748.24, "text": " what we saw at the beginning right these siren networks are specifically" }, { "end": 760.88, "start": 754.16, "text": " designed to not only match the signal right here but also match its" }, { "end": 767.4, "start": 760.88, "text": " derivatives and if you match maybe in an image it's not so it's not that" }, { "end": 774.28, "start": 767.4, "text": " important to match the derivatives even though it is because there are small" }, { "end": 783.64, "start": 774.28, "text": " things like you can see right here the grass isn't as well represented and here" }, { "end": 789.68, "start": 783.64, "text": " you mostly you get some artifacts that you see here in the in the gradient might" }, { "end": 793.8399999999999, "start": 789.68, "text": " not be as important for images in terms of human vision but for many signals" }, { "end": 798.04, "start": 793.8399999999999, "text": " it's also important to match the derivatives and here at the siren even" }, { "end": 802.8399999999999, "start": 798.04, "text": " though it's trained on the image itself you can see that its derivatives are" }, { "end": 808.2800000000001, "start": 802.84, "text": " very much in line with the original signal so simply by matching the signal" }, { "end": 817.9200000000001, "start": 808.2800000000001, "text": " you this architecture manages to also capture the derivatives of the signal" }, { "end": 823.12, "start": 817.9200000000001, "text": " and therefore have a more faithful representation. 
Okay so that was" }, { "end": 828.32, "start": 823.12, "text": " positional RBF relu's are simply the relu network and I think somewhere in" }, { "end": 834.2800000000001, "start": 828.32, "text": " here there is an RBF kernel if you young kids don't know what an RBF kernel is" }, { "end": 843.12, "start": 834.2800000000001, "text": " then yeah no I guess I guess I don't want to I don't want to dunk on anyone" }, { "end": 851.36, "start": 843.12, "text": " it's basically you how do I explain it you map it to an infinite dimensional" }, { "end": 863.12, "start": 851.36, "text": " space using Gaussian kernels yeah maybe Wikipedia is better at that than I am so" }, { "end": 869.24, "start": 863.12, "text": " sirens what what do they do in order to be able to capture a signal very well" }, { "end": 874, "start": 869.24, "text": " what do how does it sing a siren different from like an RBF network and" }, { "end": 878.5600000000001, "start": 874, "text": " the answer is pretty pretty pretty simple so the architecture of a siren" }, { "end": 885.88, "start": 878.56, "text": " network is the end does it already stand for network I'm not sure honestly maybe" }, { "end": 895.3599999999999, "start": 885.88, "text": " we'll find out yes it's the sinusoidal representation networks so the end is" }, { "end": 903.8399999999999, "start": 895.3599999999999, "text": " network so we don't say siren network we say siren and a siren is simply made of" }, { "end": 911.36, "start": 903.84, "text": " what is that here it's a multi-layer perceptron basically right so it is a" }, { "end": 916.88, "start": 911.36, "text": " this here is the network the network this is the final layer of the network" }, { "end": 924.4, "start": 916.88, "text": " which is a linear layer before that you have all these layers just not" }, { "end": 930.12, "start": 924.4, "text": " concatenate of it but following each other so it's a multi-layer perceptron" }, { "end": 935.2, "start": 930.12, "text": " pretty regular and each of the layers in the multi-layer perceptron is made up" }, { "end": 940.4, "start": 935.2, "text": " like this you have an input you multiply it by a weight matrix you add a bias and" }, { "end": 946.8, "start": 940.4, "text": " then you put it through a sine wave so the sine wave here is really that's" }, { "end": 953.04, "start": 946.8, "text": " that's the only change from a mole from an MLP otherwise so usually here you" }, { "end": 960.8399999999999, "start": 953.04, "text": " have something like a sigmoid or a relu function now you have a sine wave and" }, { "end": 967.68, "start": 960.8399999999999, "text": " the I mean it's a bit weird right because a relu function is like this so" }, { "end": 972.48, "start": 967.68, "text": " it has this center thing where it kind of switches but here it's linear and" }, { "end": 979.9599999999999, "start": 972.48, "text": " monotonic and here it's kind of constant and even a even a sigmoid so the" }, { "end": 987.48, "start": 979.96, "text": " sigmoid is don't you remember like this yes I guess so the sigmoid is like this" }, { "end": 991.84, "start": 987.48, "text": " so it's kind of constant here constant here monotonic and so on we're used to" }, { "end": 997.58, "start": 991.84, "text": " monotonic activation functions whereas a sine wave is really different the sine" }, { "end": 1005.2800000000001, "start": 997.58, "text": " wave of course is something like this right where it's not monotonic at all" }, { "end": 1010.24, "start": 1005.28, "text": " like if you if 
you want to increase your function value at any point and you're" }, { "end": 1015.28, "start": 1010.24, "text": " here and you go up the hill and you do a step that's too large you end up down" }, { "end": 1022.36, "start": 1015.28, "text": " the hill again but it turns out that these these networks have particularly" }, { "end": 1029.84, "start": 1022.36, "text": " have some good properties if you want to capture natural signals and they have" }, { "end": 1034.52, "start": 1029.84, "text": " some bad properties namely that the fact that they are periodic and go down again" }, { "end": 1040.04, "start": 1034.52, "text": " and the reason why they get around the bad properties is because or so they" }, { "end": 1046.04, "start": 1040.04, "text": " claim they initialize the network in a very particular fashion because I think" }, { "end": 1051.28, "start": 1046.04, "text": " at least I when I when I started in deep learning I had this idea so a lot of" }, { "end": 1055.4, "start": 1051.28, "text": " other people must have had this idea too of like hey what if I just replaced a" }, { "end": 1061, "start": 1055.4, "text": " non-linearity with like my sine function could I do something this and then tried" }, { "end": 1065.96, "start": 1061, "text": " it out and it didn't really work so I scrapped that now this here of course" }, { "end": 1071.72, "start": 1065.96, "text": " isn't simply replacing the neural network it's also using the neural" }, { "end": 1075.24, "start": 1071.72, "text": " network for something completely different than I would namely it's using" }, { "end": 1079.4, "start": 1075.24, "text": " the neural network to learn these implicit representations and not like I" }, { "end": 1085.96, "start": 1079.4, "text": " would to do simply for learning a data set but still it seems like you need to" }, { "end": 1094.88, "start": 1085.96, "text": " initialize them fairly with with very careful consideration and we'll go on" }, { "end": 1102.44, "start": 1094.88, "text": " onto that right now so actually they just describe it it's it's not like a" }, { "end": 1110.3600000000001, "start": 1102.44, "text": " it's not very interesting but you need to sample the weights uniformly from" }, { "end": 1118.4399999999998, "start": 1110.36, "text": " this uniform distribution where I think yeah and they have a proof in the" }, { "end": 1124.9199999999998, "start": 1118.4399999999998, "text": " supplementary material where they sort of show why that is so or not here we" }, { "end": 1129.7199999999998, "start": 1124.9199999999998, "text": " propose to draw weights with C equals 6 such that W is in this uniform" }, { "end": 1137.8, "start": 1129.7199999999998, "text": " distribution right here oh no it's different okay this ensures that the" }, { "end": 1141.8, "start": 1137.8, "text": " input to each of the sign activation is normal distributed with a standard" }, { "end": 1145.6, "start": 1141.8, "text": " deviation of one since only a few weights have magnitude larger than pi" }, { "end": 1152.3999999999999, "start": 1145.6, "text": " the frequency throughout the sine work grows only slowly finally we propose" }, { "end": 1156.46, "start": 1152.3999999999999, "text": " to initialize the first layer of the sine network with weights so that the" }, { "end": 1163.56, "start": 1156.46, "text": " sine function spans multiple periods over negative 1 to 1 we found W 0 to" }, { "end": 1168.3999999999999, "start": 1163.56, "text": " equal 30 to work well for all the applications in this work the 
proposed" }, { "end": 1171.8999999999999, "start": 1168.3999999999999, "text": " initialization scheme yielded fast and robust convergence using the atom" }, { "end": 1176.32, "start": 1171.8999999999999, "text": " optimizer for all experiments in this work so the initialization here takes a" }, { "end": 1180.08, "start": 1176.32, "text": " fairly prominent piece in that paper which tells me maybe that they have" }, { "end": 1184.6399999999999, "start": 1180.08, "text": " spent a lot of time working on this and this is I mean if this is the case this" }, { "end": 1189.6799999999998, "start": 1184.6399999999999, "text": " is to their credit because I guess most people like me would try out something" }, { "end": 1194.3, "start": 1189.68, "text": " like this and then after a while realize it doesn't work and to you know be so" }, { "end": 1200.52, "start": 1194.3, "text": " convinced to go and really figure out how do we need to initialize these to" }, { "end": 1205.6000000000001, "start": 1200.52, "text": " make it work and of course as you're doing this there's still like a 99%" }, { "end": 1211.28, "start": 1205.6000000000001, "text": " chance that it's not going to work once you've done that is quite respectable I" }, { "end": 1214.3600000000001, "start": 1211.28, "text": " find it might have been really different this might have been the first thing" }, { "end": 1220.3999999999999, "start": 1214.36, "text": " they thought about and just worked it out but yeah okay so what what is the" }, { "end": 1225.86, "start": 1220.3999999999999, "text": " deal with all these derivatives now since this network right here has these" }, { "end": 1230.6399999999999, "start": 1225.86, "text": " sine waves in it right so it's a neural network with sine waves as derivatives" }, { "end": 1238.76, "start": 1230.6399999999999, "text": " as nonlinearities what now so we have a neural network what now is the first" }, { "end": 1244.72, "start": 1238.76, "text": " derivative of that neural network right with respect to its input so we have an" }, { "end": 1249.36, "start": 1244.72, "text": " input now what's the first derivative with respect to its input and the cool" }, { "end": 1254.4, "start": 1249.36, "text": " thing about this is what's the first derivative of a sine wave it's of course" }, { "end": 1260.08, "start": 1254.4, "text": " a sine wave that's shifted so it's a cosine which is a sine wave that's" }, { "end": 1265.8, "start": 1260.08, "text": " simply phase shifted and then the next derivative again is a shifted sine wave" }, { "end": 1276.68, "start": 1265.8, "text": " and so on so the derivative of a siren is a siren and that does not hold for" }, { "end": 1284.7, "start": 1276.68, "text": " any of these other nonlinearities so in relu's it's the derivative of a relu" }, { "end": 1289.82, "start": 1284.7, "text": " network is like a con so if you if I take the derivative of this it's like a" }, { "end": 1295.9199999999998, "start": 1289.82, "text": " constant zero right here and then a constant one right here and if I then" }, { "end": 1300.76, "start": 1295.9199999999998, "text": " take the derivative again it's simply a constant zero function right and all" }, { "end": 1305.52, "start": 1300.76, "text": " these other nonlinearities their derivatives are different from" }, { "end": 1312.2, "start": 1305.52, "text": " themselves and here since we want to not only match a signal but also the signals" }, { "end": 1319.4399999999998, "start": 1312.2, "text": " derivatives these property of this 
siren becoming very very very handy so how do" }, { "end": 1324.3600000000001, "start": 1319.44, "text": " you train a siren we've already alluded to how you would do that in the in the" }, { "end": 1330, "start": 1324.3600000000001, "text": " kind of idea of matching an image where you simply train the pixel values to the" }, { "end": 1336.42, "start": 1330, "text": " RGB values but there's more that you can do with the sirens given that they" }, { "end": 1343.6000000000001, "start": 1336.42, "text": " basically given that their derivatives are also sirens what you can do so with" }, { "end": 1349.3200000000002, "start": 1343.6000000000001, "text": " the image part we've basically neglected all of this we simply said we want to" }, { "end": 1357, "start": 1349.32, "text": " find a relationship between the input X and the output like this what we can" }, { "end": 1362.6799999999998, "start": 1357, "text": " also do is we can say no no no no we want to find a relationship between the" }, { "end": 1369.76, "start": 1362.6799999999998, "text": " input and its first derivative and not even have this as part of the let's say" }, { "end": 1377.24, "start": 1369.76, "text": " of the loss function and then we can see what comes out so that's what they do" }, { "end": 1388.08, "start": 1377.24, "text": " oh can I find it can I find it that's what they do right here okay so here you" }, { "end": 1395.68, "start": 1388.08, "text": " see the the ground truth image and this is its gradients and this is its" }, { "end": 1401.92, "start": 1395.68, "text": " Laplacian okay now we've already seen that we can fit the image itself but" }, { "end": 1409.4, "start": 1401.92, "text": " what if we just fit the first derivative so we simply input this thing right here" }, { "end": 1416.64, "start": 1409.4, "text": " we input this into the siren we do the same thing right the siren is now it" }, { "end": 1424.4, "start": 1416.64, "text": " maps X and Y to RGB but our loss function isn't going to be mapping X and" }, { "end": 1432.68, "start": 1424.4, "text": " Y to RGB our loss function is going to to depend on the gradient of that so our" }, { "end": 1440.76, "start": 1432.68, "text": " loss function is going to be something like the gradient of the image let's" }, { "end": 1449.3200000000002, "start": 1440.76, "text": " call the image I minus the gradient of that function that maps X of this" }, { "end": 1455.08, "start": 1449.32, "text": " function right here okay because we have these auto differentiation tools right" }, { "end": 1460.32, "start": 1455.08, "text": " now we can easily make this into a loss function so here we are looking for the" }, { "end": 1468.08, "start": 1460.32, "text": " function whose gradient matches the gradient of the image right now again" }, { "end": 1472.56, "start": 1468.08, "text": " you can say why is this why can't we just match the image itself and I think" }, { "end": 1478.3999999999999, "start": 1472.56, "text": " valid point but it's not about why can't we just it's about demonstrating the" }, { "end": 1484.44, "start": 1478.4, "text": " power of these networks so if you only match the gradients right what you'll" }, { "end": 1489.1200000000001, "start": 1484.44, "text": " find is if you then look at the function right you still find the function you" }, { "end": 1495.3200000000002, "start": 1489.1200000000001, "text": " don't you don't find the gradient you still train the function you still" }, { "end": 1500.16, "start": 1495.3200000000002, "text": " train the weights of 
the function itself but the loss function depends on the" }, { "end": 1506.0800000000002, "start": 1500.16, "text": " gradient of that function if you do that you'll find that if you then look at the" }, { "end": 1509.76, "start": 1506.08, "text": " function again you can ask the function to produce the image by simply cycling" }, { "end": 1516.52, "start": 1509.76, "text": " over each of the coordinates you'll find that look at that just by matching the" }, { "end": 1522.8, "start": 1516.52, "text": " gradient you'll match the image itself pretty pretty well right and that's" }, { "end": 1529.6799999999998, "start": 1522.8, "text": " pretty cool now of course you're not going to match the RGB values this is a" }, { "end": 1534.36, "start": 1529.6799999999998, "text": " grayscale image and you know there's a there's kind of a reason for that" }, { "end": 1542.4399999999998, "start": 1534.36, "text": " because since the gradient loses like constant bias information so what if" }, { "end": 1547.6399999999999, "start": 1542.4399999999998, "text": " you'd match an RGB image I'm gonna guess you're going to have like color very" }, { "end": 1553.24, "start": 1547.6399999999999, "text": " much color distortions but and here what you're going to have in this case is" }, { "end": 1562.12, "start": 1553.24, "text": " just distortions in luminosity like if you know that if you have a function if" }, { "end": 1567.28, "start": 1562.12, "text": " you have the derivative of a function and you will want to find the function" }, { "end": 1572.56, "start": 1567.28, "text": " itself and you integrate then the solution is always an entire space of" }, { "end": 1581.2399999999998, "start": 1572.56, "text": " functions because you will integrate the function this thing right here and so" }, { "end": 1587.56, "start": 1581.2399999999998, "text": " with the whatever its input is and you have to add a constant and you don't" }, { "end": 1590.4399999999998, "start": 1587.56, "text": " know what the constant was in the original function because when you" }, { "end": 1595.16, "start": 1590.44, "text": " derive the function the constant drops away so similarly here what we'd expect" }, { "end": 1600.96, "start": 1595.16, "text": " is that the image that we're getting back will be faithful with respect to" }, { "end": 1605.44, "start": 1600.96, "text": " like its its borders right since we're matching the gradient and the gradient" }, { "end": 1610.2, "start": 1605.44, "text": " is basically an edge detector will match the sort of edge information of the" }, { "end": 1614.44, "start": 1610.2, "text": " picture which you can clearly see but what we would expect is some difference" }, { "end": 1620.2, "start": 1614.44, "text": " in overall luminosity and I don't even know what how they exactly did this" }, { "end": 1624.64, "start": 1620.2, "text": " because they now have to choose a constant to add maybe they just chose it" }, { "end": 1628.92, "start": 1624.64, "text": " in some way or maybe they just let the network do but this is you know still" }, { "end": 1632.48, "start": 1628.92, "text": " pretty pretty impressive you can see there's some detail missing but not much" }, { "end": 1638.26, "start": 1632.48, "text": " and the same exact same thing you can do for matching the second derivative so" }, { "end": 1644.2, "start": 1638.26, "text": " now you match the Laplacian of the image and remember in the ReLU networks they" }, { "end": 1647.64, "start": 1644.2, "text": " don't even have a Laplacian it's a constant 
so this is something you could" }, { "end": 1653.72, "start": 1647.64, "text": " never do and you can see that the upcoming image is still pretty good" }, { "end": 1657.88, "start": 1653.72, "text": " right this are this is now missing the constant luminosity in the first and" }, { "end": 1663.88, "start": 1657.88, "text": " second derivative sorry in the in the zero with and first derivative and still" }, { "end": 1670.88, "start": 1663.88, "text": " the information is the the reconstruction is pretty good alright so" }, { "end": 1675.96, "start": 1670.88, "text": " these demonstrates kind of the power of these networks again we're not having" }, { "end": 1680.92, "start": 1675.96, "text": " our data set our entire data set is just this image so if we fit something then" }, { "end": 1687.4, "start": 1680.92, "text": " this thing right here is our entire data set there's no there's no big data set" }, { "end": 1692.1200000000001, "start": 1687.4, "text": " and this is a test sample like this is the data set and the test sample at the" }, { "end": 1696.1200000000001, "start": 1692.1200000000001, "text": " same I guess you can consider the Laplacian here the data set and then" }, { "end": 1703.44, "start": 1696.1200000000001, "text": " the actual image is the test sample like the label or something like this so what" }, { "end": 1708.96, "start": 1703.44, "text": " does that buy you here is a thing you can do if you want to mix two images what" }, { "end": 1713.92, "start": 1708.96, "text": " do you do so if you want to mix this and this what you could do is linearly" }, { "end": 1720.48, "start": 1713.92, "text": " interpolate but that would be not very cool because right here you have a lot" }, { "end": 1726.44, "start": 1720.48, "text": " of like very bright pixels which probably have like values of one and here" }, { "end": 1730.88, "start": 1726.44, "text": " you'd have the dark pixels which probably have values like more close to" }, { "end": 1736.1200000000001, "start": 1730.88, "text": " zero and the if you simply mix them if you simply add them together and divide" }, { "end": 1742.1200000000001, "start": 1736.1200000000001, "text": " by two then you get kind of get a wash of the two and similarly here you kind" }, { "end": 1746.3200000000002, "start": 1742.1200000000001, "text": " of wash out the bear because you'd have some pixel values here that would come" }, { "end": 1752.3200000000002, "start": 1746.3200000000002, "text": " over and generally not not a good idea to mix images like this now you know" }, { "end": 1758.0800000000002, "start": 1752.3200000000002, "text": " with GANs we can do this but we have to have like a training data set and so on" }, { "end": 1764.08, "start": 1758.08, "text": " here what we'll say is we'll simply say we'll take the gradient of this and" }, { "end": 1770, "start": 1764.08, "text": " we'll take the gradient of this and then we'll add the two gradient maps now what" }, { "end": 1774.56, "start": 1770, "text": " this does is that as you can see right here on the left is the composite" }, { "end": 1782.24, "start": 1774.56, "text": " gradients and what this does is right here in the sky there is no gradient" }, { "end": 1788.52, "start": 1782.24, "text": " information in this image because it's just a flat patch of sky right so and" }, { "end": 1793.4, "start": 1788.52, "text": " down maybe down here there's not that much gradient information there is a bit" }, { "end": 1798.84, "start": 1793.4, "text": " right but not here so that's where this 
bear head is and if you want to mix" }, { "end": 1805.32, "start": 1798.84, "text": " images like it can be a good idea to mix their gradients because generally the" }, { "end": 1810.68, "start": 1805.32, "text": " information in an image is where the gradients are so what we would expect" }, { "end": 1818.3600000000001, "start": 1810.68, "text": " the gradient to represent the gradient would carry over this portion it would" }, { "end": 1822, "start": 1818.3600000000001, "text": " maybe carry over a bit of this portion it would carry over this portion and" }, { "end": 1825.96, "start": 1822, "text": " this portion so everything where the signal is not flat so here you can see" }, { "end": 1834.5600000000002, "start": 1825.96, "text": " the composite gradient and if we fit again we fit our function such that the" }, { "end": 1840, "start": 1834.5600000000002, "text": " gradient of the function that we fit matches this mixed gradient right here" }, { "end": 1845.44, "start": 1840, "text": " then this is the gradient of the function that we match and this is the" }, { "end": 1852.92, "start": 1845.44, "text": " actual function and you can see pretty pretty good right it basically mixed" }, { "end": 1858.08, "start": 1852.92, "text": " everywhere where there was gradient and this is now just reconstructed from this" }, { "end": 1863.2, "start": 1858.08, "text": " gradient there is no I think there is no as least as I understand it there is no" }, { "end": 1867.68, "start": 1863.2, "text": " pixel information carried over from either of those images they're simply" }, { "end": 1874.8400000000001, "start": 1867.68, "text": " added to this gradient the gradient is fit and then the function is asked to" }, { "end": 1883.88, "start": 1874.8400000000001, "text": " output pixel value at each location and that's that okay so this is just a simple" }, { "end": 1890.92, "start": 1883.88, "text": " you know thing that you can play around with but they do they do other more" }, { "end": 1898.3200000000002, "start": 1890.92, "text": " interesting things right here for example this representing shapes with" }, { "end": 1904.3600000000001, "start": 1898.3200000000002, "text": " signed distance functions so if you go over the formulation the actual" }, { "end": 1908.16, "start": 1904.3600000000001, "text": " formulation of their loss function we haven't actually done this right quite" }, { "end": 1916.24, "start": 1908.16, "text": " yet it's here it's very complicatedly stated but ultimately what this means" }, { "end": 1925.28, "start": 1916.24, "text": " is so a component right here is are these CM which are constraints so this" }, { "end": 1929.2, "start": 1925.28, "text": " loss function operates on these constraints and the constraints are" }, { "end": 1935, "start": 1929.2, "text": " across a of X which basically it's just X it's kind of a the anything depending" }, { "end": 1941.24, "start": 1935, "text": " on the input itself then the output of the function the gradient of the output" }, { "end": 1946.92, "start": 1941.24, "text": " of the function the second derivative third derivative and so on so this these" }, { "end": 1953.28, "start": 1946.92, "text": " sirens can fit anything that you can formulate as a set of constraints that" }, { "end": 1960, "start": 1953.28, "text": " relate the input of the function right here to its output or any of its" }, { "end": 1964.96, "start": 1960, "text": " derivatives and we've already seen that at once we if we fit an image our only" }, { "end": 
1971.3600000000001, "start": 1964.96, "text": " constraint is that these things match right here with the original image that" }, { "end": 1977.24, "start": 1971.3600000000001, "text": " the coordinates are mapped to the RGB values then when we match the gradients" }, { "end": 1982.3600000000001, "start": 1977.24, "text": " we don't care about this we only care about the relation between this and so" }, { "end": 1987.96, "start": 1982.3600000000001, "text": " on so the loss function is literally just over the entire signal space which" }, { "end": 1994.08, "start": 1987.96, "text": " in our case was was over the entire image we want these constraints to hold" }, { "end": 1998.28, "start": 1994.08, "text": " or to be as small as possible or the constraints are always formulate such" }, { "end": 2005.08, "start": 1998.28, "text": " that if they are fulfilled they equal zero and so the for example the L2 loss" }, { "end": 2011.52, "start": 2005.08, "text": " between the RGB values of the true image and the RGB values that you fit the RGB" }, { "end": 2016.6799999999998, "start": 2011.52, "text": " loss sorry the L2 loss would be a constraint like this and of course the" }, { "end": 2021.6, "start": 2016.6799999999998, "text": " more differentiable you make it the more the easier this network has at fitting" }, { "end": 2027.1999999999998, "start": 2021.6, "text": " it right so that's why there is this norm right here but it's not that" }, { "end": 2033.32, "start": 2027.1999999999998, "text": " complicated it simply says whatever you can formulate as a constraint on" }, { "end": 2039.32, "start": 2033.32, "text": " relating the inputs to the outputs or any of the derivatives of this implicit" }, { "end": 2046.1599999999999, "start": 2039.32, "text": " representation that is the loss function all right so the in the next interesting" }, { "end": 2050.36, "start": 2046.1599999999999, "text": " thing we can do as I said is representing shapes with signed distance" }, { "end": 2058.92, "start": 2050.36, "text": " functions so we're going to go slowly and this is yeah it's not that hard" }, { "end": 2063.2000000000003, "start": 2058.92, "text": " inspired by recent work on shape representation with differentiable" }, { "end": 2069.92, "start": 2063.2000000000003, "text": " signed distance functions as the F's we fit SDFs directly on oriented point" }, { "end": 2076.6, "start": 2069.92, "text": " clouds using both ReLU based implicit neural representations and sirens okay" }, { "end": 2082.92, "start": 2076.6, "text": " so what is an SDF a signed distance function that's pretty easy a signed" }, { "end": 2092.44, "start": 2082.92, "text": " distance function is simply a distance function with a sign like wow so a a if" }, { "end": 2096.6, "start": 2092.44, "text": " you have a and it's usually done if you have like a boundary somewhere between" }, { "end": 2103.52, "start": 2096.6, "text": " things then of course any point here has a distance to the boundary but you if" }, { "end": 2108.12, "start": 2103.52, "text": " you have a signed distance function it simply means that each point also has a" }, { "end": 2112.36, "start": 2108.12, "text": " sign in front of it and that means all the things on one side of the boundary" }, { "end": 2117.8, "start": 2112.36, "text": " maybe have a plus and all the things on the other side maybe have a minus so" }, { "end": 2122.72, "start": 2117.8, "text": " even though two points could be the same distance from the boundary one is like" }, { "end": 2130.64, 
"start": 2122.72, "text": " plus five away and one is negative five away and you can do this this is useful" }, { "end": 2135.6, "start": 2130.64, "text": " for example when you fit point clouds as they do in this example so when they" }, { "end": 2141.8399999999997, "start": 2135.6, "text": " have point clouds and that's usually in 3d space but if you have point clouds" }, { "end": 2147.8799999999997, "start": 2141.8399999999997, "text": " you basically have points right here and you know that the points should" }, { "end": 2153.96, "start": 2147.8799999999997, "text": " represent some kind of shape maybe a wall or so they have these room" }, { "end": 2160.76, "start": 2153.96, "text": " interiors as you can see right here so this is a 3d scene but you only have a" }, { "end": 2165.2, "start": 2160.76, "text": " point cloud of the 3d scene and what that means is that maybe you were in" }, { "end": 2170.4, "start": 2165.2, "text": " this room and you put up a laser scanner right here laser scanner I don't know" }, { "end": 2175.8, "start": 2170.4, "text": " how a laser scanner looks and the laser scanner kind of shoots lasers at random" }, { "end": 2180.32, "start": 2175.8, "text": " locations and always measures the distance right and that's that's how you" }, { "end": 2185.56, "start": 2180.32, "text": " end up with a point cloud so you'll end up with like a point cloud where in 3d" }, { "end": 2190.28, "start": 2185.56, "text": " space you know where the laser hit something and a reasonable assumption to" }, { "end": 2195.32, "start": 2190.28, "text": " make if you have like a dense sampling of this is that you should be able to" }, { "end": 2202.1600000000003, "start": 2195.32, "text": " like connect those point clouds in some way to obtain the actual continuous" }, { "end": 2208.32, "start": 2202.1600000000003, "text": " shape of the thing that you measured and this is what we're going to try to do" }, { "end": 2215.7200000000003, "start": 2208.32, "text": " with these sirens right to go from point clouds to shape by training an implicit" }, { "end": 2220.88, "start": 2215.7200000000003, "text": " representation so we're going to train a neural network that represents this" }, { "end": 2229.2000000000003, "start": 2220.88, "text": " shape right here basically by mapping coordinates to to signed distance values" }, { "end": 2237.1800000000003, "start": 2229.2000000000003, "text": " so whenever we ask the neural network what at this location here what's the" }, { "end": 2242.7999999999997, "start": 2237.18, "text": " signed distance and it's going to tell us oh it's plus 5 or at this location" }, { "end": 2247.12, "start": 2242.7999999999997, "text": " here what's the sign distance it's going to tell us it's 0 right so we're going" }, { "end": 2258.64, "start": 2247.12, "text": " to we're going to train a neural network to do that and hello yes no okay so this" }, { "end": 2264.2799999999997, "start": 2258.64, "text": " is a bit more complicated and since we have these awesome power of these sirens" }, { "end": 2275.28, "start": 2264.28, "text": " we can also do to more constraints so we know and this goes on this amounts to" }, { "end": 2281.2000000000003, "start": 2275.28, "text": " solving a particular iconal boundary value problem that constrains the norm" }, { "end": 2286.7200000000003, "start": 2281.2000000000003, "text": " of spatial gradients to be one almost everywhere so this iconal boundary value" }, { "end": 2293.2000000000003, "start": 2286.7200000000003, "text": " problem 
this is a property of signed distance function that the norm of the" }, { "end": 2298.48, "start": 2293.2, "text": " gradients with respect to the input is one almost everywhere almost everywhere" }, { "end": 2303.12, "start": 2298.48, "text": " means everywhere I guess except at the boundary itself where the distance is 0" }, { "end": 2306.12, "start": 2303.12, "text": " though I could be wrong" }, { "end": 2313.96, "start": 2306.12, "text": " note that relu networks are seeming seemingly ideal for representing sdfs as" }, { "end": 2318.72, "start": 2313.96, "text": " their gradients are locally constant and their second derivatives are 0" }, { "end": 2322.3999999999996, "start": 2318.72, "text": " adequate training procedure for working directly with point clouds were" }, { "end": 2328.2000000000003, "start": 2322.4, "text": " described in prior work we fit a siren to an oriented point cloud using a loss" }, { "end": 2333.36, "start": 2328.2000000000003, "text": " of the form and now we look at the loss so the first thing you observe in the" }, { "end": 2337.2400000000002, "start": 2333.36, "text": " loss is that it is made of three different integrals and that simply means" }, { "end": 2344.2400000000002, "start": 2337.2400000000002, "text": " they now partition the space right here they partition it into two different" }, { "end": 2351.44, "start": 2344.2400000000002, "text": " they partition it into two different regions so to say so maybe go here" }, { "end": 2357.96, "start": 2351.44, "text": " no can I zoom here so the first region is going to be whatever is on the" }, { "end": 2362.36, "start": 2357.96, "text": " boundary itself right and that's basically wherever a point wherever a" }, { "end": 2366.56, "start": 2362.36, "text": " point hit right whenever you have a point or on the boundary itself that's" }, { "end": 2373.7200000000003, "start": 2366.56, "text": " going to be your omega 0 is going to be that and then all the other points right" }, { "end": 2380.2400000000002, "start": 2373.7200000000003, "text": " here are going to be part of your omega without the omega 0 so you're going to" }, { "end": 2383.72, "start": 2380.24, "text": " have different constraints for all of these things right here for example and" }, { "end": 2389.4399999999996, "start": 2383.72, "text": " I have to pay attention that I don't say anything wrong you'll have this this" }, { "end": 2398.64, "start": 2389.4399999999996, "text": " constraint of this gradient my tablet maybe I'll start monetizing just so I" }, { "end": 2408.8799999999997, "start": 2398.64, "text": " can get a new tablet okay so no okay the this this condition right here says that" }, { "end": 2413.7200000000003, "start": 2408.88, "text": " the gradient should be one and that's actually everywhere right so I was wrong" }, { "end": 2423.8, "start": 2413.7200000000003, "text": " that the gradient is only one outside the boundary then you can see right here" }, { "end": 2431.6800000000003, "start": 2424.44, "text": " the last part is all the points that are not on the boundary since our network" }, { "end": 2436.84, "start": 2431.6800000000003, "text": " maps any point in 3d space to assign distance function so most of these" }, { "end": 2441.1200000000003, "start": 2436.84, "text": " points aren't going to be on the boundary itself even though in the mini" }, { "end": 2447.6800000000003, "start": 2441.1200000000003, "text": " batch where we train where they train they sample points on and off the on and" }, { "end": 2453.76, "start": 
2447.6800000000003, "text": " off the boundary at the at equal rates just to to have the network train more" }, { "end": 2460.88, "start": 2453.76, "text": " stably so this is a condition on all the points off of the boundary and they say" }, { "end": 2468.36, "start": 2460.88, "text": " here this function is this exponential function with alpha larger than 1 it" }, { "end": 2475.44, "start": 2468.36, "text": " penalizes off surface points for creating SDF values close to 0 so this" }, { "end": 2482.1600000000003, "start": 2475.44, "text": " is simply a regularizer that says whenever I input coordinates that are" }, { "end": 2487.2000000000003, "start": 2482.1600000000003, "text": " far away from the boundary from the surface then there should be a large" }, { "end": 2492.3599999999997, "start": 2487.2, "text": " sign distance function like it should not be close to zero because it's away" }, { "end": 2496.96, "start": 2492.3599999999997, "text": " from a boundary okay and in practice how you're going to train this is if you" }, { "end": 2502.12, "start": 2496.96, "text": " have a point cloud if your coordinates are far away from the next point then" }, { "end": 2508.3199999999997, "start": 2502.12, "text": " this this is going to be a high this should be a high value otherwise the" }, { "end": 2513.96, "start": 2508.3199999999997, "text": " network is penalized so we have this condition right here on the gradients" }, { "end": 2518.28, "start": 2513.96, "text": " which we know sign distance function should fulfill we have this thing right" }, { "end": 2522.96, "start": 2518.28, "text": " here which is a regularizer basically telling points far away from our data" }, { "end": 2526.96, "start": 2522.96, "text": " that they should have a high distance function and then we have this last" }, { "end": 2533.68, "start": 2526.96, "text": " thing right here which is for all the points on the surface itself here's what" }, { "end": 2541.36, "start": 2533.68, "text": " will what we require first of all we require their value to be zero or close" }, { "end": 2545.2000000000003, "start": 2541.36, "text": " to zero right this is the loss function so we want to minimize this and this is" }, { "end": 2549.56, "start": 2545.2000000000003, "text": " simply the output value so the sign distance function of points on the" }, { "end": 2552.88, "start": 2549.56, "text": " surface you know the things we actually measure they should be zero right" }, { "end": 2556.96, "start": 2552.88, "text": " because the sign distance function measures how far away from the surface" }, { "end": 2565.6400000000003, "start": 2556.96, "text": " you are so this is pretty intuitive but then also this right here it says that" }, { "end": 2573.3199999999997, "start": 2565.64, "text": " the gradient of the sign distance function and the normal vector of that" }, { "end": 2580.12, "start": 2573.3199999999997, "text": " point should align and that basically means and this is now I think this is" }, { "end": 2586.7999999999997, "start": 2580.12, "text": " because we have an oriented point cloud or no yes so what we can do is we can" }, { "end": 2592.2799999999997, "start": 2586.7999999999997, "text": " kind of connect points next to each other and then calculate the normal" }, { "end": 2600.28, "start": 2592.28, "text": " vectors of that right and the signed the network if we ask the network hey what" }, { "end": 2604.76, "start": 2600.28, "text": " do you think about this position right here the network should tell us first of" }, { 
"end": 2609.52, "start": 2604.76, "text": " all the sign distance function should be zero because it's on the boundary" }, { "end": 2616.5600000000004, "start": 2609.52, "text": " second of all the norm of the gradient of the sign distance function at that" }, { "end": 2620.1600000000003, "start": 2616.5600000000004, "text": " point should be one because that's a property of sign distance function and" }, { "end": 2627.12, "start": 2620.16, "text": " third and that's the thing right now the gradient of the sign distance function" }, { "end": 2633.96, "start": 2627.12, "text": " should align with this normal vector right and that's you know pretty" }, { "end": 2638.72, "start": 2633.96, "text": " intuitive because you want you want the sign distance function to increase in" }, { "end": 2643.56, "start": 2638.72, "text": " value the gradient basically tells you where the highest increase in value of" }, { "end": 2648.16, "start": 2643.56, "text": " the function is you want it to increase along the normal direction and not along" }, { "end": 2653.56, "start": 2648.16, "text": " any other direction so that's a pretty good pretty good constraint to have so" }, { "end": 2658, "start": 2653.56, "text": " you can see right here I mean you don't really have to understand exactly about" }, { "end": 2661.72, "start": 2658, "text": " sign distance functions and so on but these sirens are pretty good at" }, { "end": 2665.24, "start": 2661.72, "text": " capturing all of these different constraints and this was a point you" }, { "end": 2670.24, "start": 2665.24, "text": " know on the surface points off the surface you additionally say hey you" }, { "end": 2674.2, "start": 2670.24, "text": " should have a pretty high value and actually not a zero value but a pretty" }, { "end": 2682.56, "start": 2674.2, "text": " high value so and again we only fit one particular scene we only ever fit one" }, { "end": 2688.2799999999997, "start": 2682.56, "text": " scene with an entire network so the entire neural network this this this" }, { "end": 2693.3599999999997, "start": 2688.2799999999997, "text": " whole structure right here everything is captured by this neural network that we" }, { "end": 2699.08, "start": 2693.3599999999997, "text": " train on the point cloud and you can see that if you use a relu what you'll get" }, { "end": 2706.56, "start": 2699.08, "text": " is super super wobbly because if even if you train the relu with the same loss" }, { "end": 2711.24, "start": 2706.56, "text": " function these constraints on the gradients they're just not going to" }, { "end": 2714.92, "start": 2711.24, "text": " work out with the relu because the gradients are like constant and" }, { "end": 2720.92, "start": 2714.92, "text": " discontinuous right whereas the siren can basically fulfill all of these" }, { "end": 2725.96, "start": 2720.92, "text": " constraints on the different parts like on the values and on the gradients of" }, { "end": 2731.32, "start": 2725.96, "text": " that of the loss function and they have another example right here where they" }, { "end": 2739.36, "start": 2731.32, "text": " fit this shape yeah so you see all the details are preserved way better where" }, { "end": 2744.28, "start": 2739.36, "text": " the relu's they'll simply kind of flatten over everything and make it" }, { "end": 2751.88, "start": 2744.28, "text": " wobbly alright so I hope this sort of made sense and we'll go to the last" }, { "end": 2755.56, "start": 2751.88, "text": " thing right now" }, { "end": 2759.52, "start": 
2755.56, "text": " it is restarting I wanted to show you the website right here they have for" }, { "end": 2763.7599999999998, "start": 2759.52, "text": " this it's a pretty cool website to go along with it and as you can see right" }, { "end": 2769.7999999999997, "start": 2763.7599999999998, "text": " here they have all these samples that they have in the paper but also in an" }, { "end": 2774.56, "start": 2769.7999999999997, "text": " animated format in as you can see right here this is the fitting process the" }, { "end": 2781, "start": 2774.56, "text": " learning process of how you represent these images so as I said there you want" }, { "end": 2784.96, "start": 2781, "text": " to fit these functions to the ground truth and that happens in steps so this" }, { "end": 2788.98, "start": 2784.96, "text": " is very much like you would learn a deep learning functions I think they use the" }, { "end": 2793.84, "start": 2788.98, "text": " atom optimizer it's just that the data set now comes all comes from this one" }, { "end": 2798.64, "start": 2793.84, "text": " ground truth image and you can see that the siren network on the right pretty" }, { "end": 2805.6, "start": 2798.64, "text": " quickly zeros in on the on the image and then gets the details subsequently right" }, { "end": 2812.52, "start": 2805.6, "text": " they also represent audio with this and you can watch that they represent video" }, { "end": 2819.28, "start": 2812.52, "text": " compare that to relu representations then here solving the possum equation is" }, { "end": 2825.56, "start": 2819.28, "text": " where you only fit the gradients or the laplacian of an image and still get out" }, { "end": 2834.88, "start": 2825.56, "text": " the good image that's pretty cool and here you can see that you can actually" }, { "end": 2842.08, "start": 2834.88, "text": " play around with these things so you can click on them and look at this" }, { "end": 2847.04, "start": 2842.08, "text": " look at this learned thing so on the left you can see what the siren network" }, { "end": 2852.08, "start": 2847.04, "text": " learned and let's scroll down here a bit and on the right is a relu" }, { "end": 2856.64, "start": 2852.08, "text": " representation of the same thing so this is the same network with the same" }, { "end": 2861.56, "start": 2856.64, "text": " objective it just has relu instead of sine waves as activation functions so" }, { "end": 2865.96, "start": 2861.56, "text": " you can see how much of a difference that makes right here and the middle is" }, { "end": 2871.6, "start": 2865.96, "text": " a relu with the positional encodings still not good right the only the only" }, { "end": 2877, "start": 2871.6, "text": " thing right here that you have to think of if you look at how big these sirens" }, { "end": 2881.2, "start": 2877, "text": " are how many parameters they have they're about at the order of magnitude" }, { "end": 2889.2799999999997, "start": 2881.2, "text": " of how many pixels there are in the image so I'm yeah it's certainly a cool" }, { "end": 2896.7599999999998, "start": 2889.2799999999997, "text": " method but to like these it's not like you're the implicit representation here" }, { "end": 2900.08, "start": 2896.7599999999998, "text": " is very very well at generalizing though it would be very cool to see what" }, { "end": 2905.7599999999998, "start": 2900.08, "text": " happens outside right if you because now you have you can input any XY coordinates" }, { "end": 2910.56, "start": 2905.7599999999998, "text": " so 
technically you could continue the picture to the bottom and just see what" }, { "end": 2915.04, "start": 2910.56, "text": " the siren thinks should be here at the bottom so all of these things would be" }, { "end": 2919.4, "start": 2915.04, "text": " pretty pretty cool to actually experiment with and they have the code" }, { "end": 2924.64, "start": 2919.4, "text": " available to do that and you can see the fitting process of the Helmholtz" }, { "end": 2930.3599999999997, "start": 2924.64, "text": " equation right here and related projects pretty cool website I definitely invite" }, { "end": 2936.16, "start": 2930.3599999999997, "text": " you to check it out and let's go back to the paper and we're back and my tablet" }, { "end": 2941.8799999999997, "start": 2936.16, "text": " crashed and let's continue so they're now going on to use sirens in order to" }, { "end": 2948.92, "start": 2941.8799999999997, "text": " solve PDEs and so in physics often you have these problems where you are given" }, { "end": 2952.68, "start": 2948.92, "text": " an equation but the equation doesn't necessarily involve a function itself" }, { "end": 2957.8399999999997, "start": 2952.68, "text": " but only involves derivatives of that function like or relates derivatives to" }, { "end": 2963.04, "start": 2957.8399999999997, "text": " the function and so on so one example here is this Helmholtz equation that's" }, { "end": 2971.08, "start": 2963.04, "text": " given as this where the I think the the F is a known function but this is the" }, { "end": 2976.16, "start": 2971.08, "text": " wave field we want to you want to get you want to figure out which is unknown" }, { "end": 2983.2, "start": 2976.16, "text": " and then this HM is including for example this right here which is the" }, { "end": 2991.6, "start": 2983.2, "text": " Laplace operator so you're given the relation between the function and a Laplace" }, { "end": 2996.7599999999998, "start": 2991.6, "text": " operator of the wave that you want to find out and your task is to recover the" }, { "end": 3002.24, "start": 2996.7599999999998, "text": " wave now I don't want to go very much into this right here but what you can do" }, { "end": 3008.4799999999996, "start": 3002.24, "text": " is basically you can measure you can have a room and you can have" }, { "end": 3014.3599999999997, "start": 3008.4799999999996, "text": " measurements of the wave or of its derivatives and so on and then you kind" }, { "end": 3020, "start": 3014.3599999999997, "text": " of calculate backwards from the measurements to what the actual wave" }, { "end": 3027.52, "start": 3020, "text": " was and these sirens turn out to be very very good at things like this and I" }, { "end": 3032.92, "start": 3027.52, "text": " guess that's in this solving for the wave field things but essentially what" }, { "end": 3040.36, "start": 3032.92, "text": " this amounts to is a numerical solution of these partial differential" }, { "end": 3047.04, "start": 3040.36, "text": " equations in physics using these sirens and that's pretty cool and the last" }, { "end": 3052.48, "start": 3047.04, "text": " thing they do is and this gets back to a more of the machine learning context" }, { "end": 3058.12, "start": 3052.48, "text": " where they say learning a space of implicit functions so now they go ahead" }, { "end": 3065.28, "start": 3058.12, "text": " and say yeah so we can represent images in terms of these of these functions" }, { "end": 3069.32, "start": 3065.28, "text": " right but each image is basically 
its own function so each image is basically" }, { "end": 3076.12, "start": 3069.32, "text": " an optimization a fitting problem can we somehow learn functions of functions so" }, { "end": 3081.44, "start": 3076.12, "text": " this comes now back to more of a machine learning context where you say" }, { "end": 3097.2200000000003, "start": 3081.44, "text": " ah so I have a network right here that gives me the" }, { "end": 3104.4, "start": 3097.2200000000003, "text": " parameters of the siren so this right here is okay let's go to an" }, { "end": 3112.6, "start": 3104.4, "text": " example in this example what you'll have is you'll have an image like this one" }, { "end": 3120, "start": 3112.6, "text": " where a few pixels are masked actually most of the pixels are masked and you" }, { "end": 3129.36, "start": 3120, "text": " want to put this into a CNN and the CNN should output the parameters of the" }, { "end": 3136.2400000000002, "start": 3129.36, "text": " siren network so the parameters because the siren network given its" }, { "end": 3144.1200000000003, "start": 3136.2400000000002, "text": " parameters is the image itself so that's the siren I said siren network the siren" }, { "end": 3152.6400000000003, "start": 3144.1200000000003, "text": " is the image if you know its parameters right so here you train a CNN to give" }, { "end": 3158.6800000000003, "start": 3152.6400000000003, "text": " you the parameters of the siren that's almost the same as training a CNN to" }, { "end": 3165.3999999999996, "start": 3158.68, "text": " give you the image directly but again we don't want to have the explicit" }, { "end": 3168.8799999999997, "start": 3165.3999999999996, "text": " representation of an image we want to have the implicit representation such" }, { "end": 3174.2799999999997, "start": 3168.8799999999997, "text": " that it's continuous and we can manipulate it and so on so the CNN is" }, { "end": 3180.6, "start": 3174.2799999999997, "text": " now trained on a data set so you take CIFAR-10 and you construct a whole bunch" }, { "end": 3188.3199999999997, "start": 3180.6, "text": " of images with only kind of a hundred pixels remaining and then you train a" }, { "end": 3193.6400000000003, "start": 3188.32, "text": " CNN to give you the parameters of the siren that would reconstruct the ground" }, { "end": 3198.28, "start": 3193.6400000000003, "text": " truth right and then you can test that on the test image and you can see right" }, { "end": 3203.2000000000003, "start": 3198.28, "text": " here the results are pretty good so these are test samples these are now" }, { "end": 3210.2000000000003, "start": 3203.2000000000003, "text": " images that were not seen during training of this CNN and therefore" }, { "end": 3216.4, "start": 3210.2000000000003, "text": " the upcoming siren also hasn't seen that image the siren is simply" }, { "end": 3220.84, "start": 3216.4, "text": " parameterized by the CNN you can see this works pretty well so even if you" }, { "end": 3227.96, "start": 3220.84, "text": " only have 10 pixels you already get something out of it right and if you have" }, { "end": 3232.6800000000003, "start": 3227.96, "text": " a hundred pixels you already get fairly close to the ground truth right" }, { "end": 3237.84, "start": 3232.6800000000003, "text": " here now this is not GAN quality images of course but it's pretty impressive to" }, { "end": 3243.92, "start": 3237.84, "text": " see that an implicit parameter
ization an implicit representation of the images" }, { "end": 3251.7200000000003, "start": 3243.92, "text": " can be so powerful right yeah so this this is a pretty cool thing and again" }, { "end": 3257.76, "start": 3251.7200000000003, "text": " it's it's better than it's it's kind of more back to the machine learning" }, { "end": 3261.44, "start": 3257.76, "text": " framework that you're used to because there's a train and a test data set and" }, { "end": 3267.12, "start": 3261.44, "text": " now the only thing is that the output is a function given by its parameters and" }, { "end": 3274.8399999999997, "start": 3267.12, "text": " not the actual pixel values okay so let's let's look at the broader impact" }, { "end": 3279.88, "start": 3274.8399999999997, "text": " statement the proposed siren representation enables accurate" }, { "end": 3285.24, "start": 3279.88, "text": " representations of natural signals such as images audio and video in a deep" }, { "end": 3290.52, "start": 3285.24, "text": " learning framework this may be an enabler for downstream tasks involving" }, { "end": 3295.04, "start": 3290.52, "text": " such signals such as classification for images or speech to text systems for" }, { "end": 3299.56, "start": 3295.04, "text": " audio such applications may be leveraged for both positive and negative ends" }, { "end": 3304.88, "start": 3299.56, "text": " siren may in the future further enable novel approaches to the generation of" }, { "end": 3309.7599999999998, "start": 3304.88, "text": " such signals this has potential for misuse in impersonating actors without" }, { "end": 3313.6, "start": 3309.7599999999998, "text": " their consent for an in-depth discussion of so-called deep fakes we refer the" }, { "end": 3319, "start": 3313.6, "text": " reader to a recent review article in your neural rendering this has this has" }, { "end": 3327.52, "start": 3319, "text": " like no perplexity like no perplexity at all like is anyone benefited by this" }, { "end": 3334.16, "start": 3327.52, "text": " seriously okay but at least we made the authors think of the consequences of" }, { "end": 3341.32, "start": 3334.16, "text": " their research yeah so I invite you to check out this paper maybe with this" }, { "end": 3346.84, "start": 3341.32, "text": " right now you can follow a bit better what happens here this is a different" }, { "end": 3350.7200000000003, "start": 3346.84, "text": " paradigm of research it's a cool paradigm it's away from your usual" }, { "end": 3358, "start": 3350.7200000000003, "text": " machine learning framework and yeah so I'm excited what happens next in this I" }, { "end": 3361.2000000000003, "start": 3358, "text": " also invite you to check out the websites they have lots of videos and" }, { "end": 3377.72, "start": 3361.2, "text": " goodies and so on and with that bye bye" } ]
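To make the signed distance fitting described in the transcript above concrete: below is a minimal sketch, in PyTorch, of a sine-activated layer and the three loss terms discussed, namely the zero value and normal alignment on surface points, the eikonal constraint pushing the gradient norm toward one everywhere, and the exponential penalty keeping off-surface points away from zero. The layer sizes, the constant alpha, the equal weighting of the terms and all names are illustrative assumptions, not the paper's exact implementation.

import torch
import torch.nn as nn

class SineLayer(nn.Module):
    # Sine activation with the frequency scaling w0 used in SIREN-style nets.
    def __init__(self, in_features, out_features, w0=30.0):
        super().__init__()
        self.w0 = w0
        self.linear = nn.Linear(in_features, out_features)

    def forward(self, x):
        return torch.sin(self.w0 * self.linear(x))

# The model could be, e.g.:
# nn.Sequential(SineLayer(3, 256), SineLayer(256, 256), nn.Linear(256, 1))

def gradient(y, x):
    # dy/dx for a scalar field y evaluated at coordinates x.
    return torch.autograd.grad(y, x, grad_outputs=torch.ones_like(y),
                               create_graph=True)[0]

def sdf_loss(model, surface_pts, normals, free_pts, alpha=100.0):
    # Surface points: SDF value should be zero and its gradient should align
    # with the measured normals; everywhere the gradient norm should be one
    # (the eikonal constraint); off-surface points are pushed away from zero.
    surface_pts = surface_pts.requires_grad_(True)
    free_pts = free_pts.requires_grad_(True)
    f_surf, f_free = model(surface_pts), model(free_pts)
    g_surf, g_free = gradient(f_surf, surface_pts), gradient(f_free, free_pts)

    on_surface = f_surf.abs().mean()
    normal_align = (1 - torch.cosine_similarity(g_surf, normals, dim=-1)).mean()
    eikonal = (torch.cat([g_surf, g_free]).norm(dim=-1) - 1).abs().mean()
    off_surface = torch.exp(-alpha * f_free.abs()).mean()
    return on_surface + normal_align + eikonal + off_surface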
2lkUNDZld-4
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Big Self-Supervised Models are Strong Semi-Supervised Learners (Paper Explained)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "cnn", "resnet", "simclr", "simclr2", "simclrv2", "simclr v2", "v2", "hinton", "geoff", "brain", "wide", "deep", "convolutional", "convolutions", "self-supervised", "contrastive", "moco", "momentum", "projection", "semi-supervised", "unsupervised", "distillation", "teacher", "student" ]
This paper proposes SimCLRv2 and shows that semi-supervised learning benefits a lot from self-supervised pre-training. And stunningly, that effect gets larger the fewer labels are available and the more parameters the model has. OUTLINE: 0:00 - Intro & Overview 1:40 - Semi-Supervised Learning 3:50 - Pre-Training via Self-Supervision 5:45 - Contrastive Loss 10:50 - Retaining Projection Heads 13:10 - Supervised Fine-Tuning 13:45 - Unsupervised Distillation & Self-Training 18:45 - Architecture Recap 22:25 - Experiments 34:15 - Broader Impact Paper: https://arxiv.org/abs/2006.10029 Code: https://github.com/google-research/simclr Abstract: One paradigm for learning from few labeled examples while making best use of a large amount of unlabeled data is unsupervised pretraining followed by supervised fine-tuning. Although this paradigm uses unlabeled data in a task-agnostic way, in contrast to most previous approaches to semi-supervised learning for computer vision, we show that it is surprisingly effective for semi-supervised learning on ImageNet. A key ingredient of our approach is the use of a big (deep and wide) network during pretraining and fine-tuning. We find that, the fewer the labels, the more this approach (task-agnostic use of unlabeled data) benefits from a bigger network. After fine-tuning, the big network can be further improved and distilled into a much smaller one with little loss in classification accuracy by using the unlabeled examples for a second time, but in a task-specific way. The proposed semi-supervised learning algorithm can be summarized in three steps: unsupervised pretraining of a big ResNet model using SimCLRv2 (a modification of SimCLR), supervised fine-tuning on a few labeled examples, and distillation with unlabeled examples for refining and transferring the task-specific knowledge. This procedure achieves 73.9\% ImageNet top-1 accuracy with just 1\% of the labels (≤13 labeled images per class) using ResNet-50, a 10× improvement in label efficiency over the previous state-of-the-art. With 10\% of labels, ResNet-50 trained with our method achieves 77.5\% top-1 accuracy, outperforming standard supervised training with all of the labels. Authors: Ting Chen, Simon Kornblith, Kevin Swersky, Mohammad Norouzi, Geoffrey Hinton Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://discord.gg/4H8xxDF BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Hi there, today we'll look at Big Self-Supervised Models are Strong Semi-Supervised Learners by Ting Chen, Simon Kornblith, Kevin Swersky, Mohammad Norouzi and Geoffrey Hinton of Google Brain. So this paper on a high level, it's also known as SimCLR v2, demonstrates that if you want to do semi-supervised learning, you're very well served by starting out with self-supervised learning, and then doing fine tuning much like NLP models do, rather than the kind of semi-supervised approach that image tasks had so far. And they present this SimCLR v2, which is an improvement over the SimCLR approach to self-supervised pre-training, and they demonstrate it outperforms a lot of the baselines. Alright, so if you like content like this, don't forget to share it out, and leave a like and tell me what you think in the comments. So this paper sort of clubs together different things. So they present this new method, like this SimCLR v2, which is a modification of SimCLR, and we'll go over that, but they also try to make a scientific claim, namely that somehow bigger models are better for this pathway of learning, and we'll try to untangle all of these things. So first of all, we're in the semi-supervised learning regime right here. This basically means that you have a data set, and you only have labels for a part of that data set. So this could be like here, the bottom 10% or so, because labels might be expensive to get. And so you only have a few of them, but you have much more data that's unlabeled. Now sometimes this problem is formulated as: this here is your data set, and then this here is like a different data set, but one that's close enough such that you can learn from it. And that's usually in NLP. Your data set is like a sentiment classification task, but you have all of Wikipedia that is not labeled, but it's just text. So you can sort of pre-train on it. In this case, we'll be in a situation where we'll artificially construct a small data set. So this entire thing here is going to be the ImageNet data set, and this right here is going to be our labeled portion, like we have labels. Now usually one has labels for ImageNet as well, but we artificially restrict ourselves to simulate a situation where we have lots of data and we only have a fixed budget. So we can only, because to obtain labels, oftentimes you have to ask humans to label images. And let's say we are a company and we've collected this big data set, but we only have like maybe 500 bucks on Amazon Mechanical Turk, and we only managed to get like 1% of our data set labeled. Now we're in the regime of semi-supervised learning. This is slightly different from what NLP does. As I said, in NLP, you usually assume you have different data sets, the large one being a different distribution, and in the semi-supervised regime, you often assume that it is actually the same data distribution, but you only have labels for some of them. But there should be a fair bit of overlap between the two things. So I've recently made a video about OpenAI's ImageGPT that kind of goes into the same direction as this work right here, that basically says pre-training on unlabeled data, like this whole data set without the labels, can be a very good preconditioner for fine tuning later. And this paper says the same thing. So basically, in the good old days, what you would do is you would devise a method that, you know, takes in a mini batch.
And in the mini batch, you'd have your data samples, and then some of them would be labeled, right here, you'd have a Y and here you'd have a Y, but most of them would be not labeled. And you'd have like some sort of loss function that would put special weight on the ones that are labeled or somehow handle these ones that are unlabeled in a way, you might be doing like some sort of a consistency loss such that if they are very near neighbors to these in the feature space, they should have similar labels or things like this. So these semi supervised methods, they basically try to solve the problem at once, while taking data that is labeled and not labeled. This paper goes into a different direction. This paper says, first, we should, it's actually three stages right here, and they have a diagram, so I don't need to draw. They have a three stage approach. Three stages. The one on the left is unsupervised pre training. So they say, let's forget about the labels right now, even for your labeled data. So even the data where we have the labels, let's forget about the labels. And let's just do unsupervised pre training. Now unsupervised pre training in this kind of setting is also known as self supervised pre training. And this first stage is done using a contrastive loss, and that's very similar to SimCLR, to this contrastive loss. So what you'll do, and they describe it very, very well here. So what you'll do is given a randomly sampled mini batch of images, each image is augmented twice using random crop, color distortion and Gaussian blur, creating two views of the same example. Okay, so you have an image in your mini batch. Each image you take and you make two versions of it. And each version you crop, you random crop somewhere. So version one could be random cropped here. Version two could be random cropped here. And then you put some Gaussian blur on it and so on. So a little bit of, as you can see, random crop, color distortion, Gaussian blur. So what you'll want is two different versions of the same image. Each of these versions has been augmented in a different way, cropped in a different way, blurred in a different way. It's two slightly different versions of the same image. And now you want to enforce, you want to put this through your network. So ultimately, as you can see on the right side here, what you want to end up with is a network. And then, okay, we'll forget about this right now. What you want to train is this network right here, actually including these projection layers. We'll get to them later. This is the network that you want to train. So you take your unlabeled data, you take an image, you make two versions of it. And you put those through the network, right, until the end right here. So you'll get Z1, Z2. These are the outputs of the network for the two images. And then what you want to do is you want to take another image that's not this image, and also put it through the network, maybe also augmented first. And then you have Z3. So now you have the outputs of two things that are supposed to come from the same image and one thing that's supposed to come from a different image. And now your loss is simply going to be: make those two things close together and push those two things apart, or those three actually. So the loss, and this is the contrastive loss of self supervised learning. As you know, you don't need any labels right here. You simply say the things that come from the same image should be close together.
And the things that come from different images should be far apart. And this relies heavily on these data augmentations that you do right here. They also employ some other tricks like the momentum encoder from MoCo, from momentum contrast, and so on. But this is the main part. So you can pull a lot of strings here to get like another percent of performance. But ultimately, they want ZI and ZJ, which are the outputs of the same image, to be close together. And then this down here, they want to be far apart, ZI with ZK, where K is all the other images. Okay. And you can do this in a mini batch fashion. So this is self supervised learning. And the reason why you do this is you don't need labels. And we know it tends to give very, very good representations. So, moving past that. So what this network here will learn will be very good for some reason. We still don't exactly know why combining augmentation with a self supervised loss, with the contrastive loss for example, gives such good performance. There have been papers recently that modify the loss and so on. But it's not super well understood yet. But if you do it like this, the network here will give you already very, very good representations. And we know this because we can take a network like this and then simply train a linear classifier on top of that on a data set and achieve very, very good performance. And mind you, you have trained it with unlabeled data, right? So the network has never been trained to solve like ImageNet classification. It has simply been trained to look at the pictures and determine if two versions of a picture come from the same picture or from different pictures. And now, if you simply train a linear classifier on top of these representations, you're doing extremely well already. So we know these representations, they actually learn something about these images. So that's the first part. Then stage two, let's cancel all that. Stage two is you want to do supervised fine tuning. Now you already see that the arrow here coming out is not this task agnostic big CNN. The arrow is actually coming out of those yellow boxes. And the yellow boxes are these projection heads. So in the original SimCLR paper, what they did was they originally wanted to train this network right here. This is like a ResNet-50. It's pretty standard in these kinds of self-supervised approaches, or these few-label approaches, to train a standardized network. So in the original SimCLR paper, they said we want to make ResNet-50 as strong as possible. But in order to do this loss right here, we are going to attach this projection head, just because the dimensionality here I think is like 2048, and we want to do this inner product in a lower dimension of like maybe 256 or so. So these are just multi-layer perceptrons. These are just fully connected layers that compress the representation down to that. And once we're done with the unsupervised pre-training, we're going to throw those away, right? And this ResNet is the thing that we really care about. Now here they claim, OK, it actually works better if you actually leave one of these layers here, and they have experiments to show this. So in the end, I guess they converge on three projection head layers. And then they only throw away the top two. And like they make this big deal out of the fact, where, you know, I can just call this part right here now the encoder.
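To make the contrastive objective above concrete, here is a minimal NT-Xent-style sketch in PyTorch. It assumes the two augmented views of image i sit at rows i and i + N of the batch; the temperature value and the function name are illustrative assumptions rather than the paper's exact code.

import torch
import torch.nn.functional as F

def nt_xent(z, temperature=0.1):
    # z: (2N, d) projections; rows i and i + N are two views of the same image.
    n = z.shape[0] // 2
    z = F.normalize(z, dim=1)                 # work in cosine-similarity space
    sim = z @ z.t() / temperature             # (2N, 2N) pairwise similarities
    sim.fill_diagonal_(float('-inf'))         # never contrast a view with itself
    targets = (torch.arange(2 * n, device=z.device) + n) % (2 * n)
    return F.cross_entropy(sim, targets)      # the positive is the other view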
And I don't know, exactly, like I don't see the giant deal here. Like you just made your network one layer bigger, and now you consider that to be your encoder, and the projection head is now two layers. And that would be much easier than calling the projection head three layers but saying we leave one layer and train from the middle layer. In any case, they have this additional layer right here compared to the old SimCLR. And then the representation of that goes into supervised fine tuning. Now, this is pretty easy. This is exactly what it sounds like. So now you use only the data set that has labels. So the part of the data set that has labels, and you do the fine tuning, and fine tuning is simply supervised learning. You train this network in a supervised fashion on that small fraction of data that has class labels. And that already performs pretty well. And they show this in experiments. But then you can go a step further and do what's known as distillation or self training. And what's distillation or self training? So, distillation is when you have a network that you call the teacher network. And that network has been trained to do some classification, maybe into three classes, pretty, pretty well. Okay. But now this is very large and you want maybe a smaller model. So you just want like this tiny model, because you want to ship it on a mobile device, right? But it's also supposed to do this. And you know that if you just directly train this, which is called the student model, it doesn't perform as well as the teacher model. There is a better way. If you have the teacher model, you can sort of transfer the knowledge to the student model. You can distill the knowledge. And how do you do that? You do that by, so what would you do in supervised training? In supervised training, you would take an image, put it in, and then put the label that comes along with the image. You put it up here and you compare the output to the label and that gives you the loss function. Right? So you do that right here. If you distill, you put the image into both. Now the teacher is already trained. So its output will be a distribution over classes. It won't be a single label. It will be like, okay, 90% class one, 10% class two, 0% class three, something like this. And now you take this as like a pseudo label, this entire distribution, and you put it here and you compare the output of the student to that of the teacher and that's your loss function. So the teacher might have learned to put some nuance into the classification, to say, well, I'm pretty sure this is class one, but I'm not 100% sure. And it can transfer that knowledge to the student. And that makes the student better than had you just trained it from the beginning with just the labels. Right? So this is distillation, and you can do this even with what they call self distillation here, or self training. So apparently this even helps if the student model is the same as the teacher model. Now why does it help in this case? And I think it is not exactly the same in this case, because they always say their teacher model has this extra projection layer, right? And then the student model doesn't have that, even if they do self training. But why does it help in this case? I mean, it's kind of shocking, and I'm pretty sure it helps in any case, but in this particular case it helps because now you're using the unlabeled data again.
So you have a teacher model, and the teacher model is trained first using unsupervised training, like this is the teacher model right here. Then the teacher model is further fine tuned on the small data. Right? So it is now already pretty good at the task, but how can you get a student model that's even better than the teacher model? It's by using again this unlabeled data. You have this giant amount of data. So what you'll do is you take an image from the unlabeled data and you ask the teacher model, teacher model, what do you think about that image? Right? And the teacher model will give you a prediction. Like let's say again, this 90%, 10%, 0%, and then you take the student model, you input that image and you compare its output to what the teacher said. So this combines the teacher model. You freeze the teacher model, right? The teacher model is only trained until here. You take it from here. The student model is now able to take basically the teacher. It takes everything that the teacher model knows, not only about this data, but about all the data. So it kind of gets to ask the teacher model, what do you think about this? What do you think about this? What do you think about this? And it can incorporate all that knowledge about all of this unlabeled data. And that's why the student model here in the end, if it's the same size, will probably end up even better than the teacher model. So distillation, I think, is also still kind of a mystery, why you get a better model, or, I mean, to make it smaller, if you make it a lot smaller, usually you don't end up with a better model, but you end up with a pretty good model that you couldn't have gotten by just training the small model. So that's already pretty cool. But why you get a better model when they're the same size, I don't think that's well understood yet. So that's the three stage approach. So recap: first, use all of the data without labels to do unsupervised or self supervised contrastive pre-training. Second, use only the data that has labels to do fine tuning. Third, either distill the learned classifier to a smaller model or distill it to a model of the same size. In both cases, you would again use all of the unlabeled data. And that's the three step approach. That's SimCLR v2 in all of its form. So they go into fine tuning right here. And yeah, so they say again, we elaborate with a three layer projection head. So that's the three layer projection head. This here is the output of ResNet-50, where Sigma is a ReLU non-linearity and we ignore the bias term for brevity, blah, blah, blah, blah, blah. So they contrast this here. For fine tuning, SimCLR uses this right here, which is just, it's basically just a classifier on top of the output of the ResNet-50. This is fine tuning from the input layer of the projection head. To fine tune from the first layer of the projection head, we have a new encoder function as this, which is ResNet followed by fully connected layers. And you see they take the ResNet-50 output and they ship it through the first projection layer, and then there is a task specific classifier. Now, again, I don't even see why they make like this ginormous deal out of it, especially since the last layer of the ResNet-50, okay, I'm not entirely sure, but are they taking the log? No, they're probably not taking the log. It's okay. But it's, yeah, it's just weird. Like is there even a non-linearity at the end right here?
Or is this really just like two matrix multiplications in a row? I'm going to guess there's a big chance that that's the case, that the last layer of this encoder is actually not even followed by a non-linearity, and therefore you'll just kind of make the dimension different. And I don't see why you can't just incorporate this into the model and have to like say it over and over again that this is a new special thing, right? Again, this is equivalent to tuning from a middle layer of the projection head instead of the output layer. Okay, you just make your model a bit bigger. Yeah. So the third step is self-training or knowledge distillation. And they give two variants right here. This variant, as you can see here, this is just the cross entropy. But instead of having labels right here, Y, you have what the teacher model thinks Y is given X. Okay, that's cross entropy, but not with the true labels, but with the output of the teacher model. And you can even mix that. So you can, as you can see right here, you can mix this with an actual supervised loss. So this would be the supervised loss, whatever. Yeah, I guess that I was wrong. That wasn't, I guess P of Y is always one in that case. But they don't use this particular kind, I think, except in one of the ablations. So how does this work? It works pretty well. And so one of their experiments, as you see up here, it works pretty well in that if you have 1% of the labels, only 1% of ImageNet labels, which they say is smaller than or equal to 13 images per class, so there's a thousand classes and you only have 13 labels per class or less. If you, and they differentiate, if your encoder that you train is a ResNet 50, then you get, and you can see the dashed line here is the supervised baseline, you almost get to the supervised baseline with 1% of the labels. And if you actually have a larger ResNet, then you get to the supervised performance without 99% of the labels. And if you have, excuse me, 10% of the labels, you pass the supervised baseline. So the supervised baseline is on 100% of the labels, mind you, and you only have 10% and this outperforms the supervised baseline. Now of course, you could have another graphic here where you show, oh, 100%. What if we do the whole procedure with 100% of the labels? So first, on the unlabeled data, we do the self-supervision, then we fine tune on 100% of the data, and then we do this distillation again. You would of course be even better. And I think they have this somewhere in a table, but this is already pretty, pretty impressive. And another claim they make right here is about the model sizes. And this figure's description now relates to the title. They say bigger models yield larger gains when fine tuning with fewer labeled examples. So there are three comparative statement words in one sentence. Let's unpack this. Bigger models yield larger gains. So the bigger the model, the better, let's say, when fine tuning with fewer labeled examples. Let's just look at the graph. It's really clear. So here we have the number of parameters going across. So these are the different models they look at, how many parameters they have to do this whole procedure. And here is the relative improvement in percent of the ImageNet top-1 accuracy. So if you do this whole thing with 100% of the labels, right, I'm going to guess this here is where they start out. And you can see as you grow your models, you grow the performance.
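As a quick aside before the scaling discussion continues: a minimal sketch of the distillation loss just described, where a frozen teacher provides the target distribution. The temperature tau and the names are illustrative assumptions, not the paper's exact code.

import torch
import torch.nn.functional as F

def distill_loss(student_logits, teacher_logits, tau=1.0):
    # Soft cross entropy: the frozen teacher's temperature-scaled distribution
    # acts as the pseudo-label for the student.
    t = F.softmax(teacher_logits / tau, dim=1)
    log_s = F.log_softmax(student_logits / tau, dim=1)
    return -(t * log_s).sum(dim=1).mean()

# Usage on unlabeled images: only the student receives gradients.
# with torch.no_grad():
#     teacher_logits = teacher(images)
# loss = distill_loss(student(images), teacher_logits)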
And this, this is just by increasing the model size, right, you have the same data set, you have the same amount of labels, you have the same number of steps that you train for, and so on, just by the fact that you make your model bigger, you gain in performance. Okay, now you can see that these curves here are above one another. And these curves refer to getting small, less and less labels. Okay, so if you only have 10% of the labels, your relative gains are larger. That doesn't mean that you perform better with 10% of the labels than with 100% of the labels, that would be like ridiculous. Well, I guess in this day and age, nothing is ridiculous. But for now, we're still performing better by having more labels if we do the same procedure, right? It's not like here. So here, this baseline, the supervised baseline only does supervised training, right? So that's why we can outperform it with less of labels. But here, we do the same procedure. This is relative improvement, right? So this right here, the starting point would be if you had 10% of labels and a 25 million model, parameter model. And this right here, for example, is if you have the same amount of labels, but a 200 million parameter model. And this is relative improvement, okay? But what the graph says is that the relative improvement is larger, the relative improvement is higher, the more parameters you have, which is the more you go to the right. And that effect in itself is higher, the fewer labels you have, which is the different graphs. And you can see that right here. So if you have fewer and fewer labels, it becomes more and more important that you have bigger models. And that's really counterintuitive, right? Because you would expect that the bigger models, they can overfit much more easily to the fewer labels. But that doesn't seem the case. So this self supervision, it really seems to be sort of a counter to this notion of overfitting. And if you have larger and larger models, that's what they argue in the paper, you might be able to learn more and more features that might be useful for classification. So if you have a larger model, you might, you're going to learn more kinds of features, and then you're going to outperform because you have more chance that these features are going to be useful for classification. And I don't think they really make a statement as to why that happens more with the, if you have less labels. So let's think about this. If I have very few labels, very, very few labels, why does it help me even more if I have a big model? Well, with the same argumentation, we could say, and maybe they actually say this already. So I might be copying them involuntarily. Maybe with fewer and fewer labels, like let's say we have all the labels, that's probably too many, right? If we can learn a task with some accuracy, we probably had too many labels. Okay. It's like, if we can't learn a task, we know we have too few. Somewhere there is a border where we have enough, but that's like kind of one number. And everything else is too many, technically speaking, like learning theoretically speaking. So usually we have too many labels. And what does that mean? That probably means that there are multiple ways. Like if we have too many labels, there are multiple different features we can pick up to learn. There are multiple different paths to learn our goals. So if we have ImageNet, and like there's this weird task to recognize a three, and we get lots and lots and lots of examples of threes, right? 
We can decide on a feature. We can say, oh, all the threes that I see, they have this bow down here, or all the threes that I see, they have this bend here, and so on. But if I only have very few labels, there might only be like a single feature that is even theoretically possible to learn from the labels I'm given. And therefore, if I have a bigger model in self-supervised pre-training, because the pre-training happens with the same amount of data, right? If I have a bigger model that does the self-supervised pre-training, it's going to learn more features. And then there's a higher chance that the one feature that I am able to learn something from with these very few labels is going to be among those features. So that's kind of how I make sense of it, in combination with what they're saying right here. Okay, so these were the main points. They do a lot of empirical studies showing the effects of these sizes. They stress that it's important to have both deep and wide networks. And they also do this additional attention mechanism over the convolution filters. I don't want to go into that particularly. But they also do linear evaluation, compared to supervised, compared to fine tuning with 100% of the labels. So they do a very thorough empirical investigation. And yeah, I do appreciate that. And they kind of show the same things. And here they show the number of layers in the projection head. So as you increase the number of layers in the projection head and train from the optimal layer in the middle, your performance goes up, as you can see. But also, this effect is stronger when you have fewer labels, right? You can see the differences here are greater than the differences here, or even here when you have 100% of the labels. So the fewer the labels, the more benefit you have from the architecture right here. And here they show that it's not always optimal to train from the last projection layer, but rather from the first one. So I guess they converge on three projection layers, and you always want to keep the first one around after self supervised training, as we mentioned before. They investigate different distillation losses and show that it is actually important that you do the distillation loss on labeled and unlabeled sets. You can see here, if you only train with the labels after fine tuning, you get poor performance. If you do the label and distillation loss, but only do it on the data set where you have labels, then you get more performance. If you do label and distillation loss, but also include your unlabeled data, you get even more performance. And then if you do that, but you don't do the label loss, so before we've seen you can mix the distillation loss with the label loss if you have lots of labels, then you drop in performance again. And you can see right here, the drop in performance is proportional to how many labeled examples you have. And that's natural, right? If you have the labels, you can actually mix that information in with the distillation loss and that will make you better. And here they drop 0.1% and here they drop less than 1% by leaving away the label. But their point basically is that it is more important to distill using also unlabeled data than it is to distill including the label loss. And it's much easier to not include the label loss. So they don't do it, I guess. All right, so I think that was it.
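For completeness, a hedged sketch of the mixed objective from that ablation, a label loss plus distillation on unlabeled data, with the mixing weight lam as an assumed hyperparameter; it reuses the distill_loss sketch from above.

import torch.nn.functional as F

def combined_loss(student, teacher_logits_unlab, x_lab, y_lab, x_unlab, lam=0.5):
    # lam trades the supervised term off against the distillation term;
    # distill_loss is the sketch given earlier in this document.
    sup = F.cross_entropy(student(x_lab), y_lab)
    dist = distill_loss(student(x_unlab), teacher_logits_unlab)
    return lam * sup + (1 - lam) * dist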
They compare, as I said, they compare like self distillation, where you distill into an equally sized model, and down distillation, where you distill into a smaller model, maybe that's vice versa. And they do a lot of comparison to other methods. So this is a very thorough work, I feel. And yeah, if you want more about the exact experiments, I invite you to look at the paper. And let's just have a final look at the broader impact statement right here. So the broader, remember, the broader impact statement is supposed to force you to think about how society might be impacted at large by your work. So it says: the finding described in this paper can potentially be harnessed to improve accuracy in any application of computer vision, where it is more expensive or difficult to label additional data than to train larger models. Such applications are clearly beneficial to society. For example, in medical applications where acquiring high quality labels requires careful annotation by clinicians, better semi supervised learning approaches can potentially help save lives. Application of computer vision to agriculture can increase crop yields, which may help to improve availability of food. However, we also recognize that our approach can become a potential component of harmful surveillance systems. Moreover, there is an entire industry built around human labeling services, and technology that reduces the need for these services could lead to short term loss of income for some of those currently employed or contracted to provide labels. So ask yourself how much of that statement has to do with the actual novelty of this paper? And the answer is of course, zero, right? Like you can replace our method in this thing with machine learning or computer vision in general. Like, oh, really, SimCLR v2 specifically can increase crop yields? Like that specific invention of this paper will lead to higher crop yields, will lead to surveillance systems. So I'm, yeah, you know, I think, I'm not gonna get too upset about these. I mean, I think it's quite funny. But just, again, I wonder whether the people advocating for these things are happy with these statements, because clearly, clearly, this is just a template that you copy paste from paper to paper, replacing like a few words. And if it's computer vision, you're like, oh my, deep fakes. And if it's NLP, it's like, oh my, fake news. And yeah, I wonder if anything in particular has changed. I wonder whether these people are happy now. Yeah, I just wonder. And if they are, I wonder whether it's really for the reason that they claim, that, oh, now we have a statement here of how it impacts society, because I could have told you, before I even read the title of the paper, what the broader impact statement is going to be. In any case, rant too long, check out the paper, share it out, leave a like, comment if you disagree or agree. And yeah, bye bye.
[ { "end": 6.72, "start": 0, "text": " Hi there, today we'll look at big self-supervised models are strong semi-supervised learners" }, { "end": 12.96, "start": 6.72, "text": " by Ting Chen, Simon Kornblith, Kevin Swirsky, Mohamed Nourouzi and Jeffrey Hinton of Google" }, { "end": 14.32, "start": 12.96, "text": " Brain." }, { "end": 21.42, "start": 14.32, "text": " So this paper on a high level, it's also known as Sinclair v2, demonstrates that if you want" }, { "end": 28.18, "start": 21.42, "text": " to do semi-supervised learning, that you're very well served by starting out with self-supervised" }, { "end": 34.92, "start": 28.18, "text": " learning, and then doing fine tuning much like NLP models do, rather than the kind of" }, { "end": 40.36, "start": 34.92, "text": " semi-supervised approach that image tasks had so far." }, { "end": 45.34, "start": 40.36, "text": " And they present this Sinclair v2, which is an improvement over the Sinclair approach" }, { "end": 51.96, "start": 45.34, "text": " to self-supervised pre-training, and they demonstrate it outperforms a lot of the baselines." }, { "end": 58.32, "start": 51.96, "text": " Alright, so if you like content like this, don't forget to share it out, and leave a" }, { "end": 62.08, "start": 58.32, "text": " like and tell me what you think in the comments." }, { "end": 70.44, "start": 62.08, "text": " So this paper, it sort of is kind of a club together thing of different things." }, { "end": 77.26, "start": 70.44, "text": " So they present this new method, like this Sinclair v2, which is a modification of Sinclair," }, { "end": 86.92, "start": 77.26, "text": " and we'll go over that, but they also try to make like a scientific claim, namely that" }, { "end": 93.80000000000001, "start": 86.92, "text": " somehow bigger models are better for this pathway of learning, and we'll try to untangle" }, { "end": 95.64, "start": 93.80000000000001, "text": " all of these things." }, { "end": 101.56, "start": 95.64, "text": " So first of all, we're in the semi-supervised learning regime right here." }, { "end": 108.16, "start": 101.56, "text": " This basically means that you have a data set, and you only have labels for a part of" }, { "end": 109.48, "start": 108.16, "text": " that data set." }, { "end": 115.4, "start": 109.48, "text": " So this could be like here, the bottom 10% or so, because labels might be expensive to" }, { "end": 116.4, "start": 115.4, "text": " get." }, { "end": 121.76, "start": 116.4, "text": " And so you only have a few of them, but you have much more data that's unlabeled." }, { "end": 127.76, "start": 121.76, "text": " Now sometimes this problem is formulated as this here is your data set, and then this" }, { "end": 132.56, "start": 127.76, "text": " here is like a different data set, but one that's close enough such that you can learn" }, { "end": 133.56, "start": 132.56, "text": " from it." }, { "end": 135.52, "start": 133.56, "text": " And that's usually in NLP." }, { "end": 141.44, "start": 135.52, "text": " You'll have your data set is like a sentiment classification task, but you have all of Wikipedia" }, { "end": 143.44, "start": 141.44, "text": " that is not labeled, but it's just text." }, { "end": 146.48000000000002, "start": 143.44, "text": " So you can sort of pre-train on it." }, { "end": 152.72, "start": 146.48000000000002, "text": " In this case, we'll be in a situation where we'll artificially construct a small data" }, { "end": 153.72, "start": 152.72, "text": " set." 
}, { "end": 159.84, "start": 153.72, "text": " So this entire thing here is going to be the ImageNet data set, and this right here is" }, { "end": 163.88, "start": 159.84, "text": " going to be our labeled portion, like we have labels." }, { "end": 170.36, "start": 163.88, "text": " Now usually one has labels for ImageNet as well, but we artificially restrict ourselves" }, { "end": 177.04, "start": 170.36, "text": " to simulate a situation where we have lots of data and we only have a fixed budget." }, { "end": 182.2, "start": 177.04, "text": " So we can only, because to obtain labels, oftentimes you have to ask humans to label" }, { "end": 183.2, "start": 182.2, "text": " images." }, { "end": 190.48, "start": 183.2, "text": " And let's say we are a company and we've collected this big data set, but we only have like maybe" }, { "end": 196.72, "start": 190.48, "text": " 500 bucks on Amazon Mechanical Turk, and we only managed to get a very like 1% of our" }, { "end": 198.44, "start": 196.72, "text": " data set labeled." }, { "end": 204.56, "start": 198.44, "text": " Now we're in the regime of semi-supervised learning." }, { "end": 208.2, "start": 204.56, "text": " This is slightly different from what NLP does." }, { "end": 212.44, "start": 208.2, "text": " As I said, in NLP, usually assume you have different data sets, the large one being a" }, { "end": 218.2, "start": 212.44, "text": " different distribution, and in the semi-supervised regime, you often assume that it is actually" }, { "end": 221.84, "start": 218.2, "text": " the same data distribution, but you only have labels for some of them." }, { "end": 226.56, "start": 221.84, "text": " But there should be a fair bit of overlap between the two things." }, { "end": 235.28, "start": 226.56, "text": " So I've recently made a video about OpenAI's ImageGPT that kind of goes into the same direction" }, { "end": 241.12, "start": 235.28, "text": " as this work right here that basically says pre-training on unlabeled data, like this" }, { "end": 248.68, "start": 241.12, "text": " whole data set without the labels, can be a very good preconditioner for fine tuning" }, { "end": 249.68, "start": 248.68, "text": " later." }, { "end": 251.44, "start": 249.68, "text": " And this paper says the same thing." }, { "end": 258.4, "start": 251.44, "text": " So basically, in the good old days, what you would do is you would devise a method that" }, { "end": 265.72, "start": 258.4, "text": " somehow takes, you know, takes in a device, a method that takes in a mini batch." }, { "end": 271.44000000000005, "start": 265.72, "text": " And in the mini batch, you'd have your data samples, and then some of them would be labeled," }, { "end": 276.56, "start": 271.44000000000005, "text": " right here, you'd have a Y and here you'd have a Y, but most of them would be not labeled." }, { "end": 281.92, "start": 276.56, "text": " And you'd have like some sort of loss function that would put special weight on the ones" }, { "end": 287.32000000000005, "start": 281.92, "text": " that are labeled or somehow handle these ones that are unlabeled in a way, you might be" }, { "end": 293.8, "start": 287.32000000000005, "text": " doing like some sort of a consistency loss such that if they are very nearest near neighbors" }, { "end": 298.76, "start": 293.8, "text": " to these in the feature space, they should have similar labels or things like this." }, { "end": 305.16, "start": 298.76, "text": " So these semi supervised methods, they basically try to solve the problem at once." 
}, { "end": 309.96000000000004, "start": 305.16, "text": " But while taking data that is labeled and not labeled, this paper goes into a different" }, { "end": 310.96000000000004, "start": 309.96000000000004, "text": " direction." }, { "end": 317.08000000000004, "start": 310.96000000000004, "text": " This paper says, first, we should, it's actually three stages right here, and they have a diagram," }, { "end": 319.32, "start": 317.08000000000004, "text": " so I don't need to draw." }, { "end": 322, "start": 319.32, "text": " They have a three stage approach." }, { "end": 323.12, "start": 322, "text": " Three stages." }, { "end": 327.08, "start": 323.12, "text": " The one on the left is unsupervised pre training." }, { "end": 333, "start": 327.08, "text": " So they say, let's forget about the labels right now, even like your unlabeled data." }, { "end": 337.62, "start": 333, "text": " So even the data where we have the labels, let's forget about the labels." }, { "end": 340.96, "start": 337.62, "text": " And let's just do unsupervised pre training." }, { "end": 346.28000000000003, "start": 340.96, "text": " Now unsupervised pre training in this kind of setting is also known as self supervised" }, { "end": 347.56, "start": 346.28000000000003, "text": " pre training." }, { "end": 355.84, "start": 347.56, "text": " And this first stage is done using a contrastive loss, and that's very similar to sim clear" }, { "end": 356.88, "start": 355.84, "text": " to this contrastive loss." }, { "end": 361.12, "start": 356.88, "text": " So what you'll do, and they describe it very, very well here." }, { "end": 367, "start": 361.12, "text": " So what you'll do is given a randomly sampled mini batch of images, each image is augmented" }, { "end": 373.04, "start": 367, "text": " twice using random crop color distortion and Gaussian blur, creating two views of the same" }, { "end": 374.04, "start": 373.04, "text": " example." }, { "end": 377.04, "start": 374.04, "text": " Okay, so you have an image in your mini batch." }, { "end": 380.56, "start": 377.04, "text": " Each image you take and you make two versions of it." }, { "end": 383.70000000000005, "start": 380.56, "text": " And each version you crop, you random crop somewhere." }, { "end": 385.84000000000003, "start": 383.70000000000005, "text": " So version one could be random cropped here." }, { "end": 388.78000000000003, "start": 385.84000000000003, "text": " Version two could be random cropped here." }, { "end": 392.56, "start": 388.78000000000003, "text": " And then you put some Gaussian blur on it and so on." }, { "end": 398.24, "start": 392.56, "text": " So a little bit of, as you can see, random crop color distortion, Gaussian blur." }, { "end": 402.84000000000003, "start": 398.24, "text": " So what you'll want is two different versions of the same image." }, { "end": 408.03999999999996, "start": 402.84, "text": " Each of these versions has been augmented in a different way, cropped in a different" }, { "end": 410.96, "start": 408.03999999999996, "text": " way, blurred in a different way." }, { "end": 414.44, "start": 410.96, "text": " It's two slightly different versions of the same image." }, { "end": 421.53999999999996, "start": 414.44, "text": " And now you want to enforce, you want to put this through your network." }, { "end": 428.59999999999997, "start": 421.53999999999996, "text": " So ultimately, as you can see on the right side here, what you want to end up is a network." 
}, { "end": 432.35999999999996, "start": 428.59999999999997, "text": " And then, okay, we'll forget about this right now." }, { "end": 437.44, "start": 432.36, "text": " What you want to train is this network right here, actually including these projection" }, { "end": 438.44, "start": 437.44, "text": " layers." }, { "end": 439.44, "start": 438.44, "text": " We'll get to them later." }, { "end": 441.28000000000003, "start": 439.44, "text": " This is the network that you want to train." }, { "end": 446.24, "start": 441.28000000000003, "text": " So you want to put, you take your unlabeled data, you take an image, you make two versions" }, { "end": 448.12, "start": 446.24, "text": " of it." }, { "end": 453.96000000000004, "start": 448.12, "text": " And you put those through the network, right, until the end right here." }, { "end": 457.24, "start": 453.96000000000004, "text": " So you'll get Z1, Z2." }, { "end": 461.92, "start": 457.24, "text": " These are the outputs of the network for the two images." }, { "end": 467.28000000000003, "start": 461.92, "text": " And then what you want to do is you want to take another image that's not this image," }, { "end": 471.04, "start": 467.28000000000003, "text": " and also put it through the network, maybe also augmented first." }, { "end": 473.36, "start": 471.04, "text": " And then you have Z3." }, { "end": 478.56, "start": 473.36, "text": " So now you have the outputs of two things that are supposed to come from the same image" }, { "end": 481.44, "start": 478.56, "text": " and one thing that's supposed to come from a different image." }, { "end": 489.16, "start": 481.44, "text": " And now your loss is simply going to be make those two things close together and push those" }, { "end": 493.56, "start": 489.16, "text": " two things apart, or those three actually." }, { "end": 499.76000000000005, "start": 493.56, "text": " So the loss, and this is the contrastive loss of self supervised learning." }, { "end": 502.72, "start": 499.76000000000005, "text": " As you know, you don't need any labels right here." }, { "end": 506.40000000000003, "start": 502.72, "text": " You simply say the things that come from the same image should be close together." }, { "end": 510.12, "start": 506.40000000000003, "text": " And the things that come from different images should be far apart." }, { "end": 516.4, "start": 510.12, "text": " And this relies heavily on these data augmentations that you do right here." }, { "end": 521.4399999999999, "start": 516.4, "text": " They also employ some other tricks like the momentum encoder from MoCo, from momentum" }, { "end": 523.4399999999999, "start": 521.4399999999999, "text": " contrast and so on." }, { "end": 525.64, "start": 523.4399999999999, "text": " But this is the main part." }, { "end": 531.84, "start": 525.64, "text": " So you can pull a lot of strings here to get like another percent of performance." }, { "end": 540.4, "start": 531.84, "text": " But ultimately, they won't see the similarity of ZI and ZJ, which are the outputs of the" }, { "end": 543.92, "start": 540.4, "text": " same image to be close together." }, { "end": 552.4399999999999, "start": 543.92, "text": " And then this down here, they want to be far apart, ZI with ZK, where K is all the other" }, { "end": 553.4399999999999, "start": 552.4399999999999, "text": " images." }, { "end": 554.4399999999999, "start": 553.4399999999999, "text": " Okay." }, { "end": 556.64, "start": 554.4399999999999, "text": " And you can do this in a mini batch fashion." 
}, { "end": 558.0799999999999, "start": 556.64, "text": " So this is self supervised learning." }, { "end": 561.88, "start": 558.0799999999999, "text": " And the reason why you do this is you don't need labels." }, { "end": 567.24, "start": 561.88, "text": " And it tends, we know it tends to give very, very good representations." }, { "end": 570.26, "start": 567.24, "text": " So I'm past that." }, { "end": 576.16, "start": 570.26, "text": " So what this network here will learn will be very good for some reason." }, { "end": 581.88, "start": 576.16, "text": " We still don't exactly know why combining augmentation with the self supervised loss" }, { "end": 587.52, "start": 581.88, "text": " with contrastive loss, for example, gives such good performance." }, { "end": 593, "start": 587.52, "text": " There have been papers recently that modify the loss and so on." }, { "end": 595, "start": 593, "text": " But it's not super well understood yet." }, { "end": 601.52, "start": 595, "text": " But if you do it like this, the network here will give you already very, very good representation." }, { "end": 607.54, "start": 601.52, "text": " And we know this because we can take a network like this and then simply train a linear classifier" }, { "end": 613.72, "start": 607.54, "text": " on top of that on a data set and achieve very, very good performance." }, { "end": 617.92, "start": 613.72, "text": " And mind you, you have trained it with unlabeled data, right?" }, { "end": 622.52, "start": 617.92, "text": " So the network has never been trained to solve like ImageNet classification." }, { "end": 627.6, "start": 622.52, "text": " It has simply been trained to look at the pictures and determine if two versions of" }, { "end": 630.4399999999999, "start": 627.6, "text": " a picture come from the same picture or from different pictures." }, { "end": 635.84, "start": 630.4399999999999, "text": " And now, if you simply train a linear classifier on top of these representations, you're doing" }, { "end": 637.64, "start": 635.84, "text": " extremely well already." }, { "end": 642.6999999999999, "start": 637.64, "text": " So we know these representations, they actually learn something about these images." }, { "end": 644.64, "start": 642.6999999999999, "text": " So that's the first part." }, { "end": 649.0799999999999, "start": 644.64, "text": " Then stage two, let's cancel all that." }, { "end": 653.5200000000001, "start": 649.08, "text": " Stage two is you want to do supervised fine tuning." }, { "end": 661.6800000000001, "start": 653.5200000000001, "text": " Now you already see that the arrow here coming out is not this task agnostic big CNN." }, { "end": 665.26, "start": 661.6800000000001, "text": " The arrow is actually coming out of those yellow boxes." }, { "end": 668.1600000000001, "start": 665.26, "text": " And the yellow boxes are these projection heads." }, { "end": 675.1, "start": 668.1600000000001, "text": " So in the original SimClear paper, what they did was they wanted originally, they wanted" }, { "end": 678.3000000000001, "start": 675.1, "text": " to train this network right here." }, { "end": 679.92, "start": 678.3, "text": " This is like a ResNet-50." }, { "end": 685.88, "start": 679.92, "text": " It's pretty standard in these kind of self-supervised approaches and so on to train or these few" }, { "end": 690.06, "start": 685.88, "text": " label approaches to train a standardized network." }, { "end": 692.28, "start": 690.06, "text": " And this is like a ResNet-50." 
}, { "end": 698.4399999999999, "start": 692.28, "text": " So in the original SimClear paper, they said we want to make ResNet-50 as strong as possible." }, { "end": 705.0799999999999, "start": 698.4399999999999, "text": " But in order to do this loss right here, we are going to attach this projection head just" }, { "end": 709.96, "start": 705.08, "text": " to because the dimensionality here I think is like 2048." }, { "end": 715.96, "start": 709.96, "text": " And we want to do this inner product in a lower dimension of like maybe 256 or so." }, { "end": 719.86, "start": 715.96, "text": " So these are just multi-layer perceptrons." }, { "end": 726.24, "start": 719.86, "text": " These are just fully connected layers that compress the representation down to that." }, { "end": 730.48, "start": 726.24, "text": " And once we're done with the unsupervised pre-training, we're going to throw those away," }, { "end": 731.48, "start": 730.48, "text": " right?" }, { "end": 734.8000000000001, "start": 731.48, "text": " And this ResNet is the thing that we really care about." }, { "end": 738.4799999999999, "start": 734.8, "text": " Now here they claim, OK, it actually works better." }, { "end": 744.88, "start": 738.4799999999999, "text": " And they have experiments to prove this or to show this if you use one, if you actually" }, { "end": 747.3399999999999, "start": 744.88, "text": " leave one of these layers here." }, { "end": 752.68, "start": 747.3399999999999, "text": " So in the end, I guess they converge on three projection head layers." }, { "end": 755.76, "start": 752.68, "text": " And then they only throw away the top two." }, { "end": 761.8399999999999, "start": 755.76, "text": " And like they make this big deal out of the fact where, you know, I can just call this" }, { "end": 765.52, "start": 761.84, "text": " part right here now the encoder." }, { "end": 771.72, "start": 765.52, "text": " And I don't so I don't know exactly like I don't see the giant deal here." }, { "end": 774.96, "start": 771.72, "text": " Like you just made your network one layer bigger." }, { "end": 778.32, "start": 774.96, "text": " And now you consider that to be your encoder." }, { "end": 780.76, "start": 778.32, "text": " And the projection head is now two layers." }, { "end": 784.52, "start": 780.76, "text": " And that will be much easier than calling the projection head three layers." }, { "end": 787.64, "start": 784.52, "text": " But we leave one layer and we train from the middle layer." }, { "end": 793.28, "start": 787.64, "text": " In any case, they have this layer, additional layer right here compared to the old Sinclair." }, { "end": 797.16, "start": 793.28, "text": " And then the representation of that goes into supervised fine tuning." }, { "end": 798.48, "start": 797.16, "text": " Now, this is pretty easy." }, { "end": 799.98, "start": 798.48, "text": " This is exactly what it sounds like." }, { "end": 805.22, "start": 799.98, "text": " So now you use only only the data set that has labels." }, { "end": 809.8, "start": 805.22, "text": " So the part of the data set that has labels, and you do the fine tuning and fine tuning" }, { "end": 811.92, "start": 809.8, "text": " is simply supervised learning." }, { "end": 817.6, "start": 811.92, "text": " You train this network in a supervised fashion on that small fraction of data that has cloud" }, { "end": 820.0400000000001, "start": 817.6, "text": " class labels." }, { "end": 822.28, "start": 820.0400000000001, "text": " And that already performs pretty well." 
}, { "end": 824.16, "start": 822.28, "text": " And they show this in experiments." }, { "end": 832.6800000000001, "start": 824.16, "text": " But then you can go a step further and do what's known as distillation or self training." }, { "end": 835.8000000000001, "start": 832.6800000000001, "text": " And what's distillation or self training?" }, { "end": 841.88, "start": 835.8000000000001, "text": " It's so distillation is when you have a network that you call the teacher network." }, { "end": 849.28, "start": 841.88, "text": " And that network has been trained to do some classification maybe into three classes pretty," }, { "end": 850.28, "start": 849.28, "text": " pretty well." }, { "end": 851.28, "start": 850.28, "text": " Okay." }, { "end": 855.32, "start": 851.28, "text": " But now this is very large and you want maybe a smaller model." }, { "end": 860.28, "start": 855.32, "text": " So you just want like this tiny model because you want to ship it on a mobile device, right?" }, { "end": 863.52, "start": 860.28, "text": " But it's also supposed to do this." }, { "end": 868.68, "start": 863.52, "text": " And you know that if you just directly train this, which is called the student model, it" }, { "end": 871.14, "start": 868.68, "text": " doesn't perform as well as the teacher model." }, { "end": 872.3199999999999, "start": 871.14, "text": " There is a better way." }, { "end": 877.68, "start": 872.3199999999999, "text": " If you have the teacher model, you can sort of transfer the knowledge to the student model." }, { "end": 879, "start": 877.68, "text": " You can distill the knowledge." }, { "end": 880.4399999999999, "start": 879, "text": " And how do you do that?" }, { "end": 884.4399999999999, "start": 880.4399999999999, "text": " You do that by, so what would you do in supervised training?" }, { "end": 890.04, "start": 884.4399999999999, "text": " In supervised training, you would take an image, put it in, and then put the label that" }, { "end": 891.68, "start": 890.04, "text": " comes along with the image." }, { "end": 896.84, "start": 891.68, "text": " You put it up here and you compare the output to the label and that gives you the loss function." }, { "end": 897.84, "start": 896.84, "text": " Right?" }, { "end": 901.4, "start": 897.84, "text": " So you do that right here." }, { "end": 904.9200000000001, "start": 901.4, "text": " If you distill, you put the image into both." }, { "end": 907.2, "start": 904.9200000000001, "text": " Now the teacher is already trained." }, { "end": 910.76, "start": 907.2, "text": " So its output will be a distribution over classes." }, { "end": 912.2800000000001, "start": 910.76, "text": " It won't be a single label." }, { "end": 918.6, "start": 912.2800000000001, "text": " It will be like, okay, 90% class one, 10% class two, 0% class three, something like" }, { "end": 919.6, "start": 918.6, "text": " this." }, { "end": 925.6800000000001, "start": 919.6, "text": " And now you take this as like a pseudo label, this entire distribution, and you put it here" }, { "end": 930.16, "start": 925.68, "text": " and you compare the output of the student to that of the teacher and that's your loss" }, { "end": 931.1999999999999, "start": 930.16, "text": " function." }, { "end": 936.56, "start": 931.1999999999999, "text": " So this kind of, the teacher might have learned to put some nuance into the classification" }, { "end": 941.78, "start": 936.56, "text": " to say, well, I'm pretty sure this is class one, but I'm not 100% sure." 
}, { "end": 945.0999999999999, "start": 941.78, "text": " And it can transfer that knowledge to the student." }, { "end": 951.64, "start": 945.0999999999999, "text": " And that makes the student better than had you just trained it from the beginning from," }, { "end": 953.0799999999999, "start": 951.64, "text": " with just the labels." }, { "end": 954.0799999999999, "start": 953.0799999999999, "text": " Right?" }, { "end": 960, "start": 954.08, "text": " So this is distillation and you can do this even what they call self distillation here" }, { "end": 961.64, "start": 960, "text": " or self training." }, { "end": 968.76, "start": 961.64, "text": " So apparently this even helps if the teacher is, if the student model is the same as the" }, { "end": 970.08, "start": 968.76, "text": " teacher model." }, { "end": 972, "start": 970.08, "text": " Now why does it help in this case?" }, { "end": 976.74, "start": 972, "text": " And I think it is not exactly the case in this case because they always say their teacher" }, { "end": 979.08, "start": 976.74, "text": " model has this extra projection layer." }, { "end": 980.08, "start": 979.08, "text": " Right?" }, { "end": 983.96, "start": 980.08, "text": " And then the student model doesn't have that even if they do self training." }, { "end": 985.9200000000001, "start": 983.96, "text": " But why does it help in this case?" }, { "end": 990.44, "start": 985.9200000000001, "text": " I mean, it's, it's kind of shocking and I'm pretty sure it helps in any case, but in this" }, { "end": 997.76, "start": 990.44, "text": " particular case it helps because now you're using the unlabeled data again." }, { "end": 1004.46, "start": 997.76, "text": " So you have a teacher model and the teacher model is trained first using unsupervised" }, { "end": 1009.1600000000001, "start": 1004.46, "text": " like this is the teacher model right here using unsupervised training." }, { "end": 1013.24, "start": 1009.1600000000001, "text": " Then the teacher model is further fine tuned on the small data." }, { "end": 1014.24, "start": 1013.24, "text": " Right?" }, { "end": 1020.72, "start": 1014.24, "text": " So it is now already pretty good at the task, but how can you get a student model that's" }, { "end": 1022.88, "start": 1020.72, "text": " even better than the teacher model?" }, { "end": 1025.2, "start": 1022.88, "text": " It's by using again this unlabeled data." }, { "end": 1027.16, "start": 1025.2, "text": " You have this giant amount of data." }, { "end": 1031.88, "start": 1027.16, "text": " So what you'll do is you take an image from the unlabeled data and you ask the teacher" }, { "end": 1035.04, "start": 1031.88, "text": " model, teacher model, what do you think about that image?" }, { "end": 1036.04, "start": 1035.04, "text": " Right?" }, { "end": 1039.6, "start": 1036.04, "text": " And the teacher model will give you a prediction." }, { "end": 1045.8799999999999, "start": 1039.6, "text": " Like let's say again, this 90%, 10%, 0% and then you take the student model, you input" }, { "end": 1051.1, "start": 1045.8799999999999, "text": " that image and you compare its output to what the teacher said." }, { "end": 1054.1799999999998, "start": 1051.1, "text": " So this combines the teacher model." }, { "end": 1055.76, "start": 1054.1799999999998, "text": " You freeze the teacher model, right?" }, { "end": 1058.9599999999998, "start": 1055.76, "text": " The teacher model is only trained until here." 
}, { "end": 1060.6799999999998, "start": 1058.9599999999998, "text": " You take it from here." }, { "end": 1065.3, "start": 1060.6799999999998, "text": " The student model is now able to take basically the teacher." }, { "end": 1073.36, "start": 1065.3, "text": " It takes everything that the teacher model knows, not only about this data, but about" }, { "end": 1074.36, "start": 1073.36, "text": " all the data." }, { "end": 1077.68, "start": 1074.36, "text": " So it kind of gets to ask the teacher model, what do you think about this?" }, { "end": 1078.68, "start": 1077.68, "text": " What do you think about this?" }, { "end": 1079.76, "start": 1078.68, "text": " What do you think about this?" }, { "end": 1084.8799999999999, "start": 1079.76, "text": " And it can incorporate all that knowledge about all of this unlabeled data." }, { "end": 1091.6, "start": 1084.8799999999999, "text": " And that's why the student model here in the end, if it's the same size, will probably" }, { "end": 1094.96, "start": 1091.6, "text": " end up even better than the teacher model." }, { "end": 1100, "start": 1094.96, "text": " So distillation, I think also is still kind of a mystery of why you get a better model" }, { "end": 1106.44, "start": 1100, "text": " or, I mean, to make it smaller, if you make it a lot smaller, usually you don't end up" }, { "end": 1109.92, "start": 1106.44, "text": " with a better model, but you end up with a pretty good model that you couldn't have gotten" }, { "end": 1114.4, "start": 1109.92, "text": " by just training the small model." }, { "end": 1115.8400000000001, "start": 1114.4, "text": " So that's already pretty cool." }, { "end": 1123, "start": 1115.8400000000001, "text": " But why you get a better model when they're the same size, I don't think that's well understood" }, { "end": 1124.3600000000001, "start": 1123, "text": " yet." }, { "end": 1127.3799999999999, "start": 1124.36, "text": " So that's the three stage approach." }, { "end": 1133.9599999999998, "start": 1127.3799999999999, "text": " So recap, first, use all of the data without labels to do unsupervised or self supervised" }, { "end": 1135.9199999999998, "start": 1133.9599999999998, "text": " contrastive pre-training." }, { "end": 1141.4399999999998, "start": 1135.9199999999998, "text": " Second, use only the data that has labels to do fine tuning." }, { "end": 1150.76, "start": 1141.4399999999998, "text": " Third, either distill the learned classifier to a smaller model or distill it to a model" }, { "end": 1152.1599999999999, "start": 1150.76, "text": " of the same size." }, { "end": 1160.8400000000001, "start": 1152.16, "text": " Then in both cases, you would again use the unlabeled, all of the unlabeled data." }, { "end": 1162.3200000000002, "start": 1160.8400000000001, "text": " And that's the three step approach." }, { "end": 1168.72, "start": 1162.3200000000002, "text": " That's SEMCLEAR v2 in all of its form." }, { "end": 1172.76, "start": 1168.72, "text": " So they go into fine tuning right here." }, { "end": 1180.68, "start": 1172.76, "text": " And yeah, so they say again, we elaborate with a three layer projection head." }, { "end": 1182.48, "start": 1180.68, "text": " So that's the three layer projection head." }, { "end": 1190.04, "start": 1182.48, "text": " This here is the output of ResNet-50, where Sigma is a ReLU non-linearity and we ignore" }, { "end": 1193.2, "start": 1190.04, "text": " the bias term for brevity, blah, blah, blah, blah, blah." 
}, { "end": 1194.52, "start": 1193.2, "text": " So they contrast this here." }, { "end": 1200.68, "start": 1194.52, "text": " For fine tuning, SEMCLEAR uses this right here, which is just, it's basically just a" }, { "end": 1210.28, "start": 1200.68, "text": " classifier on top of the output of the ResNet-50." }, { "end": 1214.28, "start": 1210.28, "text": " This is fine tuning from the input layer of the projection head." }, { "end": 1220.52, "start": 1214.28, "text": " To fine tune from the first layer of the projection head, we have a new encoder function as this," }, { "end": 1223.94, "start": 1220.52, "text": " which is ResNet followed by fully connected layers." }, { "end": 1229.76, "start": 1223.94, "text": " And you see they take the ResNet-50 output and they ship it through the first projection" }, { "end": 1233.16, "start": 1229.76, "text": " layer and then there is a task specific classifier." }, { "end": 1239.6399999999999, "start": 1233.16, "text": " Now, again, why, I don't even see why they make like this ginormous deal out of it, especially," }, { "end": 1242.5600000000002, "start": 1239.64, "text": " especially since the last layer of the ResNet-50." }, { "end": 1248.68, "start": 1242.5600000000002, "text": " I'm not, okay, here is, I'm not entirely sure, but are they taking the log?" }, { "end": 1250.3200000000002, "start": 1248.68, "text": " No, they're probably not taking the log." }, { "end": 1251.72, "start": 1250.3200000000002, "text": " It's okay." }, { "end": 1255.68, "start": 1251.72, "text": " But it's, yeah, it's just weird." }, { "end": 1259.5600000000002, "start": 1255.68, "text": " Like is there even a non-linearity at the end right here?" }, { "end": 1264.76, "start": 1259.5600000000002, "text": " Or is this really just like two matrix multiplications in a row, which I'm going to guess there's" }, { "end": 1269.24, "start": 1264.76, "text": " a big chance that that's the case, that the last layer of this encoder is actually not" }, { "end": 1274.52, "start": 1269.24, "text": " even followed by non-linearity and therefore you'll just kind of make the dimension different." }, { "end": 1279.8, "start": 1274.52, "text": " And I don't see why you can't just incorporate this into the model and have to like say it" }, { "end": 1283.44, "start": 1279.8, "text": " over and over again that this is a new special thing, right?" }, { "end": 1287.28, "start": 1283.44, "text": " Again, this is equivalent of tuning from a middle layer of the projection head instead" }, { "end": 1288.76, "start": 1287.28, "text": " of the output layer." }, { "end": 1291.84, "start": 1288.76, "text": " Okay, you just make your model a bit bigger." }, { "end": 1292.84, "start": 1291.84, "text": " Yeah." }, { "end": 1297.24, "start": 1292.84, "text": " So the third step is self-training or knowledge distillation." }, { "end": 1298.96, "start": 1297.24, "text": " And they give two variants right here." }, { "end": 1304.04, "start": 1298.96, "text": " This variant, as you can see here, this is just the cross entropy." }, { "end": 1313.24, "start": 1304.04, "text": " But instead of having labels right here, Y, you have what the teacher model thinks Y is" }, { "end": 1314.24, "start": 1313.24, "text": " given X." }, { "end": 1321.16, "start": 1314.24, "text": " Okay, that's cross entropy, but not with the true labels, but with the output of the teacher" }, { "end": 1322.16, "start": 1321.16, "text": " model." }, { "end": 1323.66, "start": 1322.16, "text": " And you can even mix that." 
}, { "end": 1330.88, "start": 1323.66, "text": " So you can, as you can see right here, you can mix this with an actual supervised loss." }, { "end": 1333.1200000000001, "start": 1330.88, "text": " So this would be the supervised loss, whatever." }, { "end": 1335.1200000000001, "start": 1333.1200000000001, "text": " Yeah, I guess that I was wrong." }, { "end": 1340.5800000000002, "start": 1335.1200000000001, "text": " That wasn't, I guess P of Y is always one in that case." }, { "end": 1347.68, "start": 1340.5800000000002, "text": " But they don't use this particular kind, I think, except in one of the ablations." }, { "end": 1349.0400000000002, "start": 1347.68, "text": " So how does this work?" }, { "end": 1352.28, "start": 1349.0400000000002, "text": " It works pretty well." }, { "end": 1359.68, "start": 1352.28, "text": " And so one of their experiments, as you see up here, it works pretty well in that if you" }, { "end": 1368.44, "start": 1359.68, "text": " have 1% of the labels, only 1% of ImageNet labels, which they say is smaller or equal" }, { "end": 1375.28, "start": 1368.44, "text": " than 13 images per class, so there's a thousand classes and you only have 13 labels per class" }, { "end": 1377.28, "start": 1375.28, "text": " or less." }, { "end": 1388.32, "start": 1377.28, "text": " If you, and they differentiate, if your encoder that you train is a ResNet 50, then you get," }, { "end": 1391.8999999999999, "start": 1388.32, "text": " and you can see the dashed line here is a supervised baseline." }, { "end": 1396, "start": 1391.8999999999999, "text": " You almost get to the supervised baseline with 1% of the labels." }, { "end": 1401.52, "start": 1396, "text": " And if you actually have a larger ResNet, then you get to the supervised performance" }, { "end": 1405.3799999999999, "start": 1401.52, "text": " without 99% of the labels." }, { "end": 1413.24, "start": 1405.38, "text": " And if you have, excuse me, 10% of the labels, you pass the supervised baseline." }, { "end": 1419.72, "start": 1413.24, "text": " So the supervised baseline is on 100% of the labels, mind you, and you only have 10% and" }, { "end": 1421.88, "start": 1419.72, "text": " this outperforms the supervised baseline." }, { "end": 1427.5200000000002, "start": 1421.88, "text": " Now of course, you could, here you could have another graphic where you show, oh, 100%." }, { "end": 1431.5200000000002, "start": 1427.5200000000002, "text": " What if we, you know, what if we do the whole procedure with 100% of the labels?" }, { "end": 1438.52, "start": 1431.52, "text": " So first we don't label the data, we do supervised, self-supervision, then we fine tune on a 100%" }, { "end": 1439.52, "start": 1438.52, "text": " of the data." }, { "end": 1443.44, "start": 1439.52, "text": " And then we do this distillation again, you would of course be even better." }, { "end": 1448, "start": 1443.44, "text": " And I think they have this somewhere in a table, but this is already pretty, pretty" }, { "end": 1451.2, "start": 1448, "text": " impressive." }, { "end": 1456.24, "start": 1451.2, "text": " And another claim they make right here is about the model sizes." }, { "end": 1463.64, "start": 1456.24, "text": " So and this figure is description, this now relates to the title." }, { "end": 1470.44, "start": 1463.64, "text": " They say bigger models yield larger gains when fine tuning with fewer labeled examples." }, { "end": 1475.72, "start": 1470.44, "text": " So there are three comparative statement words in one sentence." 
}, { "end": 1479, "start": 1475.72, "text": " Let's unpack this." }, { "end": 1482.4, "start": 1479, "text": " Bigger models yield larger gains." }, { "end": 1491.52, "start": 1482.4, "text": " So the bigger the model, the better the good, let's say, when fine tuning with fewer labeled" }, { "end": 1492.52, "start": 1491.52, "text": " examples." }, { "end": 1493.52, "start": 1492.52, "text": " Let's just look at the graph." }, { "end": 1494.68, "start": 1493.52, "text": " It's pretty, it's really clear." }, { "end": 1497.88, "start": 1494.68, "text": " So here we have number of parameters going over." }, { "end": 1502.72, "start": 1497.88, "text": " So these are the different models they look at, how many parameters they have to do this" }, { "end": 1504.0800000000002, "start": 1502.72, "text": " whole procedure." }, { "end": 1511.2, "start": 1504.0800000000002, "text": " And here is the relative improvement in percent over the top ImageNet 1 top accuracy." }, { "end": 1518.8, "start": 1511.2, "text": " So if you do this whole thing with 100% of the labels, right, I'm going to guess this" }, { "end": 1521.8400000000001, "start": 1518.8, "text": " here, this here is where they start out." }, { "end": 1528.04, "start": 1521.8400000000001, "text": " And you can see as you grow your models, you grow the performance." }, { "end": 1534.24, "start": 1528.04, "text": " And this, this is just by increasing the model size, right, you have the same data set, you" }, { "end": 1538.76, "start": 1534.24, "text": " have the same amount of labels, you have the same number of steps that you train for, and" }, { "end": 1546.52, "start": 1538.76, "text": " so on, just by the fact that you make your model bigger, you gain in performance." }, { "end": 1553.4, "start": 1546.52, "text": " Okay, now you can see that these curves here are above one another." }, { "end": 1558.28, "start": 1553.4, "text": " And these curves refer to getting small, less and less labels." }, { "end": 1564.4, "start": 1558.28, "text": " Okay, so if you only have 10% of the labels, your relative gains are larger." }, { "end": 1569.76, "start": 1564.4, "text": " That doesn't mean that you perform better with 10% of the labels than with 100% of the" }, { "end": 1572.68, "start": 1569.76, "text": " labels, that would be like ridiculous." }, { "end": 1575.9, "start": 1572.68, "text": " Well, I guess in this day and age, nothing is ridiculous." }, { "end": 1582.5600000000002, "start": 1575.9, "text": " But for now, we're still performing better by having more labels if we do the same procedure," }, { "end": 1583.5600000000002, "start": 1582.5600000000002, "text": " right?" }, { "end": 1585.4, "start": 1583.5600000000002, "text": " It's not like here." }, { "end": 1591.4, "start": 1585.4, "text": " So here, this baseline, the supervised baseline only does supervised training, right?" }, { "end": 1595.3200000000002, "start": 1591.4, "text": " So that's why we can outperform it with less of labels." }, { "end": 1597.6000000000001, "start": 1595.3200000000002, "text": " But here, we do the same procedure." }, { "end": 1599.76, "start": 1597.6000000000001, "text": " This is relative improvement, right?" }, { "end": 1608.72, "start": 1599.76, "text": " So this right here, the starting point would be if you had 10% of labels and a 25 million" }, { "end": 1611.5600000000002, "start": 1608.72, "text": " model, parameter model." 
}, { "end": 1617.44, "start": 1611.5600000000002, "text": " And this right here, for example, is if you have the same amount of labels, but a 200" }, { "end": 1618.76, "start": 1617.44, "text": " million parameter model." }, { "end": 1622.66, "start": 1618.76, "text": " And this is relative improvement, okay?" }, { "end": 1631.64, "start": 1622.66, "text": " But what the graph says is that the relative improvement is larger, the relative improvement" }, { "end": 1639.12, "start": 1631.64, "text": " is higher, the more parameters you have, which is the more you go to the right." }, { "end": 1645.92, "start": 1639.12, "text": " And that effect in itself is higher, the fewer labels you have, which is the different graphs." }, { "end": 1647.6, "start": 1645.92, "text": " And you can see that right here." }, { "end": 1652.74, "start": 1647.6, "text": " So if you have fewer and fewer labels, it becomes more and more important that you have" }, { "end": 1654.24, "start": 1652.74, "text": " bigger models." }, { "end": 1656.78, "start": 1654.24, "text": " And that's really counterintuitive, right?" }, { "end": 1663.76, "start": 1656.78, "text": " Because you would expect that the bigger models, they can overfit much more easily to the fewer" }, { "end": 1664.76, "start": 1663.76, "text": " labels." }, { "end": 1665.76, "start": 1664.76, "text": " But that doesn't seem the case." }, { "end": 1671.9199999999998, "start": 1665.76, "text": " So this self supervision, it really seems to be sort of a counter to this notion of" }, { "end": 1673.6, "start": 1671.9199999999998, "text": " overfitting." }, { "end": 1677.8, "start": 1673.6, "text": " And if you have larger and larger models, that's what they argue in the paper, you might" }, { "end": 1683.4199999999998, "start": 1677.8, "text": " be able to learn more and more features that might be useful for classification." }, { "end": 1688.48, "start": 1683.4199999999998, "text": " So if you have a larger model, you might, you're going to learn more kinds of features," }, { "end": 1692.9199999999998, "start": 1688.48, "text": " and then you're going to outperform because you have more chance that these features are" }, { "end": 1695.4599999999998, "start": 1692.9199999999998, "text": " going to be useful for classification." }, { "end": 1701.1999999999998, "start": 1695.4599999999998, "text": " And I don't think they really make a statement as to why that happens more with the, if you" }, { "end": 1703.4399999999998, "start": 1701.1999999999998, "text": " have less labels." }, { "end": 1704.8, "start": 1703.44, "text": " So let's think about this." }, { "end": 1712, "start": 1704.8, "text": " If I have very few labels, very, very few labels, why does it help me even more if I" }, { "end": 1713, "start": 1712, "text": " have a big model?" }, { "end": 1717.28, "start": 1713, "text": " Well, with the same argumentation, we could say, and maybe they actually say this already." }, { "end": 1722.3200000000002, "start": 1717.28, "text": " So I might be copying them involuntarily." }, { "end": 1729.4, "start": 1722.3200000000002, "text": " Maybe with fewer and fewer labels, like let's say we have all the labels, that's probably" }, { "end": 1731.0800000000002, "start": 1729.4, "text": " too many, right?" }, { "end": 1736.6, "start": 1731.08, "text": " If we can learn a task with some accuracy, we probably had too many labels." }, { "end": 1737.6, "start": 1736.6, "text": " Okay." 
}, { "end": 1740.9199999999998, "start": 1737.6, "text": " It's like, if we can't learn a task, we know we have too few." }, { "end": 1745.32, "start": 1740.9199999999998, "text": " Somewhere there is a border where we have enough, but that's like kind of one number." }, { "end": 1751.54, "start": 1745.32, "text": " And everything else is too many, technically speaking, like learning theoretically speaking." }, { "end": 1755.36, "start": 1751.54, "text": " So usually we have too many labels." }, { "end": 1756.6399999999999, "start": 1755.36, "text": " And what does that mean?" }, { "end": 1759, "start": 1756.6399999999999, "text": " That probably means that there are multiple ways." }, { "end": 1763.84, "start": 1759, "text": " Like if we have too many labels, there are multiple different features we can pick up" }, { "end": 1764.84, "start": 1763.84, "text": " to learn." }, { "end": 1767.76, "start": 1764.84, "text": " There are multiple different paths to learn our goals." }, { "end": 1773.88, "start": 1767.76, "text": " So if we have ImageNet, and like there's this weird task to recognize a three, and we get" }, { "end": 1778.72, "start": 1773.88, "text": " lots and lots and lots of examples of threes, right?" }, { "end": 1780.2, "start": 1778.72, "text": " We can decide on a feature." }, { "end": 1784.62, "start": 1780.2, "text": " We can say, oh, all the threes that I see, they have this bow down here, or all the threes" }, { "end": 1787.68, "start": 1784.62, "text": " that I see, they have this bend here, and so on." }, { "end": 1793.5600000000002, "start": 1787.68, "text": " But if I only have very few labels, there might only be like a single feature that is" }, { "end": 1798, "start": 1793.5600000000002, "text": " even theoretically possible to learn from the labels I'm given." }, { "end": 1802.98, "start": 1798, "text": " And therefore, if I have a bigger model in cell in pre-training, because the pre-training" }, { "end": 1806.8, "start": 1802.98, "text": " happens with the same amount of data, right?" }, { "end": 1813.44, "start": 1806.8, "text": " If I have a bigger model that does the self-supervised pre-training, it's going to learn more features." }, { "end": 1820.72, "start": 1813.44, "text": " And then there's a higher chance that that one feature that these very few labels that" }, { "end": 1825.46, "start": 1820.72, "text": " I am able to learn something from is going to be in these features." }, { "end": 1831.4, "start": 1825.46, "text": " So that's kind of how I make sense of it in combination with what they're saying right" }, { "end": 1832.72, "start": 1831.4, "text": " here." }, { "end": 1836.9, "start": 1832.72, "text": " Okay, so this was the main points." }, { "end": 1841.56, "start": 1836.9, "text": " They do a lot of empirical studies showing the effects of these sizes." }, { "end": 1848.52, "start": 1841.56, "text": " They stress that it's important to have both deep and wide networks." }, { "end": 1853.06, "start": 1848.52, "text": " And they also do this additional attention mechanism over the convolution filters." }, { "end": 1856.8799999999999, "start": 1853.06, "text": " I don't want to go into that particularly." }, { "end": 1864.96, "start": 1856.8799999999999, "text": " But they also do linear evaluation compared to supervised, compared to fine tuning with" }, { "end": 1866.6799999999998, "start": 1864.96, "text": " 100% of the labels." }, { "end": 1872.88, "start": 1866.68, "text": " So they do a very thorough empirical investigation." 
}, { "end": 1876.48, "start": 1872.88, "text": " And yeah, I do appreciate that." }, { "end": 1880.16, "start": 1876.48, "text": " And they kind of show the same things." }, { "end": 1883.92, "start": 1880.16, "text": " And here they show the number of layers in the projection head." }, { "end": 1890.3, "start": 1883.92, "text": " So as you increase the number of layers in the projection head and train from the optimal" }, { "end": 1893.94, "start": 1890.3, "text": " layer in the middle, your performance goes up, as you can see." }, { "end": 1899.72, "start": 1893.94, "text": " But it also this effect is stronger when you have fewer labels, right?" }, { "end": 1904.26, "start": 1899.72, "text": " You can see the differences here are greater than the differences here or even here when" }, { "end": 1906.6000000000001, "start": 1904.26, "text": " you have 100% of the labels." }, { "end": 1913.3400000000001, "start": 1906.6000000000001, "text": " So the fewer labels, the fewer the labels, the more benefit you have from the architecture" }, { "end": 1914.3400000000001, "start": 1913.3400000000001, "text": " right here." }, { "end": 1919.0800000000002, "start": 1914.3400000000001, "text": " And here they show that it's not always optimal to train from the last projection layer, but" }, { "end": 1920.3200000000002, "start": 1919.0800000000002, "text": " here the first one." }, { "end": 1925.12, "start": 1920.32, "text": " So I guess they converge on three projection layers, and you always want to keep the first" }, { "end": 1932.1599999999999, "start": 1925.12, "text": " one around after self supervised training, as we mentioned before." }, { "end": 1938.08, "start": 1932.1599999999999, "text": " They investigate different distillation losses and show that it is actually important that" }, { "end": 1944.12, "start": 1938.08, "text": " you do the distillation loss on labeled and unlabeled sets." }, { "end": 1952.7199999999998, "start": 1944.12, "text": " You can see here if you only train with the labels after fine tuning, you get poor performance." }, { "end": 1959.32, "start": 1952.7199999999998, "text": " If you do the label and distillation loss, but only do it on the data set where you have" }, { "end": 1962.5, "start": 1959.32, "text": " labels, then you get more performance." }, { "end": 1967.82, "start": 1962.5, "text": " If you do label and distillation loss, but also include your unlabeled data, you get" }, { "end": 1969.8, "start": 1967.82, "text": " even more performance." }, { "end": 1974.56, "start": 1969.8, "text": " And then if you do that, but you don't do the label loss." }, { "end": 1980.82, "start": 1974.56, "text": " So before we've seen you can mix the distillation loss with the label loss, if you have lots" }, { "end": 1984.32, "start": 1980.82, "text": " of labels, then you drop in performance again." }, { "end": 1988.8799999999999, "start": 1984.32, "text": " And you can see right here, the drop in performance is proportional to how many labeled examples" }, { "end": 1989.8799999999999, "start": 1988.8799999999999, "text": " you have." }, { "end": 1991.3999999999999, "start": 1989.8799999999999, "text": " And that's natural, right?" }, { "end": 1996.84, "start": 1991.3999999999999, "text": " If you have the labels, you can actually mix that information in with the distillation" }, { "end": 1999, "start": 1996.84, "text": " loss and that will make you better." 
}, { "end": 2006.96, "start": 1999, "text": " And here they drop 0.1% and here they drop less than 1% by leaving away the label." }, { "end": 2014.48, "start": 2006.96, "text": " But their point basically is that it is more important to distill using also unlabeled" }, { "end": 2020.44, "start": 2014.48, "text": " data, then it is to distill, including the label loss." }, { "end": 2022.64, "start": 2020.44, "text": " And it's much easier to not include the label loss." }, { "end": 2026.56, "start": 2022.64, "text": " So they don't do it, I guess." }, { "end": 2030.44, "start": 2026.56, "text": " All right, so I think that was it." }, { "end": 2035.6799999999998, "start": 2030.44, "text": " They compare, as I said, they compare like self distillation, where you distill into" }, { "end": 2042.6799999999998, "start": 2035.6799999999998, "text": " an equally sized model and down distillation, where you distill into a smaller model, maybe" }, { "end": 2044.36, "start": 2042.6799999999998, "text": " that's vice versa." }, { "end": 2047.12, "start": 2044.36, "text": " And they do a lot of comparison to other methods." }, { "end": 2050.52, "start": 2047.12, "text": " So this is a very thorough work, I feel." }, { "end": 2057.68, "start": 2050.52, "text": " And yeah, if you want more about the exact experiments, I invite you to look at the paper." }, { "end": 2063.96, "start": 2057.68, "text": " And let's just have a final look at the broader impact statement right here." }, { "end": 2071.84, "start": 2063.96, "text": " So the broader, remember the broader impact statement is supposed to force you to think" }, { "end": 2078.88, "start": 2071.84, "text": " about how society might be impacted at large by your work." }, { "end": 2082.48, "start": 2078.88, "text": " So it says, the finding described in this paper can potentially be harnessed to improve" }, { "end": 2087.56, "start": 2082.48, "text": " accuracy in any application or computer vision, where it is more expensive or difficult to" }, { "end": 2090.96, "start": 2087.56, "text": " label additional data than to train larger models." }, { "end": 2094.44, "start": 2090.96, "text": " Such applications are clearly beneficial to society." }, { "end": 2099.12, "start": 2094.44, "text": " For example, in medical applications where acquiring high quality labels requires careful" }, { "end": 2103.88, "start": 2099.12, "text": " annotation by clinicians, better semi supervised learning approaches can potentially help save" }, { "end": 2105.32, "start": 2103.88, "text": " lives." }, { "end": 2109.2400000000002, "start": 2105.32, "text": " Application of computer vision to agriculture can increase crop yields, which may help to" }, { "end": 2111.6000000000004, "start": 2109.2400000000002, "text": " improve availability of food." }, { "end": 2115.92, "start": 2111.6000000000004, "text": " However, we also recognize that our approach can become a potential component of harmful" }, { "end": 2118.04, "start": 2115.92, "text": " surveillance systems." }, { "end": 2123.76, "start": 2118.04, "text": " Moreover, there is an entire industry built around human labeling services and technology" }, { "end": 2128, "start": 2123.76, "text": " that reduces the need for these services could lead to short term loss of income for some" }, { "end": 2131.7200000000003, "start": 2128, "text": " of those currently employed or contracted to provide labels." 
}, { "end": 2139.68, "start": 2131.72, "text": " So ask yourself how much of that statement has to do with the actual novelty of this" }, { "end": 2141.52, "start": 2139.68, "text": " paper?" }, { "end": 2144.24, "start": 2141.52, "text": " And the answer is of course, zero, right?" }, { "end": 2150.6, "start": 2144.24, "text": " Like you can replace like our method in this thing with like machine learning or computer" }, { "end": 2157.9599999999996, "start": 2150.6, "text": " vision in general, like, oh, really SIMClear V2 specifically can increase crop yields?" }, { "end": 2164.68, "start": 2157.96, "text": " Like that specific invention of this paper will lead to higher crop yields, will lead" }, { "end": 2167.12, "start": 2164.68, "text": " to surveillance systems." }, { "end": 2174.56, "start": 2167.12, "text": " So I'm, yeah, you know, I think like, I'm not gonna get too upset about these." }, { "end": 2178.76, "start": 2174.56, "text": " I mean, this, I think it's quite funny." }, { "end": 2188.84, "start": 2178.76, "text": " But just, again, I wonder whether the people advocating for these things are happy with" }, { "end": 2195.5600000000004, "start": 2188.84, "text": " these statements, because clearly, clearly, this is just a template that you copy paste" }, { "end": 2199.48, "start": 2195.5600000000004, "text": " from paper to paper, replacing like a few words." }, { "end": 2202.76, "start": 2199.48, "text": " And if it's computer vision, you're like, oh, my deep fakes." }, { "end": 2206.7200000000003, "start": 2202.76, "text": " And if it's an NLP, it's like, oh, I'm a fake news." }, { "end": 2217.4399999999996, "start": 2206.72, "text": " And yeah, I wonder if really anything like particularly is has I wonder whether these" }, { "end": 2218.7599999999998, "start": 2217.4399999999996, "text": " people are happy now." }, { "end": 2220.64, "start": 2218.7599999999998, "text": " Yeah, I just I wonder." }, { "end": 2227.04, "start": 2220.64, "text": " And if, if they are, I wonder whether it's really for the reason that they claim that," }, { "end": 2232.9599999999996, "start": 2227.04, "text": " oh, now we have a statement here of how it impacts society, because I could have told" }, { "end": 2233.9599999999996, "start": 2232.9599999999996, "text": " you that before." }, { "end": 2237.6, "start": 2233.96, "text": " I even read the title of the paper, right, what the broader impact statement is going" }, { "end": 2238.6, "start": 2237.6, "text": " to be." }, { "end": 2244.6, "start": 2238.6, "text": " In any case, rant too long, check out paper, share it out, leave a like, comment if you" }, { "end": 2247.6, "start": 2244.6, "text": " disagree or agree." }, { "end": 2264.44, "start": 2247.6, "text": " And yeah, bye bye." } ]
THcuTJbeD34
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
On the Measure of Intelligence by François Chollet - Part 2: Human Priors (Paper Explained)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "chollet", "keras", "google", "francois", "intelligence", "iq", "iq test", "deep neural networks", "prior", "skill", "performance", "measurement", "measure", "test", "number", "intelligent", "smart", "learning", "generalization", "ability", "experience", "humans", "evolution", "nature", "nurture", "psychometrics", "range", "adaptability", "arc", "kaggle", "difficulty", "entropy", "core knowledge", "objectness", "navigation", "contact", "agent", "goal" ]
In this part, we go much more in-depth into the relationship between intelligence, generality, skill, experience, and prior knowledge and take a close look at what priors are built into humans. This will form the basis for comparing the intelligence of humans and AI systems. OUTLINE: 0:00 - Intro & Recap 3:00 - Optimize for Generality 5:45 - Buying Skill with Data and Priors 12:40 - The Human Scope 17:30 - Human Priors 24:05 - Core Knowledge 28:50 - Comments & Conclusion Paper: https://arxiv.org/abs/1911.01547 Tim Scarfe's Video: https://youtu.be/GpWLZUbPhr0 Abstract: To make deliberate progress towards more intelligent and more human-like artificial systems, we need to be following an appropriate feedback signal: we need to be able to define and evaluate intelligence in a way that enables comparisons between two systems, as well as comparisons with humans. Over the past hundred years, there has been an abundance of attempts to define and measure intelligence, across both the fields of psychology and AI. We summarize and critically assess these definitions and evaluation approaches, while making apparent the two historical conceptions of intelligence that have implicitly guided them. We note that in practice, the contemporary AI community still gravitates towards benchmarking intelligence by comparing the skill exhibited by AIs and humans at specific tasks such as board games and video games. We argue that solely measuring skill at any given task falls short of measuring intelligence, because skill is heavily modulated by prior knowledge and experience: unlimited priors or unlimited training data allow experimenters to "buy" arbitrary levels of skills for a system, in a way that masks the system's own generalization power. We then articulate a new formal definition of intelligence based on Algorithmic Information Theory, describing intelligence as skill-acquisition efficiency and highlighting the concepts of scope, generalization difficulty, priors, and experience. Using this definition, we propose a set of guidelines for what a general AI benchmark should look like. Finally, we present a benchmark closely following these guidelines, the Abstraction and Reasoning Corpus (ARC), built upon an explicit set of priors designed to be as close as possible to innate human priors. We argue that ARC can be used to measure a human-like form of general fluid intelligence and that it enables fair general intelligence comparisons between AI systems and humans. Authors: François Chollet Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://discord.gg/4H8xxDF BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Hi there! Today we're going to continue with On the Measure of Intelligence by François Chollet. If you haven't seen last time, go watch part 1 if you're interested. This is a multi-part series on this paper. Why? Because the paper itself is very long. The main part alone is 40 pages, and it's a big wall of text. So I've opted to pull out notes, show you those notes, and divide this into multiple parts. Last time we went over sort of the history of assessing intelligence and the basics. And I know I said this time I'm going to get into the math, but I lied. I realized that there's still a lot that comes up in part 2 of the paper before we get to the actual math. So this part is going to be about the prerequisites to that, and then next time, the math. I'm sorry to disappoint anyone; you might just skip this one if you want. I do have to shout out Tim Scarfe, who of course runs the Machine Learning Street Talk podcast channel together with me and Connor Shorten. Tim has just made an entire video about this paper, On the Measure of Intelligence, and his videos are usually super high quality, higher than mine, and it covers the entire paper. So if you want to know the end of the story, or want a different take on the paper, I can definitely recommend his video. I do make a guest appearance there, so yeah, that's a given. And I will still finish the paper on this channel, just in my regular style right here. So all the options are available to you. Let's dive into part 2. So if you remember part 1, we went over what it means to be intelligent, and we differentiated basically two things: skills and abilities. A skill is how well you can do at a given task. This could be chess, or Go, or something else very measurable; an IQ test is a specific task too. So these things right here, they're all tasks. But these tasks aren't the thing we're interested in. Just because a machine is good at chess doesn't mean it's intelligent. What we want is a generalizable skill. So we want to assess how generalizable an ability is: can I throw a computer at a new problem that it has never seen before, and can it solve that? This notion of being able to solve things that you have never encountered before and weren't prepared for is going to be the basis for us to measure intelligence. So Chollet says you have to optimize directly for generality and flexibility rather than task performance if you want to build an intelligent agent. You have to build something that is not just good at a thing; it is good at getting good at things. That's almost a quote-worthy line. If you just give it a single task, the learner will just take any available shortcut. If you just say the system has to be good at chess, the developer can exploit all the tricks that make it good at chess, and the system doesn't have to be smart. Basically, if you had enough memory, you could just memorize all the moves of all chess games ever. That's why we say hard-coded chatbots are not intelligent. Hard-coded chatbots simply match your input to a database of regexes and then answer. And we're not very impressed, because as soon as I give them something that is not covered by their regexes, they fail. They just say "I don't know" or something like this.
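To make this concrete, here is a minimal sketch of such a hard-coded chatbot. This is my own illustration, not anything from the paper, and the rules and replies are made up:

```python
import re

# A hard-coded chatbot: a list of (pattern, reply) rules written by a developer.
# All of the "skill" lives in these rules, i.e. in the developer's head.
RULES = [
    (re.compile(r"\b(hello|hi)\b", re.IGNORECASE), "Hello! How can I help?"),
    (re.compile(r"\bweather\b", re.IGNORECASE), "I hear it's sunny today."),
    (re.compile(r"\b(bye|goodbye)\b", re.IGNORECASE), "Goodbye!"),
]

def reply(message: str) -> str:
    for pattern, answer in RULES:
        if pattern.search(message):
            return answer
    # Anything the developer didn't anticipate fails immediately.
    return "I don't know."

print(reply("Hi there!"))            # covered by a rule -> sensible answer
print(reply("What's 17 times 23?"))  # not covered      -> "I don't know."
```

Within the inputs its developer anticipated, the bot looks skilled; one step outside them and it falls flat, because no adaptation is happening at all.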
In fact, what is intelligent in that case is the engineer. The engineer that makes the program is intelligent. So he has this drawing, I think I've shown it last time, where you have the environment and there is this agent. If the agent is really good in the environment, you might consider it intelligent, but in Chollet's mind you also have to consider the developer of the agent. It could be that the developer is very intelligent and just builds the agent to interact with the environment in a manner that gets a lot of reward. Then the agent is very good at a skill, but the agent itself might not be intelligent. So he says the intelligence of a process is not encoded by the performance of a system, but by the fact that the same process can be applied to different tasks. So if I have a new task, a new environment E2 right here, the question is: could I throw the same agent at it, even if it hasn't seen it before? Or would I have to take the same developer, who then develops me a new agent, agent 2, that can solve the task? In the case where I can carry over the agent, that would be an argument that the agent is intelligent; but if I can't, you'd have to make the argument that the developer is the intelligent part here, which of course is the point of what he's saying right here. So hard-coded programs themselves are not intelligent. But, and this is the key point, the same goes for adding more training data. So not only is hard coding not intelligent, it's also, Chollet says, not intelligent if you simply add more training data. Think of a machine learning system that learns how to interact with these environments. Imagine you have lots of environments — environment after environment after environment — that give you a dense sampling of the environment space. You build all of them, and you train this agent, with lots of compute and lots of data, to interact well with all of them. Then, even though it has never seen environment 2, it might just be able to generalize to it, because it has been trained on every possible environment around it that is similar to it. But we wouldn't view that as intelligent either, because in a sense this skill has been bought with data. And this notion of buying skill comes up a lot in this paper. So Chollet says there are two ways to buy a skill, and buying is opposed to intelligently acquiring the skill — whenever you buy a skill, that's not intelligent, Chollet says. You can buy a skill by either hard-coding the solution or giving lots of data. And there is this spectrum, where at one end you completely hard-code a solution, and at the other end you only feed data. Here would be something like GPT-3: there are almost no priors there, it's just a big transformer, and you just throw in lots and lots of data. On this point, when GPT-3 came out, I had lots of people commenting on part one of this paper saying: isn't GPT-3 intelligent? Because it can sort of generalize to these tasks that it hasn't been trained on.
By the way, if you haven't seen the GPT-3 video, you should go watch it. Something tells me this video is popular; it might have five times as many views as any other video, so at least it's not a terrible video. But in essence, GPT-3 can solve tasks that it wasn't trained for, right? And therefore you might argue, in this definition right here, that it could be intelligent, because it can generalize. But there is this counterargument where Chollet says: maybe you have just bought that skill with lots and lots of data. Now, I actually don't know what to say to this, because it really seems like GPT-3 generalizes to tasks it has never seen before, but it has also had a lot of data. So as of right now it is not really clear where the line is here. When are we going to argue that GPT-3 is actually intelligent? It could be that it has had lots of data but is also intelligent. How are we going to make that distinction? We're going to get into the math part, but I can tell you right now the math part is so abstract that it is not really practical. It is a theoretical framework that you might be able to approximate, but there is a wishy-washy thing going on. Anyway, he basically says: okay, there's this spectrum of hard coding over here and fully learning from data over there, with all kinds of in-between methods. A CNN would be somewhere in between, because it has considerable priors built into its architecture, and over here would be something like an A* search with a learned heuristic, or things like this. You can get good with any of these things, but intelligence is orthogonal to that. So orthogonal to that spectrum is the intelligence axis; Chollet says it has nothing to do with this dimension — you can buy skill along this dimension. It's basically like a triangle, where you have hard coding, data, and then intelligence as its own axis. The hard coding refers to the priors of a system that the developer has built in, and the learning from data refers to its experience. So basically, the more experience a system has had, the more it can generalize to a new skill. That doesn't mean it's intelligent; it just means it had more experience, or respectively more priors. The example he gives is locality-sensitive hashing, which is basically a nearest-neighbor method: with enough data it can solve any task. Nearest neighbor with enough data can solve any task — I think there's a famous theorem that establishes that. So keep that in mind. That is why we basically need to pay attention to how much data went into an algorithm and sort of subtract that from our notion of how intelligent it is. That's what he says: when we measure intelligence via skill — and this is really the only thing we can measure; we can only measure how good an agent is at a given skill, anything higher than that we can't measure — we must measure a skill, but we should factor out priors and experience. That's going to come up later. We should also pay attention to generalization difficulty: how difficult the task is to solve given the experience we had. Because if the task is more difficult in a generalization sense — if it's harder to get from what we know to the point where we can solve the task — then solving it would display higher intelligence. He says solving tasks via experience and priors has nothing to do with intelligence; it's just more experience, more priors.
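To get a feeling for what "buying skill with data" looks like, here is a toy sketch of such a nearest-neighbor learner — my own illustration with a made-up task, not an example from the paper:

```python
import numpy as np

# A 1-nearest-neighbor "learner": no priors worth mentioning, no reasoning.
# Its skill scales with how densely the training data covers the task space.
class NearestNeighbor:
    def fit(self, X, y):
        self.X, self.y = np.asarray(X), np.asarray(y)
        return self

    def predict(self, x):
        # Answer with the label of the closest memorized training example.
        distances = np.linalg.norm(self.X - np.asarray(x), axis=1)
        return self.y[np.argmin(distances)]

# Toy task: label a 2D point by whether its coordinates have the same sign.
rng = np.random.default_rng(0)
X_train = rng.uniform(-1, 1, size=(10_000, 2))  # dense sampling of the space
y_train = (np.sign(X_train[:, 0]) == np.sign(X_train[:, 1])).astype(int)

model = NearestNeighbor().fit(X_train, y_train)
print(model.predict([0.3, 0.4]))  # almost surely 1 -- but only thanks to data density
```

Replace the 10,000 training points with 10 and the predictions degrade badly, even though the mechanism is identical: the skill was bought with data coverage, not earned through any generalization power.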
So he goes back to human intelligence and asks: how universal is human intelligence actually? And he gets to the point where he says it's not very universal. Because, first of all, there's the no free lunch theorem, which says that any two optimization algorithms will perform the same if you integrate across all possible problems. So it's even questionable whether something like universal intelligence could exist at all. And if we look at the G factor, which is the established measure for human intelligence right now, these tests only encompass tasks that humans can perform and understand. Of course — and I would add, they only encompass tasks where humans actually differ from each other. Because if you're making these tests and you have a human for 40 minutes or so, you're not going to give them tasks on which people don't differ from one another. So it's going to be a very, very small subset of tasks, exactly hard enough such that some humans can solve them and some humans can't. They're going to be understandable by humans — ideally by any human. You shouldn't need special prior knowledge, like having studied biology or holding a higher degree in math, to answer the questions. So the reference frame for the G factor is very much a frame of human values. And he compares this to physical fitness. If we call someone physically fit, what do we mean? We mean this general, abstract concept of physical fitness, which is not really one skill. You can measure humans in how fast they run, how high they jump, how fast they swim, how much they can lift, and so on. Across that, you'll generally find that all of these things correlate, and the result is what we call physical fitness. But it's not like physical fitness is a universal measure; we only measure humans at tasks that humans can solve and differ at. So physical fitness is very human-centric, and so is intelligence. That's the analogy, and I think it's a very good one. He gives this example where humans are, for instance, very good at shortest-path traveling salesman problems, up to a certain number of nodes in the graph — humans can solve them to a very good degree. But as soon as you go to the longest-path version, which at first glance shouldn't be that much harder, humans are terrible at it. Absolutely terrible. And that probably has to do with the fact that we have a prior, and the prior is much more adapted to shortest-path problems than to longest-path problems. In our evolutionary history, it made a lot of sense to build a navigational unit into the brain that calculates shortest routes, but your fitness is not much affected by being able to calculate the longest path, unless you really want to avoid something — and, yeah, I don't know when you'd really want to take the longest route. So that should be taken into account, and it shows that intelligence is a very human-centric concept.
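As a side note on the algorithmic side of this example: for the plain (non-travelling-salesman) graph versions of these two problems, the asymmetry is real — shortest path has fast algorithms, while the longest simple path is NP-hard in general. Here is a small sketch of my own, on a made-up graph:

```python
from collections import deque
from itertools import permutations

# Unweighted, undirected graph as adjacency lists (a made-up example).
GRAPH = {"A": ["B", "C"], "B": ["A", "C", "D"], "C": ["A", "B", "D"],
         "D": ["B", "C", "E"], "E": ["D"]}

def shortest_path(start, goal):
    # Breadth-first search: finds a shortest path in O(V + E).
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in GRAPH[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])

def longest_simple_path(start, goal):
    # Longest simple path is NP-hard in general; this brute force tries every
    # ordering of intermediate nodes -- fine for 5 nodes, hopeless for 50.
    best = None
    others = [v for v in GRAPH if v not in (start, goal)]
    for r in range(len(others) + 1):
        for mid in permutations(others, r):
            path = [start, *mid, goal]
            if all(b in GRAPH[a] for a, b in zip(path, path[1:])):
                if best is None or len(path) > len(best):
                    best = path
    return best

print(shortest_path("A", "E"))        # ['A', 'B', 'D', 'E']
print(longest_simple_path("A", "E"))  # ['A', 'B', 'C', 'D', 'E']
```

The brute-force search for the longest path blows up factorially with the number of nodes, while the breadth-first search stays cheap — which fits nicely with the idea that evolution had good reason to ship a fast shortest-route module and no longest-route module.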
So when we talk about general artificial intelligence, what we mean is tied to a scope of problems. And that's going to be important: we can only measure intelligence in this framework with respect to a scope of problems, and the scope that we consider is the human scope. So why human-centric? Because we must have a scope, and the human scope is the only meaningful scope. It's the only one we know: there is one thing in the universe that we think is intelligent, that we know of, and that's humans. Or, to a degree, one could make the argument that it's biological life on Earth in general — but we measure intelligence with human-scope intelligence tests, and that's the thing we have. We don't have anything else. So we ask ourselves: what are the priors of humans? What has evolution built into humans? Chollet decides on three levels of priors. First, the low-level priors, which are like reflexes: if I pinch you, you flinch; if I shine a bright light at your eyes, you close them; and so on. Chollet says these are not very interesting, because we feel they have nothing to do with intelligence. Then there are — I'm going to take this one out of order — the knowledge priors. The knowledge priors are things like the fact that there are objects in this world. That's a knowledge prior that humans have; the notion that the world consists of objects and that you can interact with these objects is built into you. Then the navigation capability: we say, okay, navigate there, and humans can do it very well, as I already said — they're very good at shortest-path problems and so on. Intuitive navigation is built into you by evolution; that's a prior. Goal-directedness: humans generally view the world in terms of agents, and in terms of agents having goals, like chasing after something. He makes this example: if we observe something, we often want to frame it in terms of agents that pursue goals. And as soon as we can do that, it allows us to predict the world to some degree, which is probably why this evolved — a very valuable skill. Social intuition, and things like counting and basic arithmetic, are also built into humans. And Chollet says that if we measure intelligence, this is what we must account for. These things should not count towards intelligence, because they're already built into humans. If we measure human intelligence, we wouldn't test it by making people count, because counting is built in. Now, the third kind of priors that humans have are the meta-learning priors. The meta-learning priors are basically your ability to learn something — and that ability itself is not learned; no one has to teach you how to learn. I guess these are like learning strategies and so on. You as a human are incredibly good at picking up new skills, and the skill of picking up new skills is built into you. Among these priors are the assumptions that the world is a hierarchical and causal place. That's how you see the world, and because you see the world like this, you can pick up new skills very quickly: through explaining the world, you can pick up new skills. And that is usually what we mean by intelligence: if someone sees a new, never-before-encountered situation, thinks about it — which basically means interpreting the world in this hierarchical and causal way — and then is able to come up with a skill that solves the problem, we generally view that as intelligence. So if we want to measure intelligence, we should measure this: basically, how good you are at picking up new skills, while accounting for these priors when we compare to humans. So Chollet says tests of intelligence should be founded on human-like knowledge priors.
He basically makes the case that if we build machines and compare them to humans in terms of intelligence, we should build these things into the machines. For example, we should give them a counting module that they can use, like a calculator app — we should just build that in and make basic arithmetic available for the agent to use; it shouldn't have to learn this. We should build in the notion that there are objects; we should build in a basic navigation module, and so on, for the agent to use. You can almost think of it like this: your reinforcement learning agent A consists of, say, a big neural network with lots of layers, but each layer also has access to the calculator app right here, and maybe to memory — I would guess memory is one of those priors as well. Or it has access to a navigation prior; let's draw a little world map right here, like Google Maps, and the agent can sort of query that. If we want to build something that's intelligent as compared to humans, we should build in the navigation, at least to the extent that humans can do it. I mean, it's cool to have a machine learning system learn navigation, but that makes it less comparable. So either we should match the humans, or we should account for the difference — that is, let the difference between the humans and the machines enter into our measure of intelligence. Further, if you test intelligence, all of these priors should be explicitly described, and the test should not rely on additional priors. That's often a problem in IQ tests: there are so many priors that are not explicitly described, because we just think, yeah, every human can count — we don't need to write down that our prior assumption is that humans can count or understand language. And that, I hear, is sometimes a real problem with IQ tests: the better you understand language, the better you score, and therefore the tests are more a measure of language ability than of intelligence. I think the psychometrics community is on the same path right here.
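To picture what explicitly declaring and hard-coding such priors could look like in software, here is a purely illustrative sketch. The module names and interfaces are my own assumptions — Chollet doesn't specify any of this:

```python
from dataclasses import dataclass, field

class Arithmetic:
    """Built-in counting/arithmetic prior -- the 'calculator app'."""
    def add(self, a: float, b: float) -> float:
        return a + b
    def compare(self, a: float, b: float) -> bool:
        return a > b

class Navigator:
    """Built-in navigation prior -- would wrap a routine like the BFS above."""
    def shortest_route(self, start, goal, graph):
        ...  # omitted; same idea as shortest_path() earlier

@dataclass
class Agent:
    # Learned part: parameters the training process is allowed to change.
    policy_params: dict = field(default_factory=dict)
    # Hard-coded part: prior modules every layer/step can query for free.
    arithmetic: Arithmetic = field(default_factory=Arithmetic)
    navigator: Navigator = field(default_factory=Navigator)
    memory: list = field(default_factory=list)

    def act(self, observation) -> str:
        # A real policy would decide here whether to query a prior module,
        # read/write memory, or act directly on the observation.
        self.memory.append(observation)
        return "noop"

agent = Agent()
print(agent.arithmetic.add(2, 3))  # skill that costs the agent no learning at all
```

Under this view, only the learned policy parameters would count towards the system's intelligence; the bundled modules are given for free, just as counting and navigation are given to humans.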
He then goes into this theory of human core knowledge, where he basically expands on what the human priors are. This core knowledge theory has four different categories. The first one is objectness and elementary physics, which means you as a human have inherent knowledge that there are objects, as I said, but also of elementary physics: that stuff sticks together, persistence of objects. Now, some people say this is learned — young children do not know object persistence, and that's why peekaboo is so interesting to them, because they think you're gone, right? If a parent goes to the toilet, they really think the parent doesn't exist anymore, and only later do they learn object persistence. But I would question whether that's actually a learned thing, or simply a built-in module that gets switched on at that particular point, because evolution probably deems it unnecessary to waste resources on that module before then. So I would be cautious about saying that object persistence and all of these things are, in fact, learned. I would argue more that they are built in and simply switched on at a given time during development. We also know that for things like object persistence, you can almost pinpoint the month of a human's life when it switches on — it's that regular. And if this were really learned, you'd have to assume a very regular structure in the training data distribution that a baby gets to experience. So I'm not sure here — but, you know, that's my take; the theory itself is Chollet's. Then there is contact interaction: the fact that you can interact with objects by contacting them, or that objects can interact with each other by being next to each other. That's built into you as well; you don't have to learn it. If you compare this to an RL agent that has to learn all of this from pixels, you can see why Chollet has a problem with the current direction of deep learning claiming that things are intelligent there — the comparison is just not valid. The second core knowledge category is agentness and goal-directedness, which we have already discussed. Then natural numbers and elementary arithmetic: with small numbers, you can add, subtract, compare, sort, that sort of thing. And finally elementary geometry and topology, which covers orientation and navigation, distance, and whether something is inside or outside of a room, and so on. Now I have heard — and this might be a myth — that there are languages where relative directions like left and right don't exist; speakers always use absolute directions, and these people automatically have a much better sense of orientation at all times. If they get into a building, they can always tell you where north is. I don't know, maybe that's a myth, but it's pretty cool and just shows you the flexibility of something like orientation: sure, we can all orient, but it seems like by simply learning a different language you can supercharge that drive for orientation. So again, it feels like there is a lot of nature versus nurture going on here: yes, you probably have a built-in tendency to learn objectness and physics and so on, but a lot of it might also be learned in addition, or you might be able to supercharge one of these modules inside of you. I think there's lots of room for discussion here. So he says tests for intelligence should only involve core knowledge, and the AI systems taking these tests should hard-code that core knowledge. Basically what he said before: we should build these core knowledge things into the AI systems if we want to compare them to humans, because if they have these things — and only these things — built in, then they have roughly the same starting point as a human. Now, this is where I sort of disagree, because the notion that we can ever explicitly list the priors that humans have seems a bit ridiculous to me. We can sort of approximate this at first, but we will never exhaustively and exactly describe what the priors are and what is learned — we've seen this with orientation: how much of that is learned versus prior? And secondly, even if we could list them pretty exactly, who says that we can program them into an agent exactly, such that it can make use of them? That's an even harder challenge.
So I'm not so sure about this bit that AI systems should hard-code core knowledge. He's going to try that with the ARC challenge, which we're going to look at in the last part of this series. It's a cool test for intelligence, I admit, but I doubt that anyone really manages to hard-code the core knowledge. And he says tests should only involve core knowledge — we're going to see how valid that claim is for his own ARC challenge. Now, luckily, in the math part that's coming up, he doesn't strictly rely on these things; he gives us a way to compare which of two systems is more intelligent even if their priors are different. Alright, so that was part two of this series. It's already been a while, and this is only part two, but I do promise that next time we're going to get into the math. I hope you liked this, and go check out Tim Scarfe's video on the same topic — as I said, usually much higher quality videos than mine. And I'll see you next time. Bye bye.
[ { "end": 5.04, "start": 0, "text": " Hi there! Today we're going to continue with On the Measure of Intelligence by" }, { "end": 9.92, "start": 5.04, "text": " François Chollet. Now if you remember last time, if you haven't seen last time," }, { "end": 15.36, "start": 9.92, "text": " go watch part 1 if you're interested. This is a multi-part series on this" }, { "end": 20.36, "start": 15.36, "text": " paper. Why? Because the paper itself is very long. It's 40 pages, the main part," }, { "end": 25.44, "start": 20.36, "text": " and it's a big wall of text. So I've opted to basically pull out notes and" }, { "end": 30.96, "start": 25.44, "text": " show you the notes that I have pulled out and to divide this into multiple parts." }, { "end": 34.92, "start": 30.96, "text": " So last time we went over sort of the history of assessing intelligence and" }, { "end": 40.72, "start": 34.92, "text": " the basics. And I know I said this time I'm going to get into the math, but I" }, { "end": 46.84, "start": 40.72, "text": " lied. So I realized that there's still a lot that comes up in part 2 of the paper" }, { "end": 51.78, "start": 46.84, "text": " before we get into the actual math. So this part is going to be about the" }, { "end": 57.800000000000004, "start": 51.78, "text": " prerequisites to that and then next time math. I'm sorry to disappoint anyone." }, { "end": 64, "start": 57.800000000000004, "text": " You might just skip this one if you want. I do have to shout out Tim Scarf who of" }, { "end": 69.28, "start": 64, "text": " course runs the Machine Learning Street Talk channel podcast with me and Connor" }, { "end": 74.4, "start": 69.28, "text": " Shorten together. Tim has just made an entire video about this paper, about On" }, { "end": 79, "start": 74.4, "text": " the Measure of Intelligence, and his videos are usually like super high" }, { "end": 85.4, "start": 79, "text": " quality, like higher than mine, and it's the entire paper. So if you want to, you" }, { "end": 89.2, "start": 85.4, "text": " know, know the end of the story or, you know, have a different take on the paper," }, { "end": 96.8, "start": 89.2, "text": " definitely can recommend his video. I do make a guest appearance there, so yeah," }, { "end": 102.92, "start": 96.8, "text": " that's a given. And I will still finish the paper on this channel just in my" }, { "end": 109.72, "start": 102.92, "text": " regular in this style right here. So all the options are available to you. Let's" }, { "end": 116.2, "start": 109.72, "text": " dive into part 2. So if you remember part 1, we sort of went over what it" }, { "end": 121, "start": 116.2, "text": " means to be intelligent and we differentiated basically two things, which" }, { "end": 130.44, "start": 121, "text": " are skills and abilities. So a skill is how well you achieve a given task or how" }, { "end": 137.16, "start": 130.44, "text": " well you can do in a given task. So this could be chess or, you know, Go or" }, { "end": 142, "start": 137.16, "text": " something very, very measurable. An IQ test is a specific task. So these" }, { "end": 146.72, "start": 142, "text": " things right here, they're all tasks. But these tasks aren't the thing we're" }, { "end": 150.12, "start": 146.72, "text": " interested in. Just because a machine is good at chess doesn't mean it's" }, { "end": 157.48, "start": 150.12, "text": " intelligent. And what we want is sort of a generalizable skill. 
So we want to" }, { "end": 163.79999999999998, "start": 157.48, "text": " assess how generalizable is an ability. So can I throw a computer at a new" }, { "end": 167.83999999999997, "start": 163.79999999999998, "text": " problem that it has never seen before and it can solve that. And that's going" }, { "end": 172.48, "start": 167.83999999999997, "text": " to be this generalizability, this notion of can you solve things that you have" }, { "end": 176.88, "start": 172.48, "text": " never encountered before and weren't prepared for. That is going to be the" }, { "end": 183.95999999999998, "start": 176.88, "text": " basis for us to measure intelligence. So Chollet says you have to optimize" }, { "end": 189.96, "start": 183.96, "text": " directly for generality and flexibility rather than task performance if you want" }, { "end": 195.48000000000002, "start": 189.96, "text": " to build an intelligent agent. You have to sort of build something that is" }, { "end": 202.12, "start": 195.48000000000002, "text": " not just good at a thing. It is good at getting good at things. That's almost" }, { "end": 210.16, "start": 202.12, "text": " a quote worthy thing. So if you just give it a single task, the learner" }, { "end": 214.2, "start": 210.16, "text": " will just take any available shortcut. So if you just say you have to be" }, { "end": 219.56, "start": 214.2, "text": " good at chess, the developer of a system can exploit all the tricks" }, { "end": 223.84, "start": 219.56, "text": " that make you good at chess. And you don't have to be smart. You just know" }, { "end": 227.51999999999998, "start": 223.84, "text": " basically if you had enough memory you could just memorize all the moves of all" }, { "end": 232.96, "start": 227.51999999999998, "text": " chess games ever. That's why we say hard-coded chatbots are not" }, { "end": 238.44, "start": 232.96, "text": " intelligent. So hard-coded chatbots, they simply match your input to a database" }, { "end": 243.44, "start": 238.44, "text": " of reg exes and then they answer. And we're not very impressed because as soon" }, { "end": 249.12, "start": 243.44, "text": " as I give them something that is not covered by their reg exes, they" }, { "end": 254.04, "start": 249.12, "text": " fail. They just say I don't know or something like this. In fact what is" }, { "end": 260.44, "start": 254.04, "text": " intelligent is the engineer in that case. So the engineer that makes the program" }, { "end": 266.15999999999997, "start": 260.44, "text": " is intelligent. So he has this drawing, I think I've shown it last time, where" }, { "end": 274.08000000000004, "start": 266.16, "text": " you have the environment and there is this agent. And if the agent" }, { "end": 280.12, "start": 274.08000000000004, "text": " is really good with the environment, you might consider that intelligent, but in" }, { "end": 285.8, "start": 280.12, "text": " Sholeh's mind you have to also consider here the developer of the agent. It could" }, { "end": 290.72, "start": 285.8, "text": " be that the developer is very intelligent and just builds the agent to" }, { "end": 294.68, "start": 290.72, "text": " interact with the environment in a matter that it gets a lot of reward. It's" }, { "end": 301.04, "start": 294.68, "text": " very good at a skill, but the agent itself might not be intelligent. 
So it" }, { "end": 307.32, "start": 301.04, "text": " says intelligence, the intelligence of a process is not encoded by the performance" }, { "end": 311.96000000000004, "start": 307.32, "text": " of a system, but by the fact that the same process can be applied to different" }, { "end": 317.72, "start": 311.96000000000004, "text": " tasks. So in this case if I have a new task, a new environment, so E2 right here," }, { "end": 322.76, "start": 317.72, "text": " the question is could I throw the same agent at that even if it hasn't seen it" }, { "end": 328.59999999999997, "start": 322.76, "text": " before? Or would I be able to take the same developer that develops me a new" }, { "end": 334.44, "start": 328.59999999999997, "text": " agent, agent 2, that then can solve the task? In the case where I can throw" }, { "end": 338.59999999999997, "start": 334.44, "text": " over the agent, that would make an argument that the agent is intelligent," }, { "end": 343.08, "start": 338.59999999999997, "text": " but if I can't you'd have to make the argument that the developer is the" }, { "end": 347.64, "start": 343.08, "text": " intelligent part here, which of course is the point of what he's saying right" }, { "end": 354.8, "start": 347.64, "text": " here. So hard-coded programs themselves are not intelligent, but, and this is the" }, { "end": 360.3, "start": 354.8, "text": " case, the same counts for adding more training data. So not only is the hard" }, { "end": 366.52, "start": 360.3, "text": " coding not intelligent, it's also, Chollet says, not intelligent if you simply add" }, { "end": 371.12, "start": 366.52, "text": " more training data. So a machine learning system that sort of learns to how to" }, { "end": 377.32, "start": 371.12, "text": " interact with these environments. If you can imagine that you have lots" }, { "end": 382.52, "start": 377.32, "text": " of environments, environment, environment, environment, environment, and so on, that" }, { "end": 387.2, "start": 382.52, "text": " give you a dense sampling of the environment space. So you have all of" }, { "end": 391.68, "start": 387.2, "text": " these environments, you build all of them, and you train this agent to interact" }, { "end": 396.15999999999997, "start": 391.68, "text": " well with all of them. So you give it lots of compute, lots of data, and the" }, { "end": 401.32, "start": 396.15999999999997, "text": " environments are really a dense sampling of the environment space. It will be able," }, { "end": 405, "start": 401.32, "text": " even though it has never seen environment 2, this environment right" }, { "end": 409.76, "start": 405, "text": " here, it might be just able to generalize to this environment, given that it has" }, { "end": 415.12, "start": 409.76, "text": " been trained on all the environments here, like it has been trained on every" }, { "end": 418.8, "start": 415.12, "text": " possible environment around this environment that is similar to this" }, { "end": 425.36, "start": 418.8, "text": " environment, it could generally generalize to environment 2, but also we" }, { "end": 429.64, "start": 425.36, "text": " wouldn't view that as intelligent, because in a sense this skill has been" }, { "end": 436.08, "start": 429.64, "text": " bought with data. And this notion of buying skill comes up a lot in" }, { "end": 442.03999999999996, "start": 436.08, "text": " this paper. 
So Suley says there are two ways to buy a skill, and by buying he" }, { "end": 448.53999999999996, "start": 442.03999999999996, "text": " basically means you don't buy as opposed to intelligently solving the" }, { "end": 453.76, "start": 448.53999999999996, "text": " skill. So whenever you buy a skill, that's not intelligent, Suley says, and you can" }, { "end": 458.2, "start": 453.76, "text": " buy a skill by either hard coding the solution or giving lots of data. And" }, { "end": 463.24, "start": 458.2, "text": " there is like this spectrum where you hard code, completely hard code a" }, { "end": 469.03999999999996, "start": 463.24, "text": " solution, and here is where you completely only feed data. So here would" }, { "end": 474.91999999999996, "start": 469.03999999999996, "text": " be something like GPT-3. There are almost no priors there, it's just a" }, { "end": 480.52, "start": 474.91999999999996, "text": " transformer, big transformer, and you just throw in data, lots and" }, { "end": 484.08, "start": 480.52, "text": " lots and lots and lots and lots of data. So in this measure, last time I've had" }, { "end": 489.15999999999997, "start": 484.08, "text": " lots of people, when GPT-3 came out, I've had people commenting on the part one of" }, { "end": 495.68, "start": 489.15999999999997, "text": " this paper saying, isn't GPT-3 intelligent? Because it can sort of" }, { "end": 499.38, "start": 495.68, "text": " generalize to these tasks that it hasn't been trained on. By the way, if you" }, { "end": 505.88, "start": 499.38, "text": " haven't seen the GPT-3 video, you should go watch it. Something tells me this" }, { "end": 510.76, "start": 505.88, "text": " video is popular, it might be five times as many views as any other video." }, { "end": 518.52, "start": 510.76, "text": " So at least it's not a terrible video. But in essence, GPT-3 can solve these" }, { "end": 523.08, "start": 518.52, "text": " tasks that it wasn't trained for, right? And therefore you might argue in this" }, { "end": 526.84, "start": 523.08, "text": " definition right here, it could be intelligent because it can generalize." }, { "end": 532.48, "start": 526.84, "text": " But there is this counteraction where Shirley says, maybe you have just bought" }, { "end": 537.68, "start": 532.48, "text": " that skill with lots and lots of data. Now I actually don't know what to say to" }, { "end": 543.4799999999999, "start": 537.68, "text": " this because I mean it really seems like GPT-3 generalizes to tasks it has" }, { "end": 550.4799999999999, "start": 543.4799999999999, "text": " never seen before, but also it has had a lot of data. And so as of right now it is" }, { "end": 555.88, "start": 550.4799999999999, "text": " not really clear where the line is here. When are we going to argue" }, { "end": 560.5999999999999, "start": 555.88, "text": " that GPT-3 is actually... It could be that it has had lots of data, but also" }, { "end": 565.7199999999999, "start": 560.5999999999999, "text": " it is intelligent. How are we going to make that distinction? And I guess we're" }, { "end": 570.5600000000001, "start": 565.72, "text": " going to get into the math part, but I can tell you right now the math part is" }, { "end": 577.64, "start": 570.5600000000001, "text": " so abstract as it is not really practical. It is like a theoretical" }, { "end": 583.12, "start": 577.64, "text": " framework that you might be able to approximate. 
But you know there is" }, { "end": 587.96, "start": 583.12, "text": " like a wishy-washy thing going on. But he basically says, okay there's this" }, { "end": 593.72, "start": 587.96, "text": " spectrum of hard coding over here and then fully learning from data and with" }, { "end": 597.28, "start": 593.72, "text": " all of these methods and you know in between methods. Like a CNN would be" }, { "end": 601.08, "start": 597.28, "text": " here because it has like considerable priors because of its architecture and" }, { "end": 605.96, "start": 601.08, "text": " so on. And over here would be something like an A star search with like a" }, { "end": 612.52, "start": 605.96, "text": " learned heuristic or things like this. You can get good with any of" }, { "end": 617.88, "start": 612.52, "text": " these things, but the intelligence is orthogonal to that. So orthogonal to that" }, { "end": 624.84, "start": 617.88, "text": " is the intelligence axis. It has nothing... Sholay says this has nothing to do with" }, { "end": 629.36, "start": 624.84, "text": " this dimension. You can buy skill with this, but it's basically like it's like a" }, { "end": 636.04, "start": 629.36, "text": " triangle almost sort of where you have hard coding, data and then intelligence." }, { "end": 646.68, "start": 636.04, "text": " And it's like its own axis. So the hard coding refers to the priors of a system" }, { "end": 653.52, "start": 646.68, "text": " that the developer has basically built in. And the learning from data refers to" }, { "end": 658.2399999999999, "start": 653.52, "text": " its experience. So basically the more experience a system has had, the more" }, { "end": 662.7199999999999, "start": 658.2399999999999, "text": " it can generalize to a new skill. That doesn't mean it's intelligent, it just" }, { "end": 669.52, "start": 662.7199999999999, "text": " means had more experience or respectively more priors. So the example" }, { "end": 672.8, "start": 669.52, "text": " gives a locality-sensitive hashing which is basically like a nearest neighbor" }, { "end": 678.7199999999999, "start": 672.8, "text": " method with enough data can solve any task. Like nearest neighbor enough" }, { "end": 684.4399999999999, "start": 678.7199999999999, "text": " data can solve any task. I think it's a famous theorem that" }, { "end": 689.56, "start": 684.4399999999999, "text": " establishes that. So keep that in mind. That is why we basically need to" }, { "end": 695.4399999999999, "start": 689.56, "text": " pay attention to how much data went into this algorithm and sort of" }, { "end": 703.6800000000001, "start": 695.44, "text": " subtract that from our notion of how good, how intelligent it is. Yeah that's" }, { "end": 706.96, "start": 703.6800000000001, "text": " what he says. When we measure intelligence via skill, and this is" }, { "end": 710.6800000000001, "start": 706.96, "text": " really the only thing we can measure, we can only measure how good an agent is at" }, { "end": 715.5200000000001, "start": 710.6800000000001, "text": " a given skill. Anything higher than that we can't measure. So we must measure a" }, { "end": 721.96, "start": 715.5200000000001, "text": " skill, but we should factor out priors and experience. And that's gonna come up" }, { "end": 725.8000000000001, "start": 721.96, "text": " later. We should also pay attention to generalization difficulty. So generally" }, { "end": 732.6, "start": 725.8000000000001, "text": " how difficult is the task to solve given the experience we had. 
Because if the" }, { "end": 738.2, "start": 732.6, "text": " task is more difficult in a generalization sense, so if it's harder to" }, { "end": 743.52, "start": 738.2, "text": " from what we know get to the point where we can solve the task, then that would" }, { "end": 751, "start": 743.52, "text": " display higher intelligence. Yeah it says solving tasks via experience and priors" }, { "end": 757.12, "start": 751, "text": " has nothing to do with intelligence. It's just more experience, more priors. So" }, { "end": 762.88, "start": 757.12, "text": " goes back to human intelligence. It says how universal is actually human" }, { "end": 767.2, "start": 762.88, "text": " intelligence. And it gets to the to the point where he says it's not very" }, { "end": 772.04, "start": 767.2, "text": " universal. Because first of all there's no free lunch theorem where it says" }, { "end": 777.56, "start": 772.04, "text": " any two optimization algorithms will perform the same if you integrate across" }, { "end": 782.04, "start": 777.56, "text": " all possible problems. So it's even questionable whether something like" }, { "end": 786.16, "start": 782.04, "text": " general intelligence could even like universal intelligence could even exist." }, { "end": 792.1199999999999, "start": 786.16, "text": " But if we look at the DG factor which is used sometimes to assess human" }, { "end": 796.8399999999999, "start": 792.1199999999999, "text": " intelligence, or is the measure for human intelligence that is established" }, { "end": 802.52, "start": 796.8399999999999, "text": " right now, then they only encompass tasks that humans can perform and" }, { "end": 807.4799999999999, "start": 802.52, "text": " understand. Of course and I would say they only encompass tasks where human" }, { "end": 811.5600000000001, "start": 807.48, "text": " actually differ with respect to each other. Because if you're making these" }, { "end": 815.64, "start": 811.5600000000001, "text": " tests and you have a human for 40 minutes or so, you're not going to give" }, { "end": 820.44, "start": 815.64, "text": " the humans tasks where they don't differentiate from one another. So it's" }, { "end": 825.8000000000001, "start": 820.44, "text": " going to be a range like a very very small subset of tasks that are exactly" }, { "end": 829.96, "start": 825.8000000000001, "text": " hard enough such that a couple of humans can't solve them, a couple of humans" }, { "end": 836.64, "start": 829.96, "text": " can't solve them. They're going to be you know understandable by humans, and" }, { "end": 841.8, "start": 836.64, "text": " understandable ideally by any human. You don't have to have special" }, { "end": 846.92, "start": 841.8, "text": " pre-knowledge, not have studied biology in order to answer the questions, or not" }, { "end": 853.48, "start": 846.92, "text": " have a higher degree in math or something. So the reference for the" }, { "end": 860.76, "start": 853.48, "text": " G factor is very much a reference frame of human values. And he compares this to" }, { "end": 866.52, "start": 860.76, "text": " physical fitness. So if we call someone physically fit, what do we mean?" }, { "end": 872.4, "start": 866.52, "text": " We mean this general abstract concept of physical fitness where it's" }, { "end": 877.6, "start": 872.4, "text": " not really one skill. So you can measure humans in how fast they run, how high" }, { "end": 883.76, "start": 877.6, "text": " they jump, how fast they swim and so on, and how much they can lift. 
And" }, { "end": 888.96, "start": 883.76, "text": " across that you'll find generally that all of these things correlate and the" }, { "end": 894.72, "start": 888.96, "text": " result we call physical fitness. But it's not like physical fitness is a universal" }, { "end": 900.64, "start": 894.72, "text": " measure. So we only measure humans at tasks that humans can solve and are" }, { "end": 906.84, "start": 900.64, "text": " different at. So the physical fitness is very human centric and so is" }, { "end": 913.1600000000001, "start": 906.84, "text": " intelligence. So that's the analogy. I think it's a very good one. He" }, { "end": 918.6, "start": 913.1600000000001, "text": " says he gives this example where humans are for example very very good at" }, { "end": 923.88, "start": 918.6, "text": " shortest path traveling salesman problems. Give up to a certain number of" }, { "end": 929.52, "start": 923.88, "text": " nodes in the graph. Humans can solve them extremely well to like very good degree." }, { "end": 934.68, "start": 929.52, "text": " But as soon as you go to a longest path problem, which shouldn't be that much" }, { "end": 939.6, "start": 934.68, "text": " harder if you just look at it from an algorithmic perspective, but" }, { "end": 944.4399999999999, "start": 939.6, "text": " humans are terrible at it. Absolutely terrible. And that probably has to do" }, { "end": 951.68, "start": 944.4399999999999, "text": " with the fact that we have a prior. And the prior is much much more adapt to" }, { "end": 956.64, "start": 951.68, "text": " shortest path problems than longest path problems. Because in our history, in our" }, { "end": 961.4, "start": 956.64, "text": " evolutionary history, it made a lot of sense to build in a navigational unit in" }, { "end": 967.52, "start": 961.4, "text": " the brain that calculates shortest routings. But it's probably like the" }, { "end": 972.1999999999999, "start": 967.52, "text": " your fitness is not very much affected by you being able to calculate the" }, { "end": 978.92, "start": 972.1999999999999, "text": " longest path unless you want to really avoid something. And just" }, { "end": 985.16, "start": 978.92, "text": " yeah I don't know. You really want to walk in these shoes. So that should be" }, { "end": 990.4799999999999, "start": 985.16, "text": " taken into effect and it shows that intelligence is a very human centric" }, { "end": 997.52, "start": 990.4799999999999, "text": " concept. So when we talk about general artificial intelligence, what we" }, { "end": 1001.92, "start": 997.52, "text": " mean is it's tied to a scope of problems. And that's going to be important" }, { "end": 1006.68, "start": 1001.92, "text": " that we can only measure intelligence in this framework with respect to a scope" }, { "end": 1017.2399999999999, "start": 1006.68, "text": " of problems and the scope that we consider is the human scope. So why human" }, { "end": 1022.1999999999999, "start": 1017.2399999999999, "text": " centric? Because we must have a scope and the human scope is the only meaningful" }, { "end": 1028.08, "start": 1022.1999999999999, "text": " scope. It's the only one we know that you know there is one thing in the universe" }, { "end": 1033.24, "start": 1028.08, "text": " that we think is intelligent that we know of and that's humans. 
Or to a" }, { "end": 1038.04, "start": 1033.24, "text": " degree like what can make the argument it's general biological life on earth" }, { "end": 1045.04, "start": 1038.04, "text": " but we measure intelligence with a human scope intelligence test and that's the" }, { "end": 1052.88, "start": 1045.04, "text": " thing we have. We don't have anything else. So we ask ourselves what are" }, { "end": 1061.56, "start": 1052.88, "text": " priors of humans? What has evolution built into humans? And Chollet decides on" }, { "end": 1067.8, "start": 1061.56, "text": " three levels of priors. The low level priors which are like reflexes. So if I" }, { "end": 1074.6399999999999, "start": 1067.8, "text": " pinch you, you flick like if I flick you, you move back and if I shine a" }, { "end": 1080.84, "start": 1074.6399999999999, "text": " bright light at your eyes you close them and so on. So this Chollet says it's not" }, { "end": 1085.08, "start": 1080.84, "text": " very interesting because there's nothing to do like we feel that it has" }, { "end": 1090.84, "start": 1085.08, "text": " nothing to do with intelligence. And then there are, I'm gonna skip this one," }, { "end": 1096.6399999999999, "start": 1090.84, "text": " say there are knowledge priors. So the knowledge priors are you know things" }, { "end": 1103.6799999999998, "start": 1096.6399999999999, "text": " like the fact that there are objects in this world. That's a knowledge prior that" }, { "end": 1108.1999999999998, "start": 1103.6799999999998, "text": " the human have. That's built into you. The notion that the world consists of" }, { "end": 1114.12, "start": 1108.1999999999998, "text": " objects and you can interact with these objects. The" }, { "end": 1120.82, "start": 1114.12, "text": " navigation capability. We say okay navigate there. Humans can do it very" }, { "end": 1125.4399999999998, "start": 1120.82, "text": " very well as I already said. They're very good at shortest path problems and so on." }, { "end": 1133, "start": 1125.4399999999998, "text": " Intuitive navigation. That's built into you by evolution. That's a prior. Goal" }, { "end": 1139.2, "start": 1133, "text": " directedness. Humans generally view the world in terms of agents and in terms of" }, { "end": 1145.72, "start": 1139.2, "text": " agents having a goal. Like chasing after something. He makes this example and if" }, { "end": 1150.32, "start": 1145.72, "text": " we observe something we often want to frame it in terms of agents that pursue" }, { "end": 1155.4199999999998, "start": 1150.32, "text": " goals. And as soon as we can do that, that allows us to some degree to predict the" }, { "end": 1161.2, "start": 1155.4199999999998, "text": " world. And that's probably why this evolved. So very valuable skill. Social" }, { "end": 1167.6, "start": 1161.2, "text": " intuition and things like counting, like basic arithmetic, are built into humans." }, { "end": 1172.28, "start": 1167.6, "text": " And Choulet says if we measure intelligence this is what we must account" }, { "end": 1177.96, "start": 1172.28, "text": " for. So these things should not count towards intelligence because" }, { "end": 1182.52, "start": 1177.96, "text": " they're already built into humans if we measure human intelligence. So you" }, { "end": 1188.32, "start": 1182.52, "text": " wouldn't test human intelligence by making them count because" }, { "end": 1194.08, "start": 1188.32, "text": " that's built into the humans. 
Now the third kind of priors that human have are" }, { "end": 1199.76, "start": 1194.08, "text": " meta learning priors. And the meta learning priors is basically your" }, { "end": 1204.32, "start": 1199.76, "text": " ability to learn something. This is your meta learning prior. It's just your" }, { "end": 1209.24, "start": 1204.32, "text": " ability to learn something that is not learned. No one has to teach you how to" }, { "end": 1214.28, "start": 1209.24, "text": " learn something. I guess they're like learning strategies and so on. But you as" }, { "end": 1220.56, "start": 1214.28, "text": " a human are incredibly good at picking up new skills. And the skill of picking" }, { "end": 1226.6799999999998, "start": 1220.56, "text": " up new skills, that's built into you. Among these are assumptions" }, { "end": 1232.08, "start": 1226.6799999999998, "text": " that the world is a hierarchical and causal place. That's how you see the" }, { "end": 1237.08, "start": 1232.08, "text": " world. And because you see the world like this, you can pick up these new skills" }, { "end": 1241.08, "start": 1237.08, "text": " very very quickly. You can explain through explaining the world. You can" }, { "end": 1246.96, "start": 1241.08, "text": " pick up new skills. And that is usually what we mean by intelligence. If someone" }, { "end": 1252.52, "start": 1246.96, "text": " sees a new unencountered before situation, thinks about it, which" }, { "end": 1257.1999999999998, "start": 1252.52, "text": " basically interprets the world in the hierarchical and causal way, and then is" }, { "end": 1263.24, "start": 1257.2, "text": " able to come up with a skill that solves the problem. And we generally view that" }, { "end": 1267.68, "start": 1263.24, "text": " as intelligence. So if we want to measure intelligence, we should measure this," }, { "end": 1272.28, "start": 1267.68, "text": " basically how good you are picking up new skills, while accounting for these" }, { "end": 1280.96, "start": 1272.28, "text": " things. When we compare to humans. So Chollet says tests of intelligence" }, { "end": 1285.8400000000001, "start": 1280.96, "text": " should be founded on human-like knowledge priors. It basically makes the case that" }, { "end": 1293.4399999999998, "start": 1285.84, "text": " these should, if we build machines and compare them to humans in, let's say in" }, { "end": 1299.52, "start": 1293.4399999999998, "text": " terms of intelligence, we should build into the machines these things right" }, { "end": 1304.1599999999999, "start": 1299.52, "text": " here. These we should give them, like we should give them a counting module that" }, { "end": 1308.52, "start": 1304.1599999999999, "text": " they can use, like a calculator app. We should just build that in and make that" }, { "end": 1312.6399999999999, "start": 1308.52, "text": " available for the agent to use, like basic arithmetic. You shouldn't have to" }, { "end": 1317.5600000000002, "start": 1312.64, "text": " learn this. We should build in the notion that there are objects. We should build" }, { "end": 1322.2800000000002, "start": 1317.5600000000002, "text": " in the basic navigation module and so on that the agent can use. 
You can almost" }, { "end": 1326.0400000000002, "start": 1322.2800000000002, "text": " think of it like, you know, there's whatever your reinforcement learning" }, { "end": 1331.0800000000002, "start": 1326.0400000000002, "text": " agent, A, it consists of like this, maybe this big neural network with lots of" }, { "end": 1336.3200000000002, "start": 1331.0800000000002, "text": " layers, but then each layer maybe has access to, you know, the calculator app" }, { "end": 1343, "start": 1336.32, "text": " right here, and each layer has access to maybe also memory. I would guess" }, { "end": 1347.96, "start": 1343, "text": " memory is one of those priors as well. Or you know, it has access to a navigation" }, { "end": 1352.72, "start": 1347.96, "text": " prior. Let's draw a little world map right here. This is Google Maps. It can do" }, { "end": 1359.3999999999999, "start": 1352.72, "text": " that. It can query that sort of. And we shouldn't, if we want to build something" }, { "end": 1364.28, "start": 1359.3999999999999, "text": " that's intelligent, just compared to humans, we shouldn't, we should build in" }, { "end": 1369.96, "start": 1364.28, "text": " the navigation. We shouldn't, at least as much as human can do it. I mean, it's cool" }, { "end": 1374.3999999999999, "start": 1369.96, "text": " to have a machine learning system learn navigation, but that makes it less" }, { "end": 1379.16, "start": 1374.3999999999999, "text": " comparable. So it says either we should match the humans or we should account" }, { "end": 1384.56, "start": 1379.16, "text": " for the difference. So we should kind of let the difference be in, let the" }, { "end": 1388.56, "start": 1384.56, "text": " difference between the humans and the machines into the measure of our" }, { "end": 1394.8, "start": 1388.56, "text": " intelligence. Further, if you test intelligence, all of these priors" }, { "end": 1400.8, "start": 1394.8, "text": " should be explicitly described and not rely on additional priors. So that's" }, { "end": 1407.36, "start": 1400.8, "text": " often the case in IQ tests. There are so many priors that are not explicitly" }, { "end": 1412.24, "start": 1407.36, "text": " described because we just, you know, we just think, yeah, every human can count. We" }, { "end": 1417.76, "start": 1412.24, "text": " don't need to write down that our prior assumptions are that the humans can" }, { "end": 1423.44, "start": 1417.76, "text": " count or understand language. And that's, I hear sometimes a problem in IQ tests" }, { "end": 1428, "start": 1423.44, "text": " that basically the better you understand language, the better you score at these" }, { "end": 1433.52, "start": 1428, "text": " tests. And therefore the tests are more like a measure of language ability than" }, { "end": 1440.32, "start": 1433.52, "text": " intelligence. So there, this is a lot much informed by, so I think the" }, { "end": 1448.4399999999998, "start": 1440.32, "text": " psychometrics community is on the same path right here. Yeah, he goes into this" }, { "end": 1452.62, "start": 1448.4399999999998, "text": " theory of human core knowledge where he basically expands on what the human" }, { "end": 1459.56, "start": 1452.62, "text": " priors are. So this core knowledge theory takes on four" }, { "end": 1464.1599999999999, "start": 1459.56, "text": " different categories. 
So the first one is object-ness and elementary physics, which" }, { "end": 1467.96, "start": 1464.1599999999999, "text": " means you as a human have an inherent knowledge that there are objects, as I" }, { "end": 1473.1200000000001, "start": 1467.96, "text": " said, but also of elementary physics, that stuff sticks together, persistence of" }, { "end": 1477.72, "start": 1473.1200000000001, "text": " objects. And some people say, you know, this is learned. So children, young" }, { "end": 1482.28, "start": 1477.72, "text": " children, they do not know object persistence and that's why peekaboo is" }, { "end": 1488.2, "start": 1482.28, "text": " so interesting. But because they think you're gone, right? They think, you know," }, { "end": 1493.24, "start": 1488.2, "text": " if a parent goes to the toilet, they really think the parent doesn't" }, { "end": 1499.4, "start": 1493.24, "text": " exist anymore and only later they learn object persistence. But I would question" }, { "end": 1503.4, "start": 1499.4, "text": " whether or not that's actually a learned thing or simply a built-in module that" }, { "end": 1508.96, "start": 1503.4, "text": " gets switched on at that particular point because probably evolution deems it not" }, { "end": 1515.2, "start": 1508.96, "text": " necessary to waste resources on that module before that. So I would, you know," }, { "end": 1520.36, "start": 1515.2, "text": " I would be cautious in saying that object persistence and all of these" }, { "end": 1526.04, "start": 1520.36, "text": " things, in fact, are learned. I would argue more that they are built in and are" }, { "end": 1532.28, "start": 1526.04, "text": " simply switched on at a given time during development. Because we also know" }, { "end": 1536.24, "start": 1532.28, "text": " these things like object persistence, I think that's, you can almost" }, { "end": 1541.9199999999998, "start": 1536.24, "text": " pinpoint the month of a human's life when that's switched on. That will be so" }, { "end": 1548.24, "start": 1541.9199999999998, "text": " accurate. And if this is really learned, then, you know, you'd have to assume like" }, { "end": 1553.92, "start": 1548.24, "text": " a very regular structure of the training data distribution that a baby gets to" }, { "end": 1560, "start": 1553.92, "text": " experience. So I'm not sure here. But, you know, it's not my opinion, it's" }, { "end": 1564.84, "start": 1560, "text": " Chollet's. Yeah, contact interaction, the fact that you can interact with" }, { "end": 1569.68, "start": 1564.84, "text": " objects by contacting them, or that objects can interact with each other by" }, { "end": 1574.2, "start": 1569.68, "text": " being next to each other. That's built into you as well. You don't have to" }, { "end": 1578.44, "start": 1574.2, "text": " learn that. If you compare this to like an RL agent that has to learn all of this" }, { "end": 1586.24, "start": 1578.44, "text": " from pixels, basically, you can see why Chollet has a problem with the current" }, { "end": 1591.76, "start": 1586.24, "text": " direction of deep learning and claiming that things are intelligent there" }, { "end": 1599.56, "start": 1591.76, "text": " because the comparison is just very invalid. So the second core knowledge you" }, { "end": 1604.18, "start": 1599.56, "text": " have is agent-ness and goal-directedness, and we have already discussed that. Then" }, { "end": 1608.72, "start": 1604.18, "text": " natural numbers and elementary arithmetic. 
In, you know, you can small numbers, you" }, { "end": 1613.92, "start": 1608.72, "text": " can add, subtract, compare, sort, that sort of thing. And elementary geometry and" }, { "end": 1619.1200000000001, "start": 1613.92, "text": " topology, where in there would be orientation and navigation, and then" }, { "end": 1624.48, "start": 1619.1200000000001, "text": " distance orientation if it's something is inside or outside of a room and so on." }, { "end": 1631.8400000000001, "start": 1624.48, "text": " Now I have heard, and this might be a myth, that there are languages where left" }, { "end": 1636.8799999999999, "start": 1631.84, "text": " and right, like relative directions, have no meaning, like doesn't exist in the" }, { "end": 1641.6799999999998, "start": 1636.8799999999999, "text": " language, but they always use absolute directions, and then these people" }, { "end": 1646.8, "start": 1641.6799999999998, "text": " automatically have a much, much better orientation at all time. Like if they" }, { "end": 1651.4399999999998, "start": 1646.8, "text": " get into a building, they can always tell you where north is. I don't know," }, { "end": 1656.74, "start": 1651.4399999999998, "text": " maybe that's a myth, but I would guess that's pretty cool, and just shows you" }, { "end": 1661.04, "start": 1656.74, "text": " the flexibility of something like orientation. Sure, we can all orient, but" }, { "end": 1664.24, "start": 1661.04, "text": " it seems like by simply learning a different language, you can sort of" }, { "end": 1671.84, "start": 1664.24, "text": " supercharge that drive for orientation. So again, it sort of feels like" }, { "end": 1676.44, "start": 1671.84, "text": " there is a lot of nature versus nurture going on in here, in that all of these" }, { "end": 1682.28, "start": 1676.44, "text": " things, yes, you probably have a tendency built in to learn objectness and physics" }, { "end": 1688.8799999999999, "start": 1682.28, "text": " and so on, but then also probably a lot of it might be learned in addition, or" }, { "end": 1693.16, "start": 1688.88, "text": " you might just be able to supercharge one of these modules that's" }, { "end": 1700.48, "start": 1693.16, "text": " inside of you. I think there's lots of room for discussion here." }, { "end": 1708.1200000000001, "start": 1700.48, "text": " So he says tests for intelligence should only involve core knowledge, and the AI" }, { "end": 1712.4, "start": 1708.1200000000001, "text": " systems taking these tests should hard code that core knowledge. So basically" }, { "end": 1718.16, "start": 1712.4, "text": " what he said before, we should build in these things right here, these core" }, { "end": 1722.64, "start": 1718.16, "text": " knowledge things, we should build these into the AI systems if we want to" }, { "end": 1727.76, "start": 1722.64, "text": " compare them to humans. Because if they have these things and only these things" }, { "end": 1733.0800000000002, "start": 1727.76, "text": " built in, then they sort of have the same starting point as a human. Now in this" }, { "end": 1738.88, "start": 1733.0800000000002, "text": " case, this is where I sort of disagree, because the notion that we can" }, { "end": 1743.52, "start": 1738.88, "text": " ever explicitly list the priors that humans have, to me seems a bit" }, { "end": 1749.92, "start": 1743.52, "text": " ridiculous. 
So I guess we can sort of approximate this at first, but we" }, { "end": 1754.56, "start": 1749.92, "text": " will never exhaustively exactly describe what the priors are, what is learned." }, { "end": 1758.2, "start": 1754.56, "text": " We've seen this with the orientation, like how much of that is learned and" }, { "end": 1764.96, "start": 1758.2, "text": " prior. And then secondly, even if we could list them pretty exactly, what says" }, { "end": 1771.76, "start": 1764.96, "text": " that we can exactly program them into an agent such that it can make use" }, { "end": 1777.16, "start": 1771.76, "text": " of it. That's an entirely, that's an even harder challenge. So I'm not so sure" }, { "end": 1782.68, "start": 1777.16, "text": " about this AI systems should hard code core knowledge. He's gonna try that with" }, { "end": 1786, "start": 1782.68, "text": " this ARC challenge that we're going to look at in like the last part of this" }, { "end": 1792.56, "start": 1786, "text": " series. But it's a cool test for intelligence, I admit that, but I doubt" }, { "end": 1798.36, "start": 1792.56, "text": " that anyone really manages to hard code the core knowledge. And he says tests" }, { "end": 1803.08, "start": 1798.36, "text": " should only involve core knowledge. And we're going to see how" }, { "end": 1809.4799999999998, "start": 1803.08, "text": " valid that claim is for his own ARC challenge. Now luckily in the math part" }, { "end": 1816, "start": 1809.4799999999998, "text": " that's gonna come up, he doesn't strictly rely on these things. So he" }, { "end": 1820.9599999999998, "start": 1816, "text": " gives us a way how we can compare, even if the priors of two systems are" }, { "end": 1825.84, "start": 1820.9599999999998, "text": " different, we can compare which one's more intelligent. Alright, so that was" }, { "end": 1831.04, "start": 1825.84, "text": " part two of this series. You know, it's already been a while now and this is" }, { "end": 1836.4399999999998, "start": 1831.04, "text": " only part two. And I do promise next time we're gonna get into the math. I hope" }, { "end": 1843.76, "start": 1836.4399999999998, "text": " you like this and go check out Tim Scarfe's video on the same topic. Yeah, as" }, { "end": 1848.28, "start": 1843.76, "text": " I said, usually much higher quality videos than mine. And I'll see you next" }, { "end": 1856.16, "start": 1848.28, "text": " time. Bye bye." } ]
YBlNQK0Ao6g
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Image GPT: Generative Pretraining from Pixels (Paper Explained)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "openai", "gpt2", "gpt3", "bert", "transformer", "attention is all you need", "attention mechanism", "multi-head attention", "pixel rnn", "pixel cnn", "pretraining", "representation", "linear probe", "fine-tuning", "cifar10", "cifar100", "imagenet", "cnn", "convolutional neural network", "autoregressive" ]
BERT and GPT-2/3 have shown the enormous power of using generative models as pre-training for classification tasks. However, for images, pre-training is usually done with supervised or self-supervised objectives. This paper investigates how far you can get when applying the principles from the world of NLP to the world of images. OUTLINE: 0:00 - Intro & Overview 2:50 - Generative Models for Pretraining 4:50 - Pretraining for Visual Tasks 7:40 - Model Architecture 15:15 - Linear Probe Experiments 24:15 - Fine-Tuning Experiments 30:25 - Conclusion & Comments Paper: https://cdn.openai.com/papers/Generative_Pretraining_from_Pixels_V2.pdf Blog: https://openai.com/blog/image-gpt/ Code: https://github.com/openai/image-gpt Abstract: Inspired by progress in unsupervised representation learning for natural language, we examine whether similar models can learn useful representations for images. We train a sequence Transformer to auto-regressively predict pixels, without incorporating knowledge of the 2D input structure. Despite training on low-resolution ImageNet without labels, we find that a GPT-2 scale model learns strong image representations as measured by linear probing, fine-tuning, and low-data classification. On CIFAR-10, we achieve 96.3% accuracy with a linear probe, outperforming a supervised Wide ResNet, and 99.0% accuracy with full finetuning, matching the top supervised pre-trained models. An even larger model trained on a mixture of ImageNet and web images is competitive with self-supervised benchmarks on ImageNet, achieving 72.0% top-1 accuracy on a linear probe of our features. Authors: Mark Chen, Alec Radford, Rewon Child, Jeff Wu, Heewoo Jun, Prafulla Dhariwal, David Luan, Ilya Sutskever Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://discord.gg/4H8xxDF BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Okay, I'm sure many of you have already seen this because it was rather widely announced, but the OpenAI team has announced a new model that produces pictures instead of text. So as you can see right here, on the left you'll always see like half a picture, and on the right is the ground truth. So they took this picture, they simply cut the bottom half right here, and then they let the model sort of imagine what they cut away, and what it comes up with is pretty cool, I have to say. Like look at the birds, like this is just awesome. But the special thing about this isn't that it simply completes pictures, the special thing about it is that it does it one pixel by one pixel. So basically it goes at this pixel right here and asks okay what's that pixel, and then what's that pixel, and so on. So it is basically like a language model, but for pixels, in that it goes over the images in order, basically like this, always from left to right, left to right, left to right, and it has no clue of the spatial relations between the pixels. It needs to learn that by itself, as opposed to a convolutional neural network, which is specifically designed such that if you want to predict this pixel right here, then it's specifically designed to say okay, the most important information is probably around that pixel, and then some other important information is wider around that pixel. So CNNs are built with this in mind, whereas this model right here, which is also known as image GPT, doesn't have any of that. It's simply a transformer model that goes over these pixels one by one, and we'll see how that's done. There are some more examples right here. Particularly cool is the cat, and you see that there is the beginning of this little white thing right here, which is this card, and the completions of the model. Yes, very interesting. The model, as a language model, can of course also sample by itself, just random images. You sample them once through, and this is what it comes up with. So these are pretty good quality images for a model that just produces one pixel by one pixel. Now this idea of going one pixel by one pixel isn't new. This has been around before, but the investigation here is basically how far can we push these generative models for pre-training. Hi there, this is Yannic from post-production. I've realized that I've forgotten to even read the name of the paper. So it's called Generative Pretraining from Pixels, by Mark Chen, Alec Radford, Rewon Child, Jeff Wu, Heewoo Jun, Prafulla Dhariwal, David Luan and Ilya Sutskever. And since Henry AI Labs has already made a video on this, this video is going to be more of kind of a rambling rant about what I find interesting about the paper and some thoughts about it, rather than a classic explanation. I hope you still enjoy that. So what you saw on the right isn't even the final result, the supposed result; this is simply the pre-training task. It's fun to look at it, but the actual objective of the paper is the following: what if we pre-train on a large data set to generate good images like these, or to complete images like these, and then we fine-tune on a classification task? And the answer is here, they say: on CIFAR-10 we achieve 96.3% accuracy with a linear probe, outperforming a supervised Wide ResNet, and 99.0% accuracy with full fine-tuning, matching the top supervised pre-trained models.
An even larger model trained on a mixture of ImageNet and web images is competitive with self-supervised benchmarks on ImageNet, achieving 72.0% top-1 accuracy on a linear probe of our features. So the goal here is that you have a data set that you want to train a classifier on. So usually you have a data set, and the data set has images, and you put them through like a convolutional neural network, and then you have to classify the image into one of, I don't know how many classes; on CIFAR-10 that's 10 classes, on ImageNet it's a thousand. And the data set is these images together with these labels. Now the idea of pre-training is that you somewhere have a bigger data set that is sort of similar to the small data set, but it's similar enough such that the network could learn something. So what you want to do first is you first want to take the large data set, train this network right here, and then in a second step fine-tune the network on this smaller data set, and you sort of hope that what you learn from the large data set right here transfers over a little bit of knowledge. You already have a little bit of knowledge, and you can make better use of the data that you have right here. Now the question is how do you do this pre-training, and of course this has a long tradition, well, long for maybe two or three years right now, in the language community, where people pre-train these large models, like we've just seen GPT-3, or BERT was one of them. They pre-train these large transformer models on text and then fine-tune them on classification tasks for text, and that's what this paper is doing right here. They pre-train a transformer that is a GPT-2 scale model, they pre-train it on image generation, and then they fine-tune it, or transfer learn it, to classification tasks. And the point of the paper is to say that in text data we have had pretty good experiences with doing this, with pre-training a generative model and then fine-tuning on a classification task, while so far in images, all we've ever done is pre-train with pre-training tasks, which usually is a classification task, or a self-supervised task with a contrastive loss or something like this. What they're doing new is the generative modeling as a pre-training. And again, this isn't entirely new, but they show that if you throw a lot of compute at it, and lots of data, and a model, then that can work equally well as these self-supervised tasks. So their model, as I said, is pretty, pretty simple. They take an image and they unroll the image. Now a fully unrolled image on, let's say, ImageNet has 224 squared pixels, and that times three, right, because you have three color channels. That's too large even for an OpenAI supercomputer. So what they do is first they downscale the image. The downscaling is not as drastic as here, where you just get a three by three image, but they do downscale it to like a 32 by 32 or a 64 by 64. Then they unroll it, which simply means they go through the image like this and make a sequence out of it, because their models are naturally made for text sequences. They simply put the image into a text sequence. They further simplify this by reducing the three color channels to a single one. So they have their own color representation, and basically they reduce the three color channels to one channel that simply indexes the color in their color representation. And they say it's still pretty good, it's pretty faithful.
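To make this preprocessing concrete, here is a minimal sketch in Python of turning an already-downscaled image into such a sequence. It assumes the paper's 9-bit color scheme (a 512-entry palette fitted with k-means over pixel values) as I understand it; the function name `image_to_sequence` and the random stand-ins are purely illustrative, not the released code.

```python
import numpy as np

def image_to_sequence(img, palette):
    """img: (H, W, 3) uint8 array, already downscaled to e.g. 32x32.
    palette: (K, 3) float array of RGB centroids (assumed K = 512, from k-means).
    Returns a 1D integer sequence in raster (left-to-right, top-to-bottom) order."""
    pixels = img.reshape(-1, 3).astype(np.float32)                     # (H*W, 3)
    # nearest palette entry for every pixel: one color index per pixel
    dists = ((pixels[:, None, :] - palette[None, :, :]) ** 2).sum(-1)  # (H*W, K)
    return dists.argmin(axis=1)                                        # (H*W,) ints in [0, K)

# toy usage: a random 32x32 image and a random 512-color palette
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(32, 32, 3), dtype=np.uint8)
palette = rng.integers(0, 256, size=(512, 3)).astype(np.float32)
seq = image_to_sequence(img, palette)
print(seq.shape)  # (1024,): a 32*32 = 1024 token sequence for the transformer
```

The point is just that the transformer never sees any 2D structure; it only gets this flat list of color indices.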
So ultimately they end up with like a 32-squared-length representation of their image. And then they do one of two things. They either do autoregressive generative pre-training, which is the sort of GPT-2 style pre-training. And the idea here is that you always want to predict the next pixel of a sequence. So you can see right here, that's the sequence that you input, and you always want to predict what is the next pixel. And in this case, you see that we've already predicted everything up to this red pixel. So you want to know, what's this next pixel, this thing right here, what's this going to be? And the diagram here basically shows you how the attention flows. So every position in this transformer, and if you don't know what a transformer is, I have made a video on Attention Is All You Need, where these are explained. But briefly, every position here can sort of send information only in one direction. So you train all of these in parallel, and when you predict this pixel right here, you only want information from whatever was before that pixel. Otherwise the model could cheat, right? Otherwise the model could simply learn to copy over the value. But the attention pattern here is simply to show you that this is autoregressive and it's in one direction. So you always want to predict the next pixel, and then from all of this you want to predict the next pixel, and then from all of this you want to predict the next pixel. This is in contrast to this objective here that comes from BERT, and I've also made a video on BERT. What you do in BERT is you simply take that image and you cross out, or block out, two of the pixels, or many of the pixels, and you simply ask your network to reconstruct those pixels. And now you can see the attention flows in all directions. BERT, the B stands actually for bidirectional. So this is the contrast to the autoregressive pre-training framework. Now both of these things have been applied in text. The autoregressive one is usually easier to actually make produce something, like we saw producing these images, because you can always just predict the next pixel, and then the next, and then the next, and then the next. Whereas in BERT it's a bit more unclear how you would produce things in a consistent manner, because the predictions of these two pixels right here are independent: it's one forward pass and then both of these are predicted. But other papers have tried to solve this, like this, not XLNet. I forget its name. It's something with an X. But these are the two objectives they look at, and it turns out they sort of trade off a bit. They work equally well, or a bit better and a bit worse, depending on the task. So once they have done this, they simply feed images, and you'll notice that you don't need any labels for this. So what you'll do is simply input an image and then simply take away half of it like this, and then predict that pixel, and then you want to predict that pixel, and then you want to predict that pixel. That's all, like you do with text. And in BERT you simply input an image, cross out pixels, and then predict them. So you don't need labels for this, and that's why you can do it with this big data set, and you can do it in an unsupervised fashion. So you can just crawl the internet for images and just feed this into there, and it will sort of learn to produce these images. Now the question is, if you learn to produce these images, does that help you for classification?
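For concreteness, here is a rough sketch of what the two pre-training losses look like. `f` stands in for the transformer that maps a token sequence to per-position logits over the K colors; the toy stand-in below has no attention at all and exists only so the sketch runs, and in the autoregressive case the real model would additionally use a causal attention mask.

```python
import torch
import torch.nn.functional as F

def autoregressive_loss(f, seq):
    """Next-pixel prediction: position t is trained to predict token t+1."""
    logits = f(seq[:, :-1])                     # (B, L-1, K)
    targets = seq[:, 1:]                        # (B, L-1)
    return F.cross_entropy(logits.reshape(-1, logits.size(-1)), targets.reshape(-1))

def bert_loss(f, seq, mask_id, p=0.15):
    """BERT-style: hide a random subset of pixels, predict them from the full
    (bidirectional) context; the loss is only on the masked positions."""
    mask = torch.rand_like(seq, dtype=torch.float) < p   # (B, L) bool
    corrupted = seq.masked_fill(mask, mask_id)
    logits = f(corrupted)                                # (B, L, K)
    return F.cross_entropy(logits[mask], seq[mask])

# toy stand-in for f: 512 colors plus one mask token, no attention
f = torch.nn.Sequential(torch.nn.Embedding(513, 64), torch.nn.Linear(64, 512))
seq = torch.randint(0, 512, (2, 1024))
print(autoregressive_loss(f, seq).item(), bert_loss(f, seq, mask_id=512).item())
```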
And they have two methods of assessing this. The bottom one here is the fine-tuning method. So this is supposed to be the representation you learn in the different layers of the network. So this is supposed to be this thing right here. What you'll do is you'll simply fine-tune. That means on top of this representation you add a classification head that has two outputs, cat or dog, and you train this entire network on your small data set that we discussed before. So you train the entire network, all of the parameters. This is called fine-tuning. In contrast to that, what you can do is you can simply add this classification head with two outputs and then only train this classification head. And that won't perform as well, but it gives you sort of a better idea of how good the representation is that this network right here learned. And on top of that, if you spin this idea further, you can actually go and do this at any intermediate layer right here. So you can forward propagate until layer two right here, and then here you add your classification head into the two classes, and you only train the classification head. That being said, you can also do this with fine-tuning, but in this case this is called a linear probe. And it is often used to assess how good a representation in intermediate layers is, whereas what it actually does is assess how linearly classifiable a representation is, which isn't the same as how useful or how informative it is, but it is one way to assess these things. So these are the two things they assess. As for datasets, they use CIFAR-10, CIFAR-100 and STL-10, and there you have to keep in mind the pre-training is done on ImageNet for those. So you pre-train on ImageNet without the labels, and then you transfer learn, or fine-tune, or linear probe, on these small datasets. Whereas later we're going to look at ImageNet, and there the pre-training, as I understand it, is done on ImageNet itself, but also a wider collection of a hundred million or so images from the web, from the internet. Okay, so as you can see right here, this is what happens if you do this linear probing, and you can see it works pretty well. So you get like a 95-96% accuracy with linear probes. This is very powerful. It's not easy to get 96% on CIFAR-10; I mean, current state-of-the-art is like 99%, but still, 96% is pretty good. And this is the entire network: there is this big giant network that you input your image into, and then there is this one linear layer that does the classification. And all of this right here has not been trained with classification in mind. It simply has been trained to reproduce images. It hasn't even been trained on CIFAR-10, as far as I understand; it's been trained on ImageNet. So this is to stress how cool, how significant this result is, basically, that just a linear probe on top of that will give you such a good accuracy. And the second thing that is obvious right here is this bottom axis is the layer. So this is the layer where they attach the linear probe. And usually, if you pre-train a network with a classification task in mind, so you pre-train it with the labels, or maybe even without the labels in a self-supervised way or something like this, usually the last layer has the best representation for classification. But here the special thing is that the intermediate layers in the middle have the best representation.
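A linear probe is simple enough to sketch in a few lines. Everything about the interface here is an assumption for illustration, in particular the `features(x, layer=k)` call that is supposed to return the frozen backbone's pooled activations at layer k:

```python
import torch

def train_linear_probe(features, loader, layer, dim, n_classes, steps=1000):
    """Train only a linear classifier on top of frozen features at one layer."""
    probe = torch.nn.Linear(dim, n_classes)
    opt = torch.optim.Adam(probe.parameters(), lr=1e-3)
    for _, (x, y) in zip(range(steps), loader):
        with torch.no_grad():                # backbone stays frozen
            h = features(x, layer=layer)     # (B, dim); only the probe learns
        loss = torch.nn.functional.cross_entropy(probe(h), y)
        opt.zero_grad(); loss.backward(); opt.step()
    return probe
```

Running this for every layer and recording the probe accuracy traces out exactly the kind of layer-versus-accuracy curves discussed next.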
You can see that the representation quality, in terms of linear probing, falls off as you go into higher layers. And this is consistent across the datasets, as you can see. And the idea here, or the way they interpret it, is that if you have an image right here and you've blocked part of it, so you've blocked this and this, or rather, the other way around, you've generated everything up to here, and now your task is to predict the next pixel. So you're trained to predict this next pixel right here. And the idea is that as you put the image through the network, the first layers, if it's going to be similar to a CNN, are going to be doing some low-level feature transformation thing. But also the last layers are going to really care about what's the exact pixel that goes here. Since it's their job to do that, they're going to care what color does it need to have, what exact luminosity, and so on, how does it fit in with the previous pixels, and so on. So that's also good. But it's not just low-level information and consistency with other pixels or something like this. At some point, if you want to generate consistent images, and we saw that this model can generate consistent images, at some point there needs to be some kind of a notion of the global information in the picture, because the images are consistent throughout. So there needs to be some notion of what is in that image as a whole. And that's the exact information that we need for classification. And the only place that could actually be is here in the middle, since you know that's the place. So the hypothesis is that these models somehow learn a higher-level representation of global information somewhere in the middle, before they then specify that information again down to predict the actual pixel. And that's why the best representations for classification are in the middle. So this is one of the interesting findings of this paper. I mean, it's cool that they can reach a good accuracy, but to recognize that maybe in these generative models there is some intermediate stage where they represent the global information, and that will actually make the best representation. The second cool thing right here is that you can see they have different sizes of models. So the iGPT-L, I believe, is something like 60 layers, then this is like 48 layers, and this is 32 layers. So these are all on the scale of GPT-2, either a little bigger or a little smaller. It's not like a GPT-3 scale, where you need a ginormous supercomputer, though they do do a lot of computation. But this still sort of fits within hardware of a standard size, and not like exascale. What's interesting right here is that you can see the larger models reach a lower validation loss. So here is the validation loss. If you train them, so these checkpoints here are always after the same amount of steps, the larger models do reach a lower validation loss right here, as you can see. So this is the large, this is the medium, this is the small. And on this axis you can see the linear probe accuracy. So this is, whenever you go and you find the best intermediate layer for linear probing, you probe it and you record the accuracy. So you can see a general trend: as your validation loss goes down, the linear probe accuracy goes up. So there is a connection, like it is in text models.
In text models there's a connection between the perplexity of your language model and the quality of the representation you get for downstream tasks. In this model it seems to be the exact same thing. There is a connection between reaching lower validation loss and reaching a higher performance on classification. So that's one interesting thing, the general trend up to the upper right corner. The other, arguably even more interesting thing is what you see if you look at the same validation loss. So at this point all of these models have the same validation loss, yet still the bigger model is better. You can see right here, the bigger model outperforms the smaller model, even though they have the same validation loss on the image modeling task. And this is also something that OpenAI has stressed in their text papers, that the larger models seem to be somehow more capable of forming good representations, even if they have the same loss. So again, this could just be sort of a better-training-data-remembering thing. And when I said that about GPT-3, I didn't actually mean explicit remembering of training data, I meant kind of a fuzzy remembering of training data. I formulated that in the comments, but I feel a lot of people have misunderstood me there. Here I think it's much harder to estimate what's going on, also since, with image pixels, humans don't have a super good model in their head, as we have about text. As you can see, if you then fine-tune, so for now we've just done linear probing, if you fine-tune these architectures, then you reach like a 99% accuracy on CIFAR-10, which is on par with the best models that we have. So GPipe is supervised, pre-trained on ImageNet, but also, I guess, uses a bunch of data augmentation, while this image GPT uses minimal data augmentation, I think; they simply random crop a little bit, and that's about it. So they also experiment around with this BERT objective. So until now this was all the autoregressive objective, and I feel that the OpenAI people are a bit more of a fan of the autoregressive objective, just given what they've done so far in their papers. And you can see here a comparison of the two objectives on CIFAR-10 and on ImageNet. Again, CIFAR-10 is pre-trained on ImageNet, and ImageNet itself is pre-trained on a larger collection of images from the web. All the pre-training is done without labels. Now the blue is what you can reach with a linear probe, and the orange is then, on top of that, what you can reach by fine-tuning, so no linear probe but fine-tuning. I have to say that the fine-tuning is always done at the end. So even though the linear probe can be attached anywhere in between, and it's often useful to do that, as we saw, because the in-between layers are the best, they say they tried fine-tuning also from in between, but it always worked out best whenever you fine-tune from the last layer. So that kind of gives you an idea that what seems to be important is this coming up with the higher-level representation, and then once you fine-tune, you're probably able to push that representation through to the end because of your training signal. But if you hadn't done the pre-training, you wouldn't even have that higher-level representation, and then the signal, I guess, is not strong enough to backpropagate through the whole model. It would be very interesting if they did this linear probe analysis again after they fine-tune the model.
And to see if then it is still the intermediate layers that have the best representation, or if now the best representation, in a linear probe sense, has shifted towards the end. I'm gonna guess it's shifted towards the end, but I sort of want to even see if the accuracy of the linear probe in the middle, does it keep the same? So does the curve go like this? This is the linear probe when you simply pre-train, this is linear probe accuracy. The question would be, does it change to be like this, or does it change to be like this? This is supposed to be the same at the end. So basically, does it stay as good as it is but simply get better at the end, or does the good representation now shift towards the end, like in this curve, and leave the lower layers with even more capacity to do some low-level stuff? Yeah, maybe they've done this, I haven't seen it. And as you can see, the BERT and autoregressive objectives sort of trade off. So BERT tends to do poorly in the linear probe setting, but then it catches up during fine-tuning: on CIFAR-10 it almost reaches the level of the autoregressive objective, and on ImageNet it actually outperforms it. This darker thing here simply means that you average across different maskings of BERT, because, I guess, even in classification it's not entirely clear how to get a signal out of BERT, because they don't do this CLS vector with BERT. What they do for classification and linear probing, and that's written up here, is they simply take the average pooling of all the representations of the sequence. And the last thing that I've also forgotten, there's a lot of stuff: when they fine-tune, while fine-tuning the classification loss yields reasonable downstream performance, we find empirically that the joint objective, the generative objective plus the classification objective, works even better. So even when you fine-tune with this model, you have to keep the generative modeling part, the generative loss, around, and then it performs even more better, more well, whatever that word is. So that's also something to think about. I think this paper right here kind of lays down a lot of cool things that you can think about, and it gives rise to a lot of hypotheses of how does this stuff work, why does this stuff work. I don't even think that the numbers are the most important thing, it's mostly the fact of the effects and what they mean. Okay, so this was my take on it. It's more kind of my rant of what I find special about this paper than about the actual paper. You can look at the paper, their numbers are pretty good. On ImageNet they do not reach the same super-duper performance as they do on CIFAR-10, and I guess that's probably because they have to downscale the ImageNet images way more than they have to downscale the CIFAR-10 images, because those are of course only 32 by 32. So because they have to downscale so much, they probably lose a lot of information, and I would be interested to see if there is a way to involve convolutions in all of this, so to do the downscaling in a learned manner with convolutions or something. I'm sure this has all been done already, I'm just too lazy to look it up. Yeah, so I invite you to look at their blog post, where they have these samples. They look pretty funny, and these full samples up here look fairly cool for what it's trained to do, given that it has no spatial awareness whatsoever; it simply uses learned position encodings. And yeah, check it out, that was it from me. Bye bye.
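As a final illustration, here is a hedged sketch of the joint fine-tuning objective mentioned above, keeping the generative next-pixel loss alongside the classification loss. The interface of `model` (returning both per-position logits and pooled features) and the mixing weight `lam` are my assumptions, not the paper's exact formulation.

```python
import torch.nn.functional as F

def joint_finetune_loss(model, head, seq, label, lam=1.0):
    gen_logits, pooled = model(seq[:, :-1])          # logits: (B, L-1, K)
    gen = F.cross_entropy(gen_logits.reshape(-1, gen_logits.size(-1)),
                          seq[:, 1:].reshape(-1))    # generative objective
    clf = F.cross_entropy(head(pooled), label)       # classification objective
    return clf + lam * gen                           # keep both losses around
```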
[ { "end": 5.78, "start": 0, "text": " Okay, I'm sure many of you have already seen this because it was rather widely" }, { "end": 12.5, "start": 5.78, "text": " announced but the OpenAI team has announced a new model that produces" }, { "end": 17.84, "start": 12.5, "text": " pictures instead of text. So as you can see right here on the left you'll always" }, { "end": 23.98, "start": 17.84, "text": " see like a half a picture and on the right is the ground truth. So they took" }, { "end": 29.54, "start": 23.98, "text": " this picture, they simply cut the bottom half right here and then they let the" }, { "end": 35.16, "start": 29.54, "text": " model sort of imagine what they cut away and what it comes up with is pretty cool" }, { "end": 42.96, "start": 35.16, "text": " I have to say. Like look at the birds, like this is just awesome. But the special" }, { "end": 47.14, "start": 42.96, "text": " thing about this isn't that it simply completes pictures, the special thing" }, { "end": 54.28, "start": 47.14, "text": " about it is it does it one pixel by pixel. So basically it goes at this pixel" }, { "end": 59.08, "start": 54.28, "text": " right here and asks okay what's that pixel and then what's that pixel and" }, { "end": 67.36, "start": 59.08, "text": " so on. So it is basically like a language model but for pixels in that it" }, { "end": 74.36, "start": 67.36, "text": " goes over the images in order basically like this or like always from left to" }, { "end": 81.64, "start": 74.36, "text": " right, left to right, left to right and it has no clue of the spatial relations" }, { "end": 85, "start": 81.64, "text": " between the pixels. It needs to learn that by itself as opposed to a" }, { "end": 90.88, "start": 85, "text": " convolutional neural network which is specifically designed such that if you" }, { "end": 95.48, "start": 90.88, "text": " want to predict this pixel right here then it's specifically designed to say" }, { "end": 101.24000000000001, "start": 95.48, "text": " okay the most important information is probably around that pixel and then some" }, { "end": 106.76, "start": 101.24000000000001, "text": " like other important information is wider around that pixel. So CNNs are" }, { "end": 111.32, "start": 106.76, "text": " built with this in mind whereas this model right here which is also known as" }, { "end": 118.44, "start": 111.32, "text": " image GPT doesn't have any of that. It's simply a transformer model that" }, { "end": 123.88, "start": 118.44, "text": " goes over these pixels one by one and we'll see how that's done. There are some" }, { "end": 128.84, "start": 123.88, "text": " more examples right here. Particularly cool is the cat and you see that there" }, { "end": 135.79999999999998, "start": 128.84, "text": " is the beginning of this little white thing right here which is this card and" }, { "end": 148.68, "start": 135.8, "text": " the completions of the model. Yes very interesting. The model can of course as a" }, { "end": 154.84, "start": 148.68, "text": " language model can also sample by itself just random images. You sample them once" }, { "end": 159.8, "start": 154.84, "text": " through and this is what it comes up with. So these are pretty good quality" }, { "end": 165.24, "start": 159.8, "text": " images for a model that just produces one pixel by one pixel. Now this is a" }, { "end": 169.88, "start": 165.24, "text": " pixel. Now this idea of one pixel by pixel isn't new. 
This has been around" }, { "end": 177.32000000000002, "start": 169.88, "text": " before but the investigation here is basically how much can we how far can we" }, { "end": 182.84, "start": 177.32000000000002, "text": " push these generative models for pre-training. Hi there this is Janek" }, { "end": 188.20000000000002, "start": 182.84, "text": " from Post Production. I've realized that I've forgotten to even read the name of" }, { "end": 192.12, "start": 188.20000000000002, "text": " the paper. So it's called generative pre-training from pixels by Mark Chen," }, { "end": 199.8, "start": 192.12, "text": " Alec Radford, Rowan Child, Jeff Wu, Hewon Ju, Prafula Dariwal, David Luan and" }, { "end": 206.36, "start": 199.8, "text": " Ilya Sotskyver. And since Henry AI Labs has already made a video on this, this" }, { "end": 210.76, "start": 206.36, "text": " video is going to be more of kind of a rumble rant about what I find" }, { "end": 215.32, "start": 210.76, "text": " interesting about the paper and some thoughts about it rather than like a" }, { "end": 219.96, "start": 215.32, "text": " classic explanation. I hope you still enjoy that. So what you saw on the right" }, { "end": 224.28, "start": 219.96, "text": " wasn't even the this isn't the final result the supposed result this is" }, { "end": 229.56, "start": 224.28, "text": " simply the pre-training task. It's fun to look at it but the actual object" }, { "end": 237, "start": 229.56, "text": " objective of the paper is the following. What if we train we pre-train on a large" }, { "end": 245.96, "start": 237, "text": " data set to generate what good images like these or we to complete images like" }, { "end": 253.72, "start": 245.96, "text": " these and then we fine-tune on a classification task. And the answer is" }, { "end": 262.28000000000003, "start": 253.72, "text": " here they say on C410 we achieve the 96.3% accuracy with a linear probe" }, { "end": 269.08, "start": 262.28000000000003, "text": " outperforming a supervised wide resonant and a 99% accuracy with full" }, { "end": 275.64, "start": 269.08, "text": " fine-tuning matching the top supervised pre-trained models. An even larger model" }, { "end": 280.76, "start": 275.64, "text": " trained on a mixture of ImageNet and web images is competitive with self" }, { "end": 286.28, "start": 280.76, "text": " supervised benchmarks on ImageNet achieving 72 top one accuracy on a" }, { "end": 294.12, "start": 286.28, "text": " linear probe of our features. So the goal here is that you have a data set that" }, { "end": 300.59999999999997, "start": 294.12, "text": " you want to train a classifier on. So usually you have a data set and the" }, { "end": 306.04, "start": 300.6, "text": " data set has images and you put them through like a convolutional neural" }, { "end": 311.16, "start": 306.04, "text": " network and then you have to classify the image into one of I don't know how" }, { "end": 315.88, "start": 311.16, "text": " many classes on C410 that's 10 classes on ImageNet it's a thousand. And the" }, { "end": 321.88, "start": 315.88, "text": " data set is these images together with these labels. Now the idea of pre-training" }, { "end": 328.52000000000004, "start": 321.88, "text": " is that you somewhere have a bigger data set that is sort of similar to the small" }, { "end": 333.4, "start": 328.52, "text": " data set but yeah it's similar enough such that the network could learn" }, { "end": 337.32, "start": 333.4, "text": " something. 
So what you want to do first is you first want to take the large data" }, { "end": 343.4, "start": 337.32, "text": " set, train this network right here and then in a second step fine-tune the" }, { "end": 347.47999999999996, "start": 343.4, "text": " network on this smaller data set and you sort of hope that what you learn from" }, { "end": 352.68, "start": 347.47999999999996, "text": " the large data set right here transfers over a little bit of knowledge. You" }, { "end": 356.59999999999997, "start": 352.68, "text": " already have a little bit of knowledge and you can make better use of the data" }, { "end": 361.96000000000004, "start": 356.6, "text": " that you have right here. Now the question is how do you do this pre-training" }, { "end": 367.48, "start": 361.96000000000004, "text": " and of course this has a long tradition, well long for maybe two or three years" }, { "end": 373.48, "start": 367.48, "text": " right now in the language community where people they pre-train these large" }, { "end": 380.52000000000004, "start": 373.48, "text": " models like we've just seen GPT-3 or BERT was one of them. They pre-train these" }, { "end": 386.59999999999997, "start": 380.52, "text": " large transformer models on text and then to fine-tune them on classification" }, { "end": 391.15999999999997, "start": 386.59999999999997, "text": " tasks for text and that's what this paper is doing right here. They pre-train a" }, { "end": 401.15999999999997, "start": 391.15999999999997, "text": " transformer that is a GPT-2 scale model, they pre-train it on image generation" }, { "end": 407.56, "start": 401.15999999999997, "text": " and then they fine-tune it or transfer learn it to classification tasks. And the" }, { "end": 413.88, "start": 407.56, "text": " point of the paper is to say that like in text data, in text data we have made" }, { "end": 421.88, "start": 413.88, "text": " pretty good experiences with doing this, with pre-training a generative" }, { "end": 426.44, "start": 421.88, "text": " model and then fine-tuning on a classification task. While so far in" }, { "end": 431.96, "start": 426.44, "text": " images all we've ever done is we've pre-trained these pre-training tasks" }, { "end": 437.88, "start": 431.96, "text": " which usually is a classification task or like a self-supervised task with a" }, { "end": 443.56, "start": 437.88, "text": " contrastive loss or something like this. What they're doing new is the" }, { "end": 450.2, "start": 443.56, "text": " generative modeling as a pre-training. And again this isn't entirely new but" }, { "end": 457.4, "start": 450.2, "text": " they show that if you throw a lot of computers at it and lots of data and a" }, { "end": 462.52, "start": 457.4, "text": " model then that can work equally well to these self-supervised tasks. So their" }, { "end": 467.64, "start": 462.52, "text": " model as I said is pretty pretty simple. They take an image and they unroll the" }, { "end": 475.23999999999995, "start": 467.64, "text": " image. Now a fully unrolled image on let's say ImageNet has 224 squared" }, { "end": 479.64, "start": 475.23999999999995, "text": " pixels and that times three right because you have three color channels." }, { "end": 486.91999999999996, "start": 479.64, "text": " That's too large even for an open AI supercomputer. So what they do is first" }, { "end": 492.04, "start": 486.92, "text": " they downscale the image. 
So they downscale it's not as drastic as here" }, { "end": 496.68, "start": 492.04, "text": " where you just get a three by three image but they do downscale it to like a 32 by" }, { "end": 503.48, "start": 496.68, "text": " 32 or a 64 by 64. Then they unroll it which simply means they go through the" }, { "end": 508.84000000000003, "start": 503.48, "text": " image like this and make a sequence out of it because their models are naturally" }, { "end": 515.32, "start": 508.84000000000003, "text": " made for text sequences. They simply put the image into a text sequence. They" }, { "end": 521.88, "start": 515.32, "text": " further simplify this by reducing the three color channels to a single one. So" }, { "end": 527.5600000000001, "start": 521.88, "text": " they have their own color representation and basically yeah they reduce the" }, { "end": 532.9200000000001, "start": 527.5600000000001, "text": " three color channels to one channel that simply indexes the color in their color" }, { "end": 539.5600000000001, "start": 532.9200000000001, "text": " representation. And they say it's still pretty good. It's pretty faithful. So" }, { "end": 546.8399999999999, "start": 539.56, "text": " ultimately they end up with like a 32 squared length representation of their" }, { "end": 552.7199999999999, "start": 546.8399999999999, "text": " image. And then they do one of two things. They either do autoregressive" }, { "end": 559.1999999999999, "start": 552.7199999999999, "text": " generative pre-training which is the sort of GPT-2 style pre-training. And the" }, { "end": 565.3399999999999, "start": 559.1999999999999, "text": " idea here is that you always want to predict the next pixel of a sequence. So" }, { "end": 571.9200000000001, "start": 565.34, "text": " you can see right here that's the sequence that you input." }, { "end": 579.6800000000001, "start": 571.9200000000001, "text": " And you always want to predict what is the next pixel. And in this case you" }, { "end": 583.6, "start": 579.6800000000001, "text": " see that we've already predicted everything. Here we've already predicted" }, { "end": 588.96, "start": 583.6, "text": " everything up to this red pixel. So you want to know what's this next pixel, this" }, { "end": 596.08, "start": 588.96, "text": " thing right here. What's this going to be? And the diagram here basically shows" }, { "end": 600.6600000000001, "start": 596.08, "text": " you how the attention flows. So every position in this transformer, and if you" }, { "end": 604.58, "start": 600.6600000000001, "text": " don't know what a transformer is, I haven't made a video about attention is" }, { "end": 610, "start": 604.58, "text": " all you need where these are explained. But briefly every position here can sort" }, { "end": 618.48, "start": 610, "text": " of send information only in one direction. So you train all" }, { "end": 623.44, "start": 618.48, "text": " of these in parallel. And when you predict this pixel right here, you only" }, { "end": 629.24, "start": 623.44, "text": " want information from whatever was before that pixel. Otherwise the model" }, { "end": 634.72, "start": 629.24, "text": " could cheat, right? Otherwise the model could simply learn to copy over the" }, { "end": 640, "start": 634.72, "text": " value. But the attention pattern here is simply to show you that this is" }, { "end": 643.24, "start": 640, "text": " autoregressive and it's in one direction. So you always want to predict" }, { "end": 647.16, "start": 643.24, "text": " the next pixel. 
And then from all of this you want to predict the next pixel. And" }, { "end": 650.36, "start": 647.16, "text": " then from all of this you want to predict the next pixel. This is in" }, { "end": 655.24, "start": 650.36, "text": " contrast to this objective here that comes from BERT. And I've also made a" }, { "end": 660.12, "start": 655.24, "text": " video on BERT. What you do in BERT is you simply take that image and you cross" }, { "end": 666.6, "start": 660.12, "text": " the block out two of the pixels, or many of the pixels, and you simply ask your" }, { "end": 671.76, "start": 666.6, "text": " network to reconstruct those pixels. And now you can see the attention flows" }, { "end": 677.48, "start": 671.76, "text": " in all directions. BERT, the B stands actually for bidirectional. So this is the" }, { "end": 683.48, "start": 677.48, "text": " contrast to the autoregressive pre-training framework. Now these two" }, { "end": 689.04, "start": 683.48, "text": " things have been applied in text both. The autoregressive is usually easier" }, { "end": 694.08, "start": 689.04, "text": " to actually make it produce something, like we saw producing these images," }, { "end": 698.08, "start": 694.08, "text": " because you can always just predict the next pixel and then the next and then" }, { "end": 702.5600000000001, "start": 698.08, "text": " the next and then the next. Whereas in BERT it's a bit more unclear how you" }, { "end": 706.96, "start": 702.5600000000001, "text": " would produce things in a consistent manner. Because the predictions of these" }, { "end": 712.76, "start": 706.96, "text": " two pixels right here, they are independent. It's one forward pass and" }, { "end": 718.4000000000001, "start": 712.76, "text": " then both of these are predicted. But other papers have tried to solve this" }, { "end": 727.0400000000001, "start": 718.4000000000001, "text": " like this, not XLNet. I forget its name. It's something with an X." }, { "end": 733.92, "start": 727.04, "text": " But these are the two objectives they look at. And it turns out they" }, { "end": 739.56, "start": 733.92, "text": " sort of trade off a bit. They work equally well, or a bit better and a bit worse," }, { "end": 744.68, "start": 739.56, "text": " depending on the task. So once they have done this, they simply feed images." }, { "end": 749.76, "start": 744.68, "text": " And you'll notice that you don't need any labels for this. So what you'll do is" }, { "end": 755.98, "start": 749.76, "text": " simply input an image and then simply take away half of it like this, and then" }, { "end": 761.64, "start": 755.98, "text": " predict that pixel. And then you want to predict that pixel and then you want to" }, { "end": 766.32, "start": 761.64, "text": " predict that pixel. That's all like you do with text. And in BERT you simply" }, { "end": 772.08, "start": 766.32, "text": " input an image, cross out pixels and then predict them. So you don't need labels" }, { "end": 776.8000000000001, "start": 772.08, "text": " for this. And that's why you can do it with this big data set. And you can do it" }, { "end": 781.4, "start": 776.8000000000001, "text": " in an unsupervised fashion. So you can just crawl the internet for images and" }, { "end": 786.92, "start": 781.4, "text": " just feed this into there. And it will sort of learn to produce these images." 
}, { "end": 793.12, "start": 786.92, "text": " Now the question is, if you learn to produce these images, does that" }, { "end": 800.52, "start": 793.12, "text": " help you for classification? And they have two methods of assessing" }, { "end": 805.3199999999999, "start": 800.52, "text": " this. The bottom one here is the fine-tuning method. So this is" }, { "end": 810.56, "start": 805.3199999999999, "text": " supposed to be the representation you learn in the different layers of the" }, { "end": 815.1199999999999, "start": 810.56, "text": " network. So this is supposed to be this thing right here. What you'll do is" }, { "end": 820.5999999999999, "start": 815.1199999999999, "text": " you'll simply fine-tune. That means you on top of this representation you add a" }, { "end": 827.7199999999999, "start": 820.5999999999999, "text": " classification head that has two outputs, cat or dog, and you train this entire" }, { "end": 832.2399999999999, "start": 827.7199999999999, "text": " network on your small data set that we discussed before. So you train the" }, { "end": 837.1199999999999, "start": 832.2399999999999, "text": " entire network, all of the parameters. This is called fine-tuning. In contrast" }, { "end": 843.28, "start": 837.12, "text": " to that, what you can do is you can simply add this" }, { "end": 848.4, "start": 843.28, "text": " classification head with two outputs and then only train this classification head." }, { "end": 853.76, "start": 848.4, "text": " And that won't perform as well, but it gives you sort of a better idea of" }, { "end": 860.88, "start": 853.76, "text": " how good is the representation that this network right here learned. And on top of" }, { "end": 866.5600000000001, "start": 860.88, "text": " that, so if you spin this idea further, you can actually go and do this at any" }, { "end": 871.7199999999999, "start": 866.56, "text": " intermediate layer right here. So you can forward propagate until layer two" }, { "end": 878.0799999999999, "start": 871.7199999999999, "text": " right here, and then here you add your classification head into the two" }, { "end": 882.64, "start": 878.0799999999999, "text": " classes and you only train the classification head. That being said, you" }, { "end": 888.64, "start": 882.64, "text": " can also do this with fine-tuning, but in this case this is called a linear probe." }, { "end": 894.8, "start": 888.64, "text": " And it is often used to assess how good the A representation in intermediate" }, { "end": 900.5999999999999, "start": 894.8, "text": " layers is. Whereas what it actually does is assessing how linearly classifiable a" }, { "end": 905.52, "start": 900.5999999999999, "text": " representation is, which isn't the same as how useful or how informative, but it" }, { "end": 913, "start": 905.52, "text": " is one way to assess these things. So these are the two things they" }, { "end": 921.24, "start": 913, "text": " assess. So as for datasets, for C410 they use like C410 and C4100 as" }, { "end": 926.4, "start": 921.24, "text": " datasets and the STL10. And there you have to keep in mind the pre-training" }, { "end": 931.2, "start": 926.4, "text": " is done on ImageNet for those. So you pre-train on ImageNet without the" }, { "end": 939.96, "start": 931.2, "text": " labels and then you transfer learn or fine-tune or linear probe on these" }, { "end": 945.2, "start": 939.96, "text": " small datasets. 
Whereas later we're going to look at ImageNet and there the" }, { "end": 950.48, "start": 945.2, "text": " pre-training as I understand it is done on ImageNet itself, but also a wider" }, { "end": 958.08, "start": 950.48, "text": " collection of a hundred million or so images from the web, from the internet." }, { "end": 966.08, "start": 958.08, "text": " Okay, so as you can see right here this is what happens if you do this linear" }, { "end": 974.48, "start": 966.08, "text": " probing. And you can see it works pretty well. So you get like a 95-96% accuracy" }, { "end": 980.6, "start": 974.48, "text": " with linear probes. This is very powerful. So it's not easy to get 96% on" }, { "end": 988.5600000000001, "start": 980.6, "text": " C410. I mean current state-of-the-art is like 99%, but still 96% is pretty" }, { "end": 995.8000000000001, "start": 988.5600000000001, "text": " good. And this is the entire network. There is this big giant network that you" }, { "end": 1000.84, "start": 995.8000000000001, "text": " input your image into and then there is this one linear layer that does the" }, { "end": 1006.36, "start": 1000.84, "text": " classification. And all of this right here has not been trained with" }, { "end": 1012.52, "start": 1006.36, "text": " classification in mind. It simply has been trained to reproduce images. It" }, { "end": 1016.6800000000001, "start": 1012.52, "text": " hasn't even been trained on C410 as far as I understand. It's been trained on" }, { "end": 1026.4, "start": 1016.6800000000001, "text": " ImageNet. So this is to stress how cool or how significant this result is" }, { "end": 1030.56, "start": 1026.4, "text": " basically. That just a linear probe on top of that will give you such a good" }, { "end": 1037.72, "start": 1030.56, "text": " accuracy. And the second thing that is obvious right here is this bottom axis" }, { "end": 1045.1599999999999, "start": 1037.72, "text": " is the layer. So this is the layer where they attach the linear probe. And usually" }, { "end": 1049.8, "start": 1045.1599999999999, "text": " if you pre-train a network with a classification task in mind, so you" }, { "end": 1053.44, "start": 1049.8, "text": " pre-train it with the labels or maybe even without the labels in a self" }, { "end": 1058.6, "start": 1053.44, "text": " supervised way or something like this, usually the last layer has the best" }, { "end": 1064.12, "start": 1058.6, "text": " representation for classification. But here the special thing is that the" }, { "end": 1069.48, "start": 1064.12, "text": " intermediate layers in the middle have the best representation. You can see that" }, { "end": 1076, "start": 1069.48, "text": " the representation quality in terms of linear probing falls off as they sort of" }, { "end": 1083.84, "start": 1076, "text": " it falls off as they go into higher layers. And this is consistent across the" }, { "end": 1090.8799999999999, "start": 1083.84, "text": " datasets as you can see. And the idea here is or the way they interpret it" }, { "end": 1099.6399999999999, "start": 1090.8799999999999, "text": " is that if you have an image right here and you've blocked part of it," }, { "end": 1111, "start": 1099.6399999999999, "text": " so you've blocked this and this, wrong way around this, so you've generated" }, { "end": 1118.76, "start": 1111, "text": " everything and now your task is to predict the next pixel. So you're" }, { "end": 1127.16, "start": 1118.76, "text": " trained to predict this next pixel right here. 
And the idea is that as you put" }, { "end": 1133.68, "start": 1127.16, "text": " the image through the network, what it will do is sort of, since the first" }, { "end": 1138.12, "start": 1133.68, "text": " layers they're going to be, if you're going to be similar to a CNN, they're" }, { "end": 1143.6799999999998, "start": 1138.12, "text": " going to be doing some low-level feature transformation thing. But also the" }, { "end": 1149.12, "start": 1143.6799999999998, "text": " last layers, they're going to really care about what's the exact pixel that goes" }, { "end": 1154.36, "start": 1149.12, "text": " here. Since it's their job to do that, they're going to care what color" }, { "end": 1159.84, "start": 1154.36, "text": " does it need to have, what exact luminosity and so on, how does it fit in" }, { "end": 1166.9599999999998, "start": 1159.84, "text": " with the previous pixels and so on. So that's also good. But it's not" }, { "end": 1171.72, "start": 1166.96, "text": " just low-level information and consistency with other pixels or" }, { "end": 1177.44, "start": 1171.72, "text": " something like this. At some point if you want to generate consistent images, and" }, { "end": 1183.16, "start": 1177.44, "text": " we saw that this model can generate consistent images, at some point there" }, { "end": 1187.68, "start": 1183.16, "text": " needs to be some kind of a notion of the global information in the picture," }, { "end": 1194, "start": 1187.68, "text": " because the images are consistent throughout. So there needs to be some" }, { "end": 1198.92, "start": 1194, "text": " notion of what is in that image as a whole. And that's the exact" }, { "end": 1203.36, "start": 1198.92, "text": " information that we need for classification. And the only way that" }, { "end": 1208.6, "start": 1203.36, "text": " could actually be is here in the middle, since you know that's the place. So the" }, { "end": 1213.72, "start": 1208.6, "text": " hypothesis is that these models somehow learn a higher level of" }, { "end": 1218.44, "start": 1213.72, "text": " representation of global information somewhere in the middle before they then" }, { "end": 1224.4, "start": 1218.44, "text": " specify that information again down to predict the actual pixel. And that's why" }, { "end": 1228.88, "start": 1224.4, "text": " the best representations for classification are in the middle. So this" }, { "end": 1234.96, "start": 1228.88, "text": " is one of the interesting findings of" }, { "end": 1239.44, "start": 1234.96, "text": " this paper. I mean it's cool that they can reach a good accuracy, but to recognize" }, { "end": 1247.0800000000002, "start": 1239.44, "text": " that maybe in these generative models they have some intermediate stage" }, { "end": 1250.24, "start": 1247.08, "text": " where they represent the global information, and that will actually make" }, { "end": 1256.9199999999998, "start": 1250.24, "text": " the best representation. The second cool thing right here is that you can" }, { "end": 1264.96, "start": 1256.9199999999998, "text": " see they have different sizes of models. So the IGPT-L I believe is something" }, { "end": 1272.56, "start": 1264.96, "text": " like 60 layers, then this is like 48 layers, and this is 32 layers. So" }, { "end": 1278.04, "start": 1272.56, "text": " these are all on the scale of GPT-2, either a little bigger or a" }, { "end": 1282, "start": 1278.04, "text": " little smaller. 
It's not like a GPT-3 scale where you need a ginormous" }, { "end": 1290.2, "start": 1282, "text": " supercomputer, though they do do a lot of computation. But this still sort of fits" }, { "end": 1298.3999999999999, "start": 1290.2, "text": " within hardware of a standard size and not like exascale. What's interesting" }, { "end": 1303.44, "start": 1298.4, "text": " right here is that you can see the larger models, they reach a lower" }, { "end": 1307.68, "start": 1303.44, "text": " validation loss. So here is the validation loss. The larger model, if you train them" }, { "end": 1311.96, "start": 1307.68, "text": " on, so these checkpoints here are always after the same amount of steps. The" }, { "end": 1317, "start": 1311.96, "text": " larger models do reach a lower validation loss right here, as you can see." }, { "end": 1324.96, "start": 1317, "text": " So this is the large, this is the medium, this is the small. And also you can see" }, { "end": 1329.56, "start": 1324.96, "text": " that on this axis the linear probe accuracy. So this is whenever you go" }, { "end": 1334.32, "start": 1329.56, "text": " and you find the best intermediate layer for linear probing, you probe it and you" }, { "end": 1339, "start": 1334.32, "text": " record the accuracy. So you can see a general trend as your validation loss" }, { "end": 1345.56, "start": 1339, "text": " goes down, the linear probe accuracy goes up. So there is a connection like it is" }, { "end": 1349.68, "start": 1345.56, "text": " in text models. In text models there's a connection of the perplexity of your" }, { "end": 1355.44, "start": 1349.68, "text": " language model and the quality of the representation you get for downstream" }, { "end": 1359.92, "start": 1355.44, "text": " tasks. In this model it seems to be the exact same thing. There is a connection" }, { "end": 1366.24, "start": 1359.92, "text": " between reaching lower validation loss and reaching a higher performance on" }, { "end": 1373.48, "start": 1366.24, "text": " classification. So that's one interesting thing, the general trend to up to the" }, { "end": 1378.68, "start": 1373.48, "text": " upper right corner. The other interesting and even arguably even more" }, { "end": 1383.88, "start": 1378.68, "text": " interesting thing is that if you look at the same validation loss. So at this" }, { "end": 1388.96, "start": 1383.88, "text": " point all of these models have the same validation loss, yet still the bigger" }, { "end": 1394.28, "start": 1388.96, "text": " model is better. You can see right here the bigger model outperforms the" }, { "end": 1399.8400000000001, "start": 1394.28, "text": " smaller model even though they have the same validation loss on the image" }, { "end": 1405.68, "start": 1399.8400000000001, "text": " modeling task. And this is also something that OpenAI in their text" }, { "end": 1410.68, "start": 1405.68, "text": " papers has stressed, that the larger models they seem to be somehow more" }, { "end": 1415.76, "start": 1410.68, "text": " capable of forming good representations even if they have the" }, { "end": 1424.1200000000001, "start": 1415.76, "text": " same loss. So again this could just be sort of a training data," }, { "end": 1429.8, "start": 1424.1200000000001, "text": " better training data remembering thing. And when I said that in GPT-3 I didn't" }, { "end": 1434.8, "start": 1429.8, "text": " actually mean explicit remembering of training data. 
I meant kind of a fuzzy" }, { "end": 1439.36, "start": 1434.8, "text": " remembering of training data. I formulate that in the comments but" }, { "end": 1445.68, "start": 1439.36, "text": " I feel a lot of people have misunderstood me there. Here I think it's" }, { "end": 1451.3999999999999, "start": 1445.68, "text": " a much harder to estimate what's going on also since image pixels. Humans" }, { "end": 1456.36, "start": 1451.3999999999999, "text": " don't have a super good model on image pixels in their head as we have about" }, { "end": 1461.3999999999999, "start": 1456.36, "text": " text. As you can see if you then fine-tune, so for now we've just do" }, { "end": 1467.96, "start": 1461.4, "text": " linear probing, if you fine-tune these architectures then you reach like a 99%" }, { "end": 1476.92, "start": 1467.96, "text": " accuracy on C410 which is on par with the best models that we have. So G-Pipe" }, { "end": 1483.0800000000002, "start": 1476.92, "text": " is supervised, pre-trained on ImageNet but also I guess uses a bunch of data" }, { "end": 1489.48, "start": 1483.0800000000002, "text": " augmentation while these image GPT it uses minimal data augmentation I think." }, { "end": 1501.72, "start": 1489.48, "text": " They simply random crop a little bit and that's about it. So they also experiment" }, { "end": 1508.3600000000001, "start": 1501.72, "text": " around with this BERT objective. So until now this was all this" }, { "end": 1513.08, "start": 1508.3600000000001, "text": " autoregressive objective and I feel that OpenAI people are a bit more of a fan of" }, { "end": 1518.3600000000001, "start": 1513.08, "text": " the autoregressive objective just given what they've done so far in their papers." }, { "end": 1527.8, "start": 1518.36, "text": " And you can see here comparison of the two objectives on C410 and on ImageNet." }, { "end": 1533.3999999999999, "start": 1527.8, "text": " Again C410 is pre-trained with ImageNet and ImageNet itself is pre-trained" }, { "end": 1537.6, "start": 1533.3999999999999, "text": " with like a larger collection of images from the web. All the pre-training is" }, { "end": 1544.8799999999999, "start": 1537.6, "text": " done without labels. Now the blue is what you can reach with a linear probe and" }, { "end": 1551.24, "start": 1544.88, "text": " the orange is then on top of that what you can reach by fine-tuning. So no" }, { "end": 1555.2800000000002, "start": 1551.24, "text": " linear probe but fine-tuning. I have to say that the fine-tuning is always done" }, { "end": 1562.88, "start": 1555.2800000000002, "text": " at the end. So even though the linear probe can be attached" }, { "end": 1567.3200000000002, "start": 1562.88, "text": " anywhere in between and it's often useful to do that as we saw because the" }, { "end": 1573.1200000000001, "start": 1567.3200000000002, "text": " in-between layers are the best. They say they tried fine-tuning also from" }, { "end": 1578.32, "start": 1573.12, "text": " in-between but it always worked out best whenever you fine-tune. Whenever you" }, { "end": 1583.4399999999998, "start": 1578.32, "text": " fine-tune you take actually the last layer. So that kind of gives you an idea" }, { "end": 1591.2399999999998, "start": 1583.4399999999998, "text": " that the model is then... 
What seems to be important is this coming up" }, { "end": 1596.3999999999999, "start": 1591.2399999999998, "text": " with the higher level representation and then once you fine-tune you're probably" }, { "end": 1602.8, "start": 1596.3999999999999, "text": " able to push that representation through to the end because of your training" }, { "end": 1607.6399999999999, "start": 1602.8, "text": " signal. But if you hadn't done the pre-training you wouldn't even have" }, { "end": 1612.28, "start": 1607.6399999999999, "text": " that higher level representation and then the signal I guess is not strong" }, { "end": 1616.56, "start": 1612.28, "text": " enough to back propagate through the whole model. It would be very interesting" }, { "end": 1621.8799999999999, "start": 1616.56, "text": " if they investigate, if they do this linear probe analysis again after they" }, { "end": 1628.08, "start": 1621.8799999999999, "text": " fine-tune the model. And to see if then still it is the intermediate layers" }, { "end": 1634.8, "start": 1628.08, "text": " that have the best representation or if now the best representation in a linear" }, { "end": 1640.1999999999998, "start": 1634.8, "text": " probe sense shifted towards the end. I'm gonna guess it's shifted towards the end" }, { "end": 1645.6399999999999, "start": 1640.1999999999998, "text": " but I sort of want to even see if the accuracy of the linear probe in the" }, { "end": 1651.96, "start": 1645.6399999999999, "text": " middle, does it keep the same? So does the curve go like this? This is the" }, { "end": 1658.56, "start": 1651.96, "text": " linear probe when you simply pre-train. This is linear probe accuracy. The" }, { "end": 1665.92, "start": 1658.56, "text": " question would be does it change to be like this or does it change to be like" }, { "end": 1672.08, "start": 1665.92, "text": " this? This is supposed to be the same at the end. So basically does it stay as" }, { "end": 1677.32, "start": 1672.08, "text": " good as it is but simply get better at the end or does the representation like" }, { "end": 1680.92, "start": 1677.32, "text": " in this curve, does the good representation now shift towards the" }, { "end": 1685.68, "start": 1680.92, "text": " end and leave the lower layer with even more capacity to do some low-level" }, { "end": 1693.3200000000002, "start": 1685.68, "text": " stuff? Yeah, maybe they've done this. I haven't seen it. And as you can see" }, { "end": 1699, "start": 1693.3200000000002, "text": " these BERT and autoregressive objective, they sort of trade off. So the BERT it" }, { "end": 1704.5600000000002, "start": 1699, "text": " tends to do poorly in the linear probe setting but then it catches up during" }, { "end": 1710.6000000000001, "start": 1704.5600000000002, "text": " fine-tuning. In C410 almost being at the level of the autoregressive and in" }, { "end": 1717.4399999999998, "start": 1710.6, "text": " in ImageNet actually outperforming it. This darker thing here it" }, { "end": 1722.32, "start": 1717.4399999999998, "text": " simply means that you average across different maskings of BERT because I" }, { "end": 1728.36, "start": 1722.32, "text": " guess even in classification it's not entirely clear how to get a signal out" }, { "end": 1733.6399999999999, "start": 1728.36, "text": " of BERT because they don't do this CLS vector with BERT. 
What they do for" }, { "end": 1740.24, "start": 1733.6399999999999, "text": " classification and linear probing and that's written up here, they simply take" }, { "end": 1746.28, "start": 1740.24, "text": " the average pooling of" }, { "end": 1752.92, "start": 1746.28, "text": " all the representations of the sequence. And the last thing that I've" }, { "end": 1762.84, "start": 1752.92, "text": " also forgotten, there's a lot of stuff, when they fine-tune, while fine-tuning" }, { "end": 1769.52, "start": 1762.84, "text": " the classification loss yields reasonable" }, { "end": 1774.04, "start": 1769.52, "text": " downstream performance, we find empirically that the joint objective, the" }, { "end": 1778.58, "start": 1774.04, "text": " generative objective and the classification objective works even" }, { "end": 1784.52, "start": 1778.58, "text": " better. So even when you fine-tune with this model you have to keep the" }, { "end": 1790.8, "start": 1784.52, "text": " generative modeling part, the generative loss around and then it performs even" }, { "end": 1799.4, "start": 1790.8, "text": " more better, more well, whatever that word is. So that's also something to think" }, { "end": 1805.44, "start": 1799.4, "text": " about. I think this paper right here it kind of lays down a lot of cool" }, { "end": 1811.64, "start": 1805.44, "text": " things that you can think about and it gives rise to a lot of hypotheses of how" }, { "end": 1816.6000000000001, "start": 1811.64, "text": " does this stuff work, why does this stuff work. I don't even think that the" }, { "end": 1822.0400000000002, "start": 1816.6000000000001, "text": " numbers are the most important thing, it's mostly the fact of the effects and" }, { "end": 1831.24, "start": 1822.04, "text": " what does it mean. Okay, so this was my take on it. It's more kind of a my" }, { "end": 1837.36, "start": 1831.24, "text": " rant of what I find special about this paper than about the actual paper. You" }, { "end": 1841.28, "start": 1837.36, "text": " can look at the paper, their numbers are pretty good. On ImageNet they do not" }, { "end": 1847.8, "start": 1841.28, "text": " reach the same like super-duper performance as they do on C410 and I" }, { "end": 1852.72, "start": 1847.8, "text": " guess that's probably because they have to downscale the ImageNet images way" }, { "end": 1856.2, "start": 1852.72, "text": " more than they have to downscale the C410 images because those are of course" }, { "end": 1863.46, "start": 1856.2, "text": " only 32 by 32. So because they have to downscale so much they lose probably a" }, { "end": 1869.12, "start": 1863.46, "text": " lot of information and I would be interested to see if there is a way to" }, { "end": 1876.72, "start": 1869.12, "text": " involve convolutions in all of this. So to do the downscaling that in a" }, { "end": 1880.72, "start": 1876.72, "text": " learned manner with convolutions or something. I'm sure this has all been" }, { "end": 1886.08, "start": 1880.72, "text": " done already, I'm just lazy to look it up. Yeah, so I invite you to look at their" }, { "end": 1891.96, "start": 1886.08, "text": " blog post where they have these samples. They look pretty funny and these" }, { "end": 1897.8, "start": 1891.96, "text": " full samples up here look fairly cool for what it's trained to do" }, { "end": 1901.44, "start": 1897.8, "text": " and that it has no spatial awareness whatsoever. It simply uses learned" }, { "end": 1907.8400000000001, "start": 1901.44, "text": " position encodings. 
And yeah, check it out, that was it from me. Bye bye." } ]
YPfUiOMYOEE
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
BYOL: Bootstrap Your Own Latent: A New Approach to Self-Supervised Learning (Paper Explained)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "deepmind", "ucl", "representation", "moco", "momentum contrast", "simclr", "encoder", "augmentation", "mixup", "randaugment", "crop", "random crop", "jitter", "flip", "unsupervised", "self-supervised", "cnn", "resnet", "latent", "contrastive", "online", "target", "exponential moving average", "negatives" ]
Self-supervised representation learning relies on negative samples to keep the encoder from collapsing to trivial solutions. However, this paper shows that negative samples, which are a nuisance to implement, are not necessary for learning good representation, and their algorithm BYOL is able to outperform other baselines using just positive samples. OUTLINE: 0:00 - Intro & Overview 1:10 - Image Representation Learning 3:55 - Self-Supervised Learning 5:35 - Negative Samples 10:50 - BYOL 23:20 - Experiments 30:10 - Conclusion & Broader Impact Paper: https://arxiv.org/abs/2006.07733 Abstract: We introduce Bootstrap Your Own Latent (BYOL), a new approach to self-supervised image representation learning. BYOL relies on two neural networks, referred to as online and target networks, that interact and learn from each other. From an augmented view of an image, we train the online network to predict the target network representation of the same image under a different augmented view. At the same time, we update the target network with a slow-moving average of the online network. While state-of-the art methods intrinsically rely on negative pairs, BYOL achieves a new state of the art without them. BYOL reaches 74.3% top-1 classification accuracy on ImageNet using the standard linear evaluation protocol with a ResNet-50 architecture and 79.6% with a larger ResNet. We show that BYOL performs on par or better than the current state of the art on both transfer and semi-supervised benchmarks. Authors: Jean-Bastien Grill, Florian Strub, Florent Altché, Corentin Tallec, Pierre H. Richemond, Elena Buchatskaya, Carl Doersch, Bernardo Avila Pires, Zhaohan Daniel Guo, Mohammad Gheshlaghi Azar, Bilal Piot, Koray Kavukcuoglu, Rémi Munos, Michal Valko Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://discord.gg/4H8xxDF BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Hello there! Today we're looking at Bootstrap Your Own Latent, a new approach to self-supervised learning by researchers of DeepMind and Imperial College. Almost no day goes by where we don't hear about some sort of new self-supervised algorithm. On a high level, this paper tries to get rid of the negative samples that are usually necessary when doing the contrastive loss for self-supervised learning. They basically combine Momentum Contrast (MoCo) and SimCLR and then remove the negative samples. That seems to work pretty well, even though it's magic. So yeah, if you want to see how it's done, stick around, and share the video out if you want other people to see how it's done. Leave a comment: with this one I really don't get what's going on, so if you have ideas, put them there. I'll read through them. It'll be fun. Alright, so they say: we introduce Bootstrap Your Own Latent, or BYOL, a new approach to self-supervised image representation learning. Image representation learning is the simple task of taking an image and feeding it through a function, which is usually a neural network. In fact, the community has sort of standardized this; most of the time it's something like a ResNet-50. So what you want to do is train a neural network like a ResNet-50 to give you a good representation of the image. Call that representation h. It's a vector, a representation of this image, and it should be such that you can then take it and solve many tasks with it: either you put a linear classifier on top of h, or you fine-tune the entire architecture to solve some other task. The idea is that if you have a large dataset, you can use it to train these good representations of images, and then you can transfer them to a task where you might not have as much data. Because you don't have as much data there, it's not enough to completely train an architecture like this from scratch, but it is enough to take an architecture that's been trained on the large dataset and just adapt it to your small dataset. That usually tends to work pretty well. This is called transfer learning, and the adaptation step is sometimes called fine-tuning. It's sort of the approach that comes from natural language processing, from the big transformers like BERT, where you first train on a really big dataset that might not be the dataset you want in the end, but since it's really big you can learn a lot of things from it, and then the only thing left to do is to fine-tune, to adapt to the nuances of your dataset; it will have learned most things already. That's called representation learning: the goal is to learn a good representation. The self-supervised part is also important, because representation learning can be as easy as this: if this here is ImageNet, a dataset that contains about a million images, all with labels, you can simply train your ResNet-50 to predict the class. This is called supervised pre-training, or supervised representation learning, and it works pretty well, but you need a labeled dataset. In self-supervised learning you do not need labels. What you do instead is self-supervision, and there are many ways to do self-supervision, but what we'll see in this particular paper is that you take an image and make different variants of that same image. You take the image and make many, many variants of it; let's just say two. You have some procedure that changes the picture a little bit while it stays essentially the same image, and you do that through data augmentation. This could be a random crop, a color jitter, a rotation, or something like this. Then you exploit the fact that you know these two things should still be sort of the same image: once you send them through your encoder, the representations of the two images should be fairly close.
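To make that concrete, here is a minimal sketch of the two-view augmentation step in PyTorch. The specific transforms and their magnitudes are illustrative assumptions on my part, not the paper's exact pipeline.

```python
# Sketch: produce two randomly augmented "views" of the same image.
# The choice and strength of each transform here is an assumption
# for illustration, not the exact recipe from the paper.
from torchvision import transforms
from PIL import Image

augment = transforms.Compose([
    transforms.RandomResizedCrop(224),           # random crop, then resize
    transforms.RandomHorizontalFlip(),           # random left-right flip
    transforms.ColorJitter(0.4, 0.4, 0.4, 0.1),  # brightness/contrast/saturation/hue
    transforms.ToTensor(),
])

def two_views(img: Image.Image):
    # Two independent draws from the same augmentation distribution.
    return augment(img), augment(img)
```

The key point is that both views come from the same underlying image, so it is reasonable to ask their representations to agree.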
Now let's actually read on right here. BYOL relies on two neural networks, referred to as online and target networks, that interact and learn from each other. From an augmented view of an image, we train the online network to predict the target representation of the same image under a different augmented view. That's sort of what we saw: we have the same image under a different augmented view. What does that mean? You make two slightly different versions of the same image, and then their representations should be close. Until this point, we have always thought that this setup would degenerate. If you think of the neural network that does this encoding to the hidden space, this ResNet-50 right here: if you simply want to make the two representations close, what's the best thing it can do? It can simply implement a constant function, h equals zero or something like this. Just a constant function. Because then this loss here is always going to be zero. Perfect. No matter what image comes in, if you always map it to the same thing, you will always be close in representation space, and therefore you always win. But that doesn't learn a good representation. So what people have done is include so-called negative samples, where you say: I'll take a different image from this dataset, a different image than this one. I also do some data augmentation with that image, and then I send it through the same encoder to also give me an h. So this is the h, let's call it the original h. This is h plus, because it's the same image but slightly differently augmented. And this is h minus, which is a different image. Now the task is: make those first two very similar to each other, but distance them from this other one. We want h minus to be as far away as possible and the other two to be close to each other. Now the network can't simply map everything to a constant function anymore. It needs to actually do something to make these close together and this one far apart.
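To see concretely how negatives rule out that constant solution, here is a hedged sketch of an InfoNCE-style contrastive loss of the kind SimCLR uses, with the other images in the batch acting as negatives. This is the baseline idea that BYOL removes, not BYOL itself, and the temperature value is an assumption.

```python
# Sketch of an InfoNCE-style contrastive loss (the part BYOL gets rid of).
# h and h_plus hold representations of two views of the same batch of
# images; every other image in the batch serves as a negative sample.
import torch
import torch.nn.functional as F

def info_nce(h: torch.Tensor, h_plus: torch.Tensor, temperature: float = 0.1):
    h = F.normalize(h, dim=1)              # (batch, dim), unit length
    h_plus = F.normalize(h_plus, dim=1)
    logits = h @ h_plus.t() / temperature  # similarity of every pair in the batch
    # Diagonal entries are the positive pairs; off-diagonals are negatives.
    targets = torch.arange(h.size(0), device=h.device)
    return F.cross_entropy(logits, targets)
```

Note that a constant encoder makes every row of the logits uniform, so this loss gets stuck at log(batch size) instead of going to zero; that is exactly how the negatives block the trivial solution.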
The combination of this with the augmentation procedure that goes into augmenting the images has been a good combo to learn good representations. A lot of papers have alluded to the fact that the negative samples are there to avoid this degeneracy, these trivial solutions, but the fact that the representation is then actually good, good for image tasks down the line, probably comes from these augmentations right here. There's a lot of evidence that depending on which augmentations we choose, these representations are going to be better or worse. For example, random cropping of an image, taking a random sub-crop from the image, tends to be very, very beneficial. So here this is the same image twice. Let's say we take a random crop here and one up here, and maybe there's an overlap here in the middle. The network sort of needs to understand that these random crops need to communicate between these two places. The representation has to somehow make sure that the object that is overlapping here is represented, but it can't represent it just as a pixel value, because it doesn't know where the crops came from. So there's a lot of evidence that these augmentations are the thing that's responsible for making the representations so good. Okay, now this paper simply says: do we really need these negative samples right here? Let's just get rid of them. And with a couple of tricks this seems to work. And this is what seems like magic to me, because as we go forward, think of it: nothing keeps this model from settling on the degenerate solution, h equals constant. Nothing. Yet for some reason it doesn't do that. And I have the feeling that this is a super delicate balance, because when you start out training, the network is probably not the constant function; it's probably some distribution. The constant function certainly is an optimal solution of this loss, but you might be in some sort of local minimum once you start training, and you simply don't get out of it during training. Since the network updates itself in very small incremental steps, it has an easier time actually going for the good representation than finding that degenerate solution and converging to it. But yeah, it seems delicate. So what are they doing? They are taking that idea of an input image right here. And by the way, why is it important that there are no negative samples? Because with negatives the question is always: where do you get these negative samples from? Should they be uniformly sampled? Should we keep a buffer? Should we order them? There is this task of hard negative mining, where you say any old negative won't do; it's actually better if we take negatives that are just hard enough. There are curriculum learning problems, and so on. So it would be best to actually just get rid of these negative things, and that's why we want to get rid of them. So that's the approach: BYOL, Bootstrap Your Own Latent. There is the input image. You take one image at a time and apply two different random augmentations to it, so you create two slightly different variants of that image through augmentation. Again, this can be something like a random crop, a random horizontal flip, color jitter, solarization, blur, and so on; there are all these variants of data augmentation. And the fact that down the line the representations of these two things have to be close to each other: I think these augmentations are responsible for making the representations powerful. Later down the line, the network has to learn to ignore them. It has to learn that it doesn't matter where in the image this object is, because the image has been randomly cropped at different locations; my hidden representation simply needs to contain this particular object. And that's what makes it powerful. Okay, I've said that enough now. So you have these two slightly different versions, and then you map them through your encoder. Let's go along the top path first. You see the bottom path has the same encoder, but the parameters are different, and this is going to be one of the crucial elements right here.
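As a sketch of what "same encoder, different parameters" means in code, under the assumption that the encoder is a standard ResNet-50, the target network starts out as a non-trainable copy of the online network:

```python
# Sketch: the online network is trained by gradient descent, while the
# target network is a copy of it whose weights are only ever changed by
# the averaging update described next, never by backprop.
import copy
import torch.nn as nn
from torchvision.models import resnet50

online_encoder: nn.Module = resnet50()           # learned parameters
target_encoder = copy.deepcopy(online_encoder)   # same architecture, lagging weights
for p in target_encoder.parameters():
    p.requires_grad = False                      # no gradients flow into the target
```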
So this here are your actual parameters, the ones you learn, and this here are what are called the target parameters. And you can see this for all of these components right here: the target parameters are basically a copy of the online parameters. After each step, you copy over from the online parameters to the target parameters; you never learn the target parameters, you simply copy them after each step. But you don't copy them outright. What you do is an exponential moving average, so the target parameters are always going to be a sort of lagging average of your online parameters. That idea comes from the Momentum Contrast principle, where the reasoning behind it is that you need a kind of stable representation as a target. I think it hasn't been fully explored or explained why exactly that is so helpful, but we just know that if the target is not the same as the online parameters, but rather a stable version of the past of the online parameters, then that tends to work well. Again, it's kind of the same principle as with the augmentations: with the augmentations we have two slightly different versions of the same image, and with this procedure we have two slightly different versions of the same neural network. This idea has been around for much longer, like in the first deep Q-networks (DQN) and so on, which had the same principle: the network that is actually learned, and a target network that is copied over every so many episodes. So this seems to be a fundamental principle that works. All right, so we take our two slightly differently augmented versions of the same image and run them through our two slightly different encoders to obtain two representations. Now, this thing right here is going to be our representation: after this procedure, we discard the entire rest, except that. So this here is your ResNet-50. After that follows a projection, and the projection is here to reduce the dimensionality. Honestly, I'm actually not sure why it is here, because you could do without it; technically, the algorithm doesn't require this projection, and you can imagine the algorithm without it. But just really quickly: the projection simply brings down the representation, which is 2048-dimensional as it comes out of the ResNet-50. It is a two-layer neural network that first pumps this up to 4096 dimensions and then compresses it down to 256. Okay, so that's the projection network. Again, there is a part that's learned, and the target projector is simply the exponential moving average of the online projector. Why exactly this is here? Probably simply because it works. There is no logical distinction between the projection and the representation other than the different dimensionality, because you don't have different losses; you simply backpropagate through everything and train everything. But maybe that's the point here, that you go to a different dimensionality, even though you could do the rest in this 2048-dimensional space. Yeah, so for now, let's just say this projection doesn't exist, and we just work with this representation here.
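Here is a minimal sketch of both pieces just described: the exponential-moving-average update and a projection MLP with the stated dimensions (2048 in, 4096 hidden, 256 out). The momentum value tau and the use of batch norm inside the MLP are assumptions on my part.

```python
# Sketch: EMA update of the target parameters, plus the projection head.
# tau close to 1 means the target lags slowly behind the online network.
import torch
import torch.nn as nn

@torch.no_grad()
def ema_update(online: nn.Module, target: nn.Module, tau: float = 0.99):
    for p_o, p_t in zip(online.parameters(), target.parameters()):
        p_t.mul_(tau).add_((1.0 - tau) * p_o)  # p_t <- tau * p_t + (1 - tau) * p_o

def make_projector(in_dim: int = 2048, hidden: int = 4096, out_dim: int = 256):
    # Two-layer MLP that maps the 2048-d ResNet-50 features down to 256-d.
    return nn.Sequential(
        nn.Linear(in_dim, hidden),
        nn.BatchNorm1d(hidden),
        nn.ReLU(inplace=True),
        nn.Linear(hidden, out_dim),
    )
```

You would call ema_update once after every optimizer step, so the target network trails the online network as a slowly decaying average.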
Let's call these z and z prime. So what happens is we take the representation, and now we have one more neural network, the predictor q right here, that takes the representation of one of the image versions and simply tries to predict the representation of the other image version. So what you want is that q(z) equals z prime. If we expand that, it means q(f(a(x))) should equal f_target(a'(x)), where x is the image, a and a' are two different augmentations, f is the online encoder, and f_target is the target encoder. So this makes a lot of sense: on the right side, f_target uses the target instead of the online parameters, and a' is a different augmentation, but the x is the same. So q has to somehow negate this difference in augmentation and this difference between the target and the online parameters. But you don't tell q which augmentation was used, and you don't tell q the exact parameters of the target network. So what q has to do is take its best guess: basically, q is trained to output the expected value of the representation f(a(x)) under all of the different possible image augmentations. And that's why it learns to ignore these augmentations. Your entire goal with these methods is to learn to ignore the augmentations; you want to learn some function that is independent of them. So by crafting the augmentations in a smart way, we can make these representations contain a lot of semantic information, because what we want the augmentations to do is destroy all the non-semantic information. Random cropping is one of those methods. Horizontal flipping is another, because whether an image goes left to right or right to left, most of the time the semantics are the same; the pixels are different, but the semantics are the same. So by putting an augmentation in there, the network learns to ignore that augmentation, because the representation now needs to be predictable: we train q to predict the representation under the expectation over our augmentations, and that means it can't depend on one particular augmentation. It learns to ignore it. So that's basically what's happening here. Again, there is nothing keeping this from simply collapsing to a trivial solution, and it's probably a combination of the initialization and the learning procedure itself, which goes on in little steps, one by one, that keeps it in a regime where it's easier to learn a good representation than to collapse to that solution. Okay? So again, the components: an image, which you augment differently; then you run the versions through different encoders, where the encoders are similar in the sense that one is the exponential moving average of the other; and then you try to predict one representation from the other. That ultimately makes the representation independent of the augmentations, which means the representation can only include things that are not destroyed by the augmentations. And if you construct the augmentations smartly, that means you only retain the semantic information. That's it. So the loss function is pretty simple. What you want, and these bars denote a normalization, is for the L2 distance between this normalized representation and q's prediction of the other normalized representation to be small. So q simply tries to predict the other representation, and you do that both ways: you once stick one version in here and try to predict the other, and you do it vice versa. So you get two loss components each time; it's a symmetric loss.
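Putting the pieces together, here is a hedged sketch of one forward pass and the symmetric loss just described. The module names and shapes are assumptions; note the stop-gradient on the target side, and that for unit vectors the squared L2 distance equals 2 minus 2 times the cosine similarity.

```python
# Sketch: BYOL forward pass and symmetric loss. The predictor q maps the
# online projection to a guess of the target projection; the target side
# is computed without gradients (stop-gradient).
import torch
import torch.nn.functional as F

def regression_loss(p: torch.Tensor, z: torch.Tensor) -> torch.Tensor:
    # Squared L2 distance between normalized vectors = 2 - 2 * cosine similarity.
    p = F.normalize(p, dim=1)
    z = F.normalize(z, dim=1)
    return (2 - 2 * (p * z).sum(dim=1)).mean()

def byol_step(v1, v2, online_enc, online_proj, predictor, target_enc, target_proj):
    # Online path: encoder -> projector -> predictor, for both views.
    p1 = predictor(online_proj(online_enc(v1)))
    p2 = predictor(online_proj(online_enc(v2)))
    # Target path: lagging copies of encoder and projector, no gradients.
    with torch.no_grad():
        z1 = target_proj(target_enc(v1))
        z2 = target_proj(target_enc(v2))
    # Symmetric: predict view 2's target from view 1, and vice versa.
    return regression_loss(p1, z2) + regression_loss(p2, z1)
```

After backpropagating this loss into the online parameters only, you would apply the EMA update from above to refresh the target network.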
And that's it. That's the method. And they beat all the other self-supervised methods and get pretty close to the supervised representation learning baseline. As you can see right here, as the number of parameters in their model goes up (one of them is a ResNet-50, I'm going to guess this one right here, but you can also go to bigger architectures), it appears to work even better and come even closer to the supervised baseline. This could be because a supervised method with more parameters would technically also need more labeled images, and therefore doesn't scale as well. I don't know; there is a lot of unclarity in this research. All they show is that their numbers are good, which is cool, and it's cool that you don't need the negative samples anymore and that it actually doesn't collapse when you do that kind of stuff. But there are a lot of things here. For example: we use a batch size of 4096 split over 512 TPUv3 cores; with this setup, training takes approximately eight hours for ResNet-50. So they train for eight hours on 512 TPUs. Just imagine that! That's a crazy amount of computation, again, going into these models. And then the second thing here is that you can see some numbers are missing, and there are all these annotations, which probably means they take those numbers from the respective papers. Now, they allude to the fact that they tried to follow each protocol as closely as possible, but that's never a given, or almost never, unless they release the exact code; and even then there are still going to be differences, since you'd have to replicate the exact thing on the exact same number of TPU cores and whatnot. So these numbers seem to be... I'm not sure, especially if you then go and look: at some point they actually do reproduce the SimCLR baseline. You can see right here that they have their own implementation of SimCLR, and they actually compare it to the numbers they find in the SimCLR paper. And you can see, for example, that here their implementation of SimCLR gains like four percentage points over the published one. And if you look at this supervised baseline, that's also from that paper, and there is a graph further down where they also implement their own version of the supervised baseline... I forget... here. So you can see that between the supervised numbers in that paper and their own supervised numbers, there's sometimes a giant gap for what seems to be the same model. So for all of these numbers, I'm not sure you should put too much weight on the fact that this is now outperforming the other methods; unless this is replicated very often, I would not put a lot of weight on it being better. What I would put a lot of weight on is the fact that it works at all and achieves good performance. And there is more.
They have experiments right here that show that their method, BYOL, is much more resistant to changes in hyperparameters. So here you can see that it falls off much later when you reduce the batch size, which makes sense, because SimCLR is one of these methods that uses negative samples, and for negative samples it uses the other samples in the mini-batch. If you have fewer samples in the mini-batch, you have a less representative distribution of your entire dataset as negative samples, and therefore, when you decrease the mini-batch size, SimCLR drops off. They also show that, for example, their method is much more robust to the removal of a couple of these image augmentations. All of this I find actually pretty cool. As for the actual numbers: first, I'm not super interested in whether they get one or two points more on something, but they do perform a lot of experiments, and that shows you can apply the method to different things; it's not only one setting. So that's pretty cool. It works at least as well as other methods, and it is a lot easier because you don't have this negative sample business. Now, the last quarrel I have with the paper... where is it? Somewhere they say that they release the code... no, they release the pseudocode. They don't release the code; they release the pseudocode in the appendix. I mean, there are reasons why you sometimes want to release pseudocode, namely if an algorithm is so high-level, so simple, and so modular that fleshing it out on paper makes more sense. But here it's pseudocode in JAX, and come on, is it really that competitively advantageous to retain your code? It's just not reproducible with this; you know that they have like 50 billion hacks in their code. DeepMind has this history of publishing behind paywalls and giving out pseudocode that has lots of mistakes in it; the MuZero pseudocode, for example, you can't even run in its basic form if you fill in the blanks. It's a bit annoying. In any case, the method itself seems promising for representation learning, as I said, especially because it's pretty simple. It still heavily relies on these augmentation methods, and that's what they say right here: nevertheless, BYOL remains dependent on existing sets of augmentations that are specific to vision applications. To generalize BYOL to other modalities, it is necessary to obtain similarly suitable augmentations for each of them. Designing such augmentations may require significant effort and expertise. Therefore, automating the search for these augmentations would be an important next step to generalize BYOL to other modalities. And I'm not sure you can do this, automating the search for these augmentations. I guess you can do it if you have a supervised dataset, and then you can search for augmentations there and use them for the unsupervised setting. But it seems a bit bootstrappy, no pun intended. I think the power of these representations, again, comes from the fact that the augmentations are carefully constructed. So, oh yes, the last thing: the broader impact statement. Just read this and try to estimate the perplexity of this broader impact statement. Let's go. The presented research should be categorized as research in the field of unsupervised learning.
This work may inspire new algorithms, theoretical and experimental investigation. The algorithm presented here can be used for many different vision applications, and a particular use may have both positive or negative impacts, which is known as the dual-use problem. Besides, as vision datasets could be biased, the representation learned by BYOL could be susceptible to replicating these biases. Like, come on. So, people who advocated for making everyone do this: is this what you wanted? Is this a satisfactory result for you? And if you have this as a reviewer, is this okay or not? I mean, let's just cross out some words here: "unsupervised learning"... let's just put "field", or "machine learning". Why not? Machine learning. This work may inspire new algorithms? Yes. The algorithm presented here can be used for many different machine learning applications, and a particular use may have both positive and negative effects. Besides, as datasets could be biased, the representation learned by this paper could be susceptible to replicating these biases. Well, there is a copy-paste thing that you can apparently put into any and all papers that you write from now on, and hey, DeepMind is doing it. So, you know, there you go. Okay, maybe a bit cynical, but I told you this would happen. I told you. And you know. Okay, so that was it for my comments right here. They do have a giant ton of experiments, and I appreciate that; they really try to show that it works in many different situations. And it's yet to be explained why this doesn't collapse, but apparently it doesn't. So try it out. Give it a try, and I'll see you next time. Bye bye.
[ { "end": 4.5, "start": 0, "text": " Hello there! Today we're looking at Bootstrap Your Own Latent, a new approach" }, { "end": 9.74, "start": 4.5, "text": " to self-supervised learning by researchers of DeepMind and Imperial" }, { "end": 16.740000000000002, "start": 9.74, "text": " College. So almost no day goes by where we don't hear some sort of new" }, { "end": 21.82, "start": 16.740000000000002, "text": " self-supervised algorithm right here. This paper on a high level tries to get" }, { "end": 27, "start": 21.82, "text": " rid of the necessary negative samples when doing the contrastive loss for" }, { "end": 33.08, "start": 27, "text": " self-supervised learning. They basically combine momentum contrast and" }, { "end": 38.28, "start": 33.08, "text": " same clear and then remove the negative samples. That seems to work pretty" }, { "end": 44.32, "start": 38.28, "text": " well even though it's magic. So yeah, if you want to see how it's done, stick" }, { "end": 50.22, "start": 44.32, "text": " around, share the video out. If you want other people to see how it's done, leave" }, { "end": 56.84, "start": 50.22, "text": " a comment. This one I really don't get what's going on. So if you have" }, { "end": 61.040000000000006, "start": 56.84, "text": " ideas, put them there. I'll read them through. It'll be fun." }, { "end": 68.68, "start": 61.040000000000006, "text": " Alright, so they say we introduce Bootstrap Your Own Latent or B.O.L. a new" }, { "end": 73.68, "start": 68.68, "text": " approach to self-supervised image representation learning. Image" }, { "end": 80.44, "start": 73.68, "text": " representation learning is the simple task of taking an image and then feeding" }, { "end": 84.04, "start": 80.44, "text": " it through a function which is usually like a neural network. Let's just" }, { "end": 89.52000000000001, "start": 84.04, "text": " say this is a neural network and in fact all of these, the community has sort of" }, { "end": 95.80000000000001, "start": 89.52000000000001, "text": " standardized this to be, most of the time it's something like a ResNet-50." }, { "end": 100.48, "start": 95.80000000000001, "text": " So what you want to do is you want to train a neural network like a ResNet-50" }, { "end": 106.04, "start": 100.48, "text": " to give you a good representation of the image. So this would be like H and H is a" }, { "end": 112.92, "start": 106.04, "text": " vector and H is a representation of this image and the representation should be" }, { "end": 119.2, "start": 112.92, "text": " such that you can then take this representation and solve many tasks with" }, { "end": 125.24000000000001, "start": 119.2, "text": " it, which either can be like linear, you can put a linear classifier on top of" }, { "end": 130.04, "start": 125.24000000000001, "text": " the H or you can fine-tune the entire architecture to solve some other tasks." }, { "end": 136.6, "start": 130.04, "text": " The idea is if you have a large data set here you may use this data set to train" }, { "end": 141.56, "start": 136.6, "text": " these good representations of these images and then you can transfer" }, { "end": 147.32, "start": 141.56, "text": " this to a task where you might not have as much data. 
Because you" }, { "end": 151.24, "start": 147.32, "text": " don't have as much data it's not enough to completely train an architecture like" }, { "end": 155.6, "start": 151.24, "text": " this, but it is enough to take an architecture that's been trained with" }, { "end": 160.48000000000002, "start": 155.6, "text": " the large data set and just adapt it to your small data set. That usually" }, { "end": 165.72, "start": 160.48000000000002, "text": " tends to work pretty well. This is called transfer learning. This step here" }, { "end": 172.32, "start": 165.72, "text": " is called fine-tuning sometimes and it's sort of the approach that comes from" }, { "end": 178.4, "start": 172.32, "text": " natural language processing from these big transformers like BERT where you" }, { "end": 182.16, "start": 178.4, "text": " first train on a really big data set that might not be the data set that you" }, { "end": 187.16, "start": 182.16, "text": " want in the end but it's really big so you can sort of learn a lot of things" }, { "end": 192.4, "start": 187.16, "text": " from that data set and then the only thing left to do is to fine-tune it to" }, { "end": 197.16, "start": 192.4, "text": " basically adapt it to the nuances of your data set but it will have learned" }, { "end": 200.88, "start": 197.16, "text": " most things already and that's called representation learning. The goal is" }, { "end": 208.8, "start": 200.88, "text": " to learn a good representation. The self-supervised here is also important" }, { "end": 214.96, "start": 208.8, "text": " because representation learning can be as easy as if this here is ImageNet." }, { "end": 219.54000000000002, "start": 214.96, "text": " The ImageNet data set contains like a million images all with labels. You can" }, { "end": 224.51999999999998, "start": 219.54, "text": " simply train your ResNet-50 to predict the class. This is called" }, { "end": 230.64, "start": 224.51999999999998, "text": " supervised pre-training or supervised representation learning and that works" }, { "end": 236.16, "start": 230.64, "text": " pretty well but you need a labeled data set. In self-supervised learning you do" }, { "end": 241.2, "start": 236.16, "text": " not need labels. What you do is you do self-supervision and self-supervision" }, { "end": 245.79999999999998, "start": 241.2, "text": " there are many ways to do self-supervision but what we'll" }, { "end": 253.04000000000002, "start": 245.8, "text": " see in this particular paper is that you will take an image and you'll make" }, { "end": 259.2, "start": 253.04000000000002, "text": " different variants of that same image. You'll take the image and you'll make" }, { "end": 265.36, "start": 259.2, "text": " many many variants of it. Let's just say two. You have some procedure to" }, { "end": 269.6, "start": 265.36, "text": " sort of change the picture a little bit but it's essentially still the same and" }, { "end": 275, "start": 269.6, "text": " you do that through data augmentation. This could be a random crop or you" }, { "end": 281.12, "start": 275, "text": " color jitter or you rotate it or something like this and then you exploit" }, { "end": 285.56, "start": 281.12, "text": " the fact that you know that these two things they should be still sort of the" }, { "end": 291.44, "start": 285.56, "text": " same image. Once you send them through your encoder the" }, { "end": 298.2, "start": 291.44, "text": " representations of the two images they should be fairly close. 
Now let's" }, { "end": 308.68, "start": 298.2, "text": " actually read on right here. Bjoll relies on two neural networks referred to as" }, { "end": 312.08, "start": 308.68, "text": " online and target networks that interact and learn from each other. From an" }, { "end": 316.12, "start": 312.08, "text": " augmented view of an image we train the online network to predict the target" }, { "end": 321.84, "start": 316.12, "text": " representation of the same image under a different augmented view. That's" }, { "end": 327.48, "start": 321.84, "text": " sort of what we saw. We have the same image under a different" }, { "end": 333.8, "start": 327.48, "text": " augmented view. What does it mean? You make two versions of" }, { "end": 338.28000000000003, "start": 333.8, "text": " the same image. One that are slightly different and then their representation" }, { "end": 345.04, "start": 338.28000000000003, "text": " should be close. Until this point we have always thought that this would" }, { "end": 350.64000000000004, "start": 345.04, "text": " degenerate. If you think of this neural network that does this" }, { "end": 356.08000000000004, "start": 350.64000000000004, "text": " encoding to the hidden space, this ResNet-50 right here, if you" }, { "end": 359.59999999999997, "start": 356.08, "text": " simply want to make the two representations close, what's the best" }, { "end": 365.15999999999997, "start": 359.59999999999997, "text": " thing it can do? It can simply map all the hidden, it can simply have the" }, { "end": 370.24, "start": 365.15999999999997, "text": " constant function h equals zero or something like this. Just a constant" }, { "end": 375.56, "start": 370.24, "text": " function. Because then this loss here is always going to be zero. Like perfect." }, { "end": 380.59999999999997, "start": 375.56, "text": " No matter what image comes in, if you always map it to the same thing you" }, { "end": 386.03999999999996, "start": 380.59999999999997, "text": " will always be close in representation space and therefore you always win." }, { "end": 392.12, "start": 386.04, "text": " That doesn't learn a really good representation. What people have" }, { "end": 398.76000000000005, "start": 392.12, "text": " done is they have included so-called negative samples where you'll say I'll" }, { "end": 403.8, "start": 398.76000000000005, "text": " take a different image from this data set but it's a different" }, { "end": 409.6, "start": 403.8, "text": " image than this image. I also do some data augmentation with that" }, { "end": 415.56, "start": 409.6, "text": " image and then I send this through the same encoder to also give me an h." }, { "end": 422.36, "start": 415.56, "text": " This is the h, let's call that h original. This is h plus because it's the same" }, { "end": 428.04, "start": 422.36, "text": " image but slightly differently augmented. And this is h minus which is a different" }, { "end": 436.48, "start": 428.04, "text": " image. Now the task is let's make those two very similar to each other but" }, { "end": 443.12, "start": 436.48, "text": " let's distance them from this other one. We want this to be as far away" }, { "end": 449.6, "start": 443.12, "text": " as possible and these two to be close to each other. Now the network can't simply" }, { "end": 454.08, "start": 449.6, "text": " map everything to a constant function anymore. It needs to actually do" }, { "end": 460.6, "start": 454.08, "text": " something to make these be close together and this be far apart. 
The" }, { "end": 465.08, "start": 460.6, "text": " combination of this together with the augmentation procedure that goes into" }, { "end": 470.32, "start": 465.08, "text": " augmenting the images has been sort of a good combo to learn good" }, { "end": 476.15999999999997, "start": 470.32, "text": " representations. A lot of papers have alluded to the fact that this is..." }, { "end": 481.88, "start": 476.15999999999997, "text": " The negative samples are to not have these degeneracy, so to not have" }, { "end": 487.96, "start": 481.88, "text": " the simple solutions. But the fact that the representation then is actually good," }, { "end": 493.9, "start": 487.96, "text": " like is good for image tasks down the line, probably comes from the" }, { "end": 498.88, "start": 493.9, "text": " fact of these augmentations right here. There's a lot of evidence from the" }, { "end": 503.32, "start": 498.88, "text": " fact that depending on which augmentations we choose, these" }, { "end": 508.56, "start": 503.32, "text": " representations are going to be better or worse. For example random cropping of" }, { "end": 516.88, "start": 508.56, "text": " an image, so the random sub, like taking a random crop from the image, tends to be" }, { "end": 523.88, "start": 516.88, "text": " very very beneficial. So here this is the same image twice, right? Let's" }, { "end": 529.64, "start": 523.88, "text": " say we take a random crop here and one up here. Maybe there's an" }, { "end": 536.2, "start": 529.64, "text": " overlap here in the middle, right? So it sort of needs to understand that these" }, { "end": 541.98, "start": 536.2, "text": " random crops sort of needs to communicate between these two places in" }, { "end": 548, "start": 541.98, "text": " these random crops. So the representation has to somehow make sure that the" }, { "end": 551.72, "start": 548, "text": " object that is overlapping here is somehow represented, but it can't" }, { "end": 556.72, "start": 551.72, "text": " represent it just as a pixel value, because it doesn't know where the crops" }, { "end": 562.48, "start": 556.72, "text": " come from. So there's a lot of evidence that these representations are the thing" }, { "end": 569.64, "start": 562.48, "text": " that's responsible for making the representation so good. Okay, now this" }, { "end": 575.76, "start": 569.64, "text": " paper simply says do we really need these negative samples right here? Let's" }, { "end": 583.12, "start": 575.76, "text": " just get rid of them. And with a couple of tricks this seems to work. And this" }, { "end": 589.4399999999999, "start": 583.12, "text": " is what seems like magic to me, because as we go forward, think of it," }, { "end": 597.28, "start": 589.4399999999999, "text": " nothing keeps this model right here from doing the degenerate solution" }, { "end": 605.52, "start": 597.28, "text": " h equals constant. Nothing, right? Now for some reason it doesn't do that. And I" }, { "end": 609.24, "start": 605.52, "text": " have the feeling that this is a super delicate balance that you have to do," }, { "end": 613.4399999999999, "start": 609.24, "text": " because when you train, when you start out, it's probably not the constant" }, { "end": 618.16, "start": 613.4399999999999, "text": " function, right? It's probably some distribution. And then simply by the" }, { "end": 623.16, "start": 618.16, "text": " fact that you train it and kind of keep it in the... 
So this is certainly an" }, { "end": 629.4399999999999, "start": 623.16, "text": " optimal solution, but you might be like in some sort of local minimum once you" }, { "end": 634.84, "start": 629.4399999999999, "text": " start training and you simply don't get out of it during training. And that's why" }, { "end": 641.52, "start": 634.84, "text": " the network has an easier time step by step as it updates itself in very small" }, { "end": 645.48, "start": 641.52, "text": " incremental steps. It has an easier time actually going for the good" }, { "end": 651.36, "start": 645.48, "text": " representation than it has to see this solution right here and converge to that." }, { "end": 660.72, "start": 651.36, "text": " But yeah, it seems delicate. So what are they doing? They are taking that idea of" }, { "end": 667, "start": 660.72, "text": " taking an input image right here. And so by the way, why is it important that" }, { "end": 671.12, "start": 667, "text": " there are no negative samples? Because now the question is always, oh, where do" }, { "end": 675.28, "start": 671.12, "text": " you get these negative samples from? Right? Should they be uniformly sampled?" }, { "end": 680.28, "start": 675.28, "text": " Should we keep a buffer? Should we order them? There is this task of hard negative" }, { "end": 684.72, "start": 680.28, "text": " mining where you say, oh, any old negative won't do. It's actually better if we take" }, { "end": 690.36, "start": 684.72, "text": " negatives that are, you know, just hard enough. There is a curriculum" }, { "end": 694.84, "start": 690.36, "text": " learning problems and so on. So it would be best to actually just get rid of these" }, { "end": 700.32, "start": 694.84, "text": " negative things. So that's why we want to get rid of them. So that's the approach." }, { "end": 708, "start": 700.32, "text": " BYOL. Bootstrap your own latent. There is the input image. You take one image at a" }, { "end": 714.8000000000001, "start": 708, "text": " time and you apply two different random augmentations to it. Right? So you create" }, { "end": 720.8399999999999, "start": 714.8, "text": " two slightly different variants of that image through augmentation. And again," }, { "end": 725.56, "start": 720.8399999999999, "text": " this can be something like a random crop. It can be a horizontal flip randomly." }, { "end": 731.8399999999999, "start": 725.56, "text": " You color jitter, you solarize, you blur, and so on. There are all these variants of" }, { "end": 740.7199999999999, "start": 731.8399999999999, "text": " data augmentation. And the fact that down the line, the representation of" }, { "end": 746.5600000000001, "start": 740.72, "text": " these two things has to be close to each other. I think these random, these" }, { "end": 756.6800000000001, "start": 746.5600000000001, "text": " augmentations here are responsible to make the representations powerful." }, { "end": 761.4, "start": 756.6800000000001, "text": " The fact that later down the line, the network has to sort of learn to ignore" }, { "end": 767.24, "start": 761.4, "text": " these. It has to learn that, oh, you know, it doesn't matter where in the image this" }, { "end": 770.92, "start": 767.24, "text": " object is, because it's been random cropped for different, you know, at" }, { "end": 776.24, "start": 770.92, "text": " different locations. It doesn't matter where in the image this object is. 
I" }, { "end": 780.08, "start": 776.24, "text": " simply need to have my hidden representation have this particular" }, { "end": 784.92, "start": 780.08, "text": " object in the image. And that's what makes it powerful. Okay, I've said that" }, { "end": 790.36, "start": 784.92, "text": " enough now. Then you have these two slightly different versions. And then you" }, { "end": 795.96, "start": 790.36, "text": " map it through your encoder. Okay, let's go the top path first. You see the bottom" }, { "end": 800.2800000000001, "start": 795.96, "text": " path has the same encoder, but the parameters are different. And this is" }, { "end": 805.9200000000001, "start": 800.2800000000001, "text": " going to be one of the crucial elements right here. So this here are your actual" }, { "end": 810.8000000000001, "start": 805.9200000000001, "text": " parameters that you learn. And this here are what are called the target" }, { "end": 816.9200000000001, "start": 810.8000000000001, "text": " parameters. Now after each, and you can see this for all of these components" }, { "end": 821.52, "start": 816.9200000000001, "text": " right here. So what happens is that the target parameters are basically a copy" }, { "end": 826.92, "start": 821.52, "text": " of these what's what are called the online parameters. Okay, so after each" }, { "end": 832.52, "start": 826.92, "text": " step, you copy over from the online parameters, you copy over to the target" }, { "end": 836.76, "start": 832.52, "text": " parameters, you never learn the target parameters, you simply copy them after" }, { "end": 841.76, "start": 836.76, "text": " each step. Now you don't copy them outright, what you do is you do an" }, { "end": 846.88, "start": 841.76, "text": " exponential moving average. So the target parameters are always going to be sort" }, { "end": 852.76, "start": 846.88, "text": " of a lagging average of your online parameters. And that idea comes from the" }, { "end": 860, "start": 852.76, "text": " momentum contrast principle, where the reasoning sort of behind it is that you" }, { "end": 867.48, "start": 860, "text": " need a kind of a stable you kind of need a stable representation as a target. But" }, { "end": 874.92, "start": 867.76, "text": " I think it hasn't been fully explored or explained why exactly that is so helpful." }, { "end": 882.28, "start": 874.92, "text": " But we just know that if if we have the target to be not the same as the the" }, { "end": 887.5999999999999, "start": 882.28, "text": " online parameters, but actually a kind of a stable version of the past of the" }, { "end": 892.28, "start": 887.5999999999999, "text": " online parameters, then that tends to work well. Again, it's kind of the same" }, { "end": 896.64, "start": 892.28, "text": " principle as with the augmentations. With the augmentations, we have two" }, { "end": 901.7199999999999, "start": 896.76, "text": " different versions of the same image. And now with this procedure here, we sort of" }, { "end": 906.4, "start": 901.72, "text": " have two different versions of the same neural network, but they're slightly" }, { "end": 913.76, "start": 906.4, "text": " different, right. This idea has been around for much longer, like the first" }, { "end": 918.76, "start": 913.76, "text": " queue, deep queue networks, and so on. 
They had the same principles where they had" }, { "end": 922.88, "start": 918.76, "text": " the the network that they actually learned and then the target network that" }, { "end": 927.9200000000001, "start": 922.88, "text": " is copied over every such and such episodes, and so on. So this, this seems" }, { "end": 934.52, "start": 927.92, "text": " to work seems to be a fundamental principle that seems to work. All right, so" }, { "end": 940.5999999999999, "start": 934.52, "text": " we take our two slightly different augmented versions of the same image, and" }, { "end": 947.0799999999999, "start": 940.5999999999999, "text": " we run them through our two slightly different encoders to obtain two" }, { "end": 952.24, "start": 947.0799999999999, "text": " representations. Now this thing right here, that's going to be our representer." }, { "end": 960, "start": 952.24, "text": " So after this procedure, we discard the entire thing right here, except that. So" }, { "end": 966.64, "start": 960, "text": " this here is your whatever your ResNet 50. Okay, after that follows a projection." }, { "end": 975.08, "start": 966.64, "text": " And the projection is is here to reduce the dimensionality. And honestly, I'm" }, { "end": 980.64, "start": 975.08, "text": " actually not sure why it is here. Because you can do it without, like" }, { "end": 986.12, "start": 980.64, "text": " technically, the algorithm doesn't require this projection. So you can" }, { "end": 989.6, "start": 986.12, "text": " imagine the algorithm without the projection. But just really quickly, the" }, { "end": 995.84, "start": 989.6, "text": " projection simply brings down the representation, which is like 2048" }, { "end": 1000.84, "start": 995.84, "text": " dimensional that comes out of the ResNet 50. It has, it is a two layer neural" }, { "end": 1008.96, "start": 1000.84, "text": " network that first pumps this up to like 4092, and then compresses it down to 256" }, { "end": 1015.24, "start": 1008.96, "text": " dimensions. Okay, so that's the projection network. Again, there is a part that's" }, { "end": 1019.8000000000001, "start": 1015.24, "text": " learned and then the target projector is simply the exponential moving average of" }, { "end": 1027.1200000000001, "start": 1019.8000000000001, "text": " the online projector. But again, this is why exactly this is here, probably" }, { "end": 1035.3600000000001, "start": 1027.1200000000001, "text": " simply because it works, right? But probably because there is no" }, { "end": 1039.1999999999998, "start": 1035.36, "text": " distinction because you don't have different losses, you simply back propagate" }, { "end": 1043, "start": 1039.1999999999998, "text": " through everything and then train everything. So there is no logical" }, { "end": 1047.08, "start": 1043, "text": " distinction between the projection and the representation other than you have a" }, { "end": 1051.84, "start": 1047.08, "text": " different dimensionality. But maybe that's the point here that you make a" }, { "end": 1056.9599999999998, "start": 1051.84, "text": " different dimensionality, even though you could you could do the rest in this" }, { "end": 1064.04, "start": 1056.9599999999998, "text": " 2048 space. Yeah, so for now, just this doesn't exist. Let's just say this" }, { "end": 1070.04, "start": 1064.04, "text": " doesn't exist. And we just work with this representation here. Let's call this Z, Z" }, { "end": 1077.1599999999999, "start": 1070.04, "text": " prime. 
Okay, so what happens is we take the representation. And now we have one" }, { "end": 1084.8, "start": 1077.1599999999999, "text": " neural network, the predictor right here, that takes the representation of one of" }, { "end": 1090.36, "start": 1084.8, "text": " the image versions. And it simply tries to predict the representation of the" }, { "end": 1099.52, "start": 1090.36, "text": " other image version. So what you want is that q of z equals z prime. Okay, and if" }, { "end": 1113.6, "start": 1099.52, "text": " we expand that is that q of f of z is equal to f target of z prime. And if we" }, { "end": 1121.36, "start": 1113.6, "text": " expand that even further, you can see that q, I'll just write q and f for now" }, { "end": 1132.12, "start": 1121.36, "text": " q of f of a, which is an augmentation at an augmentation of z should be one" }, { "end": 1141.4399999999998, "start": 1132.12, "text": " bracket two bracket three bracket should be f of a of z sorry not see that's the" }, { "end": 1153.24, "start": 1141.44, "text": " image x. Alright, so this makes a lot of sense. You're simply with q. Since these" }, { "end": 1158.1200000000001, "start": 1153.24, "text": " are all different here, so f is the target instead of the online parameters," }, { "end": 1163.0800000000002, "start": 1158.1200000000001, "text": " a is also different, it's a different augmentation that you do, but the x is the" }, { "end": 1171.52, "start": 1163.08, "text": " same. Okay, so the queue simply tries to somehow negate this augmentation and this" }, { "end": 1176.4399999999998, "start": 1171.52, "text": " difference between the target and the online parameters. But you don't tell the" }, { "end": 1181.72, "start": 1176.4399999999998, "text": " queue which augmentation was used. And you don't tell the queue what are the" }, { "end": 1187.6399999999999, "start": 1181.72, "text": " exact parameters of that network. So what the queue has to do is it has to" }, { "end": 1194.96, "start": 1187.64, "text": " somehow it's like it's like it has to take its best guess, right? So basically" }, { "end": 1201.8000000000002, "start": 1194.96, "text": " the queue is trained to output the expected value of the representation" }, { "end": 1213.48, "start": 1201.8000000000002, "text": " right the expected value of the representation f of a of x under all of" }, { "end": 1220.24, "start": 1213.48, "text": " the different possible image augmentations. And that's why it learns" }, { "end": 1224.32, "start": 1220.24, "text": " to ignore these augmentations. So your entire goal with these methods is you" }, { "end": 1230.32, "start": 1224.32, "text": " learn to ignore these augmentations. So you want to learn some method that is" }, { "end": 1235.52, "start": 1230.32, "text": " independent of the augmentations. So by crafting the augmentations in a smart" }, { "end": 1241.24, "start": 1235.52, "text": " way, we can make these representations contain a lot of semantic information," }, { "end": 1244.2, "start": 1241.24, "text": " because what we want to do with the augmentation is basically we want to" }, { "end": 1249.52, "start": 1244.2, "text": " destroy all the non-segmented information. So non-semantic information." }, { "end": 1254.1200000000001, "start": 1249.52, "text": " And random cropping is one of those methods. 
Horizontal flipping is one of" }, { "end": 1258.28, "start": 1254.1200000000001, "text": " those methods, because we say, well, whether an image goes left to right or" }, { "end": 1262.48, "start": 1258.28, "text": " right to left, most of the time the semantics are the same. The pixels are" }, { "end": 1267.24, "start": 1262.48, "text": " different, but the semantics are the same. So by putting an augmentation in there," }, { "end": 1273.76, "start": 1267.24, "text": " we learn to ignore that augmentation, because our representation now needs to" }, { "end": 1283.08, "start": 1273.76, "text": " be predictable. We learn Q to predict the representation under the" }, { "end": 1288.84, "start": 1283.08, "text": " expectation of our augmentations. And that means it can't be dependent on one" }, { "end": 1296, "start": 1288.84, "text": " particular augmentation. It learns to ignore it. So that's basically what's" }, { "end": 1301.84, "start": 1296, "text": " happening here. Again, there is nothing keeping this from simply collapsing it" }, { "end": 1309.44, "start": 1301.84, "text": " to a trivial solution. And it's probably a combination of the initialization and" }, { "end": 1314.56, "start": 1309.44, "text": " the learning procedure itself, that it goes on in little, little steps, one by" }, { "end": 1320.16, "start": 1314.56, "text": " one, that keeps it in the realm of rather having to... Like it's easier to learn a" }, { "end": 1328.1200000000001, "start": 1320.16, "text": " good representation than it is to collapse to that solution. Okay? So again," }, { "end": 1333.5600000000002, "start": 1328.1200000000001, "text": " components is image, then you augment differently, then you run it through" }, { "end": 1337.6000000000001, "start": 1333.5600000000002, "text": " different encoders, but the encoders are similar in the fact that one is the" }, { "end": 1343.24, "start": 1337.6000000000001, "text": " exponential moving average of the other. And then you try to predict one from the" }, { "end": 1350.2, "start": 1343.24, "text": " other. And that ultimately makes the representation be independent of the" }, { "end": 1354.64, "start": 1350.2, "text": " augmentation. And that means that the representation can only include things" }, { "end": 1359, "start": 1354.64, "text": " that are not destroyed by the augmentations. And if you construct the" }, { "end": 1365.76, "start": 1359, "text": " augmentations smartly, that means you only retain the semantic information." }, { "end": 1371.84, "start": 1365.76, "text": " That's it. So the loss function is pretty simple. As you can see right here, what" }, { "end": 1375.72, "start": 1371.84, "text": " you want is, and this bar is a normalization, what you want is the L2" }, { "end": 1382.4399999999998, "start": 1375.72, "text": " norm between this representation be close to the Q of that" }, { "end": 1388.3999999999999, "start": 1382.4399999999998, "text": " representation. So the Q simply tries to predict the other representation. And you" }, { "end": 1393.8, "start": 1388.3999999999999, "text": " do that for both ways. So you once stick the image in here and try to predict the" }, { "end": 1398.28, "start": 1393.8, "text": " other one, and you do it vice versa. So you get two loss components each time." }, { "end": 1405.72, "start": 1398.28, "text": " It's a symmetric loss. And that's it. That's the method. 
And they beat all the" },
{ "end": 1410.76, "start": 1405.72, "text": " other self-supervised methods, and they get pretty close to the supervised" },
{ "end": 1416.96, "start": 1410.76, "text": " representation learning method. As you can see right here, as the" },
{ "end": 1421.24, "start": 1416.96, "text": " number of parameters goes up in their model, so one of them is ResNet-50, but" },
{ "end": 1425.24, "start": 1421.24, "text": " I'm gonna guess this one right here. But you can also go to higher" },
{ "end": 1431.84, "start": 1425.24, "text": " architectures, and then it appears to work even better and come even closer to" },
{ "end": 1436.48, "start": 1431.84, "text": " this supervised baseline. This could be because if you have more" },
{ "end": 1440.52, "start": 1436.48, "text": " parameters technically in a supervised method, you would also need more labeled" },
{ "end": 1446.48, "start": 1440.52, "text": " images maybe, and therefore it doesn't scale as well. I don't know. There is a" },
{ "end": 1451.64, "start": 1446.48, "text": " lot of unclarity in this research. All they show is that their numbers are" },
{ "end": 1456.5600000000002, "start": 1451.64, "text": " good, which is cool, right? And it's cool that you don't need the" },
{ "end": 1461.48, "start": 1456.5600000000002, "text": " negative samples anymore, and it actually doesn't collapse when you do that kind" },
{ "end": 1468.2, "start": 1461.48, "text": " of stuff. But there's a lot of, I don't know, there's a lot of things here. For" },
{ "end": 1479.72, "start": 1468.2, "text": " example, we use a batch size of 4096 split over 512 TPUv3 cores. With this" },
{ "end": 1484.76, "start": 1479.72, "text": " setup, training takes approximately eight hours for ResNet-50. So they train eight" },
{ "end": 1494.6000000000001, "start": 1484.76, "text": " hours on 512 TPUs. Just imagine that! So that's sort of crazy amount of" },
{ "end": 1498.76, "start": 1494.6000000000001, "text": " computation, again, going into these models. And then the second thing here is" },
{ "end": 1502.68, "start": 1498.76, "text": " that you can see that there are some things missing right here, and there are" },
{ "end": 1507.16, "start": 1502.68, "text": " all these annotations, which probably means that they take these numbers" },
{ "end": 1515.3600000000001, "start": 1507.16, "text": " from those papers. Now, they allude to the fact that they try to follow their" },
{ "end": 1521.0800000000002, "start": 1515.3600000000001, "text": " protocol as closely as possible, but I mean, that's never given." },
{ "end": 1528.28, "start": 1521.0800000000002, "text": " Or almost never, unless they release the exact code, and even then there" },
{ "end": 1533.24, "start": 1528.28, "text": " are still going to be differences. You'd have to replicate the" },
{ "end": 1542.04, "start": 1533.24, "text": " exact thing on the exact same number of TPU cores and whatnot. So I highly" },
{ "end": 1549, "start": 1542.04, "text": " like these numbers seem to be... I'm not sure, especially if you then go and look," },
{ "end": 1555.8, "start": 1549, "text": " and at some point they actually do reproduce the SimCLR baseline. So you" },
{ "end": 1561.16, "start": 1555.8, "text": " can see right here that they have their own implementation of SimCLR, and they" },
{ "end": 1566.2, "start": 1561.16, "text": " actually compare this to the numbers that they find in the SimCLR paper. And" },
{ "end": 1571.5600000000002, "start": 1566.2, "text": " you can see, for example, here there's like four percentage points that" },
{ "end": 1577.72, "start": 1571.5600000000002, "text": " their implementation of SimCLR gains above this implementation. And if you" },
{ "end": 1582.8400000000001, "start": 1577.72, "text": " look at this supervised baseline, that's also from that paper. And there is a" },
{ "end": 1590.1200000000001, "start": 1582.8400000000001, "text": " graph further down where they also implement their own version of the..." },
{ "end": 1596.52, "start": 1590.12, "text": " their own version of the supervised baseline. I forget... here. So you can see" },
{ "end": 1601.6399999999999, "start": 1596.52, "text": " that between the supervised in that paper and the supervised of them," },
{ "end": 1610.28, "start": 1601.6399999999999, "text": " sometimes there's like a giant gap right here for the same model, it seems. So all" },
{ "end": 1615.12, "start": 1610.28, "text": " of these numbers, I'm not sure you should put too much weight on the fact" },
{ "end": 1621.7199999999998, "start": 1615.12, "text": " that this is now outperforming the other methods. I would not put... like unless" },
{ "end": 1626.9199999999998, "start": 1621.7199999999998, "text": " this is like super duper replicated very often, I would not put a lot of weight on" },
{ "end": 1631.6799999999998, "start": 1626.9199999999998, "text": " the fact that it is better. What I would put a lot of weight on is the fact that" },
{ "end": 1638.6399999999999, "start": 1631.6799999999998, "text": " it works at all and achieves, you know, good performance. And there is more. They" },
{ "end": 1644.12, "start": 1638.6399999999999, "text": " make... they have like experiments right here that show that their method, the" },
{ "end": 1651.12, "start": 1644.12, "text": " BYOL, is much more resistant to like changes in hyperparameters. So here you" },
{ "end": 1656.7199999999998, "start": 1651.12, "text": " can see that it falls off much later when you reduce the batch size, which" },
{ "end": 1661.1599999999999, "start": 1656.7199999999998, "text": " makes sense, right? Because SimCLR is one of these methods that uses negative" },
{ "end": 1666.36, "start": 1661.1599999999999, "text": " samples. And for negative samples, it uses the other samples in the mini batch. Now" },
{ "end": 1670.1599999999999, "start": 1666.36, "text": " if you have fewer samples in the mini batch, that means you have a less" },
{ "end": 1675.3200000000002, "start": 1670.16, "text": " representative distribution of your entire data set as negative samples. And" },
{ "end": 1681.1200000000001, "start": 1675.3200000000002, "text": " therefore, if you decrease the mini batch, then this drops off. And also" },
{ "end": 1687.8000000000002, "start": 1681.1200000000001, "text": " they show that, for example, their method is much more robust to the removal of a" },
{ "end": 1695.3600000000001, "start": 1687.8000000000002, "text": " couple of these image augmentations. So all of this I find actually pretty cool." },
{ "end": 1701.8799999999999, "start": 1695.36, "text": " But the actual numbers here... first, I'm not super duper interested that they" },
{ "end": 1708, "start": 1701.8799999999999, "text": " get like two or one points more in something, but they do perform like a lot" },
{ "end": 1715.56, "start": 1708, "text": " of experiments. And that... it shows that you can apply the method to different" },
{ "end": 1720.28, "start": 1715.56, "text": " things. It's not only like in one setting, so that's pretty cool. It works at least..." },
{ "end": 1727.36, "start": 1720.28, "text": " you can say it works at least as well as other methods. And it is a lot easier" },
{ "end": 1732.48, "start": 1727.36, "text": " because you don't have these negative sample things. Now the last quarrel I" },
{ "end": 1744.2, "start": 1732.48, "text": " have with the paper, and where is it? Where is it? Somewhere they say that we" },
{ "end": 1750.04, "start": 1744.2, "text": " release the code... they release the pseudo code. They don't release the code." },
{ "end": 1756.8799999999999, "start": 1750.04, "text": " They release the pseudo code in the appendix. So I mean, there are reasons why" },
{ "end": 1761.28, "start": 1756.8799999999999, "text": " you sometimes want to release pseudo code. And that's if an algorithm is so" },
{ "end": 1767.52, "start": 1761.28, "text": " high level and so simple in its high-levelness and so modular to be fleshed" },
{ "end": 1774.8, "start": 1767.52, "text": " out that you can't... like it makes more sense. But here it's like pseudo code in" },
{ "end": 1783.84, "start": 1774.8, "text": " JAX. And come on... is it really that competitively advantageous to retain" },
{ "end": 1788.76, "start": 1783.84, "text": " your code? It's just not reproducible with this. You know that they" },
{ "end": 1795.6399999999999, "start": 1788.76, "text": " have like 50 billion hacks in their code. And yeah, so DeepMind has this history of" },
{ "end": 1800.76, "start": 1795.6399999999999, "text": " just not releasing... like publishing behind paywalls and just giving pseudo" },
{ "end": 1805.52, "start": 1800.76, "text": " code that has lots of mistakes in them. Like the MuZero pseudo code, you can't" },
{ "end": 1812.52, "start": 1805.52, "text": " even like run it in its basic form if you fill in the things. It's a bit" },
{ "end": 1818.72, "start": 1812.52, "text": " annoying. In any way, the method itself seems promising for representation" },
{ "end": 1823.08, "start": 1818.72, "text": " learning, as I said, especially because it's pretty simple. It still heavily" },
{ "end": 1828.08, "start": 1823.08, "text": " relies on these augmentation methods. So and that's what they say right here." },
{ "end": 1832.8, "start": 1828.08, "text": " Nevertheless, BYOL remains dependent on existing sets of" },
{ "end": 1837.6399999999999, "start": 1832.8, "text": " augmentations that are specific to vision applications. To generalize BYOL to" },
{ "end": 1843.12, "start": 1837.6399999999999, "text": " other modalities, it is necessary to obtain similarly suitable augmentations" },
{ "end": 1847.3999999999999, "start": 1843.12, "text": " for each of them. Designing such augmentations may require significant" },
{ "end": 1850.36, "start": 1847.3999999999999, "text": " effort and expertise. Therefore automating the search for these" },
{ "end": 1854.08, "start": 1850.36, "text": " augmentations would be an important next step to generalize BYOL to other" },
{ "end": 1859.6, "start": 1854.08, "text": " modalities. And I'm not sure if you can do this automating the search for these" },
{ "end": 1864.4399999999998, "start": 1859.6, "text": " augmentations. I guess you can do it if you have like a supervised data set and" },
{ "end": 1867, "start": 1864.4399999999998, "text": " then you can search and then you can use those augmentations for the" },
{ "end": 1871.6799999999998, "start": 1867, "text": " unsupervised. But it seems a bit bootstrap-y, no pun intended right here. I" },
{ "end": 1877.1999999999998, "start": 1871.6799999999998, "text": " think the power of these representations again comes from the" },
{ "end": 1885.72, "start": 1877.2, "text": " fact that we have these augmentations carefully constructed. So oh yes, the last" },
{ "end": 1890.4, "start": 1885.72, "text": " thing broader impact statement. Just read this like try to estimate the" },
{ "end": 1895.1200000000001, "start": 1890.4, "text": " perplexity of this broader impact statement. Let's go. The presented" },
{ "end": 1899.64, "start": 1895.1200000000001, "text": " research should be categorized as research in the field of unsupervised" },
{ "end": 1905.92, "start": 1899.64, "text": " learning. This work may inspire new algorithms, theoretical and experimental" },
{ "end": 1910.64, "start": 1905.92, "text": " investigation. The algorithm presented here can be used for many different" },
{ "end": 1915.88, "start": 1910.64, "text": " vision applications and a particular use may have both positive or negative" },
{ "end": 1922.16, "start": 1915.88, "text": " impacts, which is known as the dual use problem. Besides as vision data sets" },
{ "end": 1927.76, "start": 1922.16, "text": " could be biased, the representation learned by BYOL could be susceptible to" },
{ "end": 1934.76, "start": 1927.76, "text": " replicate these biases. Like come on. So people who advocated for making everyone" },
{ "end": 1940.12, "start": 1934.76, "text": " do this. Is this what you wanted? Is this like is this a satisfactory result for" },
{ "end": 1946.28, "start": 1940.12, "text": " you? And if you have this as a reviewer, is this okay or not? I mean let's just" },
{ "end": 1953.64, "start": 1946.28, "text": " cross out some words here. Blank, like field, let's just put field. Or" },
{ "end": 1959, "start": 1953.64, "text": " machine learning. Why not? Machine learning. Machine learning. This work" },
{ "end": 1963.04, "start": 1959, "text": " inspire new algorithms? Yes. The algorithm presented here can be used for many" },
{ "end": 1967.96, "start": 1963.04, "text": " different machine learning applications and a particular use may have both negative" },
{ "end": 1974.24, "start": 1967.96, "text": " effects. Besides as data sets could be biased, the representation learned by this" },
{ "end": 1982.96, "start": 1974.24, "text": " paper could be susceptible to replicate these biases. Well there is a copy-paste" },
{ "end": 1987.56, "start": 1982.96, "text": " thing that you can apparently put into any and all papers that you write from" },
{ "end": 1994.12, "start": 1987.56, "text": " now on. And hey DeepMind is doing it. So you know, there you go. Okay maybe a bit" },
{ "end": 2000.6399999999999, "start": 1994.12, "text": " cynical but I'm like I told you this would happen. I told you. And you know." },
{ "end": 2008, "start": 2000.6399999999999, "text": " Okay so that was it for my comments right here. They do have like a giant ton of" },
{ "end": 2012.6399999999999, "start": 2008, "text": " experiments and I appreciate that right. 
They really try to show that it works in" }, { "end": 2018.8400000000001, "start": 2012.64, "text": " many different situations and yeah yet to solve why this doesn't collapse but" }, { "end": 2043.12, "start": 2018.84, "text": " apparently it doesn't. So try it out. Give it a try and I'll see you next time. Bye bye." } ]
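To make the BYOL objective walked through in the transcript above concrete, here is a minimal sketch in PyTorch-style Python. It is an illustration under stated assumptions, not the paper's released code: the names `online_encoder` (backbone plus projection MLP), `target_encoder`, `predictor`, and the momentum value `tau` are all hypothetical.

```python
import torch
import torch.nn.functional as F

def byol_loss(q_pred, z_target):
    # Normalized L2 distance, equivalent to 2 - 2 * cosine similarity.
    q_pred = F.normalize(q_pred, dim=-1)
    z_target = F.normalize(z_target, dim=-1)
    return (2 - 2 * (q_pred * z_target).sum(dim=-1)).mean()

def training_step(x, augment, online_encoder, target_encoder, predictor, tau=0.996):
    v1, v2 = augment(x), augment(x)  # two random views of the same batch

    # Online path: representation -> projection -> prediction.
    q1 = predictor(online_encoder(v1))
    q2 = predictor(online_encoder(v2))

    # Target path: no gradients; its weights are an EMA of the online weights.
    with torch.no_grad():
        z1 = target_encoder(v1)
        z2 = target_encoder(v2)

    # Symmetric loss: predict each view's target projection from the other view.
    loss = byol_loss(q1, z2) + byol_loss(q2, z1)

    # EMA update of the target network (in practice done after the optimizer step).
    for p_t, p_o in zip(target_encoder.parameters(), online_encoder.parameters()):
        p_t.data = tau * p_t.data + (1 - tau) * p_o.data

    return loss
```

Note that no negative samples appear anywhere: the only things standing between this objective and the constant-function collapse discussed in the transcript are the stop-gradient on the target path and the lagging EMA weights.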
sEG8hD64c_Q
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
TUNIT: Rethinking the Truly Unsupervised Image-to-Image Translation (Paper Explained)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "image translation", "style transfer", "unsupervised", "clustering", "self-supervised", "cnn", "convolutional neural networks", "gan", "generative adversarial network", "generator", "encoder", "discriminator", "conditional", "style", "pseudo-label", "augmentation", "cropping" ]
Image-to-Image translation usually requires corresponding samples or at least domain labels of the dataset. This paper removes that restriction and allows for fully unsupervised image translation of a source image to the style of one or many reference images. This is achieved by jointly training a guiding network that provides style information and pseudo-labels. OUTLINE: 0:00 - Intro & Overview 1:20 - Unsupervised Image-to-Image Translation 7:05 - Architecture Overview 14:15 - Pseudo-Label Loss 19:30 - Encoder Style Contrastive Loss 25:30 - Adversarial Loss 31:20 - Generator Style Contrastive Loss 35:15 - Image Reconstruction Loss 36:55 - Architecture Recap 39:55 - Full Loss 42:05 - Experiments Paper: https://arxiv.org/abs/2006.06500 Code: https://github.com/clovaai/tunit Abstract: Every recent image-to-image translation model uses either image-level (i.e. input-output pairs) or set-level (i.e. domain labels) supervision at minimum. However, even the set-level supervision can be a serious bottleneck for data collection in practice. In this paper, we tackle image-to-image translation in a fully unsupervised setting, i.e., neither paired images nor domain labels. To this end, we propose the truly unsupervised image-to-image translation method (TUNIT) that simultaneously learns to separate image domains via an information-theoretic approach and generate corresponding images using the estimated domain labels. Experimental results on various datasets show that the proposed method successfully separates domains and translates images across those domains. In addition, our model outperforms existing set-level supervised methods under a semi-supervised setting, where a subset of domain labels is provided. The source code is available at this https URL Authors: Kyungjune Baek, Yunjey Choi, Youngjung Uh, Jaejun Yoo, Hyunjung Shim Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://discord.gg/4H8xxDF BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Hi there! Today we'll look at Rethinking the Truly Unsupervised Image-to-Image Translation by Kyungjune Baek, Yunjey Choi, Youngjung Uh, Jaejun Yoo, and Hyunjung Shim. So in this paper we'll deal with image-to-image translation in an unsupervised fashion. On a high level, they replace the need for domain or really single-image label annotations in image-to-image translation by training a guiding network that is able to sort of do a self-clustering of the image domain, and that then guides the image-to-image translation instead of the previously needed labels. I myself don't know too much about image-to-image translation and style transfer and all of this stuff. This has always been kind of a mystery to me and we'll try to make as much sense as possible out of this paper if you're with me. I might not get everything right but I will give my best of course. As always, if you like content like this, consider sharing it out and leaving a like and a comment. I do read the comments, so I get a good idea of what you have to say about it. Cool, so what we're seeing here is an example of image-to-image translation, like a sort of style transfer. What you'll have on the left is a source image. The goal is to translate this source image to a different domain while sort of keeping the features of the image the same. And here is where I'm always confused, because here it's like we keep the pose of the cat the same, okay, so we sort of keep the same cat, but we want to change its style, which means its breed in this particular case. So on the top you can see that the domain images come in these different groups, and in fact it's not only those four, but the entire data set is split into these different groups, and among these different groups you have some sort of a shared style. Now this shared style is what you would like to transfer to the source image. So if you transfer the style of all of these cats right here, which all seem to be sort of ginger cats, to this instance right here, what you'll end up with is a cat, okay, it was ginger before. Might not be the best example, but you sort of get what I mean: the thing that you transfer is whatever is common among these domain images, and that's what I guess explains why the pose of the cat stays the same, because the model is basically taught to keep the image the same except to transfer whatever is common among the images in the domain class. And that's image-to-image transfer, or translation. Now until this paper, at least that's what the paper claims, these image-to-image translation models required labels, and why is that?
That's because you need to know how to build these domains here at the top to get these different style vectors out, or you actually would need label annotations for each single image: you would need to know which one of the source corresponds to which one of the target. So they have a graphic right here where they explain the different stages that image-to-image translation went through historically. First you'd have to have corresponding images, one to one, where you'd say, okay, here is an example of a sketch of a shoe and here is the corresponding shoe, here is the sketch of another shoe and here is the corresponding shoe, and so on. And from that you could learn a model that translates from one domain to the other, because you have corresponding image-level annotations of which image corresponds to which, so basically which element of domain A corresponds to which element in domain B. Then the next stage of this was when you only need set-level annotations, and that's sort of what we looked at: if you had supervised labels for domains. So what you'll say is that there are three domains, A, B, and C, and actually let's forget C for a moment and just deal with A and B to make it equivalent to the thing on the left. Now I just know that these things are instances of class A and these things are instances of class B, yet there's no correspondence, right? There is no "this corresponds to this" or something like this. So image-to-image translation is now possible between domains when I just have domain-level labels. But this is still expensive: collecting these labels, you know, is like collecting labels for a supervised data set, a human needs to look at each image and then conclude what sort of domain it is. Their paper introduces the following, where you do not have domains anymore, you simply have a data set X. Now for this data set, your hypothesis is that there are still going to be domains in the data set. They can, I guess, be overlapping or not, but there are still going to be domains, you just don't know what they are. So in this case, I guess you could differentiate these people in many, many different ways, but in essence you're going to assume that there is some kind of a domain structure, you just don't know what it is. But if you knew what it was, then you could simply apply methods from here to the data set and you'd be done. Now their paper shows that if you apply something like a self-clustering approach, and we've seen these approaches before in the paper about learning to classify images without labels, if you have techniques like this, you can do a self-clustering approach on this data set X right here, and then you could learn your image-to-image translation. Yet this paper shows that if you do that, the quality is not as good as if you do both things jointly. So what this paper does is it jointly learns to cluster, let's say to self-label the images, and to do this image-to-image translation, and by doing the tasks jointly they help each other perform better. Okay, that's the general overview. So how do they do this? They have three different parts to their model: there is the encoder, or they call this the guiding network, there is the generator, and there is the discriminator. The generator and the discriminator are fairly standard GAN, so generative adversarial network, generators and discriminators, but they have a bit of a twist.
You can already see from the drawings right here that the discriminator is probably the easiest. The discriminator gets an image, either a generated image or a real image, and it needs to decide: you can see right here, the input is an image and the output is a number, it needs to decide if it's real or fake. Now in fact it's not as easy, because you can see there are these multiple heads right here. So this whole thing, as I said, is built on this kind of pseudo-clustering approach. There is this pseudo-label that comes out of the left side, we're going to look at that in a second, but in essence you assume that there are multiple classes, multiple domains, in the data set, and the discriminator here has one classification head for each of those classes. So from somewhere outside it will get the information: oh, this is now supposed to be one of those ginger cats, right, as opposed to one of those black and white cats or one of the brown-haired cats, no, it's one of the ginger cats. And then there is a special head on top of the classifier that only classifies fake from real ginger cats, which is a different classifier from the other domains. So the discriminator is sort of a conditional discriminator, conditioned on a label. From the discriminator's point of view, it's simply a label-conditioned discriminator discriminating between real and fake. And how you train the discriminator is: you would give an image, you would let this encoder here, this guiding network, label the image (how we come up with this label we'll look at in a second, but this just gives a label), and then for that particular label you'd classify the image into real or fake. Now about this shared part right here: you could also think of having one discriminator per class, but the shared part gives you some shared features and so on. It's not necessary; the point is that there is a discriminator per class, it's class-conditional. So what about the generator? I guess the most complex part is this encoding network right here. It's E for encoder, I guess, but they also call it the guiding network. What this does is it'll take an image, any image, and it will output two things: one is a label and one is a style code. The label is supposed to be a number between zero and K minus one, so that's supposed to be a class label. And how do you know how many classes there are if there are no labels? You just guess, and your best bet is to slightly over-guess. So if you expect there to be between 10 and 15 classes, maybe put K to 20. You don't want to under-guess, but you can over-guess, though not by too much of course. So you have to have this estimation of how many classes there are. But then this E simply comes up with a class label, and it also comes up with the style code. Now these two things are going to go down different pathways in this network: the label goes directly to the discriminator, right, the generator does not see the label; the style code does not go to the discriminator but goes to the generator. So of the two outputs of the encoder, one goes to the discriminator, which is the label, and one goes to the generator, which is the style. The generator, lastly, takes a source image and it takes this style code right here. A minimal sketch of how one example flows through all three parts follows below.
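The following PyTorch-style sketch is purely illustrative: the interfaces of `E`, `G`, and `D`, the shape conventions, and the guess `K = 20` are assumptions for the sake of the example, not the authors' actual code.

```python
import torch

K = 20  # guessed number of pseudo-domains (slightly over-estimated on purpose)

def forward_pass(x_src, x_ref, E, G, D):
    # Guiding network: one head gives class logits, one head gives a style code.
    logits_ref, style_ref = E(x_ref)        # logits: (B, K), style: (B, style_dim)

    # Argmax + detach: the pseudo-label is just an integer per image,
    # and no gradient flows back along this path.
    pseudo_label = torch.argmax(logits_ref, dim=1).detach()

    # Generator: source image plus reference style; it never sees the label.
    # Gradients DO flow back from G into style_ref, and thus into E's style head.
    x_fake = G(x_src, style_ref)

    # Discriminator: K real/fake heads, one per pseudo-domain; pick the head
    # that belongs to the reference image's pseudo-label.
    all_heads = D(x_fake)                   # (B, K) real/fake scores, one per head
    score = all_heads.gather(1, pseudo_label.unsqueeze(1))
    return x_fake, score
```

The asymmetry in the comments is the crucial design point from the paper: the label path is detached while the style path is not, which is why the encoder needs a separate, non-adversarial objective to learn sensible labels.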
Now the style code encapsulates, as we said, the style of the reference image. So the style is supposed to be whatever makes this domain of images the same. The way we're going to train this is that the style is going to describe somehow all the images that are from this label. Whatever the style is, it's very hard to explain; if we look at the loss, it becomes clearer why the things are how they are. So the generator takes the style code and it takes the source image, it combines them, and its task is to output this generated image. As you can see in this example, the generated image is basically this cat, but with the style of the reference image. It outputs an image, and the discriminator of course is then tasked with differentiating whether that image is real or fake for the given label over here. Okay, so this is the entire thing, and you train this all jointly. You jointly train the encoder to produce these class labels and the styles, you train the generator to take in the styles and the source images and output the generated image to fool the discriminator, and the discriminator at the same time is trained to differentiate between real and fake images based on the label that the encoder gives. Very, very convoluted and complicated, but there are a few things that make it easier. First of all, as you can see here, the pseudo-label is argmaxed and detached. So the pseudo-label really is a number, and there is no gradient backpropagation along this line, okay? That makes it a lot easier. So what we first need is a way to train the encoder to come up with suitable class labels, even though it doesn't get any backpropagation signal into that part of its network. That's where we start with the loss functions. The way we're going to do this is the following: we're going to take an image and we're going to take a randomly augmented version, so for example a random crop or a horizontal flip and so on. Now we bring in ideas from self-supervision, and again, if you watch the video on learning to classify images without labels, this is one of their main staples: these self-supervised approaches really tend to learn representations that allow you to self-cluster. Now in that paper they go further and they do this nearest-neighbor thing; in this paper they just do sort of the first step of this self-clustering, which I guess makes it such that you could potentially improve this paper by applying the other paper, but who knows. So we're going to take an image and we're going to augment it, okay, so that means we're going to random crop it or change its luminance or whatnot, so we have two versions of the same image. And what we want to maximize is the mutual information, not between the images themselves, but between their class distributions: p is going to be this output of the encoder, so x goes into the encoder, the encoder outputs the style and the class label, and p here is going to be the class distribution. So this is going to be like a histogram over classes, from which we're going to sample the label, call it y-hat, but p itself is the distribution over output classes. Since we don't have a label, we can't train this distribution in a supervised way. So what we'll say instead is: we want to maximize the mutual information between the output distribution of the image and the output distribution of the augmented image. A sketch of this objective follows below.
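Here is one possible rendering of that objective in PyTorch-style Python, assuming `logits` and `logits_aug` are the class-head outputs of E for a batch and its augmented copies. It follows the H(p) - H(p|p+) decomposition described in the transcript; the paper's exact estimator (an IIC-style joint-distribution formulation) may differ in detail.

```python
import torch
import torch.nn.functional as F

def mutual_info_loss(logits, logits_aug, eps=1e-8):
    p = F.softmax(logits, dim=1)          # (B, K) per-image class distributions
    p_aug = F.softmax(logits_aug, dim=1)

    # H(p): entropy of the mean class distribution over the batch.
    # High when the clusters are used evenly across images -> maximize.
    p_marg = p.mean(dim=0)
    h_marginal = -(p_marg * (p_marg + eps).log()).sum()

    # H(p | p+): an image and its augmentation should get the same labeling.
    # Approximated here by the cross-entropy between the two distributions -> minimize.
    h_conditional = -(p_aug * (p + eps).log()).sum(dim=1).mean()

    # Maximize MI = H(p) - H(p | p+), i.e. minimize its negative.
    return -(h_marginal - h_conditional)
```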
That mutual information entails the following two quantities: there's the entropy of p, and there's the conditional entropy of p given p-augmented. First of all, it means we want to maximize the entropy of p, and that's supposed to be over the entire data set, so this is the entropy over the entire data set X. What it means is that we want different x's, so if there's x1, x2, x3 and so on, we want those to have different distributions in labels. If the entropy of the distribution p is really high, that means that different images get assigned to different classes, something like this. If this is low, that would mean all the images basically get assigned to the same class, and that's not good. Since we don't have labels, our classifier is basically a clusterer, and we want our clusterer to sort of fill the space of possible clusters with the images. So that's the first thing: to maximize the mutual information, we need to maximize this entropy. And then second, since this is a minus here, we need to minimize the conditional entropy of p given p-augmented. What does that mean? That means if we know the augmented version of an image, its class labeling should be the same as the un-augmented version. So if I now take one of these x's, say x1, and make an augmented version x1-plus, right, then that shouldn't really change its class label, so that should sort of keep the class labeling. This is horrible to draw, but the idea here is that it's kind of reverse thinking from supervised learning. In supervised learning, we have the label, like this image is class five, and our thinking about these augmentation techniques is: if I random crop an image or if I change its colorization a little bit, the class is not going to change, right? An airplane in front of a blue sky is still an airplane in front of a bit bluer sky, so I assume that it'll still have the same label. Here I don't have the label, but what I can require is: whatever you output for the image, it should be the same for the augmented image. So these two objectives are enough to give you sort of a rough clustering of the output space: maximize the entropy, minimize the conditional entropy between two versions of the same image. Okay, that's how we train this pseudo-labeling approach. So now we have a model that can give a label to each image, very cool. So how do we train the other parts? There are additional losses here, so we'll go over them. This style part also has to be trained, right? This encoder outputs a labeling, we got that covered, and it outputs a style part. Now the style part, if you can see from the graphic, it actually goes into, let me erase some of that stuff here, the style part actually is down here and it feeds into the generator. And luckily they write "detach" here, and since they don't write "detach" anywhere here, that means that we do get gradient backpropagation from the generator to the style code. So that means our encoder here is trained to help the generator with its task of fooling the discriminator. But first of all, we're going to forget about that for now. What we're going to do is simply look at a loss that they impose on the style. They don't have to impose that loss, but they have an additional loss on the style codes for the encoder, in addition to the fact that there is gradient backpropagating from G. So the second loss we're going to look at is the style loss, which is a contrastive loss; a sketch of it follows below, and the detailed walk-through continues after.
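Here is a sketch of that style loss in PyTorch-style Python, in the spirit of MoCo's queue-based InfoNCE. The function name, the temperature value, and the queue handling are assumptions for illustration, not the authors' code; `s` and `s_plus` are style codes of an image and its augmented version, and `queue` holds style codes of previously seen images acting as negatives.

```python
import torch
import torch.nn.functional as F

def style_contrastive_loss(s, s_plus, queue, temperature=0.07):
    s = F.normalize(s, dim=1)               # (B, D) style codes of current images
    s_plus = F.normalize(s_plus, dim=1)     # (B, D) style codes of augmented versions
    negatives = F.normalize(queue, dim=1)   # (N, D) style codes of other, older images

    l_pos = (s * s_plus).sum(dim=1, keepdim=True)   # (B, 1) pull these together
    l_neg = s @ negatives.t()                       # (B, N) push these apart

    logits = torch.cat([l_pos, l_neg], dim=1) / temperature
    labels = torch.zeros(s.size(0), dtype=torch.long, device=s.device)
    # (N+1)-way classification: the augmented version must win against the queue.
    return F.cross_entropy(logits, labels)
```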
So how do we train the other parts? There are additional losses here, so let's go over them. The style part also has to be trained: the encoder outputs a labeling, we've got that covered, and it outputs a style part. Now the style part, as you can see from the graphic, feeds into the generator, and notably they write detach on the label path but nowhere on the style path, which means we do get gradient back propagation from the generator to the style code. So our encoder is trained to help the generator with its task of fooling the discriminator. But we'll forget about that for now and simply look at a loss they impose on the style. They don't have to impose it, but they have an additional loss on the style codes for the encoder, in addition to the gradient back propagating from G. So the second loss we're going to look at is the style loss, which is a contrastive loss. What you do is this: you have your data set of images, and you take images, or batches of images, out and train your network on them one after the other. For this to work, you build up a queue of images that you have already looked at. The queue can be, let's say, ten long, and you always throw out the oldest element and enqueue the newest, so when you're done with the current image, you put it into the queue, load your next image, and so on. What does this mean? You now always have a queue of other images, and it's not important what they are, as long as they are others, because now we're going to compare ourselves with others, and that's this contrastive loss right here. So the style loss is going to be a contrastive loss between this and this, where the bottom part, these are the other images. What are the individual quantities? s is the style code of the image you're considering right now; s-plus, you could have already guessed, is the style code of the augmented image. So we had our image x; let's again say x1, x2, x3 are different images. We put x1 through the encoder, which gives us the style s (it also gives us the class label, but now we care about the head that gives the style code). We augment x1 to x1-plus, put that through the encoder, and that gives us s-plus. And we also put all of these other images, the ones we looked at previously (the only thing that matters is that they are other images), through the encoder, and they give us the s-minus-i, in this case three and two. Now what we require is that the style code of our image is closer to the style code of its augmented version. Same principle again: we say these augmentations don't really change anything about the style. This argument is a bit more wonky, but if you think about it, random crops and random flips don't really change anything about, say, the fur color of a cat. So we want those two to be closer together than s is to any of these other images. This is a contrastive loss: you pull together two things that you think should be close, and you push apart things that you think should be far away from each other. So this style loss basically guarantees that you have a distinct style for each image that is robust to the kinds of transformations you apply under augmentation. Notably, this style loss doesn't care about the domain: it's per image, and you don't know whether these other images are from the same domain or from different ones. That's why the style is basically individual to the image, though as we're going to see, the style does capture something of the domain as well. But this loss right here says: each image has a style. Quoting the paper: this N-plus-one-way classification enables E to utilize not only the similarity of the positive pair but also the dissimilarity of the negative pairs, where the negative style codes are stored in a queue of previously sampled images; they observe that adding this objective significantly improves unsupervised classification accuracy on animal faces compared to their previous over-clustering approach.
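In code, this queue-based contrastive loss might look like the following MoCo-style sketch with cosine similarities; the temperature value and function name are my assumptions, not the paper's:

```python
import torch
import torch.nn.functional as F

def style_contrastive_loss(s, s_pos, queue, temperature=0.07):
    # (N+1)-way contrastive loss: pull the style code s of an image toward
    # s_pos (the style of its augmented version) and push it away from the
    # style codes of previously seen images stored in `queue`.
    # shapes: s (B, D), s_pos (B, D), queue (K, D)
    s = F.normalize(s, dim=1)
    s_pos = F.normalize(s_pos, dim=1)
    queue = F.normalize(queue, dim=1)
    l_pos = (s * s_pos).sum(dim=1, keepdim=True)   # (B, 1) positive logit
    l_neg = s @ queue.t()                          # (B, K) negative logits
    logits = torch.cat([l_pos, l_neg], dim=1) / temperature
    # the positive always sits at index 0, so the "class" target is zero
    labels = torch.zeros(s.size(0), dtype=torch.long, device=s.device)
    return F.cross_entropy(logits, labels)
```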
Okay, so we have two outputs now, and we move on to the adversarial loss. The question is: how do we train the generator and the discriminator? They have three different losses for the generator and the discriminator, and the most important one is of course this adversarial loss right here. The discriminator simply tries to distinguish whether an image is real or fake, conditioned on a class. In the case of a real image, and that's this line right here, it tries to distinguish real from fake based on y, where y is x fed to the encoder, which gives you a label, and that label selects the head of the discriminator. While the discriminator tries to distinguish real from fake in these two lines, the generator tries to fool it. If you've never seen a GAN loss: the upper part here is the real data and the bottom part is the fake data; the discriminator tries to distinguish real from fake, the generator tries to fool the discriminator, so both the generator and the discriminator actually use this loss, just with different signs in front of it. Since the generator is not involved in the top line, you can usually leave that away for the generator, because there is no backprop path through it; and there is also no backprop path here, because we detach the graph right there, so no gradient signal goes to the encoder via the label. So this bottom line, what does it mean? The generator takes in an image and the style s-tilde, where s-tilde comes from x-tilde, the reference image, going through the encoder. So x-tilde is the reference image whose style you want, and x is the source image. The generator is supposed to take the source image, apply the style from the reference image, and generate, I don't even know what to call this, x-fake, and that's supposed to fool the discriminator. Now the question is: which discriminator head? Because you need a label for the conditional discriminator. For the real image this is pretty easy, because it's simply the label of that image. However, as you can see, the generator learns to translate x to the target domain while reflecting the style code s-tilde, so y-tilde is going to be the label that the encoder assigns to the reference image, and that's what goes here. All right, so, recap: what we put into the discriminator is, one time, a real image, like we do up here, and we get its label from the encoder (the encoder gets us a label for each image, very cool). We also take the same image, put it through the generator, and task the generator with transferring the style of another image onto it; we get the style from the encoder, the generator makes an image, we feed that to the discriminator, and the discriminator discriminates assuming it comes from class y-tilde. Now, you see right here, the generator never has access to y-tilde, so the generator is kind of at a disadvantage: the discriminator gets told what kind of image it is in terms of class, while the generator, because it needs to fool the discriminator, needs to come up with an image of that class, but it has no idea of the class; it only has the style code. So it is forced to learn to associate a style with a particular class.
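A hedged sketch of this class-conditional adversarial loss, assuming the multi-head discriminator returns one real/fake logit per pseudo-class; the non-saturating logistic form and all names are my choices, and the paper may use a different GAN loss variant:

```python
import torch
import torch.nn.functional as F

def d_adv_loss(disc, x_real, y_real, x_fake, y_fake):
    # disc(x) is assumed to return shape (B, K): one logit per class head.
    # We index the head belonging to the label the encoder produced.
    idx_r = torch.arange(x_real.size(0), device=x_real.device)
    idx_f = torch.arange(x_fake.size(0), device=x_fake.device)
    out_real = disc(x_real)[idx_r, y_real]           # head of the real class
    out_fake = disc(x_fake.detach())[idx_f, y_fake]  # head of the reference class
    return F.softplus(-out_real).mean() + F.softplus(out_fake).mean()

def g_adv_loss(disc, x_fake, y_fake):
    # The generator only touches the fake line, with the sign flipped.
    # Note y_fake is used only to select the head, never as a generator input.
    idx = torch.arange(x_fake.size(0), device=x_fake.device)
    return F.softplus(-disc(x_fake)[idx, y_fake]).mean()
```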
And that's how you get the domain into the style. That's why the style can capture something like the fur color of the different cat breeds: the generator is forced to take the style that the encoder gives and map it to an image of the class y-tilde, which the encoder also produces but doesn't tell the generator. And in fact there is more, because you now back propagate the loss to the encoder, which means the encoder will even help the generator: it will help it by making style codes that are very class specific. Now you might ask: why not just have one output, why doesn't the encoder simply output the label also as the style, since that would be the easiest? The reason is that we have different losses on the style and the label; otherwise that would be a valid tactic. So that's the adversarial loss, the most important one. There are also additional losses that they add on top for the generator. They say: in order to prevent a degenerate situation where the generator ignores the style code and synthesizes a random image in the domain y-tilde, we impose a style contrastive loss on the generator. So there's still the danger that the generator simply produces a valid image from the data set, or even from the domain y-tilde, though I don't know how it would know y-tilde, or maybe I've just missed something; in my mind it doesn't get the y-tilde, but it could read it from the style. In any case, the danger is that it ignores the style. I'm slightly confused by this part, but maybe looking at the loss will clear it up. So: we impose a style contrastive loss on the generator. This is almost the same loss as the one we imposed on the encoder. For the generator, there is again a contrastive loss where you want these things to be close and these things to be far apart. The s-minuses are going to be the style codes of the images from your queue, so just other images; s-tilde is the style you get from your reference image going through the encoder. Now the question is: what is s-prime? Because before, we simply had s, which was our source image's style. s-prime, and here it gets more complicated, is the round trip through the encoder. If I generate my image from the source image x and the style s-tilde of the reference, and then ask my encoder what style this generated image has, I get s-prime. So it's a round trip: I take the reference, ask the encoder what style it is, that's s-tilde; then I take s-tilde, go to the generator together with the source image x, and that gives me x-fake; then I ask my encoder again what style it would assign to the fake image I just produced, and it gives me s-fake, or s-prime in this case; and then I compare that s-prime with the s-tilde from before. So it's sort of a round-trip loss for my reference image. And what does that do? We require that s-prime be close to s-tilde, which means that if I generate an image with the style of my reference image, the resulting image should better have the style of the reference image.
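The round trip is compact in code. This sketch reuses the hypothetical style_contrastive_loss and encoder interface from the sketches above, so those definitions are assumed to be in scope:

```python
def g_style_loss(encoder, generator, x_src, s_ref, queue, temperature=0.07):
    # Round trip: generate with the reference style, re-encode the fake
    # image, and require the recovered style s_prime to be closer to s_ref
    # than to any of the queued style codes.
    x_fake = generator(x_src, s_ref)
    _, s_prime, _ = encoder(x_fake)  # encoder returns (logits, style, label)
    return style_contrastive_loss(s_prime, s_ref, queue, temperature)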
That's all it says: the style of the thing I generate, given this style, should better be close to that style, and especially closer than to the style of any other image in my queue. It makes sense, but it's kind of convoluted; it's basically a reconstruction loss, except in style space. And then the last thing is an actual image reconstruction loss. What you do is have your generator produce an image from the source image and its own style, and that's important: before, we input s-tilde here, but now we input the source image together with its own style. So we go with x to the encoder, we put that style here, and we tell the generator: if I input the source image and its own style, then what you give me back better be the source image itself. This is a consistency loss that teaches the generator to recognize an image together with its own style, sort of, because otherwise it doesn't know that what's coming in here is the style of the image x, but now you teach it. I think without this loss you'd have a good chance that the styles would just be all over the place: they would sort of be consistent, but they would not be aligned, and with this you force that the style of an image itself, if you put that into the generator, leads back to that image itself. Okay, that's it for the individual losses.
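This one is the simplest of the bunch; a minimal sketch, again assuming the hypothetical encoder and generator interfaces from above, and with L1 as my choice of pixel-wise distance:

```python
def image_recon_loss(encoder, generator, x):
    # Consistency: the generator fed a source image together with that
    # image's *own* style code should reproduce the image itself.
    _, s_own, _ = encoder(x)
    x_rec = generator(x, s_own)
    return (x_rec - x).abs().mean()  # L1 reconstruction error
```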
This whole thing is extremely convoluted. The discriminator is the easiest: it's a class-conditional discriminator that gets the label from some mechanism that decides on a label. The encoder has two parts: the pseudo label, which is trained completely unsupervised, detached from everything else, in a self-clustering approach; and the style part, which is trained first of all in a contrastive way, which makes sense, and also in a back-propagated way from the generator, so the style generation mechanism tries to help the generator. That means it's going to leak some information about the label into the style, because that helps the generator: if the generator knows what sort of class it's supposed to produce, it's going to do better, so you can count on that information being in there. But also, because of all the other losses the generator has and the contrastive loss on the style, the style code is going to describe the individual style of an image and at the same time what the style of its class is, because it technically needs to contain information about the class. And that's why I think this works via the style, because there is no inherent notion of, say, the pose of a cat; it still seems like a bit of magic to me. The generator, then, is first of all trained to fool the discriminator given a source image and a style, and you can fool the discriminator by producing an image that's so good it looks real, specifically real in the class that the pseudo label has given, i.e. the class the encoder has assigned. So the generator must somehow come up with an image of that class, and it is thereby forced to interpret the style code in terms of that class label, which is what makes the style code what it is. On top of that we have the two additional losses: the round-trip loss in style space, so that whatever the generator outputs, you should be able to recover the style from it by putting it through the encoder again; and lastly the consistency loss, where you say: if I take a source image and input its own style, again obtained through the encoder, you should give me back the source image itself. Very complex, and all of the generator loss is back propagated through to the encoder. So this is the full loss: the discriminator, easy, just the adversarial loss; the generator, the adversarial loss plus the style round-trip consistency plus the own-image consistency; and the encoder gets all of the generator loss, all of it, so the encoder fully helps the generator, and it is additionally trained with the mutual information and the style contrastive loss. Wow, that's some losses. That's a lot of damage. So they do different investigations into their model here. Ultimately, what you can now do is image-to-image translation, either with a reference image, which is the cool thing, or group-wise: you ask your encoder what kind of domains there are (you've guessed the number of domains, in this case ten domains of cats), you divide your data set into these ten domains, you calculate the style vector for each image, and you take the average style vector over all images in a domain; that average is going to be your target style. So you can do image-to-image translation with a reference image, or for an entire group of images, for example all the images in a given domain, and that's how they produce these graphs right here.
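Computing one target style per pseudo-domain, as just described, could look like this sketch; it again assumes the hypothetical encoder interface from above, and the running-sum approach is simply my way of averaging:

```python
import torch

@torch.no_grad()
def domain_mean_styles(encoder, loader, num_domains=10, style_dim=128):
    # Label every image with the guiding network, then average the style
    # codes per pseudo-label to get one target style per domain.
    sums = torch.zeros(num_domains, style_dim)
    counts = torch.zeros(num_domains)
    for x in loader:
        _, style, label = encoder(x)
        sums.index_add_(0, label, style)
        counts.index_add_(0, label, torch.ones_like(label, dtype=torch.float))
    return sums / counts.clamp(min=1).unsqueeze(1)
```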
They do a bunch of investigations into their wholly unholy mixture of losses. The first concern is: couldn't we just train the guiding network on its own, and after that train this GAN thing on top? That's what we had at the very beginning: there's this guiding network that does the clustering and all, so couldn't we just train the GAN architecture on top of the frozen guiding network? Their conclusion is no: if you train everything together, it works better. On the left you see what happens when you train the guiding network by itself. What you're looking at is a t-SNE visualization (t-SNE is a non-linear dimensionality-reduction and visualization method) of the style codes extracted by the guiding network, where the ground-truth domains of all test images are shown in different colors. This is a data set that has labels, but you don't provide the labels to the algorithm; the algorithm is completely unsupervised, but for purposes of investigation the labels are visualized as colors. Things that are close together have similar style codes, and the ideal case would be that things that are close together also share the same label, which would mean the style is representative of the domain. That's what we want: the style should capture the domain of an image, and ideally not the image itself too much. On the left you see that there is quite a bit of overlap, quite a bit of wash, between the style clusters and the ground-truth groups. On the right, if you jointly train the GAN together with the guiding network, you see that the clusters of style codes, which have no explicit reason to cluster, are much more clustered and separated, and they are separated much more along the lines of the ground-truth classes, which is pretty cool. Now I would actually be interested in what happens if you do the separate training with the full pipeline of the learning-to-classify-images-without-labels approach, including its nearest-neighbor step, because they've shown that purely this self-clustering doesn't work too well, but the nearest-neighbor step on top improves classification significantly. So this could potentially help either the separate or the joint training here, and there might be a connection between the joint training and whatever that step is doing. In any case, they also show that the FID, a quality metric for GANs where lower is better, goes way lower with joint training than with separate training. That's the reason they built this convoluted thing: it works way better. Then they ablate some of the losses to investigate what's really going on, again with a t-SNE visualization of the style space of the guiding network, this time on a data set without ground-truth domain labels, so each data point is colored with the guiding network's prediction: each color is whatever class the guiding network says, and each dot is one style vector, projected down to two dimensions. You can see pretty clearly that the individual clusters of style vectors correspond to different labels of the guiding network, which is to be expected. But also, since they overestimate the number of classes, you can see that even when the class label is different, the style space groups very similar classes together: here these are both cheetahs, and here both lions, so it groups them together, which is pretty cool and sort of verifies that it recognizes these different things. You force the guiding network to produce ten classes, but the style space is simply continuous, so it's cool to see that it makes one cluster of styles even across different labels. And here you can see different samples from these domains, just to verify that the guiding network has actually learned to separate things. I still find this pretty magical; it's completely unsupervised and it finds these clusters by itself. All right, they have a bunch of images here. As I said, this is no longer with one reference image; this is where you take the entire domain, so you self-label with your guiding network and then take the mean style vector as your target style vector, and these are the source images you transfer. You can see that it works pretty well; they always have like one adult animal and one child animal, or just two different ones. This one is particularly cute though, I have to show you this fox right here. What's going on with that fox? Someone help that fox. So we're not at perfection yet, as you can see, but that still looks like a pretty cool fox, maybe. Where did it go? Maybe it slipped, maybe it's an offshoot of the one on the top left, who knows, these data sets have their way.
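By the way, if you want to reproduce plots like the t-SNE visualizations above, here is a minimal sketch with scikit-learn and matplotlib, using random stand-in data in place of real style codes and labels:

```python
import numpy as np
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

# stand-ins: in practice you would collect style codes and (pseudo) labels
# from the encoder over the whole test set
styles = np.random.randn(500, 128).astype(np.float32)
labels = np.random.randint(0, 10, size=500)

emb = TSNE(n_components=2, perplexity=30, init="pca").fit_transform(styles)
plt.scatter(emb[:, 0], emb[:, 1], c=labels, s=5, cmap="tab10")
plt.title("t-SNE of style codes, colored by (pseudo) label")
plt.show()
```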
So this is sort of where you can see the limitations: that's not how a baby snow leopard looks. You also see the limitations in that all of these animal faces are still pretty aligned; they're fairly frontal, not exactly, but fairly standardized pictures. So I don't think we're yet at the level where we can just do fully general image-to-image translation, and you see it especially with faces, because we humans are extremely good at noticing when something is wrong with a face. But it's still pretty impressive what's possible. Here is summer to winter, and that actually looks good. If the past is of any indication, this technology will be pushed pretty hard, and soon we'll be able to do this with a simple smartphone app or something like this. So I invite you to check out the paper; they have lots and lots of examples and t-SNE plots and whatnot in their appendix, and they have the code online as far as I have seen. With that, let me know what you think in the comments. Bye bye.
[ { "end": 6.48, "start": 0, "text": " Hi there! Today we'll look at rethinking the truly unsupervised image-to-image translation" }, { "end": 16.48, "start": 6.48, "text": " by Kyungjoon Baek, Yoonjae Choi, Yongjung Woo, Ja-Joon Yoo, and Hyeong-Joon Shim." }, { "end": 23.44, "start": 16.48, "text": " So in this paper we'll deal with image-to-image translation in an unsupervised fashion." }, { "end": 31.44, "start": 23.44, "text": " So on a high level they replace the need for domain or really single image label annotations" }, { "end": 37.6, "start": 31.44, "text": " in image-to-image translation by training a guiding network that is able to sort of do a" }, { "end": 44.32, "start": 37.6, "text": " self-clustering of the image domain and therefore that guides the image-to-image translation instead" }, { "end": 52.16, "start": 44.32, "text": " of the previously needed labels. I myself don't know too much about image-to-image translation" }, { "end": 58.8, "start": 52.16, "text": " and style transfer and all of this stuff. This has always been kind of a mystery to me and we'll try" }, { "end": 64.88, "start": 58.8, "text": " to make as much sense as possible out of this paper if you're with me. I might not get everything" }, { "end": 72.16, "start": 64.88, "text": " right but I will give my best of course. As always if you like content like this consider" }, { "end": 79.19999999999999, "start": 72.16, "text": " sharing it out and leaving a like and a comment. I do read the comments so I get a good idea of" }, { "end": 85.76, "start": 79.2, "text": " what you have to say about it. Cool so what we're seeing here is an example of image-to-image" }, { "end": 93.2, "start": 85.76, "text": " translation of like a sort of a style transfer. Now what you'll have on the left is a source image." }, { "end": 100, "start": 93.2, "text": " Now the goal is to translate this source image to a different domain while sort of keeping the" }, { "end": 106.64, "start": 100, "text": " the features of the image the same. And here is sort of I'm always confused because here it's like" }, { "end": 112.8, "start": 106.64, "text": " we keep the pose of the cat the same okay so we sort of keep the same cat but we want to change" }, { "end": 120.16, "start": 112.8, "text": " its style which means it's breed in this particular case. So on the top you can see that" }, { "end": 127.36, "start": 120.16, "text": " the domain images are they come in these different groups and in fact it's not only those four but" }, { "end": 132.88, "start": 127.36, "text": " the entire data set is split into these different groups and among these different groups you have" }, { "end": 140.88, "start": 132.88, "text": " some sort of a shared style. Now this shared style is what you would like to transfer to the" }, { "end": 147.2, "start": 140.88, "text": " source image. So if you transfer the style of all of these cats right here which all seem to be sort" }, { "end": 154.64, "start": 147.2, "text": " of ginger cats to this instance right here what you'll end up with is a cat okay it was ginger" }, { "end": 161.2, "start": 154.64, "text": " before. 
Might not be the best example but you you sort of get what I mean is that the thing that" }, { "end": 169.28, "start": 161.2, "text": " you transfer is whatever is common among these domain images okay and that's what I guess" }, { "end": 177.6, "start": 169.28, "text": " explains why the pose of the cat stays the same because it only it is basically taught to keep" }, { "end": 184.95999999999998, "start": 177.6, "text": " the image the same except to transfer whatever is common among the images in the domain class" }, { "end": 191.76000000000002, "start": 184.96, "text": " and that's image to image transfer or translation. Now until this paper at least that's what the" }, { "end": 198.64000000000001, "start": 191.76000000000002, "text": " paper claims these image to image translation models they required labels and why is that?" }, { "end": 206.56, "start": 198.64000000000001, "text": " That's because you need to know how to build these domains here at the top to get these" }, { "end": 212.16, "start": 206.56, "text": " different style vectors out or you actually would need label annotations for each image" }, { "end": 217.84, "start": 212.16, "text": " for each single image you would need to know which one you need to know which one of the source" }, { "end": 222.56, "start": 217.84, "text": " corresponds to which one of the target so they have a graphic right here where they explain the" }, { "end": 230.16, "start": 223.04, "text": " sort of different the different stages that image to image translation went through historically" }, { "end": 238, "start": 230.16, "text": " so first you'd have to have corresponding images one to one where you'd say okay here is an example" }, { "end": 243.36, "start": 238, "text": " of a sketch of a shoe and here is the corresponding shoe here is the sketch of another shoe and here" }, { "end": 249.6, "start": 243.36, "text": " is the corresponding shoe and so on and from that you could learn a model that translates from one" }, { "end": 257.36, "start": 249.6, "text": " domain to the other because you have corresponding image level annotations which image corresponds to" }, { "end": 262.88, "start": 257.36, "text": " which so basically which element of the domain a corresponds to which element in the domain b" }, { "end": 268.71999999999997, "start": 262.88, "text": " then the next stage of this was when you only need set level annotations and that's sort of what we" }, { "end": 276, "start": 268.71999999999997, "text": " looked at if you had supervised labels for domains so what you'll say is that there are three domains" }, { "end": 283.52, "start": 276, "text": " a b and c and actually let's let's forget c for a moment and just deal with a and b" }, { "end": 289.44, "start": 283.52, "text": " to make it equivalent to the thing on the left now i just know that these things are instances" }, { "end": 296.48, "start": 289.44, "text": " of class a and these things are instances of class b yet i i don't there's no correspondence" }, { "end": 304.16, "start": 296.48, "text": " right there is no this corresponds to this or or something like this so image to image translation" }, { "end": 312.15999999999997, "start": 304.16, "text": " is now possible between domains when i just have domain level labels but this is still expensive" }, { "end": 317.52, "start": 312.15999999999997, "text": " collecting these labels you know it's is like collecting labels for a supervised data so" }, { "end": 322.88, "start": 317.52, "text": " collecting labels for a supervised 
data set a human needs to look at each image and then conclude" }, { "end": 331.44, "start": 322.88, "text": " what sort of domain it is their paper introduces the following where you do not have domains anymore" }, { "end": 338.64, "start": 331.44, "text": " you simply have a data set x now this data set your hypothesis is that there are still going to be" }, { "end": 344.47999999999996, "start": 338.64, "text": " domains in the data set they can i guess they can be overlapping or not but there are still going to" }, { "end": 351.52000000000004, "start": 344.48, "text": " be domains you just don't know what they are so in this case um i guess you could differentiate" }, { "end": 357.92, "start": 351.52000000000004, "text": " these people into many many different ways but um in essence you're going to assume that there is" }, { "end": 364.32, "start": 357.92, "text": " some kind of a domain structure you just don't know what it is but if you knew what it was then" }, { "end": 372.24, "start": 364.32, "text": " you could simply apply methods from here to the data set and you'd be done now their paper shows" }, { "end": 378.88, "start": 372.24, "text": " that if you apply something like a self-clustering approach and we've seen these approaches before in" }, { "end": 386.24, "start": 378.88, "text": " the paper about learning to classify images without labels if you have techniques like this you can do" }, { "end": 392.40000000000003, "start": 386.24, "text": " like a self-clustering approach on this data set x right here and then you could learn your image" }, { "end": 399.44, "start": 392.40000000000003, "text": " to image translation yet this paper shows that if you do that the quality is not as good as if you" }, { "end": 407.44, "start": 399.44, "text": " do both things jointly so what this paper does is it jointly learns to cluster let's say to" }, { "end": 415.84, "start": 408, "text": " self-label the images and to make the to do this image to image translation and by doing the tasks" }, { "end": 423.12, "start": 415.84, "text": " jointly they help each other perform better okay that's a general overview so how do they do this" }, { "end": 432.8, "start": 423.12, "text": " they have three different parts to their model there is the encoder or they call this the guiding" }, { "end": 439.44, "start": 432.8, "text": " network there is the generator and there is the discriminator so the generator and the discriminator" }, { "end": 445.84000000000003, "start": 439.44, "text": " they are fairly standard GAN generators and discriminators so general adversarial network" }, { "end": 453.35999999999996, "start": 445.84, "text": " but they have like a bit of some sort of twists so you can already see from the design from the" }, { "end": 460.79999999999995, "start": 453.35999999999996, "text": " drawings right here the discriminator is probably the easiest the discriminator gets an image right" }, { "end": 468.08, "start": 460.79999999999995, "text": " here it doesn't have to be a generated it is a either a generated image or a real image and it" }, { "end": 474.79999999999995, "start": 468.08, "text": " needs to decide you can see right here this means that the input domain is a vector or an image in" }, { "end": 482.72, "start": 474.8, "text": " this case and the output is a number it needs to decide if it's real or fake now in fact it's not" }, { "end": 489.2, "start": 482.72, "text": " as easy because you can see there are these multiple heads right here so this whole thing" }, { "end": 
494.64, "start": 489.2, "text": " as I said is built on this kind of pseudo clustering approach there is this pseudo label" }, { "end": 501.68, "start": 494.64, "text": " that comes out of the left side we're going to look at that in a second but in essence you assume" }, { "end": 508.08, "start": 501.68, "text": " that there are multiple classes multiple domains in the data set and the discriminator here has one" }, { "end": 514.96, "start": 508.08, "text": " classification head for each of those classes so from somewhere outside it will get the information" }, { "end": 521.04, "start": 514.96, "text": " oh this is now supposed to be one of those ginger cats right as opposed to one of those black and" }, { "end": 527.52, "start": 521.04, "text": " white cats or one of the brown haired cats no it's one of the ginger cats and then there is a special" }, { "end": 535.4399999999999, "start": 527.52, "text": " head on top of the classifier that only classifies fake from real ginger cats okay which is a" }, { "end": 541.04, "start": 535.4399999999999, "text": " different classifier from the other domains so the discriminator it's sort of a conditional" }, { "end": 547.1999999999999, "start": 541.04, "text": " discriminator conditioned on a label okay from the discriminators point of view it's simply a" }, { "end": 554.0799999999999, "start": 547.1999999999999, "text": " label conditioned discriminator discriminating between real and false and I think that's yeah" }, { "end": 563.44, "start": 554.08, "text": " how you train the discriminator is you would give an image and you would let this encoder here this" }, { "end": 569.36, "start": 563.44, "text": " guiding network label the image and how we come up with this label again that we'll look in a second" }, { "end": 575.44, "start": 569.36, "text": " but this just gives a label and then you'd for that particular label you'd classify the image" }, { "end": 582.4000000000001, "start": 575.44, "text": " into real or false now the fact that there is this shared part right here of course is" }, { "end": 587.92, "start": 582.4, "text": " you could also think of having one discriminator per class but the shared part now gives you some" }, { "end": 592.72, "start": 587.92, "text": " shared features and so on but it's not necessary it's not the the point is that there is a" }, { "end": 602.0799999999999, "start": 592.72, "text": " discriminator per class it's class conditional okay so what about the generator I think is I" }, { "end": 609.52, "start": 602.0799999999999, "text": " guess is the most complex um what about this this encoding network right here it's e for encoder I" }, { "end": 615.76, "start": 609.52, "text": " guess but they also call it the guiding network so what this does is this is what's this supposed to" }, { "end": 624.16, "start": 615.76, "text": " do is it'll take an image any image and it will output two things one is a label and one is a" }, { "end": 635.4399999999999, "start": 624.16, "text": " style code so the label is supposed to be a number between zero and da da da da da k minus one so" }, { "end": 642.48, "start": 635.44, "text": " k minus one so that's supposed to be a class label and how do you know how many classes there are if" }, { "end": 649.36, "start": 642.48, "text": " there are no labels you just guess and your best bet is to slightly over guess so if you expect" }, { "end": 655.9200000000001, "start": 649.36, "text": " there to be between 10 and 15 classes maybe put k to 20 okay you don't want to under guess 
but you" }, { "end": 665.2800000000001, "start": 655.9200000000001, "text": " can over guess but not by too much of course so you have to have this this estimation of how many" }, { "end": 672.48, "start": 665.28, "text": " classes but then this e it simply comes up with a class label and it also comes up with the style" }, { "end": 681.76, "start": 672.48, "text": " code now these two things are going to go then different pathways in this in this network the" }, { "end": 688.4, "start": 681.76, "text": " label is directly going to the discriminator right the generator does not see the label" }, { "end": 697.28, "start": 688.4, "text": " okay the style code does not go to the discriminator but goes to the generator all right so the two" }, { "end": 703.12, "start": 697.28, "text": " inputs from the encoder they one goes to the discriminator which is the label and one goes" }, { "end": 713.12, "start": 703.12, "text": " to the generator which is the style now the generator lastly it takes a source image and" }, { "end": 719.92, "start": 713.12, "text": " it takes this style code right here now the style code is encapsulating as we said the style of the" }, { "end": 729.12, "start": 719.92, "text": " reference image so the style is supposed to be whatever whatever whatever makes this domain" }, { "end": 735.52, "start": 729.12, "text": " of images the same so the style the way we're going to train this is that the style is going" }, { "end": 743.36, "start": 735.52, "text": " to describe somehow all the images that are from this label the style is going to describe whatever" }, { "end": 749.1999999999999, "start": 743.36, "text": " the style is it's very hard to it's very hard to explain if we look at the loss it becomes clearer" }, { "end": 757.4399999999999, "start": 749.1999999999999, "text": " why the things are how they are so it takes the style code and it takes the source image and it" }, { "end": 763.04, "start": 757.4399999999999, "text": " combines them and its task is to output this generated image as you can see in this example" }, { "end": 770.64, "start": 763.04, "text": " the generated image is basically this cat but with the style of the reference image and it outputs an" }, { "end": 776.24, "start": 770.64, "text": " image and the discriminator of course then is tasked with differentiating whether that image" }, { "end": 784.0799999999999, "start": 776.24, "text": " is real or fake for the given label over here okay so this is the entire thing and you all train" }, { "end": 790.88, "start": 784.0799999999999, "text": " this jointly so you jointly train the encoder to produce these class labels and the styles you" }, { "end": 796.64, "start": 790.88, "text": " train the generator to take in the styles and the source images and output the generated image to" }, { "end": 802.48, "start": 796.64, "text": " fool the discriminator and the discriminator at the same time is trained to differentiate between" }, { "end": 811.12, "start": 802.48, "text": " real and fake images based on the label that the encoder gives very very convoluted and complicated" }, { "end": 819.52, "start": 811.68, "text": " but there are a few things that make it easier first of all as you can see here the pseudo label" }, { "end": 827.36, "start": 819.52, "text": " is detached is argmaxed and detached so the pseudo label really is a number and there is no gradient" }, { "end": 837.28, "start": 827.36, "text": " back propagation along this line okay that makes that makes it a lot easier so the so what we 
first" }, { "end": 844.48, "start": 837.28, "text": " need is we need a way to train the encoder to come up with suitable class labels even though it" }, { "end": 851.2, "start": 844.48, "text": " doesn't get any back propagation signal into that part of its network so that's where we start with" }, { "end": 857.36, "start": 851.2, "text": " the loss functions the way we're going to do this is we're going to take the following approach" }, { "end": 867.28, "start": 858.16, "text": " we're going to take an image and we're going to take a randomly augmented version so for example" }, { "end": 873.2, "start": 867.28, "text": " a random crop or a horizontal flip and so on so the now we bring in ideas from self-supervision" }, { "end": 879.2800000000001, "start": 873.2, "text": " and again if you watch the video on learning to classify images without labels this is one of" }, { "end": 886.8000000000001, "start": 879.2800000000001, "text": " their main staples these self-supervised approaches really tend to learn representations that allow" }, { "end": 892.32, "start": 886.8000000000001, "text": " you to self cluster now in that paper they go further and they do this nearest neighbor thing" }, { "end": 898.6400000000001, "start": 892.32, "text": " in this paper they just do sort of the first step of this self clustering which i guess makes it such" }, { "end": 905.36, "start": 898.64, "text": " that you could potentially improve this paper by applying the other paper but who knows so" }, { "end": 911.84, "start": 906.3199999999999, "text": " we're going to take an image and we're going to augment it okay so that means we're going to like" }, { "end": 919.1999999999999, "start": 911.84, "text": " random crop it or in change its luminance or whatnot so we have two versions of the same image" }, { "end": 925.4399999999999, "start": 919.1999999999999, "text": " and what we want to maximize we want to maximize the mutual information between not between the" }, { "end": 934, "start": 925.44, "text": " images themselves but p is going to be this output of the encoder so x goes into the encoder and the" }, { "end": 941.7600000000001, "start": 934, "text": " encoder outputs the style and the class label and the class label here so p is going to be the class" }, { "end": 949.36, "start": 941.7600000000001, "text": " distribution all right so this is going to be like a histogram or maybe the log it's it's already" }, { "end": 955.12, "start": 949.36, "text": " yes so it's going to be a histogram over classes from which we're going to sample the label c or" }, { "end": 964.96, "start": 955.12, "text": " l or whatnot y hat y but the p is the distribution over output classes so since we don't have a label" }, { "end": 972.48, "start": 964.96, "text": " we can't train the distribution like in a supervised way supervised way so what we'll have to say is we" }, { "end": 977.04, "start": 972.48, "text": " want to maximize the mutual information between the output distribution of the image and the" }, { "end": 983.4399999999999, "start": 977.04, "text": " output distribution of the augmented image now that entails the following two quantities" }, { "end": 991.04, "start": 983.4399999999999, "text": " there's the entropy of p and there's the conditional entropy of p given p augmented" }, { "end": 1001.12, "start": 991.92, "text": " now first of all it means we want to maximize this the entropy of p and that's supposed to be over" }, { "end": 1009.04, "start": 1001.12, "text": " the entire data set so this is the 
entropy over the entire data set x what it means is that we want" }, { "end": 1020.16, "start": 1009.92, "text": " different x's so if there's x1 x2 x3 and so on we want those to have different distributions in labels" }, { "end": 1028.56, "start": 1020.16, "text": " okay so if if the entropy is really high of the distribution p that means that different images" }, { "end": 1035.36, "start": 1028.56, "text": " get assigned to different classes some something like this all right if this is low then that would" }, { "end": 1040.3999999999999, "start": 1035.36, "text": " mean all the images basically get assigned to the same class and that's not good we want our" }, { "end": 1046.8799999999999, "start": 1040.3999999999999, "text": " classifier since we don't have labels it's a it's basically a cluster we want our cluster to sort of" }, { "end": 1052.8, "start": 1046.8799999999999, "text": " fill the space of possible clusters with the images so that's the first thing we want to" }, { "end": 1058.08, "start": 1052.8, "text": " maximize the mutual information we need to maximize this entropy and then second we want" }, { "end": 1065.6, "start": 1058.08, "text": " second since this is a minus here we need to minimize the conditional entropy of p given p" }, { "end": 1074.72, "start": 1065.6, "text": " augmented what does that mean that means if we know the augmented version of an image its class" }, { "end": 1082.8799999999999, "start": 1074.72, "text": " labeling should be the same as the un-augmented version so that means that if i now take one of" }, { "end": 1090.72, "start": 1082.88, "text": " these x's to x1 augmented of the do a plus augmented right then that shouldn't really" }, { "end": 1098.3200000000002, "start": 1090.72, "text": " change its class label and this is what these so that should sort of keep the class labeling" }, { "end": 1105.0400000000002, "start": 1098.3200000000002, "text": " this is horrible but the idea here is that it's kind of a reverse thinking from supervised learning" }, { "end": 1112.48, "start": 1105.0400000000002, "text": " in supervised learning we have the label like this is class this is class five okay this image is" }, { "end": 1118.24, "start": 1112.48, "text": " class five and our thinking is this augmentation techniques if i random crop an image or if i" }, { "end": 1123.84, "start": 1118.24, "text": " change its colorization a little bit the class is not going to change right an airplane with a" }, { "end": 1130.88, "start": 1123.84, "text": " in front of a blue sky is still an airplane in front of a bit bluer sky so i assume that it'll" }, { "end": 1137.68, "start": 1130.88, "text": " still have the same label here i don't have the label but what i can require is to say whatever" }, { "end": 1144.72, "start": 1137.68, "text": " you output for the image it should be the same for the augmented image so these two objectives" }, { "end": 1150.88, "start": 1144.72, "text": " are enough to give you sort of a rough clustering of the output space maximize the entropy minimize" }, { "end": 1158.4, "start": 1150.88, "text": " the conditional entropy between two versions of the same image okay that's how we train this" }, { "end": 1165.6000000000001, "start": 1158.4, "text": " pseudo labeling approach so now we have a we have a model that can give a label to each image very" }, { "end": 1178.8, "start": 1165.6, "text": " cool so how do we train the other parts now there are additional um additional losses here so" }, { "end": 1189.36, "start": 
1180.7199999999998, "text": " i'm not sure yeah we'll go over it so this style part is also has to be trained right this encoder" }, { "end": 1194.9599999999998, "start": 1189.36, "text": " outputs a labeling we got that covered and it outputs a style part now the style part if you" }, { "end": 1203.04, "start": 1194.96, "text": " can see from the graphic it actually goes into let me erase some of that stuff here the style part" }, { "end": 1210.8, "start": 1203.04, "text": " actually is down here and it feeds into the generator and luckily they write detach here" }, { "end": 1217.44, "start": 1210.8, "text": " and since they don't write detach anywhere here that means that we do get gradient back propagation" }, { "end": 1226.56, "start": 1217.44, "text": " from the generator to the style code so that means our our encoder here is trained to help the" }, { "end": 1234.8, "start": 1226.56, "text": " generator with its task of fooling the discriminator okay but um first of all we're going to forget" }, { "end": 1240.88, "start": 1234.8, "text": " about that for now what we're going to do is simply look at a loss that they impose on the style" }, { "end": 1246.48, "start": 1240.88, "text": " they wouldn't they don't have to impose that loss but they have an additional loss on the style codes" }, { "end": 1253.3600000000001, "start": 1246.48, "text": " for the encoder in addition to the fact that there is gradient back propagating from g so the second" }, { "end": 1261.52, "start": 1253.3600000000001, "text": " loss we're going to look at is this style loss the style loss is almost the same so the style loss is" }, { "end": 1269.52, "start": 1261.52, "text": " a contrastive loss so what you want to do is if you have your data set you have your data set of" }, { "end": 1275.6, "start": 1269.52, "text": " images and you you know take images out and you train your network on and you train then take the" }, { "end": 1282.24, "start": 1275.6, "text": " next image or you take batches of image you take train and so on like this right and now you have" }, { "end": 1288.24, "start": 1282.24, "text": " this image what you want to do for this to work is you want to build up sort of a queue of images" }, { "end": 1293.6799999999998, "start": 1288.24, "text": " that you have already looked at like these images these are going and the queue can be let's say" }, { "end": 1298.3999999999999, "start": 1293.6799999999998, "text": " 10 long and you would always throw out the oldest and and in queue and newest so when you're done" }, { "end": 1303.52, "start": 1298.3999999999999, "text": " with this image right here you'll put it into the queue you load your next image and so on so now" }, { "end": 1310.24, "start": 1303.52, "text": " what does this mean you now always have a queue of other images and it's not important what they" }, { "end": 1320, "start": 1310.24, "text": " are as long as they are others right because now we're going to compare ourselves with others and" }, { "end": 1326, "start": 1320, "text": " this is this contrastive loss right here so this style loss is going to be a contrastive loss" }, { "end": 1335.28, "start": 1326, "text": " between this and this now the bottom part this here these are the others these are the other images" }, { "end": 1344, "start": 1335.92, "text": " and what are the individual quantities so s is the style code of the image you're considering" }, { "end": 1351.36, "start": 1344, "text": " right now s plus you could have already guessed it is the style code of the 
augmented image right" }, { "end": 1361.76, "start": 1351.36, "text": " so we had our image x let's go again with x1 x2 x3 are different images so we put x1 through the" }, { "end": 1367.36, "start": 1361.76, "text": " encoder that gives us s the style it also gives us the class label but now we care about this head" }, { "end": 1377.1999999999998, "start": 1367.36, "text": " that gives us the style code and we augment x1 to be x1 plus and we go we put that through the" }, { "end": 1385.76, "start": 1377.2, "text": " encoder that gives us s plus and now we also put all of these other images remember these are the" }, { "end": 1390.72, "start": 1385.76, "text": " images that we've looked at previously but the only real importance is that there are other images" }, { "end": 1400.56, "start": 1390.72, "text": " we put those through here and they get the s minus i in this case three and two so now what will" }, { "end": 1408.6399999999999, "start": 1400.56, "text": " require is that the s the style code of our image is closer to the style code of its augmented" }, { "end": 1415.6, "start": 1408.6399999999999, "text": " version so the same principle again we want we'll say that you know these augmentations they don't" }, { "end": 1420.08, "start": 1415.6, "text": " really change anything about the style now this argument is a bit more wonky but if you think of" }, { "end": 1426.48, "start": 1420.08, "text": " you know random crops and random flips don't really change anything about the fur color or so" }, { "end": 1436.8, "start": 1426.48, "text": " of a of a cat so we want those two to be closer together than s is to any of these other images" }, { "end": 1442.4, "start": 1436.8, "text": " okay so this is a contrast of loss where you pull together two things that you think should be close" }, { "end": 1451.44, "start": 1442.4, "text": " and you push apart things that you think should be far away from each other so this style loss" }, { "end": 1458.4, "start": 1451.44, "text": " basically guarantees that you have a distinct style for each image that is robust to the kind" }, { "end": 1467.6000000000001, "start": 1458.4, "text": " of transformations that you do under augmentation okay specifically this style loss doesn't care" }, { "end": 1472.48, "start": 1467.6000000000001, "text": " about the domain right this is for each image you don't know if these other images are from" }, { "end": 1479.1200000000001, "start": 1472.48, "text": " the same domain or from different domains and that's why the style is basically individual" }, { "end": 1488, "start": 1479.12, "text": " to the image but as we as we're going to see the style does capture something of the domain as well" }, { "end": 1495.6, "start": 1488, "text": " but this loss right here is supposed to be each image has a style right so this is the style code" }, { "end": 1500.3999999999999, "start": 1495.6, "text": " of x this n plus one way classification enables e to utilize not only the similarity of the" }, { "end": 1505.9199999999998, "start": 1500.3999999999999, "text": " positive pair but also the dissimilarity of the negative pairs where the negative style codes are" }, { "end": 1513.04, "start": 1505.92, "text": " stored into a queue using previously sampled images we observe that adding this objective" }, { "end": 1518.16, "start": 1513.04, "text": " significantly improves unsupervised classification accuracy in animal faces from this to that" }, { "end": 1526.96, "start": 1518.16, "text": " compared to the previous over clustering 
approach okay so we have two outputs now and now" }, { "end": 1534.88, "start": 1528, "text": " we go to the adversarial loss so the question is how do we train the generator and the discriminator" }, { "end": 1541.92, "start": 1534.88, "text": " and the discriminator so they have three different losses for the generator and the discriminator and" }, { "end": 1549.0400000000002, "start": 1541.92, "text": " the most important one of course is this adversarial loss right here so the discriminator simply tries" }, { "end": 1558.8000000000002, "start": 1549.0400000000002, "text": " to distinguish is an image real or fake conditioned on a class so in case of a real image and that's" }, { "end": 1569.12, "start": 1558.8, "text": " this line right here it tries to distinguish is this real or fake based on y and y is x fed to the" }, { "end": 1574.72, "start": 1569.12, "text": " encoder and the encoder gives you a label all right that's and the label selects the head of the" }, { "end": 1582, "start": 1574.72, "text": " discriminator at the same time that the discriminator is trying to distinguish real from fake so these" }, { "end": 1587.52, "start": 1582, "text": " two lines the generator is trying to fool the discriminator so the upper if you've never seen" }, { "end": 1595.6, "start": 1587.52, "text": " a GAN loss the upper part here that's the real data and the bottom part here is the fake data" }, { "end": 1603.92, "start": 1596.4, "text": " now at the same time the discriminator is trying to distinguish real from fake and the generator" }, { "end": 1610.4, "start": 1603.92, "text": " is trying to make the discrim fool the discriminator so both are of the generator and the discriminator" }, { "end": 1617.2, "start": 1610.4, "text": " are actually using that loss but the sign in front of it is different okay and since the generator" }, { "end": 1622.8, "start": 1617.2, "text": " is not involved in the top line you can usually leave that away because there is no backprop path" }, { "end": 1630.56, "start": 1623.6000000000001, "text": " through that and there is no backprop backprop path here because we detach the graph right here so" }, { "end": 1637.8400000000001, "start": 1630.56, "text": " there is no gradient signal going to the encoder so this bottom line what does it mean the generator" }, { "end": 1648.1599999999999, "start": 1637.84, "text": " will take in an image and the style now s tilde comes from x tilde it's x tilde going through the" }, { "end": 1655.36, "start": 1648.1599999999999, "text": " encoder giving you s tilde so this is the reference image right this is you want these this style" }, { "end": 1662.24, "start": 1655.36, "text": " right here this is the reference image and x is the source image so the generator is supposed to" }, { "end": 1669.44, "start": 1662.24, "text": " take the source image and basically apply the style from the reference image and generate" }, { "end": 1681.2, "start": 1670.96, "text": " x i don't even know how to call this x not tilde whatever x fake xf and that's supposed to fool" }, { "end": 1690.4, "start": 1681.2, "text": " the discriminator now the question is which discriminator right because you need a label for" }, { "end": 1695.2, "start": 1690.4, "text": " the discriminator the label is conditional with this discriminator is pretty easy because it's" }, { "end": 1703.52, "start": 1695.2, "text": " simply the label of this image now however as you can see the generator learns to translate x to the" }, { "end": 1711.0400000000002, "start": 
1703.52, "text": " target domain while reflecting the style code s tilde so y tilde is going to be the label" }, { "end": 1720.6399999999999, "start": 1711.04, "text": " that comes out of this x so this encoder right here is also going to give us y tilde and that's" }, { "end": 1731.44, "start": 1720.6399999999999, "text": " going to go here all right so recap what we want to put into the discriminator is one time a real" }, { "end": 1741.52, "start": 1731.44, "text": " image like we do up up here and we get its label from the encoder the encoder gets us a label for" }, { "end": 1748.96, "start": 1741.52, "text": " each image very cool we'll also take the same image put it through the generator task the" }, { "end": 1756.24, "start": 1748.96, "text": " generator with transferring the style of another image from here onto it we get the style from the" }, { "end": 1763.84, "start": 1756.24, "text": " encoder and then the generator is supposed to make an image and we feed that to the discriminator" }, { "end": 1770.88, "start": 1763.84, "text": " and the discriminator discriminates assuming it comes from class y tilde now you see right here" }, { "end": 1779.52, "start": 1770.88, "text": " the generator never has access to y tilde okay so the generator is kind of at a disadvantage here" }, { "end": 1786.48, "start": 1779.52, "text": " the discriminator gets told what kind of image it is in terms of class while the generator" }, { "end": 1791.44, "start": 1786.48, "text": " because it needs to fool the discriminator it needs to come up with an image of that class" }, { "end": 1799.36, "start": 1791.44, "text": " but it has no idea of the class it only has the style code so it is forced to learn to sort of" }, { "end": 1807.44, "start": 1800.4, "text": " it is forced to learn to map a style to associate a style with a particular class and that's what" }, { "end": 1812.48, "start": 1807.44, "text": " with a particular class and that's how you get the domain into the style that's why the style can" }, { "end": 1820.16, "start": 1812.48, "text": " capture something like fur color of the different cat breeds because the generator is forced to take" }, { "end": 1827.44, "start": 1820.16, "text": " the style that the encoder gives and map it to an image of the class y tilde that it also the" }, { "end": 1835.28, "start": 1827.44, "text": " encoder gives but doesn't tell to the generator okay and in fact there is a more path because you" }, { "end": 1842.24, "start": 1835.28, "text": " now back propagate the loss to the encoder which means that the encoder will even help the generator" }, { "end": 1851.84, "start": 1843.92, "text": " it will help the generator make style codes that are very class specific now you can maybe think" }, { "end": 1857.68, "start": 1851.84, "text": " why why wouldn't you just have one output why doesn't the encoder simply output the label also" }, { "end": 1864.08, "start": 1857.68, "text": " as the style because that would be the easiest and the reason is because we have different losses" }, { "end": 1866.8799999999999, "start": 1864.08, "text": " on the style and the label" }, { "end": 1875.28, "start": 1869.84, "text": " okay otherwise that would be a valid tactic so that's cool that's the adversary loss that's the" }, { "end": 1882.8, "start": 1875.28, "text": " most important loss now there's also additional losses so they they do additional losses that" }, { "end": 1888.8799999999999, "start": 1882.8, "text": " they add on top for the generator they say in order to 
prevent degenerate situation where the" }, { "end": 1895.5200000000002, "start": 1888.88, "text": " generator ignores the style code and synthesizes a random image in the domain y or in the domain y" }, { "end": 1900.24, "start": 1895.5200000000002, "text": " tilde we impose a style contrastive loss to the generator so now there's still the danger that" }, { "end": 1907.1200000000001, "start": 1900.24, "text": " the degenerator simply produces a valid image right from the data set or even from the domain" }, { "end": 1914.48, "start": 1907.1200000000001, "text": " y tilde though i don't know how it would know why tilde or i've just not seen something" }, { "end": 1918.88, "start": 1914.48, "text": " i in my mind it doesn't get the y tilde" }, { "end": 1927.6, "start": 1920.72, "text": " but it could read it from the style but here the danger is to ignore the style i'm slightly" }, { "end": 1932, "start": 1927.6, "text": " i'm slightly confused by this part but maybe looking at the loss will will clear it out" }, { "end": 1939.28, "start": 1932.8, "text": " so they say we impose a style contrastive loss to the generator now this is almost the same" }, { "end": 1946.8799999999999, "start": 1939.28, "text": " is almost the same as we imposed on the encoder so the generator you can see there is a" }, { "end": 1953.12, "start": 1946.8799999999999, "text": " contrastive loss again where you want to be you want these things to be close and you want these" }, { "end": 1960.96, "start": 1953.12, "text": " things to be far apart so these s minuses these are going to be the ones from your the style codes" }, { "end": 1969.04, "start": 1960.96, "text": " of the images from your queue so these are just going to be other images here s tilde that's" }, { "end": 1974.48, "start": 1969.04, "text": " going to be the style that you get from your reference image so your reference image is going" }, { "end": 1980.48, "start": 1974.48, "text": " through the encoder and that's going to give you this right here now the question is what is s prime" }, { "end": 1988.8, "start": 1981.2, "text": " here because in the before we simply had s which was our source image our source image style" }, { "end": 1997.28, "start": 1989.44, "text": " now what is s prime here s prime is going to be it gets more complicated yes s prime is going to be" }, { "end": 2009.68, "start": 1997.28, "text": " whoops it's going to be the round trip to the encoder so it's going to be if i generate my image" }, { "end": 2020, "start": 2009.68, "text": " from the source image x and the style s tilde of the reference and then i ask my generate my" }, { "end": 2028.48, "start": 2020, "text": " encoder again what style does this have i get the s prime so it's kind of a round trip right so i" }, { "end": 2039.36, "start": 2029.28, "text": " i take i take this i ask the encoder what style is it that's s tilde right then i take s tilde" }, { "end": 2048.72, "start": 2039.36, "text": " go to the generator together with a source image x and that gives me like x fake and then i ask my" }, { "end": 2056.16, "start": 2048.72, "text": " generator again what style would you assign to the fake image i just produced and then the encoder" }, { "end": 2066.72, "start": 2056.16, "text": " will tell you i'll give it s fake or s prime in this case and then i compare that s prime with" }, { "end": 2073.12, "start": 2066.72, "text": " the one i gave before okay so it's sort of a round trip loss of my reference image" }, { "end": 2081.2799999999997, "start": 2073.12, 
"text": " all right so what does that do if i now and then i ask that s prime be close to s tilde so that" }, { "end": 2088.16, "start": 2081.2799999999997, "text": " means if i generate an image with the style of my reference image the upcoming image should better" }, { "end": 2094.24, "start": 2088.16, "text": " have the style of the reference image that's all it says so the style of the thing i generate" }, { "end": 2102.7999999999997, "start": 2095.04, "text": " given this style they should better be close and especially closer together than the style of the" }, { "end": 2109.84, "start": 2102.8, "text": " style with any other image in my queue it makes sense but it's kind of convoluted so you go with" }, { "end": 2117.76, "start": 2109.84, "text": " your out it's kind of a reconstruction loss except in style space all right and then the last thing" }, { "end": 2126.4, "start": 2117.76, "text": " is an actual image reconstruction loss so what you'll do is your generator will produce x" }, { "end": 2132.96, "start": 2126.4, "text": " uh sorry will produce an image from the source image and its own style right here that's important" }, { "end": 2143.12, "start": 2133.76, "text": " before we input s tilde here so this now is we input the source image and its own style so we" }, { "end": 2152.56, "start": 2143.12, "text": " go with x we go to the e and we put the style here and we tell the generator if i input the" }, { "end": 2159.36, "start": 2152.56, "text": " input the source image and its own style then what you give me back better be the source image itself" }, { "end": 2168.16, "start": 2159.36, "text": " right this is a consistency loss that tells the generator that basically it learns now the generator" }, { "end": 2177.92, "start": 2168.16, "text": " learns to the generator learns to map to recognize an image with its own style sort of because it" }, { "end": 2185.6800000000003, "start": 2177.92, "text": " doesn't know right it doesn't know that what's coming in here is the style of um it of the image" }, { "end": 2195.36, "start": 2185.6800000000003, "text": " x but now you teach it and i think before this loss you'd have a good chance that uh the styles" }, { "end": 2199.76, "start": 2195.36, "text": " would just be all over the place they would sort of be consistent but they will not be aligned" }, { "end": 2206.32, "start": 2199.76, "text": " and with this you force that the style of an image itself if you gent if you put that into" }, { "end": 2216, "start": 2206.32, "text": " the generator it will lead to that image itself okay that's it so this is a this is extremely" }, { "end": 2222.2400000000002, "start": 2216, "text": " convoluted right the discriminator is the easiest the discriminator is a class conditional discriminator" }, { "end": 2229.84, "start": 2222.2400000000002, "text": " that gets the label from some mechanism that decides on a label right okay that's the easiest" }, { "end": 2237.84, "start": 2229.84, "text": " the encoder has two parts the pseudo label which is over here which is trained completely unsupervised" }, { "end": 2246.56, "start": 2237.84, "text": " detached from everything else in a self clustering approach while the style part here is trained" }, { "end": 2254.88, "start": 2246.56, "text": " first of all in a contrastive way which makes sense and also in a back propagated way from the" }, { "end": 2262.56, "start": 2254.88, "text": " generator so the style generation mechanism tries to help the generator okay and that means it's" }, { "end": 
2267.36, "start": 2262.56, "text": " going to leak some information about the label into the style because that helps the generator" }, { "end": 2273.52, "start": 2267.36, "text": " generator needs to if the generator knows what sort of class it's going to produce it's going to be" }, { "end": 2279.28, "start": 2273.52, "text": " better okay so you can count on that information being in there but also also because of all the" }, { "end": 2285.0400000000004, "start": 2279.28, "text": " other losses that the generator has and the contrastive loss on the style the style code is" }, { "end": 2293.6800000000003, "start": 2285.0400000000004, "text": " going to sort of describe the individual style of an image and but is also going to describe what" }, { "end": 2299.6000000000004, "start": 2293.6800000000003, "text": " the style of that class is because it technically needs to contain information about the class" }, { "end": 2308.0800000000004, "start": 2300.88, "text": " and that's why i think this works with the style because there is no inherent notion of like" }, { "end": 2312.88, "start": 2308.08, "text": " this is the pose of a cat or something like this" }, { "end": 2320.64, "start": 2314.16, "text": " yeah it still seems like a bit magic to me and then the generator is first of all trained to" }, { "end": 2327.2, "start": 2320.64, "text": " fool the discriminator given a source image and a style and you can fool the discriminator" }, { "end": 2336.4, "start": 2327.2, "text": " by producing an image that's so good it looks real and specifically it looks real in the class" }, { "end": 2342.32, "start": 2336.4, "text": " that the pseudo label has given right so in the class that the encoder has given to it so the" }, { "end": 2350.8, "start": 2342.32, "text": " generator must somehow come up with an image that's of that class and so it will it will be forced" }, { "end": 2357.6800000000003, "start": 2350.8, "text": " to interpret the style code in terms of that class label which makes the style code the style code" }, { "end": 2365.6, "start": 2358.08, "text": " and also we have these two additional losses which is the round trip loss to the style space" }, { "end": 2373.68, "start": 2365.6, "text": " so whatever the generator outputs you should be able to recover the style from it by putting it" }, { "end": 2379.6, "start": 2373.68, "text": " through the encoder again and then lastly there is a consistency loss where you say if i put an" }, { "end": 2387.2799999999997, "start": 2379.6, "text": " image into a source image and i input its own style again going through the encoder you should" }, { "end": 2395.44, "start": 2387.2799999999997, "text": " give me back the source image itself very complex and all of the generator loss is back propagated" }, { "end": 2402.8, "start": 2395.44, "text": " through to the encoder so this is the full loss as i said discriminator easy adversarial loss" }, { "end": 2411.04, "start": 2402.8, "text": " generator adversarial loss plus this style round trip consistency plus the own image round trip" }, { "end": 2420.96, "start": 2411.04, "text": " consistency encoder gets all of the generator loss all of it so all of this goes here so the" }, { "end": 2428.56, "start": 2420.96, "text": " encoder fully helps the generator and it is also trained with this mutual information and the" }, { "end": 2436.16, "start": 2428.56, "text": " style contrastive loss wow that's some losses wow that that's a lot of damage" }, { "end": 2443.36, "start": 2438.2400000000002, 
"text": " so they do different investigations into their model here and i don't even know if we've" }, { "end": 2448.8, "start": 2443.36, "text": " missed some of the pictures but ultimately what you can now do is you can do image to image" }, { "end": 2455.04, "start": 2448.8, "text": " translation either that's the cool thing you can have a reference image for one or what you can do" }, { "end": 2463.52, "start": 2455.04, "text": " is you can ask your discriminator what kind of domains are there sorry you can ask your" }, { "end": 2469.36, "start": 2463.52, "text": " encoder what kind of domains are there you've guessed the number of domains so it's maybe 10" }, { "end": 2478.2400000000002, "start": 2469.36, "text": " or in this case it's uh eight eight eight domains of cats and you can simply divide your data set" }, { "end": 2484.7999999999997, "start": 2478.24, "text": " into these eight domains right one two three four five and so on oh this is 10 okay i can't see" }, { "end": 2491.68, "start": 2484.7999999999997, "text": " anymore so 10 domains and then you can simply calculate for each image you calculate the style" }, { "end": 2500.72, "start": 2491.68, "text": " vector so the style the style and then you simply take the average one over the number in that in" }, { "end": 2507.4399999999996, "start": 2500.72, "text": " that domain you take the average style vector and that's going to be your target style so you can do" }, { "end": 2511.68, "start": 2507.44, "text": " image to image translation with a reference image or you can do image to image translation" }, { "end": 2518.56, "start": 2512.2400000000002, "text": " for an entire group of images for example all the images in a given domain and that's how they do" }, { "end": 2525.04, "start": 2518.56, "text": " these graphs right here now just quickly wait until my tablet decides to show me the paper again" }, { "end": 2532.16, "start": 2525.04, "text": " thank you all right they do a bunch of investigations into their wholly unholy mixture" }, { "end": 2540.16, "start": 2532.16, "text": " of losses especially the first concern is couldn't we just train the guiding network" }, { "end": 2547.44, "start": 2541.04, "text": " like by its own on its own and then after that train this gan thing right that's what we had" }, { "end": 2552.96, "start": 2547.44, "text": " at the very beginning we said there's this guiding network and it does the clustering and all" }, { "end": 2559.44, "start": 2552.96, "text": " and couldn't we just train this gan architecture on top of the frozen guiding network and their" }, { "end": 2566.08, "start": 2559.44, "text": " conclusion is no if we train everything together it works better so on the left you have whenever" }, { "end": 2573.92, "start": 2566.08, "text": " you train the guiding network by itself and what you're seeing here is the t-sne visualization" }, { "end": 2581.92, "start": 2573.92, "text": " t-sne is a a down like a non-linear visualization tool of style codes extracted by our guiding" }, { "end": 2588.88, "start": 2581.92, "text": " network the ground truth domains of all test images is represented in different colors so" }, { "end": 2594.1600000000003, "start": 2588.88, "text": " this is a data set that has labels but you don't you don't provide the labels to this algorithm" }, { "end": 2599.52, "start": 2594.1600000000003, "text": " the algorithm is completely unlabeled but for purposes of investigating we'll visualize the" }, { "end": 2606.4, "start": 2599.52, "text": " labels with 
colors and what you'll see here are the t-sne visualizations of the style codes so" }, { "end": 2613.6800000000003, "start": 2606.4, "text": " things that are close together they have similar style codes and the ideal case would be if things" }, { "end": 2621.7599999999998, "start": 2613.68, "text": " that are close together here have the same label and that means the style is sort of representative" }, { "end": 2628.96, "start": 2621.7599999999998, "text": " of the domain okay that's what we want we want the style to capture the domain of an image" }, { "end": 2636.16, "start": 2628.96, "text": " and ideally not the image itself too much now on the left you see that there is quite a bit of" }, { "end": 2642.56, "start": 2636.16, "text": " overlap between these quite a bit of wash between the style and the group and on the right if you" }, { "end": 2649.92, "start": 2642.56, "text": " jointly train the gan together with the guiding network you see that these classes of the style" }, { "end": 2656.4, "start": 2649.92, "text": " codes which have no reason to cluster are much more clustered and separated and they are separated" }, { "end": 2665.44, "start": 2656.4, "text": " much more along the lines of the ground truth classes okay so that's pretty cool now i would" }, { "end": 2670, "start": 2665.44, "text": " actually be interested in what happens if you do the separate training with the full pipeline of" }, { "end": 2674.72, "start": 2670, "text": " this learning to classify images without labels thing and their nearest neighbor thing because" }, { "end": 2681.2, "start": 2674.72, "text": " they've also shown that just purely this self-clustering doesn't work too well but if" }, { "end": 2687.52, "start": 2681.2, "text": " you then do the nearest neighbor thing on top then that improves the classification significantly" }, { "end": 2693.68, "start": 2687.52, "text": " so this could potentially help either the separate or the joint training right here" }, { "end": 2699.8399999999997, "start": 2693.68, "text": " and there might be a connection between the joint training and whatever they're doing in any case" }, { "end": 2706.56, "start": 2699.8399999999997, "text": " they also show that then these fid which is a quality metric for gans lower is better that the" }, { "end": 2714.64, "start": 2706.56, "text": " joint training goes way lower in the fid than the separate training okay that's that's the reason" }, { "end": 2720.08, "start": 2714.64, "text": " why they built this convoluted thing because it works way better and here they ablate they ablate" }, { "end": 2724.56, "start": 2720.08, "text": " some of the losses to investigate what's really going on and in this case" }, { "end": 2731.52, "start": 2726.72, "text": " t-sne visualization of the style space of our guiding network trained on this since this does" }, { "end": 2736.56, "start": 2731.52, "text": " not have ground truth domain labels each data point is colored with the guiding network's prediction" }, { "end": 2746.08, "start": 2738.64, "text": " so each color is whatever the guiding network says the classes and the dot is one style each" }, { "end": 2752.48, "start": 2746.08, "text": " is one style each dot is one style vector and they're projected down to two dimensions you can" }, { "end": 2760.72, "start": 2753.2, "text": " see pretty clearly that the individual classes the individual clusters of style vectors correspond" }, { "end": 2767.92, "start": 2760.72, "text": " to different labels of the guiding network which is 
to be expected but also since they overestimate" }, { "end": 2776.4, "start": 2767.92, "text": " the number of classes in this case you can see that the even though the class label is different" }, { "end": 2782.56, "start": 2776.4, "text": " the style network will group the very similar classes together you can see here these are both" }, { "end": 2789.44, "start": 2782.56, "text": " cheetahs and here are both lions so it'll group them together which is pretty cool and sort of" }, { "end": 2795.76, "start": 2789.44, "text": " verifies that it recognizes these these different things because you force the guiding network to" }, { "end": 2801.2000000000003, "start": 2795.76, "text": " make 10 classes but the style network is simply continuous so it's cool to see that the style" }, { "end": 2808.32, "start": 2801.2000000000003, "text": " network will make one cluster with styles even though it's different labels and here you can" }, { "end": 2813.1200000000003, "start": 2808.32, "text": " see different samples from these domains just to verify that the guiding network has actually" }, { "end": 2821.2000000000003, "start": 2813.1200000000003, "text": " learned to separate things i still find this pretty pretty magical too this is completely" }, { "end": 2828.7999999999997, "start": 2821.2, "text": " unsupervised and it sort of finds these clusters by itself all right they have a bunch of images" }, { "end": 2834.8799999999997, "start": 2828.7999999999997, "text": " here as i said this is no longer with one reference images image this is where you take the entire" }, { "end": 2839.3599999999997, "start": 2834.8799999999997, "text": " domain so you self-label with your guiding network and then you take the mean vector" }, { "end": 2845.4399999999996, "start": 2840.16, "text": " and that's going to be your target style vector and these are the source images that you transfer" }, { "end": 2850.48, "start": 2845.4399999999996, "text": " and you can see that you know it works pretty well so they always have like one adult animal" }, { "end": 2859.12, "start": 2850.48, "text": " and one child animal and i guess not or just two different ones here this is particularly cute" }, { "end": 2867.84, "start": 2859.12, "text": " though i have to show you this fox right here what's going on with that fox like someone help that" }, { "end": 2876.96, "start": 2867.84, "text": " fox yeah um so we're not at perfection yet as you can see but it's you know that that looks like a" }, { "end": 2886.88, "start": 2876.96, "text": " pretty pretty cool fox maybe okay where did it go maybe it slipped maybe it's an it's an offshoot" }, { "end": 2895.04, "start": 2886.88, "text": " of this one on the top left yeah who knows these data sets they have their way and um so this is" }, { "end": 2901.6, "start": 2895.04, "text": " sort of where you can see the limitations right here um that's not how a baby snow leopard looks" }, { "end": 2908.96, "start": 2901.6, "text": " you see the limitations here in that all of these animal faces they are still pretty aligned like" }, { "end": 2916, "start": 2908.96, "text": " they're fairly frontal not exactly but they're fairly frontal pictures um they're fairly" }, { "end": 2922.56, "start": 2916, "text": " standardized and so on so we're i don't think we're yet at the level where we can just do" }, { "end": 2930.4, "start": 2923.2, "text": " you know um fully image to image and you see it especially with faces because we as us humans" }, { "end": 2935.92, "start": 2930.4, "text": " as 
us humans are extremely good at you know seeing when there's something wrong with a face" }, { "end": 2943.44, "start": 2936.7200000000003, "text": " but it's still it's still pretty impressive what's possible and i think if the past is of any" }, { "end": 2951.04, "start": 2943.44, "text": " indication here is summer to winter that actually looks good if the past is of any indication then" }, { "end": 2958.8, "start": 2951.84, "text": " this technology will be pushed pretty hard and soon we'll be able to do this with a simple smartphone" }, { "end": 2965.04, "start": 2958.8, "text": " app or something like this so i invite you to check out the paper right here they have lots" }, { "end": 2972.1600000000003, "start": 2965.04, "text": " and lots and lots of examples and t-sneak plots and whatnot in their appendix they have the code" }, { "end": 2989.44, "start": 2972.16, "text": " online as far as i as i have seen and with that let me know what you think in the comments bye bye" } ]
DLq1DUcMh1Q
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
A bio-inspired bistable recurrent cell allows for long-lasting memory (Paper Explained)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "gru", "lstm", "schmidhuber", "bistable", "bistability", "neurons", "biological", "spiking", "tanh", "stable", "attractor", "fixed points", "memory", "memorize", "sparse", "long sequence", "history", "storage", "remember", "rnn", "recurrent neural network", "gated recurrent unit", "forget", "backpropagation", "biologically inspired" ]
Even though LSTMs and GRUs solve the vanishing and exploding gradient problems, they have trouble learning to remember things over very long time spans. Inspired from bistability, a property of biological neurons, this paper constructs a recurrent cell with an inherent memory property, with only minimal modification to existing architectures. OUTLINE: 0:00 - Intro & Overview 1:10 - Recurrent Neural Networks 6:00 - Gated Recurrent Unit 14:40 - Neuronal Bistability 22:50 - Bistable Recurrent Cell 31:00 - Neuromodulation 32:50 - Copy First Benchmark 37:35 - Denoising Benchmark 48:00 - Conclusion & Comments Paper: https://arxiv.org/abs/2006.05252 Code: https://github.com/nvecoven/BRC Abstract: Recurrent neural networks (RNNs) provide state-of-the-art performances in a wide variety of tasks that require memory. These performances can often be achieved thanks to gated recurrent cells such as gated recurrent units (GRU) and long short-term memory (LSTM). Standard gated cells share a layer internal state to store information at the network level, and long term memory is shaped by network-wide recurrent connection weights. Biological neurons on the other hand are capable of holding information at the cellular level for an arbitrary long amount of time through a process called bistability. Through bistability, cells can stabilize to different stable states depending on their own past state and inputs, which permits the durable storing of past information in neuron state. In this work, we take inspiration from biological neuron bistability to embed RNNs with long-lasting memory at the cellular level. This leads to the introduction of a new bistable biologically-inspired recurrent cell that is shown to strongly improves RNN performance on time-series which require very long memory, despite using only cellular connections (all recurrent connections are from neurons to themselves, i.e. a neuron state is not influenced by the state of other neurons). Furthermore, equipping this cell with recurrent neuromodulation permits to link them to standard GRU cells, taking a step towards the biological plausibility of GRU. Authors: Nicolas Vecoven, Damien Ernst, Guillaume Drion Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://discord.gg/4H8xxDF BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Hi there, today we're looking at "A bio-inspired bistable recurrent cell allows for long-lasting memory" by Nicolas Vecoven, Damien Ernst and Guillaume Drion of the University of Liège. This is not a paper that wants to push state of the art on anything; it is a paper that takes a concept from biological research on actual neurons, namely the bistability property, and tries to introduce it into recurrent neural networks. On toy or small data they show that this has the interesting property that these recurrent neural networks can then remember important things for much, much longer than our current recurrent architectures can. I believe this is a very interesting paper, and it's a nice refresher from all the state-of-the-art number-pushing papers. So dive in with me to explore this. If you like content like this, also consider subscribing if you aren't, sharing it out, and leaving a like and a comment if you have any sort of comments. They basically say recurrent neural networks provide state-of-the-art performance in a wide variety of tasks that require memory, which is true. So what do recurrent neural networks do? A classic recurrent neural network goes something like this: there is a hidden state at time step t, and there is a sequence of inputs you have to work with, call them x1, x2, x3, x4 and so on, and at some point you have to provide an output. This could be at every single time step, or sometimes just at the end you have to provide an output y. For example, this could be a piece of text, maybe an email, and you need to decide whether or not it is spam; or this could be the time series of a patient in an ICU, and you need to decide whether or not to give medication to the patient. The applications of this are very wide, and any sort of series data will do. So there is this hidden state, and at each time step this hidden state is updated to a new hidden state. Call this h0: it is updated to a new hidden state by incorporating the input, so somehow the input x and the previous hidden state are combined into a new hidden state. Then the next input is taken in, and using this hidden state a new hidden state is made, and so on. One property here is that the newest hidden state only ever depends on the previous hidden state; it doesn't directly depend on the hidden states further back. It depends only on the hidden state right before itself and the input that corresponds to it. That is the information flow. The other important property is that the connections that turn one hidden state into the next, and that incorporate the input, are always the same: the functions that incorporate the input are the same in each time step, so their parameters are shared, and the same goes for the functions that transform one hidden state into the next. Of course there is a joint function of the two that actually produces the next hidden state. These weights are all shared across time steps, and that's what makes the network recurrent. We call a single time step here a recurrent cell, and the question now is: how do you construct a recurrent cell?
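To make the recurrence concrete, here is a minimal sketch (my own illustration in Python, not code from the paper) of how any recurrent cell is unrolled over a sequence; note that the same cell function and the same parameters are reused at every time step, and each new hidden state depends only on the previous hidden state and the current input:

def unroll(cell, params, xs, h0):
    # The same `cell` function and `params` are shared across all time steps.
    h = h0
    for x in xs:                 # xs holds the inputs x1, x2, x3, ...
        h = cell(params, x, h)   # h_t depends only on x_t and h_{t-1}
    return h                     # final hidden state, used to predict y

Any of the cells discussed below (classic RNN, GRU, BRC) can be plugged in as `cell`.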
Usually recurrent neural networks run into the problem of either exploding or vanishing gradients, because — if you are into neural networks you know this — a weight matrix is multiplied by the previous hidden state, and if you multiply by the same weight matrix over and over again, what happens pretty much depends on the top singular value of that matrix: if the top singular value is higher than one, the signal explodes, and if it's lower than one, the signal fades over time, and there's pretty much nothing you can do. Classic RNNs have looked like this: the next hidden state is a nonlinear function g — g can be some nonlinearity like a sigmoid or a hyperbolic tangent — of the current input and the last hidden state, obtained by simply multiplying these two by weight matrices and adding them up, i.e. something like h_t = g(W x_t + U h_{t-1}). That's what we've just looked at. Now this is problematic, as I said, because of the vanishing or exploding gradients, and therefore people have come up with methods to solve it — you might know things like LSTMs and GRUs. These cells are much more complicated than the standard cell we saw, but they are also much more effective, because they don't have these vanishing or exploding gradient problems. Their promise is that they can remember things for longer, because they allow the gradient to flow without these problems during backpropagation.
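As a quick numerical illustration of this vanishing/exploding behavior (my own toy example, not from the paper): repeatedly applying the same weight matrix scales a signal by roughly its top singular value at every step, so after a few hundred steps the signal either dies or blows up.

import numpy as np

rng = np.random.default_rng(0)
h = rng.normal(size=100)
for s in (0.95, 1.05):                            # top singular value below / above 1
    Q = np.linalg.qr(rng.normal(size=(100, 100)))[0]
    W = s * Q                                     # all singular values of W equal s
    v = h.copy()
    for _ in range(300):                          # 300 recurrent steps
        v = W @ v
    print(s, np.linalg.norm(v))                   # ~ norm(h) * s**300

With s = 0.95 the norm shrinks by a factor of roughly 10^-7; with s = 1.05 it grows by roughly 10^6.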
Now how does one of these look? In this paper they mainly look at the GRU, the gated recurrent unit, which is a simpler version of the LSTM; the LSTM is slightly more complex, but the principles are the same. So what does the GRU do? These are the formulas for the GRU, and we're going to try to deconstruct them. As you can see, the inputs are the same: this time step's input and the last hidden state. Those are all the quantities we need, and we need to somehow output the next hidden state; the last hidden state is then also used to predict y, by the way, in all of these cases. First we need to calculate two things called z and r, and both are computed the same way: multiplying the input and the last hidden state by weight matrices and running the result through a sigmoid nonlinearity. So say we have the last hidden state here and x_t here; every one of these arrows is a multiplication by a weight matrix, transforming its input. Join these into a sigmoid node and that gives you z_t; join these into another sigmoid and that gives you r_t. So far so good. Now we need to combine all of this in the last line, where z acts as a switch. z is the result of a sigmoid and therefore lies between 0 and 1, and this is the Hadamard product, the element-wise product between vectors, which means this is a gating: if z is 1 it selects one quantity, and if it's 0 it selects the other — of course it can be anywhere between 0 and 1, but those are the ends of the spectrum. So z is a switch that selects between the last hidden state and something new, and z_t modulates that switch: h_{t-1} is one possibility the switch can select. What's the other possibility? The other possibility is this quantity: a hyperbolic tangent of a combination of things. Let's go from the back: what's the input to the tanh? Two things. First, x is an input, so we can draw a line directly from it, modulated by a weight matrix (every arrow, as you might remember, can be a function — though not all arrows are functions; this particular arrow is just an arrow, which may be confusing, but you get what I mean). The second input is r times the last hidden state, i.e. the last hidden state modulated by this matrix. So r acts as another gate; it can again be between 0 and 1 because it's the result of a sigmoid. The hidden state also flows here, modulated by r as a sort of gate, so r can either close or open this gate, and the result is fed into the tanh. It's a rather complicated setup, so let's analyze it. First of all, the new hidden state is either the last hidden state or something new, and that's decided by z. z is calculated from the hidden state and the current input, so the cell gets to look at the hidden state — the information about what happened so far — and the current input — the new information from the sequence — and decide: do I even want to update my hidden state? If not, it can select the pass-through path, and nothing happens: the next hidden state will be exactly the same as the last one. If it decides that this new thing in the sequence is actually important and should be remembered — remember, the task of the network is sometimes to remember things from the sequence; if this is an email and we want to detect whether it's spam, then a phrase like "buy gold" might be really important, and you need to remember it in the hidden state, because the only way information from x flows to y is through the hidden states — then at that point you would actually want to update the hidden state. Some other input might not be as important, so you might say: I still want my hidden state to be the old hidden state. z is the gate that allows this. And if we do decide to update the hidden state, what then? We will incorporate the new input, but we can also decide how to mix the new input with the old hidden state. We don't simply discard the old hidden state, because it still has a path to be remembered — but it's a longer path, and it needs to go through this gate. So this gate decides which parts of the old hidden state pass through.
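Putting the pieces together, here is a minimal NumPy sketch of the GRU update just described (my own rendering of the standard GRU equations; the weight names and shapes are illustrative). It has the same signature as the `cell` in the unrolling sketch above:

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_cell(p, x, h):
    z = sigmoid(p["Wz"] @ x + p["Uz"] @ h)           # update switch: keep old h vs. take new
    r = sigmoid(p["Wr"] @ x + p["Ur"] @ h)           # reset gate: which parts of h to forget
    cand = np.tanh(p["Wh"] @ x + p["Uh"] @ (r * h))  # candidate, built from x and the gated h
    return z * h + (1.0 - z) * cand                  # z selects between old state and candidate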
You can see this is an element-wise product: r is between 0 and 1 at each position of the vector, and at each position r decides, is this worth remembering or not? If it's not worth remembering, r is 0 at that position, that position of the old hidden state is zeroed out and forgotten, and that is the opportunity for the hidden state to incorporate new information: it can delete old information and take in the new input, which then results, along this path, in the new hidden state. So there are two major things: first, we can decide whether to incorporate new information at all, which is achieved by the z gate; and second, if we do update, we can decide which parts of the old hidden state to forget, which is the r gate. How to update is then basically the result of the weight matrix associated with this function. Alright, so that's the gated recurrent unit, and it works a lot better than classic RNNs. Having said that, they now turn to this property of neuronal bistability that occurs in actual neurons. This here is a model of a neuron with this property — now forget everything we said about GRUs, we're just going to look at this. What happens in a neuron, usually: this is a single neuron, and it has input synapses from other neurons, i.e. connections coming from other neurons into it. These are accumulated — in a classic model of a neuron they're just summed up — and then the sum is run through something like a step function: if the sum of all the inputs is smaller than a particular threshold, the output is nothing, and if it's higher, the output is a firing of the neuron. This can be weighted and so on, but in this case it's just a function of the inputs, and that gives you the input signal to the neuron. Now there is a property that makes this interesting: the signal goes out here and is integrated — this is an integrator — and that gives the output signal, but there is also a back connection, which means the signal that comes out at time step t is fed back and combined with the input signal, so the neuron is self-modulating: the signal comes out, is sent back, is combined with the input, and is sent through again, and the integrator integrates everything that's happening. If you look a bit closer, you'll see that there is a minus, so the feedback is actually not added but subtracted, and there is an f, meaning the feedback goes through a nonlinear function. Now if this f is, say, a monotonic function, we can try to estimate what happens.
If all of this is very high — a big number — then the sum will be a big number and the output will be a big number; and then the feedback will be a big number too (f is monotonic), and that big number is subtracted. So whenever the neuron is very excited, the feedback pushes it back. And when it is very negatively excited, the feedback works in exactly the opposite direction: the feedback will be very negative, and because it is subtracted, it pushes the signal towards the positive. So this neuron self-stabilizes over time towards the zero point — at least if f is the identity function. You can sort of see how this property works. Now we make it a bit more complicated: f is not the identity, but, as they have it, f(V_post) = V_post − α · tanh(V_post). (I'm not entirely sure which quantity is plotted where in their figure, or whether f is just the minus-tanh part, but the way they later build it into the GRU is explicit.) If that is the function, something very interesting happens, depending on this α. If α is between 0 and 1, we simply have a monotonic function — here you can see how big V_post, the output signal, is, versus the feedback (integrated) signal; that's the experiment we described before. Namely: if the signal is very high, the feedback will be high as well, and because it's subtracted, it pushes the signal back towards zero; if the signal is below zero, the feedback will also be below zero, and because it's subtracted, it pushes the signal up towards zero. So zero is the stable point — the dynamics always push back towards zero. However, if we just change the parameter α to 1.5, a very different thing happens. If your output signal is very high, the same thing happens: it is pushed back. But if your output signal is between zero and a certain point, there is a regime where, even though the output is positive, you are pushed towards that nonzero point — and therefore there are now two stable points. A stable point basically means: if the signal deviates from it, it is pushed back towards it. And these two stable points are not at zero, they sit at two nonzero values, which is pretty interesting, because it means you can potentially remember things with the cell: an output signal of zero is basically not informative, but here you can be in either of two states, and little perturbations will keep you in that state. You could be in one of these states as an output, and the cell will just keep updating itself, stay stable, and keep outputting that signal.
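You can reproduce this bistability in a few lines (my own toy illustration of the dynamics just described, not the paper's exact neuron model): iterate a tanh feedback with gain alpha and watch where small perturbations around zero settle.

import numpy as np

def settle(alpha, v0, steps=200):
    v = v0
    for _ in range(steps):
        v = alpha * np.tanh(v)      # tanh feedback with gain alpha
    return v

for alpha in (0.5, 1.5):
    print(alpha, settle(alpha, +0.1), settle(alpha, -0.1))
# alpha = 0.5: both perturbations decay back to ~0 (a single stable point at zero)
# alpha = 1.5: they settle near +1.29 and -1.29 (two stable points; zero is unstable)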
And if you then provide some huge input signal, you could potentially throw the state over the hill to the other side, where it would stabilize at the other point. So this is a way to remember things within these biological cells — pretty cool. The non-filled-out circle here means it's an unstable point: it's technically stable in the sense that if you're exactly at zero you will remain at zero, but if you perturb even a little bit, you will move away from it. I hope this property is clear, and why it is so fascinating: we can use the fact that the stable points are not at zero, and that there is more than one of them, for remembering things. They now build this into the gated recurrent unit; they call it the bistable recurrent cell, BRC, and its formulas look almost the same as the GRU's. So let's analyze the differences to the GRU. The first, most striking difference is that a lot of weight matrices have become single vectors: wherever the last hidden state is incorporated, it is no longer multiplied by a weight matrix but instead enters an element-wise product with a vector. That has a reason: what they want to model is individual neurons on a biological level, and a neuron can only feed back onto itself. If there is a layer of neurons, each can only feed back onto itself, whereas in a recurrent neural network the hidden state is a vector, and if I transform it into the next hidden state or any other quantity — say I transform h into r — then any interaction is possible: any entry in the vector can influence any other entry, because there is a big weight matrix in the middle. They want to leave this away and model things as close as possible to actual layers of neurons. So they say: the input x can be distributed to all the neurons, because technically the input comes from other neurons further down, which can all have connections to these neurons; but the feedback cycle is only really observed within an individual neuron. That's why the recurrent weight products are just element-wise products with vectors. The second difference: you again see this switch, the c switch, and c is like before — a sigmoid combining the input and the previous hidden state, nothing new. So this switch is the same: the cell has the possibility of letting in new information or ignoring the current input x_t. The tanh term is the same as well: it combines the new information — in case we want to let it in — with whatever parts of the old information we decided to keep. The difference is in this a. This a used to be a sigmoid of the combination, and now it's slightly different: it used to be a sigmoid, now it's 1 + tanh. That's a very slight modification: tanh lies between −1 and 1 instead of 0 and 1 like the sigmoid, and the 1+ makes the result lie between 0 and 2.
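Here is a minimal NumPy sketch of the BRC update as I read these formulas (hedged: the exact parameterization is in the paper and its code; the key points are that the input enters through full matrices U, while the recurrent terms are per-neuron vectors w applied element-wise, and that the a gate is 1 + tanh instead of a sigmoid):

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def brc_cell(p, x, h):
    # Recurrent terms are element-wise (w * h): each neuron feeds back only on itself.
    a = 1.0 + np.tanh(p["Ua"] @ x + p["wa"] * h)   # in (0, 2); a > 1 makes a neuron bistable
    c = sigmoid(p["Uc"] @ x + p["wc"] * h)         # update switch, like the GRU's gate
    cand = np.tanh(p["Ux"] @ x + a * h)            # candidate with self-feedback gain a
    return c * h + (1.0 - c) * cand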
And we've seen before that there is critical behavior with two regimes. When a is between 0 and 1, the cell behaves like a classic gated recurrent unit, like a classic GRU; but when a is between 1 and 2, you get exactly the bistable behavior we saw before. So if a is between 0 and 1 it's a classic cell, and if a is between 1 and 2 it's a bistable cell — and the network can decide by itself what it wants to do, because it can learn a. So this is the only change, really — apart from only individual neurons feeding back on themselves — that this gate is no longer between 0 and 1 via a sigmoid but between 0 and 2 via 1 + tanh. A very simple change, but the effect is pretty cool. They show a schematic drawing of this: if a is between 0 and 1, you again have a single stable state at zero, but if it's between 1 and 2, you have two stable states at two nonzero points. We already saw this, but now it holds for the bistable recurrent cell itself, not just the biological neuron. And here they give an example of what happens when you run a particular input signal through such a cell while fixing the c and a parameters — so c and a aren't learned here, they're just fixed — and you see what happens. The blue curve shows the classic behavior: c is moderately low, and we saw that c is the switch for whether to keep old information or take up new information; since it's low, new information is taken up. So when the signal goes up, the blue line goes up as well, and when the signal goes down, the blue line goes down — the blue line pretty straightforwardly follows the signal. In contrast, the red line is over the threshold: a is fixed at 1.5, c still at 0.2. Again, when the signal goes up, the red line goes up, but because it is near a stable point, when the signal dips again it doesn't go down enough: it sort of remembers the state it was in and doesn't follow the signal down. Only when the signal goes down much further, over the threshold, does the state jump to the other side: the first dip only pushed it partway and it was pulled back up, but the larger dip throws it over, and the cell switches to the other state. The next small bump is then not enough to bring it back up, so it remains in that state. So the cell remembers the input, and small deviations in the signal don't manage to throw it out of that state; the signal needs to move very far for the state to change. That's pretty cool: there is this remembering behavior. And remember, in the actual implementation these c and a parameters aren't fixed; they are also determined by the cell itself, so the cell can decide by itself when it wants to remember things, how strongly it wants to remember them, and so on. We're going to check this out in an actual implementation.
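The same qualitative picture can be reproduced with a scalar version of the cell and fixed gates (my own toy reproduction of the described behavior, not the paper's exact signal):

import numpy as np

def run(xs, a, c=0.2):
    h, out = 0.0, []
    for x in xs:
        h = c * h + (1 - c) * np.tanh(x + a * h)   # BRC update with fixed a and c
        out.append(h)
    return np.array(out)

xs = np.concatenate([np.full(20, 1.0),    # signal goes up
                     np.full(20, -0.2),   # small dip
                     np.full(20, -2.0),   # large dip
                     np.full(20, 0.2)])   # small bump
classic  = run(xs, a=0.5)   # follows the signal up and down
bistable = run(xs, a=1.5)   # ignores the small dip, flips only on the large dip,
                            # and the final small bump is not enough to flip it back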
There is one last modification they make. They say they tried this version and it doesn't really work — well, it works sometimes, but there is the issue that the neurons connect only back onto themselves, which makes the model much less powerful than a classic recurrent cell. It's closer to biology, but much less powerful. And there is a property they call neuromodulation: in real neurons, one neuron can influence another by modulating its a and c parameters — there are interconnections between neurons that influence how much other neurons remember and forget. So they decide to model that, and lo and behold, we're back to having weight matrices. They say this is not a terribly biologically plausible way of implementing neuromodulation, but it's an easier one, and it brings us back closer to the GRU. So now the only difference to the GRU is that where there was a sigmoid, there is now a 1 + tanh — I find this pretty cool — and the only functional difference is this property of bistability. And now we can actually compare. They first run some benchmarks, which are pretty neat. The first is the copy-first-input benchmark: the network is presented with a one-dimensional time series of T time steps, where each entry is a random number; after receiving the last time step, the network's output should approximate the very first input. So all the network needs to do is remember the first thing it sees — and that should be learnable. It's not specified whether the initial hidden state is given to the network, but technically it doesn't matter, because the network can learn whatever it needs: it can learn to reserve a designated bit in the hidden state — the hidden state is of size 100, I believe — marking whether it has already encountered the first input. If the bit is not set, it's the first time step, so the cell should incorporate the new information into the hidden state and also set the bit; at every subsequent step it sees the bit is set and can simply close the gate that lets new information in. It should thus be able to carry the information all the way to the end, simply by closing that gate after the first step. And what happens? All the results are shown here — they train for 300,000 gradient descent iterations — and you can see that when the series are pretty short, the LSTM and the GRU tend to perform well. The BRCs don't perform poorly, they just perform worse — the error is still in the 0.01 regime or so. However, at 300 steps the GRUs and LSTMs start to fail, because they are not built explicitly to remember for that long; they don't have the bistability property. The new cells, in contrast, now shine: their error is still pretty low. And at 600 steps, the GRUs and LSTMs fail completely.
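A data generator for this benchmark takes only a couple of lines (a sketch under my reading of the task description; the exact sampling distribution doesn't matter for the illustration):

import numpy as np

def copy_first_input_batch(batch_size, T, rng):
    # 1-D series of T random entries; the target is the very first entry.
    x = rng.normal(size=(batch_size, T, 1))
    y = x[:, 0, 0]                      # output x_1 after seeing all T steps
    return x, y

x, y = copy_first_input_batch(32, 600, np.random.default_rng(0))  # T=600: the hard regime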
They completely forget the input, while the nBRC at least is still able to remember the first value pretty well. Still in this first experiment, the copy-input benchmark, you can also see that even in the 100-step setting, where the GRU does still learn the task, it learns it much, much later than the BRC, which learns it fast. Only when the series are just five steps long does the GRU slightly outperform the BRC. So the general notion is: the classic cells are more powerful on classic tasks, whereas these new cells shine exactly where the classic ones fail — when things must be remembered for a very long time. The new cells are not state of the art yet; there are possibly still modifications to be made. We have a long history of optimizing GRUs and LSTMs — they haven't always worked as well as they do now, because we have learned how to handle them — and I expect that if these cells take off, especially the nBRC, then with time we will become just as proficient at handling them, and they will probably become on par with or even outperform the LSTMs and GRUs on all tasks, and be especially good on tasks where you have to remember things. But for now, they are outperformed by LSTMs and GRUs. The second, more interesting experiment is the denoising benchmark. They say the copy-input benchmark is interesting as a means to highlight the memorization capacity of a recurrent neural network, but it does not test its ability to successfully exploit complex relationships between different elements of the input signal to predict the output. So they have a new benchmark: in the denoising benchmark, the network is presented with a two-dimensional time series of T time steps, in which five different time steps are sampled uniformly and signalled to the network. I'll just tell you what's going on. The time series is two-dimensional. In the first dimension you simply have a bunch of random numbers — imagine something like 5, 8, 2, 9, 3, 4, 0, 2 and so on, though they're actually sampled from a Gaussian or some such distribution, not literally these values. In the second dimension you have a −1 almost everywhere, but at five points you have a 1, and at the last point of the sequence you have a 0 — the 0 is simply a marker for the end of the sequence. What the network needs to do is output, in order, all the elements of the first dimension where there was a 1 in the second dimension — in this example, say, 9 and 4. So think about what it needs to learn: every time it sees a 1 in the second dimension, it needs to take the corresponding value in the first dimension, put it somehow into the hidden state, and carry that hidden state forward; when it sees a 1 again, it needs to put the second value into the hidden state as well, but without overriding the first. If it just naively decided "I need to put this into the hidden state", it would almost surely override the previous information. So it needs to be able to say: I've already stored some things in this part of my hidden vector (which is, say, 100-dimensional), so maybe I should store the new value over there, in a different part.
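Again, here is a sketch of a data generator for this benchmark under my reading of the description (the exact sampling details are in the paper; the parameter n is explained just below):

import numpy as np

def denoising_batch(batch_size, T, n, rng):
    # Dim 0: random values. Dim 1: -1 everywhere, 1 at the five positions whose
    # values must be remembered, and 0 at the last step as an end-of-sequence marker.
    # With n > 0, the five ones are kept out of the last n steps, making the task harder.
    x = np.stack([rng.normal(size=(batch_size, T)),
                  -np.ones((batch_size, T))], axis=-1)
    y = np.empty((batch_size, 5))
    for b in range(batch_size):
        pos = np.sort(rng.choice(T - 1 - n, size=5, replace=False))
        x[b, pos, 1] = 1.0          # mark the five entries to remember
        y[b] = x[b, pos, 0]         # targets: the marked values, in order
    x[:, -1, 1] = 0.0               # end-of-sequence marker
    return x, y

x, y = denoising_batch(32, T=400, n=200, rng=np.random.default_rng(0))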
Technically, GRUs and LSTMs are able to do this, but as we'll see, not nearly as well. The results are in this table, where you can clearly see the effect of the parameter N. N controls how early the relevant inputs must occur: when N is 0 the ones can be anywhere, but when N is, say, 5, the last five steps surely don't contain a one, so only the first T minus 5 steps do. The higher N is, the harder the task, because your learning signal is farther and farther away from the point where you have to produce the output. You can see that when N is low, the GRUs and LSTMs perform pretty well, and so do these new cells, just not quite as well. However, when the task gets harder and you actually need to learn from a sparse signal over a long period of time, with no signal in between, the GRUs and LSTMs fail, while the BRCs are still able to learn. That's fairly cool. From a researcher's perspective, I do wonder whether they first tried the task as I described it, discovered that the classic cells could still do it, and then made the task harder like this to produce a difference, or whether they had the idea with N from the start. I'm not sure, but either way they show that these cells can incorporate this information and reason about what they need to remember.

Finally, they also run sequential MNIST, where they feed an MNIST digit into the network pixel by pixel, and at the end the output of the network needs to be the class of the digit. Here they have a parameter called N_black: they take an MNIST digit, say a three, unroll it into a single vector, feed it step by step into the recurrent network, and then append a certain number of empty, black pixels before asking for the prediction. If they ask the network for the class immediately after the digit is done, the GRU and the LSTM perform fairly well, as do the BRCs. But if you append a whole bunch of black pixels (remember, an MNIST digit has 784 entries, so appending 300 black pixels is quite significant relative to the sequence length), then the GRUs and LSTMs can't learn anymore. They can't learn to ignore the black pixels, because the learning signal is just too far away, but these new cells can, because they can exploit the bistability property and remember. Again, I wonder how this experiment came to be; it seems pretty funny.
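A sketch of how such a padded sequence might be built; the 28x28 shape is standard MNIST, while the function itself is just my illustration of the setup.

```python
import numpy as np

def sequential_mnist_input(digit_image, n_black=300):
    """Unroll a 28x28 digit into a pixel sequence and append black pixels.

    The appended zeros carry no information, so they push the
    classification target n_black steps past the last useful input,
    stretching the memory span the network has to bridge.
    """
    pixels = digit_image.reshape(-1)          # 784 values, one per time step
    padding = np.zeros(n_black)               # "black" pixels
    return np.concatenate([pixels, padding])  # length 784 + n_black

seq = sequential_mnist_input(np.random.rand(28, 28))
assert seq.shape == (784 + 300,)
```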
But the last thing they do is investigate what happens inside their cells, and this, I feel, is the most interesting part. They do it on the denoising benchmark, the task we looked at before where you need to remember five randomly selected numbers indicated by the second dimension. They show a sequence where the five relevant inputs occur at steps 31, 100, 246, 300 and 376: the five positions where the input indicates that the network should remember the value in the first dimension and later output it. They analyze two things. First, the proportion of bistable neurons: they look at the a quantities and count how many neurons in a layer have an a greater than 1, which means they are in bistable mode. Second, the average value of c: remember, if c is high the cell doesn't let in new information, and if c is low it does.

If you first look at c, you can see that every single time the second dimension indicates one of the inputs to remember, the network immediately drops the c values (the different colors are different layers; the network has multiple layers of these cells, as is usual in recurrent neural networks). So c rises pretty quickly, but as soon as one of the relevant inputs appears, c drops, which means the network realizes it must now let the new information in, and then c immediately shoots back up. In effect the network says: as long as the second dimension is -1, there is no reason for me to incorporate this information, it's not important; as soon as a relevant input comes, I open up again. Now look at the last layer of the network, the highest layer, carrying the most abstract information. From input to input, the value of c gets higher and higher: the spikes still go down at each relevant input, but they go down to a higher and higher point. So the network not only recognizes "wait, I need to remember this", it also recognizes "but I probably shouldn't completely forget what I stored previously, because it is important to keep those earlier things". It lets in less and less new information the more things it needs to remember. The fact that c drops at each relevant input, and the fact that it generally rises after each new input is incorporated into the hidden state, is a pretty good indication that what they claim is really happening.

The second plot shows almost the same thing from the other side: how many of the neurons are actually in bistable mode. You can see, especially in the last layer, that the number of neurons in bistable mode goes up and up after each of these steps, and the spikes correspond exactly to the points where the network has to let in new information.
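Reproducing this kind of introspection is straightforward if you log the gates while unrolling the cell. A hypothetical sketch, reusing the parameter layout of the nbrc_step function from above but returning the gates as well:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def nbrc_step_with_gates(x, h, params):
    """Same nBRC update as before, but also returns the a and c gates."""
    Ua, Wa, Uc, Wc, U = params
    a = 1.0 + np.tanh(Ua @ x + Wa @ h)
    c = sigmoid(Uc @ x + Wc @ h)
    h_new = c * h + (1.0 - c) * np.tanh(U @ x + a * h)
    return h_new, a, c

def analyze_gates(sequence, params, nh):
    """Per-step fraction of bistable neurons (a > 1) and mean c."""
    h = np.zeros(nh)
    prop_bistable, mean_c = [], []
    for x in sequence:
        h, a, c = nbrc_step_with_gates(x, h, params)
        prop_bistable.append(float(np.mean(a > 1.0)))  # bistable fraction
        mean_c.append(float(np.mean(c)))               # high c = keep state
    return prop_bistable, mean_c
```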
I find this last experiment the coolest, because they can actually show: look, here is a pretty good indication that the thing we built does what we say it does. They also have an actual proof of the bistability when a is greater than 1; I won't go through it here, but you can look at it if you want. I'm excited to see what happens with these kinds of architectures in the future, because it is a pretty minor modification, and maybe with a bit more tuning, once we figure out what it takes to make these cells compete with the classic GRUs and LSTMs in regimes where long memory isn't necessary, this could become a standard building block in the recurrent neural network toolkit, even though recurrent networks have been somewhat outperformed by transformers in recent years. Alright, that was it for me. I hope you had fun with this paper. I invite you to check it out, and bye bye.
[ { "end": 5.2, "start": 0, "text": " Hi there, today we're looking at a bio-inspired bistable recurrent cell," }, { "end": 10.52, "start": 5.2, "text": " allows for long-lasting memory by Nicolas Vécovin, Damien Ernst and" }, { "end": 16.52, "start": 10.52, "text": " Gion Drion of the University of Liège. This paper here is not a paper that wants" }, { "end": 21.68, "start": 16.52, "text": " to push state-of-the-art on anything, it is a paper that takes a concept from the" }, { "end": 27.68, "start": 21.68, "text": " biological research on actual neurons, which is the bistability property and" }, { "end": 34.24, "start": 27.68, "text": " tries to introduce it to recurrent neural networks. And on toy data or small" }, { "end": 38.2, "start": 34.24, "text": " data they show that this has the interesting property that these" }, { "end": 43.24, "start": 38.2, "text": " recurrent neural networks can then remember important things for much much" }, { "end": 49.64, "start": 43.24, "text": " longer than our current recurrent architectures can do. I believe this" }, { "end": 53.92, "start": 49.64, "text": " is a very interesting paper and it's a nice refresher from the whole" }, { "end": 61.160000000000004, "start": 53.92, "text": " state-of-the-art number pushing papers. So dive in with me to explore this. If you" }, { "end": 65.58, "start": 61.160000000000004, "text": " like content like this also consider subscribing if you aren't and sharing it" }, { "end": 71.84, "start": 65.58, "text": " out and leaving a like and a comment if you have any sort of comments." }, { "end": 75.88, "start": 71.84, "text": " They basically say recurrent neural networks provide state-of-the-art" }, { "end": 80.48, "start": 75.88, "text": " performance in a wide variety of tasks that will require memory, which is true." }, { "end": 84.44, "start": 80.48, "text": " So we have these recurrent neural networks and what the recurrent neural" }, { "end": 92.2, "start": 84.44, "text": " networks do is they're basically... So a classic recurrent neural network goes" }, { "end": 97.44, "start": 92.2, "text": " something like this. There is a hidden state at time step T and there is a" }, { "end": 105.52000000000001, "start": 97.44, "text": " sequence of inputs that you have to work with. So we'll call them x1, x2, x3, x4" }, { "end": 110.44, "start": 105.52000000000001, "text": " and so on. And then at some point you have to provide an output. This could be" }, { "end": 114.6, "start": 110.44, "text": " at every single time step or sometimes it's just at the end you have to provide" }, { "end": 119.64, "start": 114.6, "text": " an output y. So for example this here could be a piece of text and you need to" }, { "end": 123.67999999999999, "start": 119.64, "text": " decide whether or not that piece of text, maybe it's an email, whether or not" }, { "end": 129.28, "start": 123.67999999999999, "text": " that's spam. This could be a time series of a patient in an ICU and you need to" }, { "end": 133.72, "start": 129.28, "text": " decide whether or not to give some medication to the patient. So the" }, { "end": 141.68, "start": 133.72, "text": " applications of this are very wide and any sort of series data will do. So" }, { "end": 146.64, "start": 141.68, "text": " there's this hidden state and at each time step this hidden state is updated" }, { "end": 153.2, "start": 146.64, "text": " to a new hidden state. So this call this h0. 
It's updated to a new" }, { "end": 159.66, "start": 153.2, "text": " hidden state by incorporating the input. So somehow the input x and the previous" }, { "end": 165.2, "start": 159.66, "text": " hidden state are made into a new hidden state. And then the next input is taken" }, { "end": 171, "start": 165.2, "text": " in and by using this hidden state a new hidden state is made and so on. So one" }, { "end": 177.12, "start": 171, "text": " property here is that the newest hidden state always only depends on the" }, { "end": 181.68, "start": 177.12, "text": " previous hidden state and it doesn't really directly depend on like the" }, { "end": 186.64, "start": 181.68, "text": " hidden state to before itself. It only depends on the hidden state right before" }, { "end": 192.35999999999999, "start": 186.64, "text": " itself and the input that corresponds to it. So this is the information flow. The" }, { "end": 197.64, "start": 192.35999999999999, "text": " other important property here is that these connections that make a hidden" }, { "end": 202.44, "start": 197.64, "text": " state into the next hidden state and also that incorporate the input, they're" }, { "end": 206.95999999999998, "start": 202.44, "text": " always the same. So these functions here that incorporate the input," }, { "end": 211.39999999999998, "start": 206.95999999999998, "text": " they're always the same in each time step. So the parameters are shared" }, { "end": 218, "start": 211.4, "text": " between them and the same goes for the functions here that transform" }, { "end": 221.32, "start": 218, "text": " one hidden state into the next hidden state. Of course there is a joint" }, { "end": 225.64000000000001, "start": 221.32, "text": " function between the two that actually produces the next hidden state. So these" }, { "end": 232.48000000000002, "start": 225.64000000000001, "text": " weights are all shared and for each time step and that's what makes the network" }, { "end": 238.84, "start": 232.48000000000002, "text": " recurrent. So we call the single time step here, we call that a recurrent cell." }, { "end": 243.64000000000001, "start": 238.84, "text": " And the question now is how do you construct a recurrent cell? Usually" }, { "end": 247.48000000000002, "start": 243.64000000000001, "text": " recurrent neural networks they run into this problem of either gradient" }, { "end": 253.96, "start": 247.48000000000002, "text": " explosion or vanishing gradients because usually this here, if you are into" }, { "end": 259.52, "start": 253.96, "text": " neural networks you know this, this is a weight matrix multiplied by the previous" }, { "end": 264.56, "start": 259.52, "text": " hidden state and if you just multiply the same weight matrix over and over and" }, { "end": 268.52, "start": 264.56, "text": " over again it pretty much depends on the singular value of that weight matrix" }, { "end": 274.47999999999996, "start": 268.52, "text": " if the top singular value is higher than one then the signal is going to explode" }, { "end": 279, "start": 274.47999999999996, "text": " and if it's lower than one the signal is going to fade over time and there's" }, { "end": 286.28, "start": 279, "text": " pretty much nothing you can do. So classic RNNs have looked like" }, { "end": 294.76, "start": 286.28, "text": " this right here. 
So the next hidden state is a nonlinear function G and G can be" }, { "end": 302.68, "start": 294.76, "text": " some non-linearity like a sigmoid or a hyperbolic tangent but it's a" }, { "end": 308.44, "start": 302.68, "text": " function of the current input and the last hidden state by simply matrix" }, { "end": 313.71999999999997, "start": 308.44, "text": " multiplying these two things by some weight matrices and then adding them up." }, { "end": 319.15999999999997, "start": 313.71999999999997, "text": " So that's what we've just looked at. Now this is problematic as I said because of" }, { "end": 325.36, "start": 319.16, "text": " the vanishing or exploding gradients and therefore people have come up with" }, { "end": 332.28000000000003, "start": 325.36, "text": " methods to solve this and you might know things like LSTMs and GRUs that are" }, { "end": 338.28000000000003, "start": 332.28000000000003, "text": " used to solve this. Now these cells here are much more complicated than the" }, { "end": 343.96000000000004, "start": 338.28000000000003, "text": " standard cell that we saw here but they also are much more effective because" }, { "end": 347.72, "start": 343.96000000000004, "text": " they don't have this vanishing or exploding gradient problems. Their" }, { "end": 353, "start": 347.72, "text": " promise is that they can remember things for longer because they allow the" }, { "end": 359.96000000000004, "start": 353, "text": " gradient to flow without these problems during backpropagation. Now how does one" }, { "end": 364.68, "start": 359.96000000000004, "text": " of these look? In this paper they mainly look at the GRU, the gated recurrent unit" }, { "end": 372.24, "start": 364.68, "text": " which is a simpler version of the LSTM. The LSTM is slightly more complex but" }, { "end": 378.04, "start": 372.24, "text": " the principles are the same. So they look at the GRU right here. What does the GRU" }, { "end": 383.44, "start": 378.04, "text": " do? These are the formulas for the GRU and we're going to try to" }, { "end": 388.56, "start": 383.44, "text": " deconstruct these formulas. So as you can see the inputs are the same. The inputs" }, { "end": 394.2, "start": 388.56, "text": " are going to be this points input, this time steps input and the last hidden" }, { "end": 398.8, "start": 394.2, "text": " state. Those are all the quantities that we need and we need to output" }, { "end": 403.6, "start": 398.8, "text": " somehow the next hidden state. The last hidden state is then used to predict" }, { "end": 409.40000000000003, "start": 403.6, "text": " the y by the way in all of these cases. So first we need to calculate two things" }, { "end": 416.04, "start": 409.40000000000003, "text": " called Z and R and both of them are the same. They're multiplying these two" }, { "end": 420.32, "start": 416.04, "text": " things by weight matrices and then running them through a sigmoid non-linearity." }, { "end": 426.8, "start": 420.32, "text": " Let's do that. Let's say we have the last hidden state here and we" }, { "end": 437.8, "start": 426.8, "text": " have Xt here. So first we're going to calculate the Zt and the Rt from that." }, { "end": 443.64, "start": 437.8, "text": " Now every one of these arrows right here is a multiplication by a weight matrix." 
}, { "end": 450.64, "start": 443.64, "text": " So every one of these arrows is transforming the input and let's also" }, { "end": 459.15999999999997, "start": 450.64, "text": " let's join this into a sigmoid node and that gives you Zt and let's join these" }, { "end": 468.56, "start": 459.15999999999997, "text": " into a sigmoid that gives you Rt. Okay so far so good. Now we need to combine all" }, { "end": 475.52, "start": 468.56, "text": " of this in this last line right here. So you can see that the Z thing here sort" }, { "end": 480.44, "start": 475.52, "text": " of acts as a switch. So Z is the result of a sigmoid and therefore it's between" }, { "end": 488.8, "start": 480.44, "text": " 0 and 1 and here this is the Hadamard product. This is the element-wise" }, { "end": 494.6, "start": 488.8, "text": " product between vectors which sort of means this is like a gating. This is like" }, { "end": 500.96, "start": 494.6, "text": " a switch. If it's 1 it selects this quantity and if it's 0 it selects the" }, { "end": 505.5, "start": 500.96, "text": " quantity over here and of course it can be between 0 and 1 but those are the" }, { "end": 511.28, "start": 505.5, "text": " ends of the spectrum. So Z is a switch that selects between the last hidden" }, { "end": 520.04, "start": 511.28, "text": " state. So let's draw that right here. So the last hidden state goes here and is" }, { "end": 527.2, "start": 520.04, "text": " one of the options of the output. And the option is given by Z. So Zt" }, { "end": 534.2, "start": 527.2, "text": " let's draw a switch like this maybe. So Zt is responsible for" }, { "end": 541.6400000000001, "start": 534.2, "text": " modulating this switch right here. Okay this gives you the next hidden state. You" }, { "end": 547.08, "start": 541.6400000000001, "text": " see Zt modulates that switch so Ht is the one possibility that the switch can" }, { "end": 552.76, "start": 547.08, "text": " select. What's the other possibility? The other possibility is this quantity right" }, { "end": 561.88, "start": 552.76, "text": " here which is a hyperbolic tangent of whatever that is. So that is combined of" }, { "end": 573.4, "start": 561.88, "text": " X. So let's go from the back right here. Tanh. What's the input to the tanh?" }, { "end": 579.28, "start": 573.4, "text": " It's two things. First of all the X is an input to the tanh so we can draw" }, { "end": 585.4, "start": 579.28, "text": " directly a line from here. The X modulated. Every arrow as you might" }, { "end": 590.2, "start": 585.4, "text": " remember can be a function. Not all arrows are functions like this" }, { "end": 595.2800000000001, "start": 590.2, "text": " arrow right here is not a function it's just an arrow. Maybe that's confusing." }, { "end": 604.9200000000001, "start": 595.2800000000001, "text": " You get what I mean. And the next thing is R times the hidden the last hidden" }, { "end": 612, "start": 604.9200000000001, "text": " state or the last hidden state modulated by this matrix. So R is" }, { "end": 616.6, "start": 612, "text": " acting as another gate. R can be again between 0 and 1 because it's the result" }, { "end": 624.32, "start": 616.6, "text": " of a sigmoid. So this hidden state will also go here. It will be modulated running" }, { "end": 632.24, "start": 624.32, "text": " out of colors here. It will be modulated by R here as a sort of gate. So R can either" }, { "end": 638.48, "start": 632.24, "text": " close or open this gate right here and then that's fed into the tanh. 
So it's" }, { "end": 645.24, "start": 638.48, "text": " a rather complicated setup as you can see right here. So let's analyze this." }, { "end": 653.48, "start": 645.24, "text": " First of all the hidden state is either the last hidden state or it is something" }, { "end": 658.98, "start": 653.48, "text": " new and that's modulated by this Z right here. And Z is calculated from the hidden" }, { "end": 666.36, "start": 658.98, "text": " state and the current input. So this allows the cell to basically look at" }, { "end": 670.28, "start": 666.36, "text": " the hidden state is sort of the information of what happened so far and" }, { "end": 676.04, "start": 670.28, "text": " the current input is the new information that it gets from the sequence. And it" }, { "end": 680.9599999999999, "start": 676.04, "text": " sort of gets to look at these two things and decides do I even want to update my" }, { "end": 685.68, "start": 680.9599999999999, "text": " hidden state? If not I can just select this path right here and then nothing" }, { "end": 689.68, "start": 685.68, "text": " happens to the hidden state. The next hidden state will be exactly the same as" }, { "end": 695.6, "start": 689.68, "text": " the last hidden state. If it decides, if it thinks wow this new thing in the" }, { "end": 699.12, "start": 695.6, "text": " sequence that's actually important I should remember that. Because" }, { "end": 704.88, "start": 699.12, "text": " remember the task of the network sometimes is to remember things from" }, { "end": 710.92, "start": 704.88, "text": " the sequence. I think we drew this over here. So if this is an email and we want" }, { "end": 715.26, "start": 710.92, "text": " to detect whether it's spam then this word in the sequence right here might be" }, { "end": 719.96, "start": 715.26, "text": " really important because it would say something like gold, like buy gold. These" }, { "end": 725.72, "start": 719.96, "text": " two things might be buy gold and you need to remember that in the hidden" }, { "end": 729.5600000000001, "start": 725.72, "text": " state because the only way that information from X is going to flow to" }, { "end": 733.64, "start": 729.5600000000001, "text": " Y is through the hidden states. So you would want at this point you would want" }, { "end": 737.12, "start": 733.64, "text": " to remember this input in the hidden state so you would actually want to" }, { "end": 741.1600000000001, "start": 737.12, "text": " update the hidden state. And then this here might be not as important so you" }, { "end": 744.76, "start": 741.1600000000001, "text": " might want to say I don't want to I don't want to I still want my hidden" }, { "end": 750.6800000000001, "start": 744.76, "text": " state to be the old hidden state. So Z is that gate that allows us to do" }, { "end": 757.9599999999999, "start": 750.68, "text": " this. If we decide to update the hidden state then what do we do? Again if we" }, { "end": 766.8, "start": 757.9599999999999, "text": " decide to update the hidden state we can we can we will incorporate the new" }, { "end": 773.92, "start": 766.8, "text": " input but we will we can also decide to mix that how to mix that new input with" }, { "end": 779.7199999999999, "start": 773.92, "text": " the old hidden state. 
So if we decide to update the hidden state we don't" }, { "end": 783.2, "start": 779.72, "text": " simply discard the old hidden state because the old hidden state will still" }, { "end": 791.2, "start": 783.2, "text": " have a path to be sort of still there to be remembered but it's a" }, { "end": 797.24, "start": 791.2, "text": " longer path and it needs to go through this thing here and through this thing" }, { "end": 804.2, "start": 797.24, "text": " here. So this thing here decides which of the old hidden state pass through. So at" }, { "end": 808.8000000000001, "start": 804.2, "text": " each you can see right here this is an element-wise product this R is going to" }, { "end": 813.4, "start": 808.8, "text": " be between 0 and 1 at each point in the vector. So at each point in the vector" }, { "end": 820.3199999999999, "start": 813.4, "text": " the R decides is this worth remembering or not? And if it's not worth" }, { "end": 825.04, "start": 820.3199999999999, "text": " remembering then this is going to be 0 and that that position of the old hidden" }, { "end": 830.16, "start": 825.04, "text": " state is going to be 0 and then that's going to be forgotten and that's the" }, { "end": 835.1999999999999, "start": 830.16, "text": " opportunity for the hidden state to incorporate new information because" }, { "end": 840.96, "start": 835.2, "text": " then it can delete this old information and then it can incorporate" }, { "end": 845.84, "start": 840.96, "text": " the new input and that will result then on this path on the new hidden state." }, { "end": 850.5600000000001, "start": 845.84, "text": " So there's two major things. First we can decide whether or not to even" }, { "end": 855.36, "start": 850.5600000000001, "text": " incorporate new information that's achieved by the Z gate and then we can" }, { "end": 859.8000000000001, "start": 855.36, "text": " decide which parts of the old hidden state if we want to update it which" }, { "end": 865.3599999999999, "start": 859.8, "text": " parts to forget that's the R gate and how to update it is then basically a" }, { "end": 871, "start": 865.3599999999999, "text": " result of the weight matrix that's associated with this function right here." }, { "end": 878.5999999999999, "start": 871, "text": " Alright so that's the gated recurrent unit and it works a lot better than the" }, { "end": 884.64, "start": 878.5999999999999, "text": " classic RNNs. So having said that they now turn to this property of neuronal" }, { "end": 890.48, "start": 884.64, "text": " by stability that happens in actual neurons. So this here is sort of a model" }, { "end": 895.48, "start": 890.48, "text": " of a neuron with this property. Now forget everything we said about GRUs" }, { "end": 900.1999999999999, "start": 895.48, "text": " we're just going to look at this right now. What happens in a neuron usually is" }, { "end": 907.56, "start": 900.1999999999999, "text": " you have this is a single neuron you have input synapses from other neurons" }, { "end": 913.12, "start": 907.56, "text": " so these are connections coming from other neurons into you. They are" }, { "end": 919.24, "start": 913.12, "text": " accumulated right here. Usually they are just in a classic model of a neuron" }, { "end": 923.68, "start": 919.24, "text": " they're just summed up you would sum up all these all these input signals and" }, { "end": 930.84, "start": 923.68, "text": " then you decide you'd run it through like a step function. 
So if the sum of" }, { "end": 937.6800000000001, "start": 930.84, "text": " all the things is smaller than a particular threshold the output would be" }, { "end": 942.04, "start": 937.6800000000001, "text": " just nothing and if it's higher than a particular threshold then the output of" }, { "end": 947.68, "start": 942.04, "text": " the neuron would be sort of a firing of the neuron. This can be weighted and" }, { "end": 952.68, "start": 947.68, "text": " whatnot but in this case it's just a function of the inputs and that gives" }, { "end": 957.8399999999999, "start": 952.68, "text": " you your input signal. So this is like this is your input signal to" }, { "end": 964, "start": 957.8399999999999, "text": " the neuron. Now there is this property right here that makes it interesting." }, { "end": 972.64, "start": 964, "text": " The signal goes out here and is integrated. This is an integrator and that's" }, { "end": 976.4, "start": 972.64, "text": " going to be in the output signal but there's this connection this back" }, { "end": 982, "start": 976.4, "text": " connection right here and that means that the signal that comes out at time" }, { "end": 988, "start": 982, "text": " step t is going to be fed back into the signal and actually added to the signal" }, { "end": 995.96, "start": 988, "text": " before itself and sort of self modulating right the signal comes out" }, { "end": 1002.4, "start": 995.96, "text": " is sent back is added to this input and then sent through again and this here is" }, { "end": 1007.44, "start": 1002.4, "text": " just an integrator that's integrating everything that's happening. So if you" }, { "end": 1014.4, "start": 1007.44, "text": " if you look a bit closer you'll see that there is a minus here so it's actually" }, { "end": 1019.36, "start": 1014.4, "text": " not added it's subtracted and there is an F here which means that this is a" }, { "end": 1024.44, "start": 1019.36, "text": " nonlinear function. Now if this weren't a nonlinear function we can just sort of" }, { "end": 1029.84, "start": 1024.44, "text": " try or let's say this is a monotonic function we can sort of try to estimate" }, { "end": 1035.4, "start": 1029.84, "text": " what happens. 
If all of this right here is very high it's a high number big" }, { "end": 1040.08, "start": 1035.4, "text": " number this will be a big number then this sum will be a big number this" }, { "end": 1046.28, "start": 1040.08, "text": " output will be a big number what happens is this here will be a big number this" }, { "end": 1051.04, "start": 1046.28, "text": " is monotonic so it will also be a big number and that means it will subtract" }, { "end": 1058.24, "start": 1051.04, "text": " that big number so that means when whenever the neuron is going to to be" }, { "end": 1063.84, "start": 1058.24, "text": " very excited this feedback would actually push it back now when it is not" }, { "end": 1068.84, "start": 1063.84, "text": " very excited so when it's a very low number very negatively excited then the" }, { "end": 1071.9599999999998, "start": 1068.84, "text": " feedback would work in the exact opposite direction this will be very" }, { "end": 1076.1999999999998, "start": 1071.9599999999998, "text": " negative this will be very negative and this here would push it towards the" }, { "end": 1083.6799999999998, "start": 1076.1999999999998, "text": " positive so this neuron somehow self stabilizes over time to this to the zero" }, { "end": 1091.1599999999999, "start": 1083.6799999999998, "text": " point right here and that's simply if this F is the identity function right" }, { "end": 1098.04, "start": 1091.1599999999999, "text": " now so you can sort of see how this property works now we'll make it a bit" }, { "end": 1104.24, "start": 1098.04, "text": " more complicated in that we'll assume that this F here is not the identity" }, { "end": 1112.44, "start": 1104.24, "text": " function but let's say they have it somewhere but this right here so the F" }, { "end": 1127.28, "start": 1112.44, "text": " F of V post is this here it's V post minus alpha tan H of V post or is this" }, { "end": 1135.84, "start": 1127.28, "text": " the entire F yes that's this this thing right here if this is the case if this" }, { "end": 1143.24, "start": 1135.84, "text": " is that this if this is the signal minus the tan H then something very very very" }, { "end": 1148.24, "start": 1143.24, "text": " interesting happens and that that's depending on this alpha right here in" }, { "end": 1156.6, "start": 1148.24, "text": " that if this alpha is between if the alpha is between 0 and 1 then we simply" }, { "end": 1162.52, "start": 1156.6, "text": " have our monotonic function so here you can see how big V post is so how big the" }, { "end": 1165.9599999999998, "start": 1162.52, "text": " output signal is here that's the experiment we made before and here you" }, { "end": 1173.48, "start": 1165.9599999999998, "text": " can see what the feedback signal is okay or the integrator the integrated signal" }, { "end": 1177.84, "start": 1173.48, "text": " maybe this is in the wrong place and maybe F is just minus the tan H I'm not" }, { "end": 1183.1599999999999, "start": 1177.84, "text": " sure but in any case the way they build it after in the GRU it's pretty explicit" }, { "end": 1191.72, "start": 1183.16, "text": " so this is the thing we said before namely if if the signal is very high" }, { "end": 1199.16, "start": 1191.72, "text": " then this signal here will be high as well and because it's subtracted right" }, { "end": 1206.24, "start": 1199.16, "text": " here it's it's going to push the signal back towards zero again if this is" }, { "end": 1211.48, "start": 1206.24, "text": " lower than zero then this thing here will 
also be lower than zero and because" }, { "end": 1216.1200000000001, "start": 1211.48, "text": " it's subtracted it's going to push the signal towards zero so this thing here" }, { "end": 1222.52, "start": 1216.1200000000001, "text": " is the stable point it will always push it back towards zero however if we" }, { "end": 1229.92, "start": 1222.52, "text": " change the function and we change just the parameter alpha to be 1.5 a very" }, { "end": 1236.3600000000001, "start": 1229.92, "text": " different thing happens that you can see right here then it turns out if your" }, { "end": 1241.52, "start": 1236.36, "text": " output signal is very is very high the same thing happens is going to put me" }, { "end": 1246.32, "start": 1241.52, "text": " push back but if your output signal is between zero and this point right here" }, { "end": 1253, "start": 1246.32, "text": " there is a regime where actually even though the output signal is positive you" }, { "end": 1260.1599999999999, "start": 1253, "text": " will be pushed towards this point right here and therefore there is there are" }, { "end": 1264.28, "start": 1260.1599999999999, "text": " these two stable points right now and the stable point basically means if you" }, { "end": 1268, "start": 1264.28, "text": " deviate if the signal deviates from it it's going to be pushed back towards" }, { "end": 1271.96, "start": 1268, "text": " that point and you can see these two stable points they're not at zero they're" }, { "end": 1279.52, "start": 1271.96, "text": " actually at at these two points here and that's pretty interesting because that" }, { "end": 1285.16, "start": 1279.52, "text": " means you can potentially remember things with the cell right an output" }, { "end": 1289.44, "start": 1285.16, "text": " signal of zero it's basically not informative but here you can be in either" }, { "end": 1297.16, "start": 1289.44, "text": " the state here or in the state here and the little little perturbations will" }, { "end": 1301.48, "start": 1297.16, "text": " still keep you in that state so you could potentially be in this state right" }, { "end": 1307.28, "start": 1301.48, "text": " here as an output and the cell will just keep updating itself and be stable and" }, { "end": 1314, "start": 1307.28, "text": " always output that signal right here and then you could go ahead and if you then" }, { "end": 1320, "start": 1314, "text": " provide some huge input signal right here you could potentially throw this" }, { "end": 1325.04, "start": 1320, "text": " over to the other side over this hill and then it would stabilize at this point" }, { "end": 1329.8, "start": 1325.04, "text": " so this is sort of a way to remember things within these biological cells" }, { "end": 1333.92, "start": 1329.8, "text": " pretty cool now this here is a non filled out circle that means it's an" }, { "end": 1340.12, "start": 1333.92, "text": " unstable point it's technically stable in the sense that if you're exactly at" }, { "end": 1345.6799999999998, "start": 1340.12, "text": " zero you will remain at zero but if you perturb even a little bit you will go if" }, { "end": 1353.4799999999998, "start": 1345.6799999999998, "text": " you perturb a bit you will go away from it okay I hope this sort of property is" }, { "end": 1357.28, "start": 1353.4799999999998, "text": " right is clear and why this is so fascinating because we can use this" }, { "end": 1362.2399999999998, "start": 1357.28, "text": " these this fact that the stable points are not at zero and are more than just" 
}, { "end": 1368.32, "start": 1362.2399999999998, "text": " one stable point for remembering things and they're now trying to fill this into" }, { "end": 1379.2, "start": 1368.32, "text": " the gated recurrent unit so they call this the bi-stable recurrent cell BRC" }, { "end": 1386.52, "start": 1379.2, "text": " and the formulas are these right here maybe a little smaller come on can't" }, { "end": 1395.2, "start": 1386.52, "text": " zoom anymore okay it looks almost the same as the GRU so the formulas are" }, { "end": 1404.32, "start": 1395.2, "text": " these this and this so let's analyze the differences to the GRU the first most" }, { "end": 1409.28, "start": 1404.32, "text": " striking difference is that a lot of weight matrices here have become single" }, { "end": 1415.48, "start": 1409.28, "text": " numbers so or single vectors this here used to be a weight matrix and this used" }, { "end": 1420.04, "start": 1415.48, "text": " to be a matrix multiplication and you'll see this sort of throughout whenever the" }, { "end": 1425.96, "start": 1420.04, "text": " last hidden state is incorporated into these things then you'll see that it is" }, { "end": 1434.24, "start": 1425.96, "text": " no longer a weight matrix but is in fact a in a product with a vector a element" }, { "end": 1438.92, "start": 1434.24, "text": " wise product and that has a reason namely what they want to model is" }, { "end": 1445.78, "start": 1438.92, "text": " individual neurons so on a biological level and neuron can only feed back onto" }, { "end": 1451.24, "start": 1445.78, "text": " itself if there is a layer of neurons right here they can only each feed back" }, { "end": 1459.44, "start": 1451.24, "text": " onto themselves whereas in a recurrent neural network my hidden vector my" }, { "end": 1464.2, "start": 1459.44, "text": " hidden state is a vector and if I transform this into the next hidden" }, { "end": 1469, "start": 1464.2, "text": " state or any quantity let's say I transform this H into this R right here" }, { "end": 1477.76, "start": 1469, "text": " and this R is a vector too then any interaction is possible so any cell any" }, { "end": 1481.6, "start": 1477.76, "text": " entry in the vector here can influence any other vector because there's a big" }, { "end": 1486.24, "start": 1481.6, "text": " weight matrix in the middle they want to leave this away they want to model this" }, { "end": 1491.54, "start": 1486.24, "text": " as close as possible to actual layers of neurons and therefore they say okay the" }, { "end": 1497.24, "start": 1491.54, "text": " input X can you know be distributed to all the neurons because technically the" }, { "end": 1501.14, "start": 1497.24, "text": " input comes from some other neurons down here and they can all have connections" }, { "end": 1507.08, "start": 1501.14, "text": " to these neurons but these feedbacks we only really observe them in individual" }, { "end": 1512.1200000000001, "start": 1507.08, "text": " neuron this feedback cycle so that's why they model these recurrent weight" }, { "end": 1518.04, "start": 1512.1200000000001, "text": " products by just element wise products with vectors and then the second" }, { "end": 1523.6, "start": 1518.04, "text": " difference you again see that there is this switch right here this C switch and" }, { "end": 1532.24, "start": 1523.6, "text": " the C switch is like before it's a sigmoid with where combine the the output" }, { "end": 1537.4599999999998, "start": 1532.24, "text": " and the previous hidden state there's 
nothing new here so this switch is the" }, { "end": 1542.52, "start": 1537.4599999999998, "text": " same the cell has the possibility of letting in new information or just" }, { "end": 1550.4599999999998, "start": 1542.52, "text": " ignoring the new current information the XT the second thing is here and this is" }, { "end": 1555.24, "start": 1550.46, "text": " the same as well right the tanh this is a combination of the new information" }, { "end": 1559.8400000000001, "start": 1555.24, "text": " it's in case we want to let in the new information of the new information and" }, { "end": 1567.64, "start": 1559.8400000000001, "text": " you need to decide what things of the old information to forget or remember now" }, { "end": 1575.2, "start": 1567.64, "text": " the difference here is in this a so this a used to be again this sigmoid of the" }, { "end": 1582.04, "start": 1575.2, "text": " combination and now it's just slightly different it used to be sigmoid now it's" }, { "end": 1590.92, "start": 1582.04, "text": " one plus tanh this is a very very slight modification it's tanh because tanh is" }, { "end": 1595.78, "start": 1590.92, "text": " between minus one and one instead of zero and one like the sigmoid and the" }, { "end": 1601.64, "start": 1595.78, "text": " one plus makes it such that this is between zero and two and we've seen" }, { "end": 1605.6000000000001, "start": 1601.64, "text": " before that this critical behavior there is two regimes to these functions when" }, { "end": 1612.0400000000002, "start": 1605.6000000000001, "text": " it's between zero and one this behaves like a classic gated recurrent unit like" }, { "end": 1617.8400000000001, "start": 1612.0400000000002, "text": " a classic GRU but when it's between one and two then you have that exact" }, { "end": 1622.76, "start": 1617.8400000000001, "text": " behavior that we saw before of the bi-stability okay so depending on what" }, { "end": 1629.0400000000002, "start": 1622.76, "text": " the a is if the a is zero to one it's a classic cell and if the a is one to two" }, { "end": 1635.04, "start": 1629.04, "text": " it's a bi-stable cell and the network can decide by itself what it wants to do" }, { "end": 1641.92, "start": 1635.04, "text": " because here it has it can actually learn how to do that all right so this" }, { "end": 1645.3999999999999, "start": 1641.92, "text": " is the only change the only change really apart from it only being" }, { "end": 1649.92, "start": 1645.3999999999999, "text": " individual neurons feeding back on themselves is that now this is no longer" }, { "end": 1654.72, "start": 1649.92, "text": " between zero and one with the sigmoid this is now between zero and two because" }, { "end": 1663, "start": 1654.72, "text": " it's one plus the tan h very simple change but the effect of this is pretty" }, { "end": 1671.52, "start": 1663, "text": " pretty cool so they do some here is like a schematic drawing of this if this a is" }, { "end": 1676.48, "start": 1671.52, "text": " between zero and one again you have this stable state that's at zero but it's" }, { "end": 1682.24, "start": 1676.48, "text": " if it's between one and two you have two stable states at two non zero points" }, { "end": 1690.28, "start": 1682.24, "text": " and again this we already saw this but now this is for I believe this this" }, { "end": 1697.52, "start": 1690.28, "text": " recurrent cell this by modal recurrent cell not for the neuron itself and here" }, { "end": 1702.36, "start": 1697.52, "text": " they give an example of 
what happens when you run this particular signal this" }, { "end": 1708, "start": 1702.36, "text": " particular time series through a cell like this while fixing the C and the a" }, { "end": 1711.08, "start": 1708, "text": " parameters so now the C and their a parameter aren't learned they're just" }, { "end": 1720.6399999999999, "start": 1711.08, "text": " fixed and you see what happens now as you can see the the blue should be a" }, { "end": 1727.24, "start": 1720.6399999999999, "text": " classic the classic behavior so in this blue case what happens you see right" }, { "end": 1735.48, "start": 1727.24, "text": " here this C is moderately low so we saw the C is the switch of whether to leave" }, { "end": 1740.1999999999998, "start": 1735.48, "text": " in old information or take up new information if it's low it means we want" }, { "end": 1743.96, "start": 1740.2, "text": " to take up new information this is reasonably low and that's why when the" }, { "end": 1750.1200000000001, "start": 1743.96, "text": " signal goes up here the blue line goes up as well and when the signal goes down" }, { "end": 1753.68, "start": 1750.1200000000001, "text": " the blue line goes down again and so on so the blue line pretty" }, { "end": 1760.0800000000002, "start": 1753.68, "text": " straightforwardly follows the signal right here okay now in contrast to this" }, { "end": 1767.88, "start": 1760.0800000000002, "text": " the red line is over this threshold so a is fixed at 1.5 C is still at 0.2 so" }, { "end": 1776, "start": 1767.88, "text": " again when this line goes up then this line goes up but because this is near a" }, { "end": 1781.2800000000002, "start": 1776, "text": " this stable point if it goes down again it doesn't appear to go down enough it" }, { "end": 1787.5600000000002, "start": 1781.2800000000002, "text": " sort of remembers that state it was in it doesn't go down with the signal only" }, { "end": 1793.0800000000002, "start": 1787.5600000000002, "text": " now that it goes down even further it's over this threshold so we were in this" }, { "end": 1799.36, "start": 1793.08, "text": " situation now and the first bump down was only like to here and that pushed it" }, { "end": 1804.32, "start": 1799.36, "text": " up again but now it jumps over here because the signal is even lower and" }, { "end": 1809.76, "start": 1804.32, "text": " then the cell sort of switches to another state as you can see here it" }, { "end": 1815.4399999999998, "start": 1809.76, "text": " goes down but then this bump here is not enough to bring it up again so it kind" }, { "end": 1822.1999999999998, "start": 1815.4399999999998, "text": " of remains in this state so you can see the it sort of remembers the input and" }, { "end": 1828.68, "start": 1822.2, "text": " small deviations or small changes in signal don't manage to throw it away" }, { "end": 1834.32, "start": 1828.68, "text": " from that only larger things only it needs to go very the signal needs to go" }, { "end": 1839.28, "start": 1834.32, "text": " very much down in order for it to change state so that's pretty cool that there's" }, { "end": 1843.96, "start": 1839.28, "text": " this remembering behavior and now remember in the actual implementation" }, { "end": 1851.3600000000001, "start": 1843.96, "text": " these C and A parameters this C and this A right here aren't fixed they are also" }, { "end": 1856.6799999999998, "start": 1851.36, "text": " determined by the cell itself and therefore the cell can decide by itself" }, { "end": 1861.12, "start": 
1856.6799999999998, "text": " when it wants to remember things how hard it wants to remember things and so" }, { "end": 1867.9599999999998, "start": 1861.12, "text": " on so we're going to check this out in an actual implementation so there's this" }, { "end": 1872.76, "start": 1867.9599999999998, "text": " one last modification they make where they say okay they tried this and it" }, { "end": 1878.1999999999998, "start": 1872.76, "text": " doesn't really work because it works sometimes but there is this issue of" }, { "end": 1884.52, "start": 1878.2, "text": " these neurons connecting only back on themselves which really makes the model" }, { "end": 1889.92, "start": 1884.52, "text": " much less powerful than a classic recurrent cell it's closer to biology" }, { "end": 1895.3600000000001, "start": 1889.92, "text": " but it's much less powerful and there is this property they say of a neuromodulation" }, { "end": 1904.3600000000001, "start": 1895.3600000000001, "text": " where technically in real neurons the one neuron here could influence another" }, { "end": 1910.52, "start": 1904.36, "text": " neuron by modulating these A and C parameters okay these A and C parameters" }, { "end": 1915.4399999999998, "start": 1910.52, "text": " this is called neuromodulation so there are interconnections between the neurons" }, { "end": 1920.6, "start": 1915.4399999999998, "text": " that influence how much other neurons remember and forget things so they" }, { "end": 1925.76, "start": 1920.6, "text": " decide let's model that and lo and behold we're now back to having weight" }, { "end": 1933.9199999999998, "start": 1925.76, "text": " matrices right here so this this is sort of they say this is a not really a" }, { "end": 1939.64, "start": 1933.92, "text": " super biologically plausible way of implementing neuromodulation but it's" }, { "end": 1947.04, "start": 1939.64, "text": " sort of it's an easier way and it brings us closer to the G back to the GRU and" }, { "end": 1952.28, "start": 1947.04, "text": " yeah so now the only difference to the GRU is that the fact that here there" }, { "end": 1961.48, "start": 1952.28, "text": " was a sigmoid now it's a 1 plus tan H okay I find this this pretty cool so now" }, { "end": 1967.72, "start": 1961.48, "text": " also the only difference here is this property of bi stability this is the" }, { "end": 1974.24, "start": 1967.72, "text": " only difference and now we can actually compare so let's compare they first" }, { "end": 1981.76, "start": 1974.24, "text": " give they do these sort of benchmarks which are they're pretty pretty neat so" }, { "end": 1985.4, "start": 1981.76, "text": " they have this first benchmark where it's the copy first input benchmark I'm" }, { "end": 1991.64, "start": 1985.4, "text": " having some trouble here moving this paper around with my fingers so the copy" }, { "end": 1996.92, "start": 1991.64, "text": " first input benchmark is simply a time series in this benchmark the network is" }, { "end": 2002.24, "start": 1996.92, "text": " presented with a one-dimensional time series of T time steps and the each" }, { "end": 2009.1200000000001, "start": 2002.24, "text": " entry is a is a random number after receiving the last time step the network" }, { "end": 2014.92, "start": 2009.1200000000001, "text": " output value should approximate the very very first input step okay so all the" }, { "end": 2019.72, "start": 2014.92, "text": " network needs to do is remember the first thing it sees and that's that" }, { "end": 2025.16, "start": 
2019.72, "text": " should be learnable right that should be learnable because you can so you can" }, { "end": 2030.5600000000002, "start": 2025.16, "text": " it's not specified whether the zero with hidden state the initial hidden state is" }, { "end": 2035.68, "start": 2030.5600000000002, "text": " given into the network but technically it doesn't matter because it can just" }, { "end": 2043.04, "start": 2035.68, "text": " learn whatever that is I can learn to have a designated bit in this hidden" }, { "end": 2049.2, "start": 2043.04, "text": " state so this hidden state is of size 100 I believe one designated bit in the" }, { "end": 2055.24, "start": 2049.2, "text": " hidden state of whether it has already encountered the first thing or not if it" }, { "end": 2059.2, "start": 2055.24, "text": " has not encountered it means that it's at the first time step therefore it" }, { "end": 2063.94, "start": 2059.2, "text": " should incorporate the new information into the hidden state and if and also" }, { "end": 2067.88, "start": 2063.94, "text": " set this bit and then for each subsequent step it can see I've already" }, { "end": 2072.2799999999997, "start": 2067.88, "text": " set this bit and it can simply close that gate that makes it incorporate new" }, { "end": 2076.88, "start": 2072.28, "text": " information so it should be able to carry this information all the way to" }, { "end": 2083.88, "start": 2076.88, "text": " the end by simply always closing that gate after the first step and what" }, { "end": 2096.8, "start": 2083.88, "text": " happens in this so as you can see when the result is all the results up here so" }, { "end": 2101.6800000000003, "start": 2096.8, "text": " this is after three so they train it for 300,000 gradient descent iterations and" }, { "end": 2106.44, "start": 2101.68, "text": " you can see that when these time steps when the series are pretty small the" }, { "end": 2112.6, "start": 2106.44, "text": " LSTM or the GRUs tend to perform well but you can see that these BRCs they" }, { "end": 2116.96, "start": 2112.6, "text": " they don't tend to perform poorly they're just performing worse right it's" }, { "end": 2123.3599999999997, "start": 2116.96, "text": " zero it's still the 0.01 regime or something like this of error however" }, { "end": 2128.72, "start": 2123.3599999999997, "text": " when you go up to like 300 steps then you can see the GRUs and the LSTM they" }, { "end": 2134.72, "start": 2128.72, "text": " start to fail because they are not made explicitly to remember for that long" }, { "end": 2141.2, "start": 2134.72, "text": " they don't have this by stability property whereas now these things excel" }, { "end": 2145.6, "start": 2141.2, "text": " you can see they're still pretty low and at 600 steps these things completely" }, { "end": 2153.2799999999997, "start": 2145.6, "text": " fail they completely forget the input so and the NBRC at least is still able to" }, { "end": 2162, "start": 2153.28, "text": " remember the first thing pretty pretty well and yeah so the second one is no" }, { "end": 2165.8, "start": 2162, "text": " this is the first experiment still the copy input benchmark you can see right" }, { "end": 2172.1600000000003, "start": 2165.8, "text": " here that even at this three at this 100 thing where the GRU still learns it it" }, { "end": 2178.36, "start": 2172.1600000000003, "text": " learns it much much later than the BRC which learns it pretty fast only here" }, { "end": 2184.6800000000003, "start": 2178.36, "text": " when the when it's 
only five when that series are only five steps long does the" }, { "end": 2191.52, "start": 2184.6800000000003, "text": " GRU slightly outperform the BRC so the general notion here is that these" }, { "end": 2200.28, "start": 2191.52, "text": " classic cells are more powerful in like classic tasks whereas these things are" }, { "end": 2205.56, "start": 2200.28, "text": " shining whenever these things fail because they can't remember things for" }, { "end": 2211.08, "start": 2205.56, "text": " very long so they're not these new cells are not state-of-the-art yet possibly" }, { "end": 2216.96, "start": 2211.08, "text": " there are still some modifications to be made we've had a pretty long history of" }, { "end": 2222.08, "start": 2216.96, "text": " optimizing GRUs and LSTMs they haven't always worked so well as they do now" }, { "end": 2227.48, "start": 2222.08, "text": " because we kind of know how to handle them and I expect if these cells here" }, { "end": 2235.16, "start": 2227.48, "text": " take off especially these NBRC then with time will be as proficient at handling" }, { "end": 2241.8799999999997, "start": 2235.16, "text": " them and they will probably become on par or even outperform the LSTMs or GRUs on" }, { "end": 2246.64, "start": 2241.8799999999997, "text": " every day like on all the tasks and then be especially good on tasks where you" }, { "end": 2252.3999999999996, "start": 2246.64, "text": " have to remember things but for now they're outperformed by LSTMs and GRUs" }, { "end": 2257, "start": 2252.3999999999996, "text": " okay so the second thing is a more interesting experiment the denoising" }, { "end": 2264.08, "start": 2257, "text": " benchmark where they say the the copy input benchmark is interesting as a" }, { "end": 2267.2, "start": 2264.08, "text": " means to highlight the memorization capacity of the recurrent neural network" }, { "end": 2271.4, "start": 2267.2, "text": " but it does not tackle its ability to successfully exploit complex" }, { "end": 2275.72, "start": 2271.4, "text": " relationships between different elements of the input signal to predict the output" }, { "end": 2280.92, "start": 2275.72, "text": " so they have a new benchmark in the denoising benchmark the network is" }, { "end": 2285.52, "start": 2280.92, "text": " presented with a two-dimensional time series of t time steps five different" }, { "end": 2292.68, "start": 2285.52, "text": " time steps are sampled uniformly with okay and are communicated in the network" }, { "end": 2297.44, "start": 2292.68, "text": " okay I'll just tell you what's going on so this new time series is two" }, { "end": 2301.7999999999997, "start": 2297.44, "text": " dimensional in the lower dimension you simply have a bunch of random numbers" }, { "end": 2308.68, "start": 2301.7999999999997, "text": " like five eight two nine actually these are numbers sampled from a uniform" }, { "end": 2312.72, "start": 2308.68, "text": " Gaussian or so so they're not actually five eight two and nine but you can" }, { "end": 2320.66, "start": 2312.72, "text": " imagine it like this five eight two nine three four zero two and so on and in the" }, { "end": 2326.72, "start": 2320.66, "text": " second dimension you have a negative one I believe almost anywhere and then at" }, { "end": 2331.24, "start": 2326.72, "text": " some points you have a one negative one again and then you have a one and a" }, { "end": 2336.8799999999997, "start": 2331.24, "text": " negative one again and at the last point of the sequence you'll have a zero 
and" }, { "end": 2342.56, "start": 2336.8799999999997, "text": " so the zero is simply a marker that it's the end of the sequence what the" }, { "end": 2348, "start": 2342.56, "text": " network needs to do is it needs to output all the elements so the output of" }, { "end": 2354.48, "start": 2348, "text": " the network should be in this case should be nine four so all the elements" }, { "end": 2361.08, "start": 2354.48, "text": " where there was a one in order okay so it remember what it needs to learn it" }, { "end": 2366.88, "start": 2361.08, "text": " needs to learn to every time it sees a one in the second dimension it needs to" }, { "end": 2373.12, "start": 2366.88, "text": " take the first dimension put it somehow into the hidden state and then carry" }, { "end": 2378.56, "start": 2373.12, "text": " that hidden state forward and sees a one again it needs to take the second thing" }, { "end": 2383.48, "start": 2378.56, "text": " also put it into the hidden state but not override the first thing it put into" }, { "end": 2387.3199999999997, "start": 2383.48, "text": " the hidden state like if it were to just realize I need to put this into the" }, { "end": 2392.52, "start": 2387.3199999999997, "text": " hidden state then it would almost surely override the previous information so it" }, { "end": 2398.72, "start": 2392.52, "text": " needs to be able to say I've already kind of in my H is going to be a vector" }, { "end": 2404.3599999999997, "start": 2398.72, "text": " of a hundred dimensions it needs to be able to say well I've already stored a" }, { "end": 2409.7999999999997, "start": 2404.3599999999997, "text": " bunch of stuff in that part of the vector maybe I should store that thing" }, { "end": 2415.9599999999996, "start": 2409.7999999999997, "text": " here over here so this is fairly complex things to remember and technically GRU's" }, { "end": 2424.16, "start": 2415.9599999999996, "text": " and LSTMs are able to do it but as we'll see they're not as much the results are" }, { "end": 2432.8399999999997, "start": 2424.16, "text": " in this table where you can clearly see that whenever the n so the n the n is a" }, { "end": 2440.2799999999997, "start": 2432.8399999999997, "text": " parameter that is how far how far in this direction are these ones so when n" }, { "end": 2446.8799999999997, "start": 2440.2799999999997, "text": " is 0 the ones can be anywhere but when n here is like 5 that means that the last" }, { "end": 2453.3999999999996, "start": 2446.8799999999997, "text": " five ones surely don't contain a one that means only the first whatever a L" }, { "end": 2458.52, "start": 2453.4, "text": " minus L minus 5 contain the one so the higher this number n is the harder the" }, { "end": 2465.48, "start": 2458.52, "text": " task because your learning signal is way way farther away from the from what's" }, { "end": 2473.32, "start": 2465.48, "text": " when you get the output so you can see when the n is low then the GRU's and the" }, { "end": 2478.12, "start": 2473.32, "text": " LSTMs they perform pretty well but also these cells perform pretty well they're" }, { "end": 2483.7599999999998, "start": 2478.12, "text": " just not performing as well however when the task gets harder and you actually" }, { "end": 2489.2799999999997, "start": 2483.7599999999998, "text": " need to learn a sparse signal over a long period of time where in between you" }, { "end": 2494.68, "start": 2489.2799999999997, "text": " don't get any signal the GRU's and the LSTMs fail while the BRC's would still" }, 
{ "end": 2501.2, "start": 2494.68, "text": " be able to learn these kinds of things so that's that's fairly cool now it's if" }, { "end": 2506.2, "start": 2501.2, "text": " from a researcher's perspective I wonder if they just first tried this task you" }, { "end": 2510.2799999999997, "start": 2506.2, "text": " know as I described it and then they discovered like ah crap they can still" }, { "end": 2514.96, "start": 2510.2799999999997, "text": " do it and like okay how can we make it such that there's a difference okay" }, { "end": 2519.24, "start": 2514.96, "text": " let's actually make the task harder like this and then they did that I wonder if" }, { "end": 2526.04, "start": 2519.24, "text": " they always had the idea with the end here or just introduced this after after" }, { "end": 2531, "start": 2526.04, "text": " it they they failed to produce a difference in the first place I'm not" }, { "end": 2534.9199999999996, "start": 2531, "text": " sure but they have they have another benchmark but they basically show that" }, { "end": 2539.8, "start": 2534.92, "text": " these cells are actually good can incorporate this information can reason" }, { "end": 2544.4, "start": 2539.8, "text": " about what they need to remember and whatnot and in the end they also have" }, { "end": 2549.7200000000003, "start": 2544.4, "text": " this sequential MNIST where they just feed an MNIST digit digit by digit and" }, { "end": 2555.2000000000003, "start": 2549.7200000000003, "text": " at the end I think that the output of the neural network needs to be the class" }, { "end": 2561.96, "start": 2555.2000000000003, "text": " of the of the MNIST digit and again here they have a parameter called N black" }, { "end": 2567.92, "start": 2561.96, "text": " which means that so they have an MNIST digit it's like a three they unroll it to" }, { "end": 2574.64, "start": 2567.92, "text": " a single vector right they feed this one by one into the recurrent network and" }, { "end": 2580, "start": 2574.64, "text": " then after that they attach a certain number of just empty pixels black pixels" }, { "end": 2586.4, "start": 2580, "text": " and after that the network needs to predict the Y you can see if they ask" }, { "end": 2591.7200000000003, "start": 2586.4, "text": " the network the class of the digit immediately after it's done then the G" }, { "end": 2598.3599999999997, "start": 2591.72, "text": " are using the LSTM perform fairly well as do the BRCs but if you attach a whole" }, { "end": 2603.6, "start": 2598.3599999999997, "text": " bunch of these black pixels remember an MNIST digit has some seven sorry seven" }, { "end": 2612.3999999999996, "start": 2603.6, "text": " hundred and eighty four maybe entries so attaching 300 black pixels is quite" }, { "end": 2617.64, "start": 2612.3999999999996, "text": " significant in in terms of the length of these sequences and then the GRUs and" }, { "end": 2623.64, "start": 2617.64, "text": " the LSTMs they can't learn they can't learn to ignore these things because the" }, { "end": 2630.72, "start": 2623.64, "text": " learning signal is just too far away right here but these things they can" }, { "end": 2636.6, "start": 2630.72, "text": " because they can exploit this by stability property and remember things" }, { "end": 2642.16, "start": 2636.6, "text": " again I wonder how this came to be it seems pretty funny but the last thing" }, { "end": 2647, "start": 2642.16, "text": " they do is they investigate what happens in their cells and this I feel is the" }, { "end": 2651.96, 
"start": 2647, "text": " most interesting part and they do this on this denoising benchmark so the task" }, { "end": 2656.96, "start": 2651.96, "text": " we've looked at before where you need to remember five randomly selected numbers" }, { "end": 2662.4, "start": 2656.96, "text": " that are indicated by the second dimension here they show a sequence" }, { "end": 2670.12, "start": 2662.4, "text": " where the five numbers occur at 3100 246 at 300 and at 376 so these are the five" }, { "end": 2675.48, "start": 2670.12, "text": " positions where the sequence indicates that the network should remember the" }, { "end": 2681.88, "start": 2675.48, "text": " thing in the first dimension and then output they analyze two things they" }, { "end": 2685.76, "start": 2681.88, "text": " analyze the proportion of bi-stable neurons so basically they analyze these" }, { "end": 2691.96, "start": 2685.76, "text": " out these a quantities and they analyze how many of the neurons in the layer" }, { "end": 2696.68, "start": 2691.96, "text": " have an a that's higher than one which means that they are in this bi-stable" }, { "end": 2702.68, "start": 2696.68, "text": " mode and also they analyze what's the average value of C so see if you" }, { "end": 2707.68, "start": 2702.68, "text": " remember if this is high it means it doesn't let in new information and if" }, { "end": 2712.6, "start": 2707.68, "text": " this is low it means it lets in new information if you first look at the C" }, { "end": 2717.7999999999997, "start": 2712.6, "text": " you can see that every single time when the second dimension indicates that this" }, { "end": 2722.8399999999997, "start": 2717.7999999999997, "text": " is one of the inputs to remember this the network drops immediately drops the" }, { "end": 2726.8399999999997, "start": 2722.8399999999997, "text": " C values the different colors here are different layers they build they have a" }, { "end": 2732.52, "start": 2726.8399999999997, "text": " recurrent network has multiple layers of these cells as is usual in the" }, { "end": 2738.52, "start": 2732.52, "text": " recurrent neural networks so this C as you can see it goes up pretty quickly" }, { "end": 2744.52, "start": 2738.52, "text": " but then as soon as one of these inputs appear the C drops which basically means" }, { "end": 2749.8, "start": 2744.52, "text": " that the network realizes it now must let in the new information and then it" }, { "end": 2756.44, "start": 2749.8, "text": " immediately shoots back up makes it seem like so the network says okay as long as" }, { "end": 2760.52, "start": 2756.44, "text": " so all of these inputs here they have the negative one in the second dimension" }, { "end": 2764.56, "start": 2760.52, "text": " right so it recognizes it says there's no reason for me to incorporate that" }, { "end": 2769.36, "start": 2764.56, "text": " information it's not important and as soon as the second input comes it" }, { "end": 2774.52, "start": 2769.36, "text": " immediately shoots down again now you can see this here is the last layer of" }, { "end": 2780.24, "start": 2774.52, "text": " the network the highest layer so sort of the highest abstractive information and" }, { "end": 2786.8, "start": 2780.24, "text": " you can see that from input to input this value of C gets higher and higher" }, { "end": 2791.6800000000003, "start": 2786.8, "text": " and these spikes as they go down but they go down to a higher and higher" }, { "end": 2798.5600000000004, "start": 2791.6800000000003, "text": " point which you 
know is is the fact that it recognizes it needs to let in new" }, { "end": 2805.4, "start": 2798.5600000000004, "text": " information but it lets in less and less new information the more things it needs" }, { "end": 2808.6000000000004, "start": 2805.4, "text": " to remember so not only does it recognize wait I need to remember this" }, { "end": 2813.0800000000004, "start": 2808.6000000000004, "text": " it also recognizes but I probably shouldn't shouldn't you know completely" }, { "end": 2819.84, "start": 2813.08, "text": " forget what I had previously because I it is important for me to remember these" }, { "end": 2824.48, "start": 2819.84, "text": " previous things so that's a pretty cool demonstration the fact that these go" }, { "end": 2830.2799999999997, "start": 2824.48, "text": " down at the input and the fact that generally they go up every time after a" }, { "end": 2835.7599999999998, "start": 2830.2799999999997, "text": " new input is incorporated into the hidden state this basically this shows" }, { "end": 2841.7999999999997, "start": 2835.7599999999998, "text": " that the or this is a pretty good indication that what they're saying is" }, { "end": 2848.2400000000002, "start": 2841.8, "text": " really happening right okay the second thing shows almost the same it shows how" }, { "end": 2852.6000000000004, "start": 2848.2400000000002, "text": " many of these neurons are actually in their bi-stable mode and you can also" }, { "end": 2860.2400000000002, "start": 2852.6000000000004, "text": " see right here that especially in the last layer you can see that the number" }, { "end": 2866.1200000000003, "start": 2860.2400000000002, "text": " of neurons in the bi-stable mode goes up and up and up and up after each of these" }, { "end": 2871.6400000000003, "start": 2866.1200000000003, "text": " steps and these spikes here correspond to always the points where they have to" }, { "end": 2881.3199999999997, "start": 2871.64, "text": " let in new information okay cool so I find that I find this to be pretty cool" }, { "end": 2885.52, "start": 2881.3199999999997, "text": " and I find this last experiment to be the coolest where they can actually show" }, { "end": 2891, "start": 2885.52, "text": " look here there's a pretty good indication that the thing we we build" }, { "end": 2897.92, "start": 2891, "text": " does what we say it does they also actually have a proof here of the" }, { "end": 2903.76, "start": 2897.92, "text": " bi-stability when this a is higher than one I won't go through this right here" }, { "end": 2909.7200000000003, "start": 2903.76, "text": " but if you want you can look at that I'm excited to see what happens with these" }, { "end": 2913.6, "start": 2909.7200000000003, "text": " kinds of architectures in the future because it seems to be a pretty minor" }, { "end": 2919.16, "start": 2913.6, "text": " modification and maybe with a little bit of more modification or if we sort of" }, { "end": 2923.2000000000003, "start": 2919.16, "text": " just tune this a little bit and kind of figure out what we have to do to make" }, { "end": 2929.52, "start": 2923.2, "text": " these things actually compete with the classic GRUs and LSTMs in regimes where" }, { "end": 2934.3599999999997, "start": 2929.52, "text": " a long memory isn't necessary I feel this could be a you know kind of a" }, { "end": 2940.04, "start": 2934.3599999999997, "text": " standard building block in the recurrent neural network toolkit even though it's" }, { "end": 2944.9199999999996, "start": 2940.04, "text": 
" been sort of outperformed by transformers in previous years alright" }, { "end": 2950.02, "start": 2944.9199999999996, "text": " that was it for me and I hope you had fun with this paper I invite you to" }, { "end": 2953.56, "start": 2950.02, "text": " check it out and bye bye" } ]
8l-TDqpoUQs
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
SynFlow: Pruning neural networks without any data by iteratively conserving synaptic flow
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "initialization", "lottery ticket hypothesis", "pruning", "training", "magnitude", "snip", "grasp", "init", "xavier", "glorot", "he", "flow", "layer collapse", "iterative", "recompute", "stepwise", "memory", "fast", "prune", "weights", "feedforward", "layer", "neural network" ]
The Lottery Ticket Hypothesis has shown that it's theoretically possible to prune a neural network at the beginning of training and still achieve good performance, if we only knew which weights to prune away. This paper does not only explain where other attempts at pruning fail, but provides an algorithm that provably reaches maximum compression capacity, all without looking at any data! OUTLINE: 0:00 - Intro & Overview 1:00 - Pruning Neural Networks 3:40 - Lottery Ticket Hypothesis 6:00 - Paper Story Overview 9:45 - Layer Collapse 18:15 - Synaptic Saliency Conservation 23:25 - Connecting Layer Collapse & Saliency Conservation 28:30 - Iterative Pruning avoids Layer Collapse 33:20 - The SynFlow Algorithm 40:45 - Experiments 43:35 - Conclusion & Comments Paper: https://arxiv.org/abs/2006.05467 Code: https://github.com/ganguli-lab/Synaptic-Flow My Video on the Lottery Ticket Hypothesis: https://youtu.be/ZVVnvZdUMUk Street Talk about LTH: https://youtu.be/SfjJoevBbjU Abstract: Pruning the parameters of deep neural networks has generated intense interest due to potential savings in time, memory and energy both during training and at test time. Recent works have identified, through an expensive sequence of training and pruning cycles, the existence of winning lottery tickets or sparse trainable subnetworks at initialization. This raises a foundational question: can we identify highly sparse trainable subnetworks at initialization, without ever training, or indeed without ever looking at the data? We provide an affirmative answer to this question through theory driven algorithm design. We first mathematically formulate and experimentally verify a conservation law that explains why existing gradient-based pruning algorithms at initialization suffer from layer-collapse, the premature pruning of an entire layer rendering a network untrainable. This theory also elucidates how layer-collapse can be entirely avoided, motivating a novel pruning algorithm Iterative Synaptic Flow Pruning (SynFlow). This algorithm can be interpreted as preserving the total flow of synaptic strengths through the network at initialization subject to a sparsity constraint. Notably, this algorithm makes no reference to the training data and consistently outperforms existing state-of-the-art pruning algorithms at initialization over a range of models (VGG and ResNet), datasets (CIFAR-10/100 and Tiny ImageNet), and sparsity constraints (up to 99.9 percent). Thus our data-agnostic pruning algorithm challenges the existing paradigm that data must be used to quantify which synapses are important. Authors: Hidenori Tanaka, Daniel Kunin, Daniel L. K. Yamins, Surya Ganguli Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://discord.gg/4H8xxDF BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Hi there! Today we're looking at Pruning Neural Networks Without Any Data by Iteratively Conserving Synaptic Flow by Hidenori Tanaka, Daniel Kunin, Daniel L. K. Yamins and Surya Ganguli. On a high level, this paper does what the lottery ticket hypothesis does, but without any data: it prunes a neural network right at the beginning of training. It's able to do that because, it claims, its algorithm avoids a problem called layer collapse, and because it is based on conserving a quantity they call the synaptic flow. We're going to look at how this works; it's a pretty cool algorithm and it seems to work pretty well. As always, if you want to help out, you can share this video and let me know in the comments what you think of it. I do read the comments and I would love to hear from you. Alright, let's dive in. They say: pruning the parameters of deep neural networks has generated intense interest due to potential savings in time, memory and energy, both during training and at test time. Recent works have identified, through an expensive sequence of training and pruning cycles, the existence of winning lottery tickets, or sparse trainable subnetworks, at initialization. So what is this paper talking about? If you don't know much about pruning, here is a basic overview. Say you have a neural network that consists of many, many layers of neurons. The goal of pruning is to end up with a small neural network that performs well; for now, though, we have a big neural network that doesn't perform well, because it hasn't been trained yet. So what you can do is first train the neural network. Then you have a big neural network that performs well, and then you can prune it. For a long time this was seen as the only workable order, train the big network and then prune it, because the other way around, first pruning and then training, was not considered feasible. You might ask: why not just start with a small network? Well, what does the train-then-prune way buy you? Mainly two things. First, the pruned network is much smaller than the original network, so it uses less storage; if you want to ship it to a customer over the internet, instead of a gigabyte you may only have to transfer a few megabytes, and that's pretty cool. Second, if you prune in the correct way, you can also make the network faster, because there are fewer weights to multiply with. So pruning, which combines with techniques like distillation, is one of our ways to make networks smaller and faster. If your customers are on mobile phones, for example, you can train a big network to good performance on your big GPU server and then, once it's small, ship it out to a mobile phone, where it will perform fairly well without a GPU. So what about the other way? In order to prune first and then train, we would need some idea of which sub-parts of these layers in the big network are the good ones. The interesting thing is that the lottery ticket hypothesis paper (I've done a video on it, and we've also interviewed the author on our ML Street Talk podcast) has shown that this is in fact possible.
For a long time, people thought we need the big network in order to train, that the bigness, the full connectedness of the network is required for the training dynamics. But the lottery ticket paper has shown that this is not actually necessary: you can prune at the very beginning. What does it do? It first trains a neural network, like in the olden days, then it prunes the neural network, and then it remembers which connections of the trained network it has pruned. Then it simply goes back to the beginning of training and says: I now know which connections are important, and I'm simply going to prune away all the connections other than these. And interestingly, if you prune first and then train in this way, that works just as well, and can actually work even better. Now, this is a big, big cycle, but the interesting thing the paper demonstrates is that this is even possible; people thought it wasn't. It demonstrates that if you only knew which connections you must retain, you could prune at the beginning of training. The lottery ticket hypothesis paper, though, still requires you to actually train the full network and then do the pruning in the classic way, in order to find out which connections to prune and which to keep. This paper takes that idea and asks: can we find a pruning algorithm that prunes at the beginning of training, yet does not have to train the full network, and in fact doesn't look at any of the data? That is going to be our starting point. Their story is quite involved, so I think an overview is important as we go through the paper. First, they name the problem of layer collapse. Layer collapse is when a pruning algorithm removes the entirety of a neural network layer, which means that no information can flow anymore and the network can't train. They claim this is the main reason why current pruning algorithms cannot achieve very high pruning ratios, i.e. very high compression ratios: they suffer premature layer collapse. They then formulate the maximal critical compression axiom as a guiding principle for building pruning algorithms. Second, they show that a quantity called synaptic saliency, a general class of gradient-based scores for pruning, is conserved at every hidden unit and layer of a neural network. Their argument goes: first, layer collapse is a problem; second, these scores are conserved, and that conservation is what leads to layer collapse (we're going to see how that happens). And then third, they say the solution is iterative pruning. They show this at the example of iterative magnitude pruning, which we know avoids layer collapse. Iterative magnitude pruning is what happens in the lottery ticket way of doing things: you don't do it in one step; when you want to go from 100% of your weights down to just 5%, it tends to work better in stages, first to 90%, then 80, then 70, and so on down to your target. And this iterative procedure, they claim, is what circumvents the problem of layer collapse. A minimal sketch of the prune-and-rewind cycle is below.
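To make the lottery-ticket-style cycle concrete, here is a minimal sketch. This is my own illustration, not the original implementation: `train_fn` is an assumed helper that trains the model in place, and the single 90% pruning step is simplified (the real procedure is typically staged).

```python
import copy
import torch

def lottery_ticket(model, train_fn, prune_fraction=0.9):
    """Train -> prune by trained magnitude -> rewind to init -> retrain."""
    init_state = copy.deepcopy(model.state_dict())      # remember the init

    train_fn(model)                                     # 1) train fully

    # 2) score every weight by its trained magnitude, ranked globally
    all_scores = torch.cat([p.detach().abs().flatten()
                            for p in model.parameters()])
    k = int(prune_fraction * all_scores.numel())
    threshold = all_scores.kthvalue(max(k, 1)).values   # cut-off magnitude

    masks = [(p.detach().abs() > threshold).float()
             for p in model.parameters()]

    # 3) rewind to the original initialization and apply the mask
    model.load_state_dict(init_state)
    with torch.no_grad():
        for p, m in zip(model.parameters(), masks):
            p.mul_(m)

    # 4) retrain the sparse network; in practice the mask must be
    #    re-applied after every optimizer step so pruned weights stay zero
    train_fn(model)
    return model, masks
```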
And then at last, they prove that a pruning algorithm avoids layer collapse entirely, and satisfies their axiom, if it uses iterative, positive synaptic saliency scores. So they bring it all together and say: if an algorithm satisfies our axiom, uses these saliency scores, and is iterative, then it is not going to be subject to layer collapse, and therefore it is going to be able to compress to a very high compression ratio. And then they actually suggest such an algorithm, Iterative Synaptic Flow Pruning (SynFlow), which does all of this and never looks at any data. Alright, this is quite a story, but remember what we're doing: first, layer collapse is a problem; second, layer collapse happens because of synaptic saliency conservation; third, we can avoid it by doing iterative pruning; and lastly, this algorithm does that without looking at data. Okay, so layer collapse is a pretty simple phenomenon. You have a neural network with a bunch of layers of neurons, the neurons are connected to each other via connections, and you have a pruning algorithm. The pruning algorithms they consider here are so-called single-shot pruning algorithms. What they do is look at the neural network (this can be before or after training) and assign a score to each of the weights, like: you're a one, you're a five, you're a nine, and so on. Then they simply prune away the lowest scores. You tell the algorithm what compression ratio you want, for example: please prune away 90% of the connections. So these algorithms assign the scores once and then remove the bottom 90% of the weights. Those are the single-shot pruning algorithms; a minimal sketch of this step follows after this paragraph. Now what is layer collapse? Layer collapse is when such an algorithm removes all of one layer. Maybe one thin layer had a weight scored nine while elsewhere you have scores like 11, 12, 13; the algorithm is pretty dumb, it is simply removing the bottom 90% of the connections, and at some point it figures it needs to remove one more to meet that goal, so it removes the one with the lowest score, the nine. And it's pretty obvious that now no more information can flow from the beginning of the network to the end, because where would it flow to? It's actually a bit more nuanced than whole layers: retaining a single connection doesn't help either if it has no outgoing connection behind it. But ultimately, layer collapse is whenever an entire layer is removed, and the paper defines it that way: layer collapse occurs when an algorithm prunes all parameters in a single weight layer, even when prunable parameters remain elsewhere in the network. Now, as such, I'm not sure this is a giant problem. It does become a problem, but it could be circumvented fairly easily, simply by saying: if you're about to prune a connection that's integral to the information flow from start to end, don't prune that connection, prune some other connection, and then you could simply avoid that.
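Here is what that generic single-shot step looks like, with a collapse check attached. The dictionary of per-layer score tensors and the printed warning are my own framing; the baselines differ only in how they compute the scores, not in this masking logic.

```python
import torch

def single_shot_prune(scores, prune_fraction=0.9):
    """Remove the globally lowest-scoring fraction of all weights.

    `scores` maps layer name -> tensor of per-weight scores.
    Returns one binary mask per layer and reports layer collapse.
    """
    flat = torch.cat([s.flatten() for s in scores.values()])
    k = int(prune_fraction * flat.numel())
    threshold = flat.kthvalue(max(k, 1)).values

    masks = {name: (s > threshold).float() for name, s in scores.items()}

    # the failure mode discussed above: an entire layer emptied out
    for name, m in masks.items():
        if m.sum() == 0:
            print(f"layer collapse: {name} lost all of its weights")
    return masks
```

For magnitude pruning, `scores` would just be the absolute weight values, e.g. `single_shot_prune({"fc1": w1.abs(), "fc2": w2.abs()})`.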
And I'd be interested in how that works out. But for the purposes of this paper, they consider algorithms that simply assign a score and then prune the bottom so-and-so many percent; we don't want any handcrafted rules in here. So they look at a quantity called the max compression. The max compression is the maximum achievable compression while still avoiding layer collapse. For example, for a network with L layers and n parameters, the max compression is n over L, which means every layer has only one parameter remaining; and if it's the correct one in each layer, information can still flow from start to end (this is written out as a small formula below). So this is the maximum achievable compression; anything beyond that automatically induces layer collapse. Anything before that could induce layer collapse, but there is always a way to compress the network to the same level without inducing it. And their point is that the compression algorithms they compare against always induce layer collapse before they actually have to: they cut off a connection that leads to layer collapse even though there would be another connection they could cut that would not. And of course, once layer collapse has happened, your accuracy immediately drops to zero or to random, because no more information flows. So they look at these baselines. Random pruning is where you simply assign a random score to each connection. Magnitude pruning is what the lottery ticket hypothesis uses, but here they look at it in single-shot form: you simply look at the magnitude of the weights (this can be before or after training; I think they do it after training here, which is the classic way) and prune the bottom 90% away. There are also two more advanced methods, SNIP and GraSP, which look at the gradient of the training loss through the network and decide according to that gradient which connections to cut and which to keep; GraSP even involves the Hessian. So these are fairly complex methods with real thought behind why they do what they do, yet they all induce layer collapse before they actually have to. So they define this thing called the critical compression. The critical compression is the maximal compression ratio a given algorithm can achieve without inducing layer collapse; in the plots, it's basically the point where an algorithm's accuracy drops to random. That's the farthest you can push the algorithm without it inducing layer collapse. You can see that for these baseline algorithms, layer collapse occurs way below the theoretically possible max compression. And we're going to see that their algorithm, SynFlow, actually achieves this max compression, and achieves it without any of the handcrafted rules I mentioned; the algorithm reaches the maximum compression ratio by design. They formulate this as an axiom; I would rather say it's a guiding principle that any pruning algorithm you build should satisfy.
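As a formula, with illustrative round numbers (these n and L are made up for the example, not a network from the paper):

```latex
\rho_{\max} \;=\; \frac{n}{L},
\qquad \text{e.g. } n = 10^{6},\; L = 20
\;\;\Rightarrow\;\;
\rho_{\max} \;=\; \frac{10^{6}}{20} \;=\; 5\times 10^{4}.
```

That is, with a million parameters spread over twenty weight layers, you can at best keep one parameter per layer, a 50,000-fold compression; pruning even one more weight necessarily empties some layer.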
So: the critical compression ratio of a pruning algorithm applied to a network should always equal the max compression of that network. It basically means that when you build a pruning algorithm and push it to its limits, it should not do layer collapse unless it absolutely needs to. Again, I don't know how big this problem is in practice, but they do demonstrate that they can push their algorithm a fair bit further. Now, even without inducing layer collapse, you already see that for these other algorithms, in the regime where layer collapse apparently hasn't happened yet (they still have sizeable accuracy), there is a reasonable difference between them and the SynFlow algorithm. So I'm not too convinced yet that layer collapse as such is the problem, because there is a difference before their layers collapse, as you can see right here. I have the feeling that this difference is due to the iterative procedure, and not actually due to the phenomenon of layer collapse. If it were only layer collapse, what you'd see is that they all do the same, the same, the same, and then at some point it's like: boom, now I have layer collapse. So the layer collapse story, I'm not sure, but it's part of the story, so let's go with that. The second part is, at first, kind of disconnected. They've established the layer collapse problem; now they establish the synaptic saliency, which they are later going to connect to the layer collapse. The synaptic saliency, they say, is any score metric that can be expressed as the Hadamard product of a gradient with the parameters, S(θ) = (∂R/∂θ) ⊙ θ; so each parameter is multiplied by the gradient of some function R with respect to that parameter, where R is a scalar loss function of the output of a feed-forward neural network parameterized by θ. Many of these pruning algorithms can be formulated in this framework, and their algorithm can also be formulated in it. As I said, many fall into this category or are similar to it. For example, they say: when R is the training loss L, this is the simplest case. You put data through the network, you take the training loss of that data, and you backpropagate it; you then prune connections according to how big gradient-times-weight is. If that is very big, it must mean the connection is very important, because there's lots of information flowing through it. If R is the training loss, the resulting synaptic saliency metric is equivalent to the score metric used in skeletonization, one of the first network pruning algorithms. The resulting metric is also closely related (not exactly the same, but closely related) to the one used in the SNIP baseline, and also closely related to the one used in GraSP, where it's not just the gradient, but the gradient multiplied by the Hessian, to account for curvature. So they're going to investigate this synaptic saliency in neural networks, and they formulate two theorems about the conservation of synaptic saliency. Remember, synaptic saliency is any score S that is built like this.
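Computing such a score takes one backward pass. A minimal sketch, assuming a standard `torch.nn.Module` and a loss closure; the function name and calling convention are my own:

```python
import torch

def synaptic_saliency(model, R):
    """Score each parameter as (dR/dtheta) * theta, the Hadamard product.

    `R` is any scalar function of the model, e.g. the training loss on a
    batch; that choice recovers a skeletonization / SNIP-style score.
    """
    params = list(model.parameters())
    grads = torch.autograd.grad(R(model), params)
    return [(g * p).detach() for g, p in zip(grads, params)]

# usage with a dummy batch (x, y) and a loss function:
#   scores = synaptic_saliency(model, lambda m: loss_fn(m(x), y))
```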
The conservation of synaptic saliency: all synaptic saliency metrics respect two surprising conservation laws that hold at any initialization and at any step of training. So these are not statements that hold in distribution, or with high probability; these things hold exactly, at any point in the neural network. First is the neuron-wise conservation of synaptic saliency: for a feed-forward neural network with homogeneous activation functions (a homogeneous activation function is one that satisfies φ(x) = φ′(x)·x; ReLUs, for example, fall into that category), the sum of the synaptic saliency of the incoming parameters to a hidden neuron is equal to the sum of the synaptic saliency of the outgoing parameters from that hidden neuron. What this means is actually pretty simple. If you have a hidden neuron, and you look at all the incoming weights and their synaptic saliency, this S score of each of these weights (what the pruning algorithm would assign to them), and you look at the outgoing ones, then the sum of all the incoming ones is going to be equal to the sum of all the outgoing ones. So that's pretty interesting, and they extend it to the entire network. The network-wise conservation of synaptic saliency: the sum of the synaptic saliency across any set of parameters that exactly separates the input neurons from the output neurons of a feed-forward neural network with homogeneous activation functions is the same for every such set. So what does it mean to exactly separate the input from the output? That's basically the definition of a layer in a neural network. What they're saying is: if you look at any particular layer and sum up the synaptic saliency of all its connections, that's going to equal the corresponding sum for any other layer; and it can also apply to a group of layers, and so on. The synaptic saliency is conserved in that way (a small numerical check is sketched below). Now, why is that important? Here is where we make the connection with the layer collapse: the fact that these algorithms tend to drop entire layers before they have to. If you have layers of different sizes in your network, so large layers and then smaller and smaller layers, what will happen is this: since the synaptic saliency sum is conserved, a layer with lots of connections and a small layer with few connections have equal sums. That means each individual score in the big layer is much, much smaller; the S is very small for each individual connection there, and very large in the small layer. So the pruning algorithm is going to really, really kill off the connections in the big layers, and it's actually going to kill them off to the point where it will probably eliminate that layer before it even prunes many of the connections of the small layer, just because of that conservation fact. And they do experiments on this. There's an experiment up here, but I like the one down here better, where they basically show, with inverse layer size on the bottom, the average score that the pruning algorithm assigns to a connection.
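The neuron-wise law is easy to check numerically. A tiny self-contained example (my own construction, biases omitted): for each hidden unit of a one-hidden-layer ReLU net, the summed saliency of its incoming weights matches that of its outgoing weights, for any scalar R.

```python
import torch

torch.manual_seed(0)
w1 = torch.randn(5, 4, requires_grad=True)    # 4 inputs  -> 5 hidden units
w2 = torch.randn(3, 5, requires_grad=True)    # 5 hidden  -> 3 outputs

x = torch.randn(8, 4)                          # a small random batch
out = torch.relu(x @ w1.t()) @ w2.t()
R = out.sum()                                  # any scalar function of the output
R.backward()

incoming = (w1.grad * w1).sum(dim=1)           # one saliency sum per hidden unit
outgoing = (w2.grad * w2).sum(dim=0)
print(torch.allclose(incoming, outgoing))      # True, up to float error
```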
Now, as we've seen, these baselines are not exactly assigning this saliency as their score, but they're very close to it; the SynFlow algorithm does exactly assign the synaptic saliency as the score for pruning. We've basically seen that, by itself, this leads to a bad result, but the synaptic flow method is going to compensate for that. In essence, as you can see, as the inverse layer size grows, which means the layer gets smaller, the average score of the connections in the layer gets higher and higher. That means the pruning algorithm, if you just let it go by itself, is going to kill off the larger layers first, because they have the smaller scores. And you can see that even though the other algorithms don't conform to that exactly, they conform approximately: SNIP and GraSP because their scores are closely related to what SynFlow does, and magnitude pruning because (I'm not sure whether they score at the end of training or at the beginning here) if you just initialize, the score is proportional to the magnitude, and the magnitude is determined by the initialization scheme. Modern initialization schemes compensate for the fact that layers have different numbers of incoming and outgoing connections, so they automatically assign a higher initialization constant to layers with fewer parameters. So even magnitude pruning will conform to this. And it might be perfectly reasonable to say that this is also the case at the end of training, because most parameters aren't going to move super much during training; so it still approximately holds, as you can see here. Of course, the random baseline doesn't do that; yet, because you prune randomly, you're still absolutely subject to layer collapse. In fact, with random pruning, the smallest layers would be the ones to go away first, because it's just more probable. Okay, so we've discovered that if you do saliency scoring, or something correlated to it, you're going to remove the biggest layers first, and that's a problem. And that's what they say: the conservation laws, together with the single-shot nature of these algorithms (they assign scores once and then prune away whatever the bottom such-and-such percent is), lead to layer collapse. I think we've established now that the combination of those two things leads to layer collapse. Now they make a little excursion and say there is actually something that doesn't run into layer collapse, and that's iterative pruning algorithms. Specifically, they look at magnitude pruning, which, remember, also runs into layer collapse if you do it single-shot, and they say: magnitude pruning avoids layer collapse with conservation and iteration. Because it iterates, it avoids the collapse. That's what the lottery ticket hypothesis paper does: it iteratively removes a couple of connections, then it retrains the network, which basically recomputes the magnitudes and therefore recomputes the scores; then it prunes again, then it recomputes and prunes again, and so on. A sketch of such a loop is below.
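A minimal sketch of that multi-round loop. The per-round keep ratio `final_keep**(1/rounds)` and the `train_fn` / `score_fn` helpers are my own assumptions about how you'd wire this up, not the paper's exact schedule:

```python
import torch

def iterative_prune(model, train_fn, score_fn, final_keep=0.1, rounds=3):
    """Prune to `final_keep` of all weights over several prune/retrain rounds."""
    keep_per_round = final_keep ** (1.0 / rounds)
    params = list(model.parameters())
    masks = [torch.ones_like(p) for p in params]

    for _ in range(rounds):
        scores = score_fn(model)                 # recompute on the current net
        surviving = torch.cat([s[m > 0] for s, m in zip(scores, masks)])
        k = int((1 - keep_per_round) * surviving.numel())
        threshold = surviving.kthvalue(max(k, 1)).values

        masks = [m * (s > threshold).float() for m, s in zip(masks, scores)]
        with torch.no_grad():
            for p, m in zip(params, masks):
                p.mul_(m)

        train_fn(model)                          # survivors readjust their scores
    return masks
```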
And by recomputing, some of the connections that weren't important before, but just survived the pruning, can now say: wait, I have way more responsibility as a connection now; and they will shoot up in importance and avoid being pruned in the next round. So you can see: if you push your network to a high compression ratio with just single-shot pruning, you run into layer collapse at some compression ratio and simply crash to random or zero performance. Yet if you do multiple iterations (you can see here, already with two iterations), it takes much longer before you run into layer collapse, and with three iterations, much longer still. Now, three iterations doesn't mean you prune more: at this point right here, a compression ratio of 10^1, all of these variants prune nine out of every ten connections. It's just that the three-iteration one prunes maybe first three out of the ten, then again three, then again three, whereas the one-iteration one prunes all nine in one go. And they give a reason for this: they say it's the fact that gradient descent encourages conservation. They give a little toy example: to better understand the dynamics of the IMP algorithm during training, consider the differentiable score θ²/2. This is not exactly magnitude pruning, but it is very close; it's just the square of the parameter instead of the absolute value, and they say it's algorithmically equivalent to the magnitude score. Consider these scores throughout training with gradient descent on a loss function, using an infinitesimal step size. In this setting, the temporal derivative of the parameters equals the negative gradient of the loss, and thus the temporal derivative of the score turns out to be, surprisingly, a form of synaptic saliency (with a minus sign), and so the neuron-wise and layer-wise conservation laws from section four apply to it. In particular, this implies that for any two layers of a simple fully connected network, the total score of one layer changes at the same rate as the total score of the other. That is not new in itself, but what it basically says is that through training, the connections equalize the saliency again. So if you have a very big layer, where the per-connection scores are very low (little s each), and a very small layer (big s each), and you prune away some weights and run gradient descent, the surviving scores will tend to become bigger; the remaining weights tend to grow in magnitude, because with the others pruned away, more signal and more gradient probably flow through them. Therefore they grow, and therefore their score gets bigger. So the gradient descent part of the iterative procedure rebalances the scores and basically counteracts the layer collapse. The little calculation behind this is reconstructed just below.
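Reconstructing the toy calculation referenced above, in my notation (the paper's is equivalent): with score S(θ) = θ²/2 and gradient flow on the training loss,

```latex
\frac{d\theta}{dt} \;=\; -\,\frac{\partial \mathcal{L}}{\partial \theta}
\quad\Longrightarrow\quad
\frac{dS}{dt}
\;=\; \theta \odot \frac{d\theta}{dt}
\;=\; -\,\frac{\partial \mathcal{L}}{\partial \theta} \odot \theta .
```

The right-hand side is exactly a synaptic saliency (with R taken to be the training loss), just negated, so its per-neuron and per-layer sums obey the conservation laws; that is why training re-balances the magnitude scores across layers.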
So they put all of this together and say, theorem three: iterative, positive, conservative scoring achieves maximal critical compression. Concretely: if a pruning algorithm with global masking (global masking means that you rank all of the connections in the network together and prune from all of them, in contrast to layer-wise masking, where you say: I want to remove 90% of each layer, which sounds like it would avoid layer collapse, but actually works a lot worse than the global strategy) assigns positive scores that respect layer-wise conservation, and if the algorithm re-evaluates the scores every time a parameter is pruned, then the algorithm satisfies the maximal critical compression axiom. Respecting layer-wise conservation basically means your score is a saliency score. So this is basically saying: whatever the lottery ticket hypothesis paper did with magnitude pruning, if you do it with saliency-based pruning, you're guaranteed to achieve the maximum possible compression if you push it, as long as you re-evaluate the scores after every pruned parameter. But of course we know that what the lottery ticket hypothesis paper did is impractical, because it needs to retrain the network every single time it wants to prune; if you wanted to do that after every single parameter, it would take forever. Ideally, we want to prune the network before we even look at any data, and they're going to do exactly that with the SynFlow algorithm. They say: theorem three directly motivates the design of our novel pruning algorithm, SynFlow, that provably reaches maximal critical compression. First, the necessity for iterative score evaluation discourages algorithms that involve backpropagation on batches of data, and instead motivates the development of an efficient, data-independent scoring procedure. Second, positivity and conservation provably motivate the construction of a loss function that yields positive synaptic saliency scores. We combine these insights, they say, and introduce a new loss function in which the product of the absolute-valued weight matrices is sandwiched between all-ones vectors (the formula is written out below). This might seem a bit weird; it's sort of a quadratic form. But in practice, and this is also what happens in their code, you can do something pretty easy. First, you transform all your weights to their absolute values (in their code, they remember the signs for later). Second, you take a data point that is filled with ones, literally the number one; if your input is an image, you just put a one at each pixel. You feed it through the network with all of these positive weights, and you get out some output vector. Then you take the inner product of that output with the all-ones vector, which is simply a sum (it's a bit of a funky way of writing a sum). You simply sum the outputs up to get a single number.
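Written out (my transcription of the formula described above):

```latex
\mathcal{R}_{\mathrm{SF}}
\;=\;
\mathbf{1}^{\top}
\left(\,\prod_{l=1}^{L} \bigl|\theta^{[l]}\bigr|\,\right)
\mathbf{1},
```

where |θ^[l]| is the element-wise absolute value of the l-th weight matrix and 1 is the all-ones vector. For a purely linear network this is literally the recipe from the paragraph above: take absolute weights, feed an all-ones input through, and sum the outputs.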
And this single number is now your pseudo-loss function: it's simply the loss that an all-ones data point gets when the loss function is just the sum of the outputs. That's it. And then you backpropagate that loss to the layers. Remember, this R is not the score itself; the score is going to be the derivative of R with respect to a weight, times that weight. So you backpropagate, and then you multiply each of these weights by the backpropagated signal, and that's going to be your score for each parameter. Now, this doesn't seem too hard, right? You don't even need a batch: you need a single data point and one backpropagation, and then you get your scores. You don't need expensive training or anything like this; this seems pretty cool. And they give an example here. For a simple fully connected network, i.e. a linear network without nonlinearities (where you can often compute quantities exactly), you can factor the synaptic flow score for any parameter as the parameter itself, multiplied by a term that collects everything on its input side and a term that collects everything on its output side. So, other than for example magnitude pruning, this actually takes the whole flow into account: every path that arrives at this particular weight from the input is considered, and every path that goes out from this particular weight to the output is considered, and the saliency score depends on all of these paths, on all of the information flow from input to output that goes through that weight. And if you do this, then you get a really good pruning algorithm. A sketch of the scoring step is below.
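A minimal PyTorch sketch of that data-free scoring step, assuming a plain feed-forward `torch.nn.Module` (batch-norm handling and other practical details are omitted; the paper's published code is more careful):

```python
import torch

def synflow_scores(model, input_shape):
    """abs the weights, push an all-ones input through, sum, backprop once."""
    signs = []
    for p in model.parameters():
        signs.append(torch.sign(p.data))        # remember signs for later
        p.data.abs_()                            # work in the all-positive net

    ones = torch.ones(1, *input_shape)           # the single all-ones data point
    R = model(ones).sum()                        # pseudo-loss 1^T f(1)
    grads = torch.autograd.grad(R, list(model.parameters()))

    scores = [(g * p).detach()                   # dR/dtheta * theta
              for g, p in zip(grads, model.parameters())]

    for p, s in zip(model.parameters(), signs):  # restore the original weights
        p.data.mul_(s)
    return scores
```

In the full algorithm this is not called once but repeatedly: per theorem three, you prune a small fraction, recompute the scores on the masked network, and iterate until the target compression is reached.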
So yeah, the algorithm is as I've described it. And in their experiments, as you can see, they have a bunch of networks, these VGG networks and wide ResNets, and a bunch of datasets like Tiny ImageNet or CIFAR-10, where they experiment with the different baselines. You can see that the baselines often run into this layer collapse problem. Let's actually look at this ResNet-18 right here; maybe there are differently sized layers in ResNet-18, and that's why the collapse happens even earlier. You can see there's a collapse if you do magnitude pruning; random pruning also falls down pretty hard after a while; the other baselines hold up better, but across different models and datasets, they crash at some point as well. Now, I've already said the comparison here seems a little bit unfair. I might have missed something, but I'm pretty sure that the baselines remain single-shot, while the SynFlow algorithm here is, of course, no longer single-shot; it's actually multi-shot, and they've made the exact argument that single-shot is the problem, and that's why their algorithm is multi-shot. It seems like they should give the other algorithms the opportunity to also do multi-shot, just to compare them fairly. Maybe they are doing that and I just haven't read it; but as it stands, the comparison seems a bit unfair if you identify the problem and then leave the other algorithms stuck with it. SynFlow would still be different from these other algorithms even if they got the multiple steps. Now, the counter-argument, of course, is that these other algorithms all require the training data: they require actually passing the data through, or training the network in the case of magnitude pruning, and that's pretty expensive, whereas with SynFlow you simply pass one data point forward, and that's it. That's a good argument. But it seems like the effect of the synaptic saliency scores and the effect of the multiple steps aren't really disentangled in these experiments; they simply show that SynFlow consistently outperforms other pruning methods, and what I'd like to see is where exactly that outperformance comes from. Okay, so that's what I think of this, and that was the paper. Even if I'm not quite convinced yet, this is pretty cool, right? And I think this will, if it's not used itself, inspire a line of work into pruning at the beginning of training without looking at data. And maybe we can even think of constructively building networks instead of just pruning them: constructing initialized networks that already observe these good properties, such that we don't even have to go to a bigger network and then prune it down. That seems wasteful. It seems like we should just be able to derive principles of how we want the weights to be structured, and then construct networks accordingly. And I guess that's what's going to happen in a few papers that are coming. Alright, if you liked this video, consider subscribing, giving it a like, commenting, and let me know what you think. And until next time, bye bye.
[ { "end": 6.24, "start": 0, "text": " Hi there! Today we're looking at pruning neural networks without any data by iteratively conserving" }, { "end": 14.74, "start": 6.24, "text": " synaptic flow by Hidenori Tanaka, Daniel Kunin, Daniel L.K. Yamins and Surya Ganguly. So this" }, { "end": 20.48, "start": 14.74, "text": " paper on a high level does what the lottery ticket hypothesis does, but does so without" }, { "end": 26.44, "start": 20.48, "text": " any data. It prunes a neural network at the beginning and it does so. It's able to do" }, { "end": 32.44, "start": 26.44, "text": " that because it claims that its algorithm avoids this problem called layer collapse" }, { "end": 39.68, "start": 32.44, "text": " and then is based on conserving a quantity they call the synaptic flow. And we're going" }, { "end": 45.88, "start": 39.68, "text": " to look at this and it's a pretty cool algorithm. It seems to work pretty well. As always, if" }, { "end": 52.56, "start": 45.88, "text": " you want to help out, you can share this video and let me know in the comments what you think" }, { "end": 59.92, "start": 52.56, "text": " of it. I do read the comments and I would love to hear from you. Alright, let's dive" }, { "end": 65.74000000000001, "start": 59.92, "text": " in. So they're saying, pruning the parameters of deep neural networks has generated intense" }, { "end": 72.08, "start": 65.74000000000001, "text": " interest due to potential savings in time, memory and energy, both during training and" }, { "end": 77.64, "start": 72.08, "text": " at test time. Recent works have identified through an expensive sequence of training" }, { "end": 83.48, "start": 77.64, "text": " and pruning cycles, the existence of winning lottery tickets or sparse trainable sub networks" }, { "end": 90.88, "start": 83.48, "text": " at initialization. So what is this paper talking about? If you don't know much about pruning," }, { "end": 96.36, "start": 90.88, "text": " here is kind of a basic overview. So if you have a neural network that consists of many," }, { "end": 102.84, "start": 96.36, "text": " many layers of neurons, what you can do, that one way of pruning that, what the goal is," }, { "end": 110.28, "start": 102.84, "text": " is to end up with a small neural network that performs well. But for now, we have a big" }, { "end": 115.16, "start": 110.28, "text": " neural network that doesn't perform well. It hasn't been trained yet, right? So what" }, { "end": 121.52000000000001, "start": 115.16, "text": " you can do is you can first train the neural network. And then you have a big neural network" }, { "end": 128.44, "start": 121.52000000000001, "text": " that performs well, and then you can prune it. Now, a lot of times, a lot of the time" }, { "end": 133.68, "start": 128.44, "text": " this has been seen as sort of the pruning way. You would train the big neural network" }, { "end": 140.16, "start": 133.68, "text": " and then you would prune it, because the other way was not feasible. First pruning and then" }, { "end": 146.98, "start": 140.16, "text": " training was not feasible. You might ask, okay, we might just want to start with a small" }, { "end": 153.76, "start": 146.98, "text": " one. And yeah, that's correct. So what does this first way buy you? This first way buys" }, { "end": 160.35999999999999, "start": 153.76, "text": " you mainly two things. So imagine this network right here is much smaller than the original" }, { "end": 166.88, "start": 160.35999999999999, "text": " network. 
So it is less, it uses less storage. So you can potentially, if you want to ship" }, { "end": 172.16, "start": 166.88, "text": " it to like a customer over the internet, you maybe instead of a gigabyte, you only have" }, { "end": 178.68, "start": 172.16, "text": " to transfer a few megabytes. And that's pretty cool. The second thing, if you prune in the" }, { "end": 185.28, "start": 178.68, "text": " correct way, you can also make it faster because now there's less weights to multiply with," }, { "end": 192.52, "start": 185.28, "text": " you can actually make it go faster. So pruning is a now this this and it combines with techniques" }, { "end": 198.56, "start": 192.52, "text": " called distillation and so on is our ways to make the networks smaller and faster. So" }, { "end": 205.44, "start": 198.56, "text": " if your customers are for example, on on mobile phones, then you can ship you can train a big" }, { "end": 210.56, "start": 205.44, "text": " network to a good performance on your big GPU server, and then ship it out to a mobile" }, { "end": 218.64, "start": 210.56, "text": " phone. Once it's small, and it will perform fairly well on that mobile phone without GPU." }, { "end": 224.92, "start": 218.64, "text": " So what about this other way? Now, in order to do the other way, we would sort of have" }, { "end": 232.8, "start": 224.92, "text": " to have an idea which one of these big networks which sub parts of these layers are the good" }, { "end": 238.08, "start": 232.8, "text": " ones, right in order for us to do this first prune and then train. The interesting thing" }, { "end": 244, "start": 238.08, "text": " is that the paper the lottery ticket hypothesis, I've done a video on this and we've also interviewed" }, { "end": 249.64000000000001, "start": 244, "text": " the author on our ML Street talk podcast. This paper has shown that this is in fact" }, { "end": 254.72000000000003, "start": 249.64000000000001, "text": " possible. A long time people have thought we need the big network in order to train," }, { "end": 261.1, "start": 254.72000000000003, "text": " right? We sort of the bigness of the network, the full connectedness of the network is required" }, { "end": 265.20000000000005, "start": 261.1, "text": " for the training dynamics. But this paper has shown this is not possible, you can prune" }, { "end": 272.36, "start": 265.20000000000005, "text": " at the very beginning. Now, what does it do? It first trains a neural network, like in" }, { "end": 278.88, "start": 272.36, "text": " the olden days, then it prunes the neural network. And then it remembers which connections" }, { "end": 283.92, "start": 278.88, "text": " of the train neural network it has pruned. And then it simply goes back to the beginning" }, { "end": 289.52000000000004, "start": 283.92, "text": " of training right here, up here, and says, I now know which connections are important." }, { "end": 294.96, "start": 289.52, "text": " And I'm simply going to prune all the other connections other than these ones. And then" }, { "end": 301.91999999999996, "start": 294.96, "text": " interestingly, if you prune first, and then train, that works just as well and can actually" }, { "end": 308.03999999999996, "start": 301.91999999999996, "text": " work even better. The interesting thing here is that I mean, this is a big, big cycle." }, { "end": 315.02, "start": 308.03999999999996, "text": " But the interesting thing that the paper demonstrates is that this is even possible, right? 
People" }, { "end": 320.44, "start": 315.02, "text": " thought it wasn't possible. And this paper demonstrates if you only knew if you only" }, { "end": 327.21999999999997, "start": 320.44, "text": " knew which ones you must retain, you can prune at the beginning of training. The lottery" }, { "end": 332.79999999999995, "start": 327.21999999999997, "text": " ticket hypothesis paper, though still requires to actually train the full network and then" }, { "end": 338.84, "start": 332.79999999999995, "text": " do the pruning, like in a classic way in order to find out which ones you need to prune and" }, { "end": 346.91999999999996, "start": 338.84, "text": " which ones you don't. This paper right here takes that idea and says, can we find can" }, { "end": 352.96, "start": 346.91999999999996, "text": " we find a pruning algorithm that prunes at the beginning of training, yet does not have" }, { "end": 358.88, "start": 352.96, "text": " to train the full network, in fact, doesn't look at any of the data? Okay, and this is" }, { "end": 364.88, "start": 358.88, "text": " are going to be our starting point. So their story is going to be, it's quite an involved" }, { "end": 372.28, "start": 364.88, "text": " story. And I think the overview is important. Well, as we go through the paper. So first," }, { "end": 379.52, "start": 372.28, "text": " they named this problem called layer collapse. Now, layer collapse is going to be whenever" }, { "end": 385.48, "start": 379.52, "text": " a pruning algorithm removes the entirety of a neural network layer, which means that no" }, { "end": 390.4, "start": 385.48, "text": " information can flow anymore. And therefore the network can't train. And they claim that" }, { "end": 398.12, "start": 390.4, "text": " this is the main problem, why these current pruning algorithms cannot achieve very high" }, { "end": 404.28, "start": 398.12, "text": " pruning ratios, so can like very high compression ratios is because they do premature layer" }, { "end": 412.44, "start": 404.28, "text": " collapse, they then formulate this maximal critical critical compression axiom that has" }, { "end": 419.91999999999996, "start": 412.44, "text": " sort of a guiding principle to build pruning algorithms. Second, they show that this quantity" }, { "end": 425.36, "start": 419.92, "text": " called synaptic saliency, a general class of gradient based scores for pruning is conserved" }, { "end": 431.76, "start": 425.36, "text": " at every hidden unit layer of a neural network. So they show that these are conserved. And" }, { "end": 437.76, "start": 431.76, "text": " they show this because this is their, their argument is going to be first, the argument" }, { "end": 444.20000000000005, "start": 437.76, "text": " is layer collapse is a problem. The second argument is these things are conserved and" }, { "end": 451.03999999999996, "start": 444.2, "text": " these, the conservation of the synaptic saliency leads to the layer collapse. And we're going" }, { "end": 460.34, "start": 451.03999999999996, "text": " to see how that happens. And then third, they say the solution to that is iterative pruning." }, { "end": 466.56, "start": 460.34, "text": " So they show this at the at the example of iterative magnitude pruning, which we know" }, { "end": 471.8, "start": 466.56, "text": " avoids layer collapse. So iterative magnitude pruning is something that happens in this" }, { "end": 479.68, "start": 471.8, "text": " lottery ticket way of of doing it. 
This lottery ticket way, you can actually do it not in" }, { "end": 485.28000000000003, "start": 479.68, "text": " one step, but it tends to work better when you want to go from 100% of your weights to" }, { "end": 490.12, "start": 485.28000000000003, "text": " just 5% of your weights, it tends to work better if you do it in stages. So first you" }, { "end": 499.40000000000003, "start": 490.12, "text": " go to 90% to 80 to 70 and so on down to your desired thing. And this iterative procedure," }, { "end": 511.09999999999997, "start": 499.4, "text": " they claim is what is what circumvents this problem of layer collapse. And then at last," }, { "end": 517.4399999999999, "start": 511.09999999999997, "text": " they say we proved that a pruning algorithm avoids layer collapse entirely and satisfies" }, { "end": 524.3, "start": 517.4399999999999, "text": " blah blah blah, if it uses iterative positive synaptic saliency scores. So they bring it" }, { "end": 533.4399999999999, "start": 524.3, "text": " all together and say, if an algorithm satisfies our axiom, and if the algorithm is an algorithm" }, { "end": 541.16, "start": 533.4399999999999, "text": " that uses these saliency scores, like this one here, and if the algorithm is iterative," }, { "end": 546.3, "start": 541.16, "text": " then it is not going to be subject to layer collapse. And therefore, it is going to be" }, { "end": 553.56, "start": 546.3, "text": " able to compress to a very high compression ratio. And then they actually do suggest an" }, { "end": 561.4799999999999, "start": 553.56, "text": " algorithm, this iterative synaptic flow pruning, SYNFLOW, that does all of this and never looks" }, { "end": 568.4399999999999, "start": 561.4799999999999, "text": " at any data. All right, this is quite a story. But remember what we're doing. First, layer" }, { "end": 573.3199999999999, "start": 568.4399999999999, "text": " collapse is a problem. Second, why is layer collapse a problem? It's because of this synaptic" }, { "end": 580.3199999999999, "start": 573.3199999999999, "text": " saliency conservation. Third, we can avoid it by doing iterative pruning. And lastly," }, { "end": 590.72, "start": 580.32, "text": " this algorithm does it without looking at data. Okay, so layer, layer collapse, layer" }, { "end": 596.7600000000001, "start": 590.72, "text": " collapse is a pretty simple phenomenon. I've already said it, if you have a neural network," }, { "end": 602.6800000000001, "start": 596.7600000000001, "text": " and it has a bunch of layers, and let's draw a couple of neurons here, and the neurons" }, { "end": 610.08, "start": 602.6800000000001, "text": " are connected to each other via connections, connections, connections, connections. And" }, { "end": 614.5200000000001, "start": 610.08, "text": " you have a pruning algorithm. Now the pruning algorithms they consider here are so called" }, { "end": 618.0400000000001, "start": 614.5200000000001, "text": " single shot pruning algorithms. What they do is they look at the neural network, and" }, { "end": 623.5200000000001, "start": 618.0400000000001, "text": " this can be before training or after training. But at some point, they look at the neural" }, { "end": 630.6800000000001, "start": 623.5200000000001, "text": " network, and they assign a score to each of these weights. Like they'll say, you're a" }, { "end": 637.6800000000001, "start": 630.6800000000001, "text": " one, you're a five, you're a nine, and so on. 
And then they simply prune away the lowest" }, { "end": 643.16, "start": 637.68, "text": " scores. Okay, and you tell the network what compression ratio you want to you tell the" }, { "end": 648.8399999999999, "start": 643.16, "text": " network, for example, please prune away 90% of the connections. So these algorithms would" }, { "end": 656.3199999999999, "start": 648.8399999999999, "text": " look would assign the scores once and then remove the bottom 90% of weights. Okay, like" }, { "end": 664.1999999999999, "start": 656.3199999999999, "text": " this. So those are the single shot pruning algorithms. Now what is layer collapse? Layer" }, { "end": 671, "start": 664.2, "text": " collapse is whenever an algorithm removes all of one layer, because maybe so here was" }, { "end": 679.8000000000001, "start": 671, "text": " a nine, maybe you have like 111213 here. Okay, so and then you're in you're in this situation" }, { "end": 684.6800000000001, "start": 679.8000000000001, "text": " right here. And the algorithm is pretty, is pretty dumb. It's simply removing the bottom" }, { "end": 690.1600000000001, "start": 684.6800000000001, "text": " 90% of the connections. And here it figures I need to remove one more to meet that goal." }, { "end": 694.16, "start": 690.16, "text": " I remove the one with the lowest score, I'm going to remove this one. And it's pretty obvious" }, { "end": 700.24, "start": 694.16, "text": " that now no more information can flow from the beginning to the end of the network because," }, { "end": 706.4399999999999, "start": 700.24, "text": " well, what's where is it going to flow to? It's a bit more complex than that. Like, you" }, { "end": 711.1999999999999, "start": 706.4399999999999, "text": " can't just retain a layer like a connection. For example, if this were a connection, there" }, { "end": 715.48, "start": 711.1999999999999, "text": " would also be no information flow because you'd have no outgoing connection here. But" }, { "end": 725.88, "start": 715.48, "text": " ultimately, layer collapse is whenever an entire layer is removed. Okay. And they they" }, { "end": 734.84, "start": 725.88, "text": " do say somewhere that that's the case. I think layer collapse here, layer collapse occurs" }, { "end": 740.38, "start": 734.84, "text": " when an algorithm prunes all parameters in a single weight layer, even when prunable" }, { "end": 747.4399999999999, "start": 740.38, "text": " parameters remain elsewhere in the network. So I'm not as such, I'm not sure that this" }, { "end": 754.28, "start": 747.4399999999999, "text": " is like a giant problem. It gets to be a problem. But it could be circumvented fairly easily," }, { "end": 759.52, "start": 754.28, "text": " right by simply saying, if you're about to prune a connection that's integral to the" }, { "end": 764.16, "start": 759.52, "text": " information flow from the start to the end, don't prune that connection, prune some other" }, { "end": 769.44, "start": 764.16, "text": " connection, right, and then you could simply avoid that. And I'd be interested in how that" }, { "end": 776.8800000000001, "start": 769.44, "text": " works out. So but in this case, for purposes of this paper, they simply consider algorithms" }, { "end": 782.8000000000001, "start": 776.8800000000001, "text": " that assign a score and then prune the bottom couple of percent, okay, for so we don't want" }, { "end": 789.96, "start": 782.8000000000001, "text": " any like handcrafted rules in here or something. 
So they look at this quantity called the max" }, { "end": 797.12, "start": 789.96, "text": " compression. The max compression is a quantity that's basically the maximum achievable compression" }, { "end": 802.12, "start": 797.12, "text": " while still avoiding layer collapse. And they say, for example, for a network with L layers" }, { "end": 807.84, "start": 802.12, "text": " and n parameters, the max compression is n over L, which is basically means every layer" }, { "end": 816.64, "start": 807.84, "text": " only has one parameter remaining and therefore, and if it's the correct one, therefore information" }, { "end": 822.48, "start": 816.64, "text": " can flow from start to the end. All right, so this is the maximum achievable compression," }, { "end": 827.64, "start": 822.48, "text": " anything beyond that would automatically induce layer collapse. Now, anything before that" }, { "end": 833.76, "start": 827.64, "text": " could induce layer collapse, but there is a way to compress the network to the same" }, { "end": 838.08, "start": 833.76, "text": " level without inducing layer collapse. And their point is basically that these other" }, { "end": 843.88, "start": 838.08, "text": " compression algorithms that they compare with, they always they always induce layer collapse" }, { "end": 849.48, "start": 843.88, "text": " before they actually have to because they cut off a connection that leads to layer collapse" }, { "end": 855.2, "start": 849.48, "text": " before they like there would be another connection that they could cut off that would not lead" }, { "end": 860.8000000000001, "start": 855.2, "text": " to layer collapse. And of course, if you are layer, if you have done a layer collapse," }, { "end": 867.04, "start": 860.8000000000001, "text": " then you accuracy immediately drops to zero or to random, because no more information" }, { "end": 872.64, "start": 867.04, "text": " flow. So they look at these things here, random pruning is where you simply assign a random" }, { "end": 879.4, "start": 872.64, "text": " score to each connection. Magnitude pruning is what the lottery ticket hypothesis does," }, { "end": 886.76, "start": 879.4, "text": " but just they look here at a single shot. So you simply look at the magnitude of the" }, { "end": 890.88, "start": 886.76, "text": " weights and this can be before or after training. I think they do it after training here, which" }, { "end": 896.36, "start": 890.88, "text": " is classically done. You look at the magnitude of the weights and you prune the bottom 90%" }, { "end": 904.0799999999999, "start": 896.36, "text": " away. They there are also two more advanced methods, these SNIP and the grasp piece of" }, { "end": 911.96, "start": 904.08, "text": " SNIP and grasp, which look at the gradient of the training loss in the network and they" }, { "end": 917.2800000000001, "start": 911.96, "text": " decide according to that gradient, which which things to cut and which things not to cut" }, { "end": 925.08, "start": 917.2800000000001, "text": " away. The grasp even involves the Hessian right here. So they're fairly, you know, complex" }, { "end": 930.72, "start": 925.08, "text": " method that have some thoughts behind them about why they do what they do, yet they all" }, { "end": 936.08, "start": 930.72, "text": " induce layer collapse before they actually have to. So they define this thing here called" }, { "end": 943.6800000000001, "start": 936.08, "text": " the critical compression. 
The critical compression is the maximal compression ratio a given algorithm" }, { "end": 948.34, "start": 943.6800000000001, "text": " can achieve without inducing layer collapse. So the critical compression here is basically" }, { "end": 952.96, "start": 948.34, "text": " whenever that algorithm goes to zero, that's the critical compression. That's kind of the" }, { "end": 960.6, "start": 952.96, "text": " farthest you can push the algorithm without him without sorry, German speaker, without" }, { "end": 967.24, "start": 960.6, "text": " it induced without it inducing layer collapse. Okay, so you can see that for these baseline" }, { "end": 974.24, "start": 967.24, "text": " algorithms, the layer collapse occurs way below the theoretically possible max compression." }, { "end": 979.32, "start": 974.24, "text": " And we're going to see that in their algorithm, this sin flow, that this max compression is" }, { "end": 984.6800000000001, "start": 979.32, "text": " achieved. And it's actually achieved without any of these handcrafted rules that I mentioned," }, { "end": 991, "start": 984.6800000000001, "text": " it is the algorithm by design already achieved this maximum compression ratio. So they formulate" }, { "end": 995.5200000000001, "start": 991, "text": " this here as a guiding principle, they formulate as an axiom, I would, I would rather say it's" }, { "end": 1002.12, "start": 995.5200000000001, "text": " like this kind of a, it's kind of a guiding principle of building these algorithms that" }, { "end": 1008.1600000000001, "start": 1002.12, "text": " any algorithm you build should have. So the critical compression ratio of a pruning algorithm" }, { "end": 1013.4, "start": 1008.16, "text": " applied to a network should always equal the max compression of that network. It basically" }, { "end": 1018.6, "start": 1013.4, "text": " means when you build a pruning algorithm, if you push that pruning algorithm to its" }, { "end": 1028.72, "start": 1018.6, "text": " limits, it should not do layer collapse unless it absolutely needs to. Okay. Again, the extent" }, { "end": 1033.98, "start": 1028.72, "text": " of this problem, I don't, I don't know, but they do demonstrate that that they can push" }, { "end": 1040.64, "start": 1033.98, "text": " their algorithm a fair bit further. Now, without inducing layer collapse, you already see that" }, { "end": 1044.96, "start": 1040.64, "text": " these other algorithms, like in this regime, apparently layer collapse hasn't happened" }, { "end": 1050.3600000000001, "start": 1044.96, "text": " yet because they still have sizeable accuracy, but there's still, you know, there is a reasonable" }, { "end": 1056.4, "start": 1050.3600000000001, "text": " difference here between those and the sin flow algorithm. So I'm not too convinced yet" }, { "end": 1062.76, "start": 1056.4, "text": " that layer collapse as such is the problem because they have a difference before their" }, { "end": 1069.08, "start": 1062.76, "text": " layer collapsing, as you can see right here. And I have the feeling that this difference" }, { "end": 1075.28, "start": 1069.08, "text": " here is due to this iterative procedure and not actually due to the phenomenon of layer" }, { "end": 1081.4, "start": 1075.28, "text": " collapse. 
But yeah, so if it were only layer collapse, what you'll see is that they do" }, { "end": 1085.84, "start": 1081.4, "text": " the same, the same, the same, and then at some point it's like, boom, now I have layer" }, { "end": 1094.12, "start": 1085.84, "text": " collapse. Okay. Yeah. So the layer collapse story, I'm not sure, but it's part of the" }, { "end": 1100.6999999999998, "start": 1094.12, "text": " story. So let's, let's go with that. The second part, which is kind of disconnected. So they" }, { "end": 1104.8, "start": 1100.6999999999998, "text": " established two things. They established a layer collapse problem, and now they establish" }, { "end": 1113.24, "start": 1104.8, "text": " the synaptic saliency, which then later they're going to connect to the layer collapse. So" }, { "end": 1120.18, "start": 1113.24, "text": " the synaptic saliency, they say is a score, is any score metric that can be expressed" }, { "end": 1129.4, "start": 1120.18, "text": " as the Hadamard product of this thing with the parameters. Okay. So each parameter is" }, { "end": 1135.64, "start": 1129.4, "text": " going to be multiplied by the gradient of some function with respect to that parameter." }, { "end": 1142.4, "start": 1135.64, "text": " They say where R is a scalar loss function of the output of a feed forward neural network" }, { "end": 1149.52, "start": 1142.4, "text": " parameterized by theta. Okay. So many of these pruning algorithms can be formulated in this" }, { "end": 1156, "start": 1149.52, "text": " framework right here. And their, their algorithm can also be formulated in this framework." }, { "end": 1162.3200000000002, "start": 1156, "text": " So you can see the score that the algorithm assigns to a weight can be defined as such." }, { "end": 1171.5600000000002, "start": 1162.3200000000002, "text": " And as I said, many fall into this category or are similar to this, especially for example," }, { "end": 1178.48, "start": 1171.56, "text": " they say when R is the training loss L. So this is the simplest case you take, you put" }, { "end": 1183.36, "start": 1178.48, "text": " data through the network, and then you take the training loss of that data and you sort" }, { "end": 1187.96, "start": 1183.36, "text": " of back propagate it. And now you're going to prune these connections according to how" }, { "end": 1193.24, "start": 1187.96, "text": " big the gradient is. If you say the gradient is very big, that must mean the connection" }, { "end": 1199.3799999999999, "start": 1193.24, "text": " is very important because there's lots of information flowing through it. So if it's" }, { "end": 1204.5600000000002, "start": 1199.38, "text": " a training loss L, the resulting synaptic saliency metric is equivalent to the score" }, { "end": 1210.64, "start": 1204.5600000000002, "text": " metric used in skeletonization, one of the first network pruning algorithms. The resulting" }, { "end": 1216.88, "start": 1210.64, "text": " metrics metric is also closely related to this right here. Now this you can see it's" }, { "end": 1222.92, "start": 1216.88, "text": " not exactly the same, but it's closely related to the one used in this snip baseline and" }, { "end": 1231.28, "start": 1222.92, "text": " also closely related to this thing right here used in grasp, where it's not just the gradient," }, { "end": 1240.92, "start": 1231.28, "text": " it's actually the gradient multiplied by the Hessian to account for curvature. 
Okay, so" }, { "end": 1247.28, "start": 1240.92, "text": " they're going to investigate this synaptic saliency in neural networks. They formulate" }, { "end": 1253.12, "start": 1247.28, "text": " two theorems right here about the conservation of synaptic saliency. Remember synaptic saliency" }, { "end": 1262.44, "start": 1253.12, "text": " is any score that respects this that is built like this, any score S. The conservation of" }, { "end": 1267.84, "start": 1262.44, "text": " synaptic saliency, all synaptic saliency metrics respect two surprising conservation laws that" }, { "end": 1274.16, "start": 1267.84, "text": " hold at any initialization and step in training. So these are not usually like in distribution" }, { "end": 1280.4, "start": 1274.16, "text": " or something like this with high probability. These things hold at any point in the neural" }, { "end": 1287.16, "start": 1280.4, "text": " network. First is the neuron wise conservation of synaptic saliency. For a feet forward neural" }, { "end": 1292.18, "start": 1287.16, "text": " network with homogeneous activation functions and a homogeneous activation function is an" }, { "end": 1298.52, "start": 1292.18, "text": " activation function that can be expressed like this, for example, relu's fall into that" }, { "end": 1306.16, "start": 1298.52, "text": " category, the sum of the synaptic saliency for the incoming parameters is to a hidden" }, { "end": 1311.56, "start": 1306.16, "text": " neuron is equal to the sum of the synaptic saliency for the outgoing parameters from" }, { "end": 1316.32, "start": 1311.56, "text": " the hidden neuron. So what does it mean is actually pretty simple. If you have a hidden" }, { "end": 1323.56, "start": 1316.32, "text": " neuron and you look at all the incoming weights and you look at their synaptic saliency, which" }, { "end": 1329.1599999999999, "start": 1323.56, "text": " is this S score of each of these weights, like what would the pruning algorithm assign" }, { "end": 1336.3999999999999, "start": 1329.1599999999999, "text": " to that? And you look at the outgoing ones, then the sum of all the incoming ones is going" }, { "end": 1344, "start": 1336.3999999999999, "text": " to be equal to the sum of all the outgoing ones. So that's pretty interesting. And they" }, { "end": 1353.04, "start": 1344, "text": " extend that to layer to the entire network. So an extension of that network wise conservation" }, { "end": 1357.96, "start": 1353.04, "text": " of synaptic saliency, the sum of the synaptic saliency across any set of parameters that" }, { "end": 1362.76, "start": 1357.96, "text": " exactly separates the input neurons from the output neurons of a feet forward neural network" }, { "end": 1369.6, "start": 1362.76, "text": " with homogeneous activation functions equals that. So it basically says it remains equal." }, { "end": 1373.48, "start": 1369.6, "text": " So what does it mean? What does it mean to exactly separate the input from the output?" }, { "end": 1377.36, "start": 1373.48, "text": " That's basically the definition of a layer in a neural network. So what they're saying" }, { "end": 1384.3999999999999, "start": 1377.36, "text": " is that you have a bunch of layers. 
And if you look at a particular layer like this one" }, { "end": 1392.32, "start": 1384.3999999999999, "text": " here, and you look at the incoming connections, and you sum up all of their synaptic saliency," }, { "end": 1398.3999999999999, "start": 1392.32, "text": " that's going to be equal to the sum of all the synaptic saliency of the outgoing connections" }, { "end": 1404.4399999999998, "start": 1398.3999999999999, "text": " of that layer. And it can also apply to like a group of layers and so on. But the synaptic" }, { "end": 1410.76, "start": 1404.44, "text": " saliency is conserved in that way. Now, why is that important? And here is where we make" }, { "end": 1419.88, "start": 1410.76, "text": " the connection with the layer. Was it later drop layer? Whatever. Okay, the fact that" }, { "end": 1426.1200000000001, "start": 1419.88, "text": " the fact that these algorithms tend to drop entire layers before they have to, if you" }, { "end": 1432.92, "start": 1426.1200000000001, "text": " have in your network layers that are of different sizes, so you have large layers, and then" }, { "end": 1439.1200000000001, "start": 1432.92, "text": " smaller layers and smaller layers, what will happen is that since the synaptic saliency" }, { "end": 1445.3200000000002, "start": 1439.1200000000001, "text": " is conserved, the sum is conserved, if you have more connections in one layer, so lots" }, { "end": 1451.24, "start": 1445.3200000000002, "text": " of connections, lots of connections, and in the small layers, you don't have as many connections," }, { "end": 1457.5600000000002, "start": 1451.24, "text": " the sum is equal. So that means each individual one here is much, much smaller. So the S is" }, { "end": 1463.48, "start": 1457.56, "text": " very small for each individual one here, and the S is very large in there. That means the" }, { "end": 1472.32, "start": 1463.48, "text": " pruning algorithm is going to really, really kill off these connections in the big layers." }, { "end": 1477.6399999999999, "start": 1472.32, "text": " And it's actually going to kill them off to a point where it probably is going to eliminate" }, { "end": 1484.4199999999998, "start": 1477.6399999999999, "text": " that layer before it even prunes many of the connections of the small layer, just because" }, { "end": 1494.8000000000002, "start": 1484.42, "text": " of that conservation fact. And they do experiments like this. I think there's an experiment up" }, { "end": 1506.8400000000001, "start": 1494.8000000000002, "text": " here where I like this one down here better, where they basically show that you have inverse" }, { "end": 1513.5600000000002, "start": 1506.8400000000001, "text": " layer size on the bottom, and you have the average score that the pruning algorithm assigns" }, { "end": 1523.24, "start": 1513.56, "text": " to any connection. Now, these, as we've seen, they're not exactly assigning the scores of" }, { "end": 1529.24, "start": 1523.24, "text": " this saliency, but they're very close to it. The Synflow algorithm does exactly assign" }, { "end": 1535.98, "start": 1529.24, "text": " the synaptic saliency as the score for the pruning. Now, we've basically seen that this" }, { "end": 1540.94, "start": 1535.98, "text": " leads to a bad result, but the synaptic flow is going to compensate for that. 
But in essence," }, { "end": 1547.52, "start": 1540.94, "text": " as you can see, as the layers get, so inverse layer size grows, which means that layer size" }, { "end": 1555.72, "start": 1547.52, "text": " shrinks as the layer size gets smaller, the average score of the connections in the layer" }, { "end": 1560.74, "start": 1555.72, "text": " is higher and higher, which basically means that the pruning algorithm, if you just let" }, { "end": 1566.6200000000001, "start": 1560.74, "text": " it go by itself, it's going to kill off the smaller, sorry, the larger layers first because" }, { "end": 1570.84, "start": 1566.62, "text": " they have the smaller scores. And you can see that even though the other algorithms" }, { "end": 1577.8, "start": 1570.84, "text": " don't conform exactly to that, they conform to this approximately. So these here, because" }, { "end": 1586.56, "start": 1577.8, "text": " their score is closely related to what the Synflow does, and the magnitude pruning, because" }, { "end": 1592.8799999999999, "start": 1586.56, "text": " mostly, because now I'm not sure if that's at the end of the training, at the beginning" }, { "end": 1601.96, "start": 1592.88, "text": " of the training, if you just initialize, then the score is going to be proportional to their" }, { "end": 1607.0800000000002, "start": 1601.96, "text": " magnitude and their magnitude is determined by the initialization scheme. And the initialization" }, { "end": 1614.5200000000002, "start": 1607.0800000000002, "text": " scheme is most of the time, like modern initialization schemes, compensate for the fact that you" }, { "end": 1619.88, "start": 1614.5200000000002, "text": " have different number of incoming and outgoing connections and therefore they would automatically" }, { "end": 1629.8000000000002, "start": 1619.88, "text": " assign a higher initialization constant to layers that have the lower number of parameters." }, { "end": 1636.92, "start": 1629.8000000000002, "text": " So even the magnitude pruning will conform to this. Now, it might be absolutely reasonable" }, { "end": 1641.4, "start": 1636.92, "text": " to say that that's also the case at the end of training because most parameters aren't" }, { "end": 1647.5600000000002, "start": 1641.4, "text": " going to move super much during training. So this still approximately holds, as you" }, { "end": 1654.52, "start": 1647.56, "text": " can see here. Of course, the random one doesn't do that. Yet, because you prune randomly," }, { "end": 1659.6399999999999, "start": 1654.52, "text": " you're still absolutely subject to this layer collapse. In fact, in the random one, the" }, { "end": 1669.76, "start": 1659.6399999999999, "text": " smallest layers would be the ones to go away first because it's just more probable. Okay." }, { "end": 1675.84, "start": 1669.76, "text": " So we've discovered that if you do something like saliency scoring or something that's" }, { "end": 1684.8799999999999, "start": 1675.84, "text": " correlated to it, then you're going to remove the biggest layers first. And that's a problem." }, { "end": 1691.6999999999998, "start": 1684.8799999999999, "text": " And that's what they say. This fact of this conservation laws and the single shot nature" }, { "end": 1697.08, "start": 1691.6999999999998, "text": " of these algorithms that they only assign scores once and then they prune away whatever" }, { "end": 1704.9599999999998, "start": 1697.08, "text": " the bottom such and such percent are leads to layer collapse. 
I think we've established" }, { "end": 1710.56, "start": 1704.96, "text": " this now that the combination of the two things leads to layer collapse. Now they make a little" }, { "end": 1716.8400000000001, "start": 1710.56, "text": " bit of an excursion and they say there is actually something that doesn't run into layer" }, { "end": 1725.4, "start": 1716.8400000000001, "text": " collapse. And that's iterative pruning algorithms. So specifically, they look at magnitude pruning." }, { "end": 1732.88, "start": 1725.4, "text": " They say magnitude pruning, which remember is also if you do it single shot, it also" }, { "end": 1738.0600000000002, "start": 1732.88, "text": " runs into layer collapse. Magnitude pruning avoids layer collapse with conservation and" }, { "end": 1746, "start": 1738.0600000000002, "text": " iteration. So because it iterates, it avoids that. And that's what these lottery ticket" }, { "end": 1752.3000000000002, "start": 1746, "text": " hypothesis paper does. It does it iteratively removes a couple of connections, then it retrains" }, { "end": 1758, "start": 1752.3000000000002, "text": " the network, basically recomputes the magnitudes and therefore recomputes the scores. And then" }, { "end": 1764.08, "start": 1758, "text": " it prunes again, and then it recomputes and and prunes again. And by recomputing, you" }, { "end": 1769.88, "start": 1764.08, "text": " can basically these some of the connections that weren't important before, but just survive" }, { "end": 1776, "start": 1769.88, "text": " the pruning, they can now be like, wait, I have now way more responsibility as a connection," }, { "end": 1783.76, "start": 1776, "text": " and they will shoot up in importance to avoid being pruned. So you can see if you push your" }, { "end": 1792.8799999999999, "start": 1783.76, "text": " network to a sorry to a high compression ratio, then if you just do this single shot pruning," }, { "end": 1799.92, "start": 1792.8799999999999, "text": " you run into this layer collapse at some compression ratio, you simply crash to random performance" }, { "end": 1808.32, "start": 1799.92, "text": " or zero performance. Yet, if you do multiple iterations, you can see here already two iterations," }, { "end": 1814.48, "start": 1808.32, "text": " then it's much longer before you run right here into layer collapse. And if you do three" }, { "end": 1820.36, "start": 1814.48, "text": " iterations, you do much more. Now this the three iterations doesn't mean you prune more" }, { "end": 1827.84, "start": 1820.36, "text": " like at this, at this point right here to tend to the one. All of these things prune" }, { "end": 1833.3999999999999, "start": 1827.84, "text": " nine out of every 10 connections. It's just the thing that has three iterations prunes" }, { "end": 1840.1200000000001, "start": 1833.4, "text": " maybe first three and then again, three and then again, three out of the 10. Whereas the" }, { "end": 1849.9, "start": 1840.1200000000001, "text": " one iteration would prune all of the nine at in one go. Okay. And they give a reason" }, { "end": 1858.64, "start": 1849.9, "text": " for this they give they say that it's the fact that gradient descent encourages conservation." 
}, { "end": 1863.96, "start": 1858.64, "text": " So they give a little toy example here they say to better understand the dynamics of the" }, { "end": 1873.96, "start": 1863.96, "text": " IMP algorithm during training, the smaller we will consider the a differentiable score," }, { "end": 1879.2800000000002, "start": 1873.96, "text": " this one. So this is not exactly magnitude pruning, but it is very close, right? The" }, { "end": 1884.64, "start": 1879.2800000000002, "text": " squared it's just the square of the parameter instead of the absolute value of the parameter." }, { "end": 1890.24, "start": 1884.64, "text": " They say it's algorithmically equivalent to magnitude score. Consider these scores throughout" }, { "end": 1896.24, "start": 1890.24, "text": " training with gradient descent on a loss function using an infinitesimal step. In this setting," }, { "end": 1900.3200000000002, "start": 1896.24, "text": " the temporal derivative of the parameters is equivalent to that. And thus the temporal" }, { "end": 1906.8000000000002, "start": 1900.3200000000002, "text": " derivative of the score is this. So now they're going to look at how does the score evolve" }, { "end": 1916.36, "start": 1906.8, "text": " when they train the network and the score evolves exactly as the negative to the saliency." }, { "end": 1924.1599999999999, "start": 1916.36, "text": " Surprisingly, this is a form of synaptic saliency. And thus the neuron wise and layer wise conservation" }, { "end": 1929.44, "start": 1924.1599999999999, "text": " laws from section four apply. In particular, this implies that for any two layers of a" }, { "end": 1937.16, "start": 1929.44, "text": " simple fully connected network, then this quantity holds. So this is not new. But what" }, { "end": 1944.4, "start": 1937.16, "text": " it basically says is that through training, these connections equalize the saliency again." }, { "end": 1953.76, "start": 1944.4, "text": " So if you have a very big layer, and here a very small layer, and because it's a big" }, { "end": 1959.92, "start": 1953.76, "text": " layer, these scores are very much lower, right? It's just little s and here it's big s per" }, { "end": 1966.76, "start": 1959.92, "text": " layer. But then if you prune away, and you run gradient descent on this, these scores" }, { "end": 1974.4, "start": 1966.76, "text": " will tend to become bigger. And in this case, these weights will tend to grow in magnitude." }, { "end": 1979.52, "start": 1974.4, "text": " Because you've pruned away the others, they now have more signal probably flowing to them" }, { "end": 1985.12, "start": 1979.52, "text": " and more gradient flowing to them. And therefore they're going to grow in size. And therefore," }, { "end": 1991.8, "start": 1985.12, "text": " their score is going to be bigger. So this gradient descent of this iterative procedure" }, { "end": 2006.44, "start": 1991.8, "text": " makes the scores better for that. So basically counteracts the layer collapse. So they put" }, { "end": 2015.52, "start": 2006.44, "text": " all of this together and say, theorem three, iterative positive conservative scoring achieves" }, { "end": 2022.28, "start": 2015.52, "text": " maximal critical compression. 
If a pruning algorithm with global masking, and global" }, { "end": 2029.64, "start": 2022.28, "text": " masking means that you rank all of the connections and then prune from all of the connections," }, { "end": 2035.04, "start": 2029.64, "text": " it's a difference to layer wise masking where you say I want to remove 90% of each layer," }, { "end": 2041.1599999999999, "start": 2035.04, "text": " which sounds like it would avoid layer collapse, but also it works a lot worse than the global" }, { "end": 2047.44, "start": 2041.1599999999999, "text": " one, the global strategy. Assigns positive scores that respect layer wise conservation." }, { "end": 2054.56, "start": 2047.44, "text": " And if the algorithm, so respecting layer wise conservation, it basically means your" }, { "end": 2062.48, "start": 2054.56, "text": " score should be, or if your score is a saliency score, then that's the case. And if the algorithm" }, { "end": 2068.48, "start": 2062.48, "text": " reevaluates the scores every time a parameter is pruned, then the algorithm satisfies the" }, { "end": 2076.56, "start": 2068.48, "text": " maximal critical compression axiom. Okay. So that's basically saying that if you have" }, { "end": 2083.96, "start": 2076.56, "text": " any algorithm that prunes with a saliency score, like theirs is going to do, is going" }, { "end": 2094.16, "start": 2083.96, "text": " to be able to be pushed to the limit until the maximal capacity is reached if you reevaluate" }, { "end": 2100.64, "start": 2094.16, "text": " the scores every time a parameter is pruned. So this is basically saying that whatever" }, { "end": 2108.1, "start": 2100.64, "text": " the lottery ticket hypothesis paper did with magnitude pruning, if you do it with saliency" }, { "end": 2116.16, "start": 2108.1, "text": " based pruning, you're guaranteed to achieve the maximum possible compression if you push" }, { "end": 2126.2799999999997, "start": 2116.16, "text": " it. But of course we know that whatever the lottery ticket hypothesis paper did is impractical" }, { "end": 2131.6, "start": 2126.2799999999997, "text": " because it needs to retrain the network every single time it wants to prune. Right? So if" }, { "end": 2135.04, "start": 2131.6, "text": " you want to do this after every parameter, that's going to be a long time. It's going" }, { "end": 2142.2799999999997, "start": 2135.04, "text": " to be impractical. We ideally want to prune the network before we even look at any data." }, { "end": 2150.2, "start": 2142.2799999999997, "text": " And they're going to do exactly that with the SYNFLOW algorithm. They say theorem three" }, { "end": 2155.24, "start": 2150.2, "text": " directly motivates the design of our novel pruning algorithm. SYNFLOW that provably reaches" }, { "end": 2164.52, "start": 2155.24, "text": " maximal critical compression. First, the necessity for iterative" }, { "end": 2172.16, "start": 2164.52, "text": " score evaluation discourages algorithms that involve back propagation on batches of data" }, { "end": 2176.68, "start": 2172.16, "text": " and instead motivates the development of an efficient data independent scoring procedure." }, { "end": 2184.88, "start": 2176.68, "text": " Second, positivity and conservation probably motivates the construction of a loss function" }, { "end": 2190.92, "start": 2184.88, "text": " that yields positive synaptic saliency scores. 
We combine these insights and introduce a" }, { "end": 2197.8, "start": 2190.92, "text": " new loss function where the one is the all one vectors. Okay, so this is the loss function" }, { "end": 2204.92, "start": 2197.8, "text": " of their saliency scores. And this might seem like... So what do we have? We have the parameters" }, { "end": 2211.28, "start": 2204.92, "text": " of layer L, the absolute product, sorry, the absolute value of those parameters, and then" }, { "end": 2218.12, "start": 2211.28, "text": " you simply multiply all of the layers together. And you have this product here with the ones" }, { "end": 2226.6, "start": 2218.12, "text": " on the side. So this is a quadratic form, sort of. Okay, this might seem a bit weird," }, { "end": 2233.7599999999998, "start": 2226.6, "text": " but in practice, and this is also what happens in their code, you can do something pretty" }, { "end": 2239.8199999999997, "start": 2233.7599999999998, "text": " easy. So first, you have to transform all your weights to their absolute values. Now" }, { "end": 2245.4, "start": 2239.8199999999997, "text": " in their code, you can look at it, they do remember the signs for later. So but first," }, { "end": 2251.6800000000003, "start": 2245.4, "text": " you convert all of them to their absolute values. Then second, you simply take a data" }, { "end": 2257.84, "start": 2251.6800000000003, "text": " point that is filled with ones that literally the number one. So if your if your input is" }, { "end": 2265.32, "start": 2257.84, "text": " an image, you just put a one at each pixel, you feed it through the network with all of" }, { "end": 2271.64, "start": 2265.32, "text": " these positive weights, and you get out some output, you get some output vector, okay," }, { "end": 2276.96, "start": 2271.64, "text": " then you simply you need to do this inner product with the one vector, which is simply" }, { "end": 2282.16, "start": 2276.96, "text": " a sum, right? I don't I don't get why they it's a bit of a funky way of writing a sum," }, { "end": 2289.2, "start": 2282.16, "text": " right? You simply sum that up to get a to get a single number. And this single number" }, { "end": 2295.3199999999997, "start": 2289.2, "text": " now is your is your pseudo loss function. It's simply the loss function that an all" }, { "end": 2303.92, "start": 2295.32, "text": " one data point gets when the when the loss function is just the sum of the outputs. That's" }, { "end": 2310.32, "start": 2303.92, "text": " that's it. That's it. And then you back propagate that loss to you back propagate that loss" }, { "end": 2316.0800000000004, "start": 2310.32, "text": " to the layers. Right? So this is our remember this is not the score itself, but our score" }, { "end": 2324.8, "start": 2316.0800000000004, "text": " is going to be the derivative of our with respect to a weight times that weight. Okay," }, { "end": 2332.1200000000003, "start": 2324.8, "text": " so you want to back propagate, and then you multiply each of these weights by the back" }, { "end": 2339.28, "start": 2332.1200000000003, "text": " propagated signal. And that's going to be your score for each parameter. Now, this doesn't" }, { "end": 2343.4, "start": 2339.28, "text": " seem too hard, right? You just need you don't even need a batch, you need a single data" }, { "end": 2350.92, "start": 2343.4, "text": " point, one back propagation, and then you get your scores. 
Okay, you don't need expensive" }, { "end": 2361.7200000000003, "start": 2350.92, "text": " training or anything like this. This seems pretty cool. And they give an example here." }, { "end": 2369.56, "start": 2361.7200000000003, "text": " For example, for a simple, come on, for a simple fully connected network, ie this, so" }, { "end": 2374.96, "start": 2369.56, "text": " they consider here a linear network, right, just so we can look at exactly what happens" }, { "end": 2378.8, "start": 2374.96, "text": " for linear networks, you can often compute quantities exactly. So if we look at just" }, { "end": 2384.52, "start": 2378.8, "text": " a linear network without nonlinearities, we can factor the synaptic flow score for any" }, { "end": 2391.52, "start": 2384.52, "text": " parameter as such. So the score, this is now not the the R, this is going to be the score" }, { "end": 2397.28, "start": 2391.52, "text": " is going to be this thing right here. So you can see that the parameter is multiplied by" }, { "end": 2404.04, "start": 2397.28, "text": " this thing, and by this thing. And other than for example, magnitude pruning, this actually" }, { "end": 2411.24, "start": 2404.04, "text": " takes into account all the input flow because it goes from this one, sorry, it goes from" }, { "end": 2417.32, "start": 2411.24, "text": " this goes from this one, it goes through all the network, right, every path that arrives" }, { "end": 2423.4, "start": 2417.32, "text": " at this particular weight is going to be considered. And every path that goes out from this particular" }, { "end": 2429.88, "start": 2423.4, "text": " weight is going to be considered. And the saliency score is going to depend on all of" }, { "end": 2436.36, "start": 2429.88, "text": " these paths, all of these all of the information flow from input to output that goes through" }, { "end": 2445.2400000000002, "start": 2436.36, "text": " that weight. And if you do this, then you get a really good pruning algorithm. So yeah," }, { "end": 2450.6400000000003, "start": 2445.2400000000002, "text": " the algorithm is is I've already described it. And in their experiments, as you can see" }, { "end": 2456.76, "start": 2450.6400000000003, "text": " right now, they have a bunch of networks, these VGG networks, or like wide resnet, they" }, { "end": 2462, "start": 2456.76, "text": " have a bunch of data sets like tiny image net or C for 10, where they experiment with" }, { "end": 2467.4, "start": 2462, "text": " these different baselines. And you can see that the baselines often run into this layer" }, { "end": 2473.96, "start": 2467.4, "text": " collapse problem. Sorry, often run into this where all of a sudden, let's actually look" }, { "end": 2481.84, "start": 2473.96, "text": " at let's look at this resonant 18. Right here. Maybe you can find a connection between maybe" }, { "end": 2486.2000000000003, "start": 2481.84, "text": " there's differently sized layers in resonant 18. And that's why the collapse happens even" }, { "end": 2490.64, "start": 2486.2, "text": " earlier. But you can see right here, there's a collapse if you do magnitude pruning, even" }, { "end": 2495.3199999999997, "start": 2490.64, "text": " also if you do random pruning, it falls down pretty hard after a while, the baselines they" }, { "end": 2501.52, "start": 2495.3199999999997, "text": " hold up better. But you can see in different models and different data sets, that the baselines" }, { "end": 2508.52, "start": 2501.52, "text": " crash at some point as well. 
Now I've already said the comparison here, it seems a little" }, { "end": 2515.16, "start": 2508.52, "text": " bit unfair. I might I might have over read something, but I'm pretty sure that the baselines" }, { "end": 2522.64, "start": 2515.16, "text": " remain single shot, while the sin flow algorithm here is now of course, no longer single shot," }, { "end": 2528.2799999999997, "start": 2522.64, "text": " it's actually multi shot, and they've made the exact argument that the single shot is" }, { "end": 2536, "start": 2528.2799999999997, "text": " the problem. And therefore their algorithm is multi multi shot. And it it seems like" }, { "end": 2541.7599999999998, "start": 2536, "text": " they should give the other algorithms the opportunity to also do multi shot, just to" }, { "end": 2548.96, "start": 2541.76, "text": " compare them fairly. Maybe, as I said, maybe they're doing that, but I'm, I haven't read" }, { "end": 2556.36, "start": 2548.96, "text": " any anything. So it, you know, it just seems like the comparison is a bit unfair. If you" }, { "end": 2561.36, "start": 2556.36, "text": " identify the problem, and then just leave the other algorithms with the problem, sin" }, { "end": 2569.48, "start": 2561.36, "text": " flow is still different from these other algorithms, even if they had the multiple steps. Now," }, { "end": 2573.64, "start": 2569.48, "text": " the counter argument to this, of course, is that these other algorithms all require the" }, { "end": 2578.64, "start": 2573.64, "text": " training data, they require actually passing the data or training the network in the case" }, { "end": 2583.4, "start": 2578.64, "text": " of magnitude pruning and so on. So that's pretty expensive, whereas sin flow, you simply" }, { "end": 2590.32, "start": 2583.4, "text": " pass forward one data point, and that's it. That's a good argument. But it seems like" }, { "end": 2598.52, "start": 2590.32, "text": " the effect of the synaptic saliency scores, and the effect of the multiple steps aren't" }, { "end": 2606, "start": 2598.52, "text": " really disentangled in these experiments right here, it simply shows that it consistently" }, { "end": 2610.52, "start": 2606, "text": " outperforms other pruning methods. And what what I'd like to see is really where that" }, { "end": 2619.64, "start": 2610.52, "text": " outperforming comes from. Okay, so that's what I think of this. And that was the paper," }, { "end": 2627.44, "start": 2619.64, "text": " basically, I'm even even if I am not convinced quite yet. This is pretty cool, right? And" }, { "end": 2634.84, "start": 2627.44, "text": " I think this will, if not be if it's not used itself, it will inspire kind of a line of" }, { "end": 2642.16, "start": 2634.84, "text": " work into pruning at the beginning of training without looking at data. And maybe, you know," }, { "end": 2649.48, "start": 2642.16, "text": " maybe we can even think of building networks, like, instead of just pruning them, we can" }, { "end": 2656.96, "start": 2649.48, "text": " think of constructively building networks that observe these properties. And therefore," }, { "end": 2663.2400000000002, "start": 2656.96, "text": " we can just construct initialized networks already with good properties such that we" }, { "end": 2667.16, "start": 2663.2400000000002, "text": " don't even have to go to a bigger network and then prune it down. It seems wasteful." 
}, { "end": 2672.2400000000002, "start": 2667.16, "text": " It seems like we should just be able to derive principles of what we want in the how the" }, { "end": 2677.88, "start": 2672.2400000000002, "text": " weights are structured, and then construct networks that are according to that. And I" }, { "end": 2683.7200000000003, "start": 2677.88, "text": " guess that's what's going to happen in a few papers that are coming. Alright, again, if" }, { "end": 2689.3599999999997, "start": 2683.72, "text": " you like this video, consider subscribing, giving it a like commenting, and let me know" }, { "end": 2716.7200000000003, "start": 2689.36, "text": " what you think. And until next time, bye bye." } ]
l12GXD0t_RE
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Deep Differential System Stability - Learning advanced computations from examples (Paper Explained)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "math", "derivative", "ode", "pde", "solution", "integral", "gradient", "jacobian", "mathematics", "language model", "transformer", "symbolic", "numeric", "stability", "equilibrium", "attention", "tokens", "dataset", "abstract" ]
Determining the stability properties of differential systems is a challenging task that involves very advanced symbolic and numeric mathematical manipulations. This paper shows that given enough training data, a simple language model with no underlying knowledge of mathematics can learn to solve these problems with remarkably high accuracy. OUTLINE: 0:00 - Intro & Overview 3:15 - Differential System Tasks 11:30 - Datasets & Models 15:15 - Experiments 21:00 - Discussion & My Comments Paper: https://arxiv.org/abs/2006.06462 My Video on Deep Learning for Symbolic Mathematics: https://youtu.be/p3sAF3gVMMA Abstract: Can advanced mathematical computations be learned from examples? Using transformers over large generated datasets, we train models to learn properties of differential systems, such as local stability, behavior at infinity and controllability. We achieve near perfect estimates of qualitative characteristics of the systems, and good approximations of numerical quantities, demonstrating that neural networks can learn advanced theorems and complex computations without built-in mathematical knowledge. Authors: François Charton, Amaury Hayat, Guillaume Lample Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://discord.gg/4H8xxDF BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Hi, here's a question for the whiz kids among you. Is this system here controllable at a point xe with asymptotic control ue? I'll give you 10 seconds. Okay, 10 seconds are over. So to solve this, it's actually pretty easy. All you need to do is first differentiate the system with respect to its internal variables, which are the x's, to obtain the Jacobian A. Second step, differentiate the system with respect to its control variables, which are the u variables, to obtain the matrix B. Look, this is a zero, like this is not hard. Then evaluate A and B at the point that you want and the control point that you want. Pretty easy. Calculate the controllability matrix. Come on, that's nothing. Another zero. And at the end, you calculate the rank of the controllability matrix. Now, if n minus d is zero, this system is controllable. And optionally, if you feel like it, you can output the control feedback matrix as in equation three, which gives you this here. Now, what's equation three? Equation three is super duper simple. It's just this little sort of integral thing, inverse matrix, trace, transposed, exponential function, outer product thing. Come on, what's the matter with you? Okay, so if you found you can't do this just on the spot, then you are in the same category as most people. But interestingly, apparently, according to this paper, a deep learning system can. So today we're going to look at Deep Differential System Stability - Learning Advanced Computations from Examples, by François Charton, Amaury Hayat, and Guillaume Lample of Facebook AI Research, École des Ponts ParisTech, and Rutgers University. So in this paper, the authors basically propose that you can learn these complex mathematical computations with a model that has no clue about mathematics. In fact, it is a language model. And it can output the solutions, for example whether or not a system is controllable, which are sort of binary solutions, but it can also output actual solutions, as in numbers, or as in the matrices that you would need to obtain from these problems. So that's pretty cool. And it is built upon this other paper called Deep Learning for Symbolic Mathematics. I have made a video on it if you search for it, and I'll also link it in the description. And you can go check that out, because that's sort of the basis. So in this previous paper, which was partially from the same authors, they investigated whether language models can integrate functions. So you have some sort of function, you're trying to find the integral, and they've tried to do that. Now they go a lot further. So they look at these differential systems, which are characterized by differential equations. If you've never seen differential equations, it's basically an equation where the derivative of some variable is characterized by the variable itself. So the gradient, if you will, the derivative with respect to some input variable, which in physical systems is most often time, is a function of that variable itself, and partially also of other variables. So you can have systems of differential equations that all depend on each other. And there are a number of questions about these systems. These are very relevant in physics, engineering, control theory, and so on. So they investigate different problems that you can solve with these.
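To make "differential system" concrete before diving into the tasks, here is a minimal sketch with a made-up two-variable system that is not from the paper: the state evolves according to the equations, and near a locally stable equilibrium the trajectory decays back to it.

```python
# A minimal sketch (toy system, not from the paper): a two-variable
# differential system dx1/dt = sin(x2) - x1, dx2/dt = x1**2 - 2*x2.
# The origin is an equilibrium; nearby trajectories decay back to it.
import numpy as np
from scipy.integrate import solve_ivp

def f(t, x):
    x1, x2 = x
    return [np.sin(x2) - x1, x1**2 - 2.0 * x2]

sol = solve_ivp(f, t_span=(0.0, 10.0), y0=[0.5, -0.3])
print(sol.y[:, -1])  # both components end up near 0: a locally stable point
```

That decay-back behavior is exactly what the local stability task below asks the model to quantify.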
They investigate specifically problems where we already know the solutions, but the solutions require very complex mathematical manipulation, such as, as you've seen: calculate the integral of something, take the trace, calculate the rank, invert some matrix. All of these mathematical steps are required to solve these problems if we were to teach them to math students or engineering students. And this paper basically says: if we just input the problem into a big language model and ask it to output the solution, it can do it. So it basically can learn to do all of these things. That's pretty surprising, because you don't program it to do any math. So the first problem they look at is this local stability problem. Now, I don't really want to go into much of the actual mathematical problems, but we'll look at the first one to just give you an idea of what these sorts of problems are. So if xe is an equilibrium point, it means that all solutions converge to xe when their initial positions are close enough; the equilibrium point is then said to be locally stable. Okay. This problem is well known if f is differentiable in xe, and the answer is provided by the spectral mapping theorem. So in that case, you'd have a system, maybe we can draw one, where you'd try to find local points of stability. So this point here would be maybe a local point of stability, if you consider this as sort of an optimization landscape, because if you go a little bit away from it, you're always sort of pushed back. If the system is, say, gravity, so if these differential equations describe that the height here determines the force with which you're pulled down, then this thing here would be a local point of stability. The question is: if I give you a system that's described by these differential equations, can you tell me whether or not it is stable at some point? Okay. And there is a spectral mapping theorem, which says: take the Jacobian matrix of f at this point, the matrix of its partial derivatives relative to its variables, and take lambda to be the largest real part of its complex eigenvalues. If lambda is positive, then this is an unstable equilibrium. If lambda is negative, then it's a locally stable equilibrium. An unstable equilibrium would likewise be the point here on top, which means that if you're exactly at the point, then you stay there, but as soon as you're a little bit off, you drift away from it. That would be an unstable equilibrium. Okay. So there are complex steps involved in deriving this solution. And they list them out here just to show you how complex this is. This is not meant to teach you; you don't have to understand or be able to apply this. This is simply meant to tell you how complicated it is to arrive at a solution. So first you need to differentiate each function with respect to each variable and obtain the formal Jacobian. They do this here for this example system, which is this system right here: a system of two differential equations in two variables. Okay. So if you derive the Jacobian, that will give you a four-entry Jacobian, where each entry is one of the equations derived with respect to one of the variables. You can do that, right? But it requires fairly complex mathematical knowledge.
Like knowing that the derivative of the sine here is the cosine, and knowing that this cosine doesn't matter for this particular entry, because it's in x2 and here we derive by x1. So that's already very challenging. Second, you need to evaluate the Jacobian at that point. So first you've done it symbolically; now you actually need to put in the numbers at the point you're interested in, which will give you this thing right here, which is a numerical matrix, whereas this was a symbolic matrix. Then you need to calculate the eigenvalues. You have several methods of computing the largest eigenvalue: you could do the power method, you could do a decomposition, there are numerous ways, but none of these is particularly easy, right? And then lastly, you need to return the minus max of the real part, which is the speed of convergence of the system. So not only do you need to be able to say whether it is locally stable or locally unstable, which you read off the sign of this quantity, you also need to output the speed itself. And since this is larger than zero, it's locally stable, and this would be the decay rate, 0.441. This is what you're asking this model to output, right? Now we'll quickly go over the other things, but not as in depth. This is in control theory, where you have almost the same thing, you have a differential equation. But now, in addition to these variables, you have these control variables, which you have power over, and you're trying to decide: can I control the system with an appropriate function? And that's kind of the problem that we had at the beginning. So what you need to do is, again: differentiate the system with respect to its internal variables, differentiate the system with respect to its control variables, evaluate A and B, calculate the controllability matrix with one of the functions above, calculate the rank, and optionally evaluate this equation number three that we saw before. Now, the last task is equally complicated. It relates to the stability of partial differential equations using the Fourier transform. And again, to obtain this, it is an intricate five-step process that is a mix of complex symbolic manipulation and numerical evaluation of those symbolic things. And here you need to simply output two bits: one bit says whether there exists a solution, and the other bit says whether it vanishes as t goes to infinity. So what are they expecting here? What they're doing is they're going to build a data set that is composed of these things. And I think they do it one by one: they take one of the tasks and build a giant data set for it. Since these problems all have solutions, you can build a data set with labels, because you can actually build software that does these steps, because you program mathematical knowledge into the software, but it's custom-made for that particular problem. They're not trying to beat that program; they're simply trying to investigate: can a language model that knows nothing of math do this, simply by learning from data? So they're going to build a data set.
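As a rough sketch of what such a label-generating program might look like (my own reconstruction with sympy and numpy on a hypothetical system, not the authors' code), the local stability pipeline and the Kalman rank test for controllability are each only a few lines:

```python
# Ground-truth sketch (my reconstruction, hypothetical system, not the
# authors' code). Local stability: symbolic Jacobian -> numeric evaluation
# -> eigenvalues -> minus the largest real part is the decay rate.
import numpy as np
import sympy as sp

x1, x2 = sp.symbols("x1 x2")
system = sp.Matrix([sp.sin(x2) - x1, x1**2 - 2 * x2])   # equilibrium at (0, 0)
J = system.jacobian([x1, x2])                           # step 1: formal Jacobian
J_num = np.array(J.subs({x1: 0, x2: 0}).tolist(), dtype=float)  # step 2: evaluate
lam = max(np.linalg.eigvals(J_num).real)                # step 3: largest real part
print(lam < 0, -lam)                                    # step 4: stable, decay rate

# Controllability label via the Kalman rank condition for dx/dt = A x + B u:
A = np.array([[-1.0, 1.0], [0.0, -2.0]])
B = np.array([[0.0], [1.0]])
C = np.hstack([B, A @ B])                               # [B, AB] for n = 2
print(np.linalg.matrix_rank(C) == A.shape[0])           # controllable iff full rank
```

The point of the paper is that the transformer has to reproduce the outputs of such a pipeline without ever being shown these intermediate steps. So, on to how they actually build that data set.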
And they do it in the same way as this previous paper, which says: we generate random functions by sampling unary-binary trees and randomly selecting operators, variables and integers for their internal nodes and leaves. So they use any combination of plus, minus, times, divide, exp, log, and so on. All of these functions can appear in these things. And they basically build a tree where they say, okay, we go from plus, and then here is five, and then here maybe is minus, sine of x, and over here exp of y. So that would be five plus sine of x minus exp of y. So they build trees like this by randomly sampling. And then they simply feed this to their mathematical program to obtain a solution y, and that's now a training data point: this here is x, and here you have y. And they generate a giant bunch of these things and feed them into the model. They use seq2seq models, and I think they just use transformers; so they use standard transformers, I believe. Yes: in all experiments, we use the transformer architecture with eight attention heads, we train our models with the Adam optimizer, learning rate, blah, blah, blah, we vary the dimension and the number of layers. So that's going to be interesting to see: how does the size of this language model influence how well it can solve these things? As you can see here, they build these data sets for local stability; they include systems with two to six equations, which is already fairly complicated and would take a human quite a while to do. They say: we generate a data set with over 50 million systems. So it's a pretty dense sampling of this space, and I feel this is one of the important components here. They do a train-test split, and they make sure that none of their test examples is in the training data set, though they claim that they never actually have to remove anything; they just check, and the search space of these trees is so large that it never, or almost never, happens that a test sample is in the training data set. Alright, so they generate these things, and here are the results. For local stability, the model is trained to predict this lambda that we saw at the beginning, the largest real part of the eigenvalues of the Jacobian, corresponding to the convergence speed to the equilibrium. They consider that predictions are correct when they fall within 10% of the ground truth. And here you can see that their best model achieves 96% if the degree of the system is two, so if it's two equations. So 96% of the time, the predicted convergence speed is within 10% of the true convergence speed. That's fairly crazy, right? That's pretty good. And here is the exact prediction of local convergence speed to a given precision, so how many digits of that convergence speed actually match. So it's not only the within-10% criterion; they also measure how many digits match. And here you can see that as you up the degree of the system, the performance drops off, less and less and less. And also as you lower the number of layers in your model, or lower the dimensionality, the performance drops quite significantly. So that means the language model sort of is doing real work here.
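As an aside, both accuracy notions in this results discussion are simple to state in code; here is a small sketch of my reading of them (the paper's exact digit-counting convention may differ):

```python
# Sketch of the two accuracy criteria as I read them (the paper's exact
# digit-counting convention may differ): 10% relative tolerance, and the
# number of agreeing significant digits.
def within_tolerance(pred: float, true: float, rel_tol: float = 0.1) -> bool:
    return abs(pred - true) <= rel_tol * abs(true)

def matching_digits(pred: float, true: float, max_digits: int = 10) -> int:
    n = 0  # grow precision until the rounded representations disagree
    while n < max_digits and f"{pred:.{n}e}" == f"{true:.{n}e}":
        n += 1
    return n

print(within_tolerance(0.47, 0.441))   # True: inside the 10% band
print(matching_digits(0.441, 0.4418))  # 2: the first two digits agree
```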
And here you also see that if it's degree two, the convergence speed often matches to two, three or four digits. But as you increase the degree, this accuracy drops off fairly quickly. All right, let's keep that in mind. So the surprising thing is that it actually works, and it works in a surprisingly large fraction of cases. Now, you could bicker about this 10% and how hard or how easy this is, and so on. But it is a fairly complicated problem, and to be within 10% of the solution seems quite remarkable. And the same things happen with the other tasks. So here they predict controllability in the autonomous case. In the control theory task, they predict these two things: whether it's controllable, and then they output this K matrix that we saw before. So here you can see that if they have high enough dimensions and high enough layers, with sample systems with three to six equations, they again achieve 97% accuracy in predicting whether the system is controllable or not. Now remember, this is a binary prediction, but still, it requires a good understanding of math for a human to solve this. Again, we see this drop-off with dimension and layers, but this number here is pretty good compared to the 50% you'd have from random guessing. Also interesting is when they look at: is the feedback matrix correct? So for this matrix that you optionally have to output, they find that if they analyze whether or not it's within 10% of the true one, the accuracy drops pretty quickly as they up the degree. Of course, the matrix, as I understand it, has more entries at degree six than at degree three, so maybe that's understandable, but it drops off pretty quickly. But then there is what they call a correct feedback matrix: from the entries of the feedback matrix, you can read out whether the system is controllable or not, depending, I believe, on whether all of the eigenvalues, or all of the values, are negative or positive. So basically, by saying whether or not these things are positive or negative, you can read out the controllability. And if they check whether that property holds, then that works fairly well. So they argue that this shows that the model doesn't predict exactly the matrix they want, but the matrix that it predicts has the appropriate properties to solve this other task right here. Okay, that's experiment two. Experiment three, as you can imagine, is quite similar, in that they investigate these partial differential equations. Using the Fourier transform, the model is given a differential operator and an initial condition, and it is trained to predict if a solution exists and, if so, whether it converges to zero as t goes to infinity; the dimensions are between two and six. So random guessing here would be, I guess, 25%, because it's two bits you need to output. And this model performs extremely well, even up to dimension six. There is a drop-off with dimension, but it still performs very, very well. Now, they go into the discussion a bit, and this is the part of the paper that interests me: how do you interpret these results? Apparently, you give these mathematical things to a language model that has no clue about math, and just by looking at examples, it learns to produce correct solutions. And if you wanted to teach that to a human, the human would have to go through all these steps, right?
So something is happening here, and we'll want to find out what. In the discussion, they try to explain a bit why they think this happens. They say: we studied five problems of advanced mathematics from widely researched areas. In three of them, we predict qualitative and theoretical features; in two, we perform numerical computations. According to mathematical theory, solving these problems requires a combination of advanced techniques, symbolic and numerical, that seem unlikely to be learnable from examples. Yet our model achieves more than 95% accuracy on all qualitative tasks, and between 65 and 85% on numerical computations. Such high performance over difficult mathematical tasks may come as a surprise. One way to generate a data set of problems with their solutions consists in sampling the solution first and deriving an associated problem. For instance, pairs of functions with their integrals can be generated by sampling random functions and differentiating them. So here they hedge against a criticism, and this was mainly a criticism of their other paper, which they already addressed there: if you want to create a data set where you have the function and the label is the integral of the function, then there is a problem, because there is no general method to integrate functions. I mean, you can do it numerically, but there is no general symbolic method to integrate an arbitrary function. And that's why, if you want to produce a data set, you start with the integral already, and then you differentiate that to get the input, and then you know that if you integrate this, you should get back your original function. But this biases the data set, because the sampling is now not over these input functions, but over these solution functions, and that might lead to this distribution here being biased. So they hedge against that, which I don't care about, because clearly in this paper, as they say: data sets for all considered tasks are generated using a forward approach by directly sampling; as a result, potential biases caused by a backward generative model do not apply here. So they hedge against this argument that they could have a biased data set, which I don't think anyone reading this paper would level against them. So here they basically say how good they are and how surprising this is, and that all of this requires math; this part is irrelevant, because it hedges against an argument that I don't think is reasonable against the paper. And then the last thing in their discussion: an objection traditionally raised is that the model might memorize a very large number of cases and interpolate between them, which I think we know happens often in language models. Right? Oh, by the way, have I shown you how they encode this into the language model? I have not. This is, I guess, the craziest part. They don't even put the numbers there. Wait, wait, they don't even put the numbers there. As I understand it, they actually put the string tokens here. So they put the string tokens of the math, and even a composite number like 142 they would encode as a token saying here comes an integer, and then the token one, the token four, and the token two. Okay. And the decimal representation is the sequence: float, three, dot, one, four, e, negative one.
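A small sketch of that number encoding as I understand it from the description; the marker token names here are my invention, not necessarily the paper's exact vocabulary:

```python
# Number encoding sketch (marker token names are my invention): integers
# become a sign marker plus digit tokens, floats become mantissa digit
# tokens plus an exponent token, so 0.314 reads as 3.14 times 10^-1.
def encode_int(n: int) -> list[str]:
    sign = "INT+" if n >= 0 else "INT-"
    return [sign] + list(str(abs(n)))

def encode_float(x: float) -> list[str]:
    mantissa, exponent = f"{x:e}".split("e")        # "3.140000", "-01"
    mantissa = mantissa.rstrip("0").rstrip(".")
    return ["FLOAT"] + list(mantissa) + ["E", str(int(exponent))]

print(encode_int(142))      # ['INT+', '1', '4', '2']
print(encode_float(0.314))  # ['FLOAT', '3', '.', '1', '4', 'E', '-1']
```

Every digit is its own token, which makes the model's numerical accuracy all the more surprising.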
So this is really just a string; the model would even have to learn the decimal representation of numbers to get that this four here is not just a different token than the two, but stands for something 20 times larger, because it sits one position in front of the two. So it's not two times larger, as four is to two; because the four is one digit ahead of the two, it stands for 20 times as much. And the one here actually stands for 50 times as much as the two. So it seems like a quite inconvenient way to input data into the model, and yet the model is super accurate. And we already know that what these language models tend to do is memorize the training data, or abstract it in a way that they can sort of interpolate between fuzzy versions of the training data. Here they say this is unlikely: first, because the size of our problem space is too large to be memorized; for all considered problems, we did not get a single duplicate over 50 million generated examples. Second, because in some of our problems, such as non-autonomous control, even a model with one layer and 64 dimensions obtains a high accuracy, and such a small model would never be able to memorize that many examples. Which is true, right? This is a fair defense against "you're just interpolating training data". But I think the broader scope of this criticism would be something like: your model is just learning the pattern regularities of the textual data that you feed in. It's not actually learning math; it's just learning, okay, there is like a cosine, and if there's a cosine here followed by an exponential function, that often leads to a very low value of this lambda, right? And then if it comes across a very similar thing in the test sample, even though it's not exactly the same thing, it will map it to a similar place in the label space. I mean, this is literally machine learning; this is literally regression. But the broader scope of this criticism is that what your model might be doing might simply be a very simple regression on these tokens, or on these context-dependent tokens, rather than internal mathematical reasoning. And while it is true that it's probably not memorizing any examples, and while it is also true that they did not get a single exact duplicate, what would be interesting to know is how many approximate duplicates there are. So: can you basically solve the problem with a nearest-neighbor approach over their training data set? That would be my question, because if you can, you basically don't need the mathematical knowledge. They say third, because for some of our problems, we know from mathematical theory that solutions, e.g. the real value of eigenvalues, cannot be obtained by simple interpolation. And that is also a valid defense, but I think the argument goes further than just simple interpolation. What we mean by interpolation is not that we interpolate the real values of the eigenvalues; what we mean is interpolation in the regression space of these tokens. Like, we know that if we go from a sine to a cosine, maybe the sign of the output flips at the end. That's what we mean by interpolation.
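The nearest-neighbor probe suggested above could be as simple as the following sketch; the training pairs here are invented for illustration, and a real probe would run over the authors' 50 million examples:

```python
# Minimal nearest-neighbor probe (illustrative data, not the real data set):
# answer each test problem with the label of the most string-similar
# training problem. A strong score here would support the pattern-matching
# reading over the mathematical-reasoning reading.
from difflib import SequenceMatcher

def knn_predict(test_tokens, train_set):
    # pick the training example whose token string is most similar
    best = max(train_set,
               key=lambda ex: SequenceMatcher(None, test_tokens, ex[0]).ratio())
    return best[1]  # return that example's stored label

train = [("sin ( x2 ) - x1 , x1 ^ 2 - 2 * x2", 1.0),    # made-up labels
         ("cos ( x2 ) - 3 * x1 , x1 ^ 2 - x2", 0.127)]
print(knn_predict("sin ( x2 ) - x1 , x1 ^ 2 - 3 * x2", train))  # -> 1.0
```

With that probe in mind, here is concretely what interpolation in token space would look like.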
Like, when we see two equations that are very similar, say x squared plus 4x minus the sine of x, and then x squared plus 5x minus twice the sine of x, what we mean by interpolation is that when we now get a test example that says x squared plus, let's say, 4x minus 3 times the sine of x, we would interpolate between these things. I'm making a bad example right here; maybe I should have gone with x squared here and x cubed there. I know these things aren't exactly equal, but this one in the middle would be sort of an interpolation in token space. If you train the language model, it will recognize: maybe I can interpolate whenever the coefficient is just different, or I can interpolate when there's just an additional log x here that doesn't really change anything, but I might not be able to interpolate when the exponent is different. So the training data set teaches the model where it can interpolate and where it can't. Now again, it's not able to remember the training data, but it will be able to abstract it, store it fuzzily, and extract the patterns from it, which is good; that's machine learning. But there's no evidence here that this does any mathematical reasoning. So up until now, all that has built up is, if you read the abstract: can advanced mathematical computations be learned from examples? Neural networks can learn advanced theorems and complex computations without built-in mathematical knowledge. The whole story here, all of this showing of, hey, look at what steps are required to solve these problems, and even this discussion, basically says: hey, you need complex mathematical reasoning to arrive at the solutions. And then in the conclusion, they say: it seems that our models have learned to solve these problems, but that does not mean they learned the techniques we use to compute their solutions. Problems such as non-autonomous control involve long and complex chains of operations, yet even small models, meaning one-layer transformers with 64 dimensions, achieve high accuracy. Most probably, our models learn shortcuts that allow them to solve specific problems without having to learn or understand their theoretical background. Such a situation is common in everyday life. Yada, yada, yada. So here, in this paragraph, they sort of counter the whole narrative of the paper. And I guess that's fair, right? They criticize their own work, which is good for research; it's also to hedge against criticism, and it's to be a bit real. It's a good paper, right? It's a nice and interesting story, and then at the end you also say: look, this might actually not be all that it seems to be. And I agree with this statement right here: probably the model learns shortcuts, and the shortcuts might just be a form of pattern matching. You pattern match whatever patterns you extract from the training data, and you relatively simply interpolate between those matched patterns, not between the training data itself, but between the matched patterns, and thereby you can arrive at approximately good solutions. So what I would have liked to see from such a paper, after the really quite big claims in the introduction and the abstract, is more than "we leave that to future research". They have taken three different problems here, right?
There's this local stability, then there is this control theory, and then there is this PDE stability. They have three different problems, and okay, they try to show that they can apply this to a diverse range. But here is what I would have expected from a paper like this, since they even spell out, here are the four things that you need to do to solve this if we were to teach it to a human: if you have trained a model and evaluated it and it is really good at this task, for which you thought you need these four steps, what would be really interesting is to introspect your model and see, can I somehow show that my model has, somewhere in its intermediate layers, this quantity right here, and that it's not just nearest-neighboring in some learned pattern space? That would be an actually interesting research question. So rather than having three different settings that all demonstrate the same thing over and over again, namely that this actually works, it would be a much more interesting question to introspect the model and parse out: can I, for example, reconstruct this quantity from the inside of the model, when the model isn't specifically trained to give me back this quantity? Because I know this quantity would be a step on the path to the solution; if I want to get the solution, I almost have to calculate this quantity. Can I parse this out from the middle of the model somewhere, when the model isn't explicitly trained to give me this? If I can, then I can really make the point that the model does something like this and learned something like this from data. Whereas if I can't, that would be more evidence that the model is simply pattern matching close-enough examples seen in the training data. So that's a bit of my criticism right here: they show that it works, which is pretty cool, but they don't do the sort of interesting introspection experiments, which is a bit sad. But you know, they leave it for future research, which I guess is going to be themselves, and that's how you make two papers. So no, I don't want to be too critical. It's a very cool paper, and I invite you to check it out and leave a like and subscribe and leave a comment of what you think of this kind of research, of this paper, and whether or not you think I'm totally wrong. That's entirely possible. Okay, I'll see you next time. Bye bye.
[ { "end": 7.5, "start": 0, "text": " Hi, here's a question for the whiz kids among you. Is this system here controllable at a" }, { "end": 14.700000000000001, "start": 7.5, "text": " point xe with asymptotic control ue? I'll give you 10 seconds. Okay, 10 seconds are" }, { "end": 19.62, "start": 14.700000000000001, "text": " over. So to solve this, it's actually pretty easy. All you need to do is first differentiate" }, { "end": 24.54, "start": 19.62, "text": " the system with respect to its internal variables, which are the x's, to obtain the Jacobian" }, { "end": 29.54, "start": 24.54, "text": " a. Second step, differentiate the system with respect to its control variables, which are" }, { "end": 35.68, "start": 29.54, "text": " the u variables to obtain the matrix B. Look, this is a zero, like this is not hard. Then" }, { "end": 40.2, "start": 35.68, "text": " evaluate a and b at the point that you want and the control point that you want pretty" }, { "end": 48.16, "start": 40.2, "text": " easy. Calculate the controllability matrix. Come on, that's nothing. Another zero. And" }, { "end": 54.6, "start": 48.16, "text": " at the end, you calculate the rank of the controllability matrix. Now, if the if n minus" }, { "end": 60.52, "start": 54.6, "text": " d is zero, this system is controllable. And optionally, if you want, if you feel like" }, { "end": 65.52, "start": 60.52, "text": " it, you can output the control feedback matrix as an equation three, which gives you this" }, { "end": 73.8, "start": 65.52, "text": " here. Now, what's equation three? Equation three is super duper simple. It's just this" }, { "end": 82.5, "start": 73.8, "text": " little sort of integral thing, inverse matrix trace transposed exponential function, outer" }, { "end": 90.36, "start": 82.5, "text": " product thing. Come on, what's the matter with you? Okay, so if you found you can't" }, { "end": 98.84, "start": 90.36, "text": " do this just on the spot, then you are in the same category as most people. But interestingly," }, { "end": 104.68, "start": 98.84, "text": " apparently, according to this paper, a deep learning system can. So today we're going" }, { "end": 110.56, "start": 104.68, "text": " to look at deep differential system stability, learning advanced computations from examples" }, { "end": 118.48, "start": 110.56, "text": " by Francois Chardon, Amory Hayat, and Jiam Lampel of Facebook AI research and Ecole de" }, { "end": 126.96000000000001, "start": 118.48, "text": " Pompari Tech and Rutgers Rutgers University. So at this, in this paper, these authors basically" }, { "end": 133.84, "start": 126.96000000000001, "text": " propose that you can learn these complex mathematics with a model that has no clue about mathematics." }, { "end": 141.20000000000002, "start": 133.84, "text": " In fact, it is a language model. And it can output the solutions, for example, whether" }, { "end": 146, "start": 141.20000000000002, "text": " or not a system is controllable, which are sort of binary solutions, but it can also" }, { "end": 154.32, "start": 146, "text": " output actual solutions as in numbers or as in the matrices that you would need to obtain" }, { "end": 162.16, "start": 154.32, "text": " from these problems. So that's pretty cool. And it is built upon this other paper called," }, { "end": 167.48, "start": 162.16, "text": " I think, some deep learning for symbolic mathematics or something. 
I have made a video on it if" }, { "end": 174.8, "start": 167.48, "text": " you search for it, and I'll also link it in the description. And you can go check that" }, { "end": 180.1, "start": 174.8, "text": " out because that's sort of the basis. So in this previous paper, I think it was from partially" }, { "end": 187.04, "start": 180.1, "text": " the same authors, they have investigated language models into integrating functions. So you" }, { "end": 191.72, "start": 187.04, "text": " have some sort of function, you're trying to find the integral, and they've tried to" }, { "end": 200.24, "start": 191.72, "text": " do that. Now they go a lot further. So they look at these at these differential systems," }, { "end": 205.24, "start": 200.24, "text": " which are characterized by these differential equations. So if you've never seen differential" }, { "end": 213.44, "start": 205.24, "text": " equations, it's basically an equation where the the derivation of some variable is characterized" }, { "end": 220.4, "start": 213.44, "text": " by the variable itself. So the the the gradient, if you will, or the code in the derivation" }, { "end": 226.16, "start": 220.4, "text": " according to some input variable here, it's most often it's time and physical systems" }, { "end": 231.76, "start": 226.16, "text": " is a function of that variable itself, and partially also other variables. So you can" }, { "end": 237.48000000000002, "start": 231.76, "text": " have systems of differential equations that all depend on each other. And there are a" }, { "end": 243.48000000000002, "start": 237.48000000000002, "text": " number of questions about these systems. These are very relevant in like physics or engineering," }, { "end": 250.32, "start": 243.48000000000002, "text": " control theory, and so on. So they investigate different problems that you can solve with" }, { "end": 256.32, "start": 250.32, "text": " these. They investigate specifically problems where we already know the solutions, but the" }, { "end": 265.12, "start": 256.32, "text": " solutions require very complex mathematical manipulation, such as as you've seen, calculate" }, { "end": 270.15999999999997, "start": 265.12, "text": " the integral of something, take the trace, calculate the rank, invert some matrix. So" }, { "end": 274.44, "start": 270.15999999999997, "text": " all of these mathematical steps are required to solve these problems if we were to teach" }, { "end": 281.12, "start": 274.44, "text": " them to math students or engineering students. And this paper basically says, if we just" }, { "end": 289.8, "start": 281.12, "text": " input the problem into a big language model, and, and ask it to output the solution, it" }, { "end": 294.76, "start": 289.8, "text": " can do it. So basically can learn to do all of these things. So that's pretty, pretty" }, { "end": 300.32, "start": 294.76, "text": " surprising, because you don't program it to do any math. So the first problem they look" }, { "end": 310.04, "start": 300.32, "text": " at is this local stability problem. So in, I don't, I don't really want to go into, into" }, { "end": 314.42, "start": 310.04, "text": " much of the actual mathematical problems, but we'll look at the first one to just give" }, { "end": 322.12, "start": 314.42, "text": " you an idea of what these sort of problems are. 
So Xe is an, if Xe is an equilibrium" }, { "end": 329.71999999999997, "start": 322.12, "text": " point, it means that all solutions, if all solutions converge to Xe when their initial" }, { "end": 336.04, "start": 329.72, "text": " positions are close enough, the equilibrium point is said to be locally stable. Okay." }, { "end": 341.76000000000005, "start": 336.04, "text": " This problem is well known if f is differentiable in Xe, and answers provided by the spectral" }, { "end": 351.04, "start": 341.76000000000005, "text": " mapping theorem. So in that case, you'd have a system, maybe we can draw one where you'd" }, { "end": 356.44000000000005, "start": 351.04, "text": " try to find local points of stability. So this point here would be maybe a local point" }, { "end": 363.48, "start": 356.44, "text": " of stability, if you consider this as sort of an optimization landscape, because if you" }, { "end": 368.82, "start": 363.48, "text": " go from here, if you go a little bit away from it, you're always sort of pushed back." }, { "end": 376.44, "start": 368.82, "text": " If this, if this is a, if the system is gravity, sort of, so if these differential equations" }, { "end": 386.08, "start": 376.44, "text": " did sort of describe that the height here is the force with which you're pulled down," }, { "end": 390.28, "start": 386.08, "text": " then this thing here would be a local point of stability. The question is, if I give you" }, { "end": 395.41999999999996, "start": 390.28, "text": " a system that's described by these differential equations, can you tell me whether or not" }, { "end": 403.68, "start": 395.41999999999996, "text": " it is stable at some point? Okay. And there is a spectral mapping theorem, which says," }, { "end": 409.4, "start": 403.68, "text": " if you have the Jacobian matrix of f at this point, the matrix of its partial derivatives" }, { "end": 415.91999999999996, "start": 409.4, "text": " relative to its variable. And if you take lambda to be the largest real part of its" }, { "end": 423.64, "start": 415.91999999999996, "text": " complex eigenvalues, if lambda is positive, then this is an unstable equilibrium. If lambda" }, { "end": 428.71999999999997, "start": 423.64, "text": " is negative, then it's a locally stable equilibrium. So an unstable equilibrium likewise would" }, { "end": 436.64, "start": 428.71999999999997, "text": " be the point here on top, which means that if you're exactly at the point, then you stay" }, { "end": 441, "start": 436.64, "text": " there. But as soon as you're a little bit off, you drift away from it. That would be" }, { "end": 448.76, "start": 441, "text": " an unstable equilibrium. Okay. So there are complex steps involved in deriving this solution." }, { "end": 452.91999999999996, "start": 448.76, "text": " And they list them out here just to show you how complex this is. This is not meant to" }, { "end": 458.91999999999996, "start": 452.91999999999996, "text": " teach you, you don't have to like understand or be able to apply this. This is simply meant" }, { "end": 464.58, "start": 458.91999999999996, "text": " to tell you how complicated it is to arrive at a solution. So first you need to differentiate" }, { "end": 470.15999999999997, "start": 464.58, "text": " each function with respect to each variable and obtain the formal Jacobian. So they do" }, { "end": 476.52, "start": 470.15999999999997, "text": " this here for this example system, which is this system right here. 
This is a system of" }, { "end": 484.26, "start": 476.52, "text": " two equations, two differential equations in two variables. Okay. So if you derive the" }, { "end": 490.79999999999995, "start": 484.26, "text": " Jacobian that will give you a four entry Jacobian. So each one of these is one of the equations" }, { "end": 497.04, "start": 490.8, "text": " derived with respect to one of the variables. You can do that, right? But it requires fairly" }, { "end": 502.48, "start": 497.04, "text": " complex mathematical knowledge. Like knowing that the derivative of the sign here is the" }, { "end": 508.8, "start": 502.48, "text": " cosine and knowing that this cosine doesn't matter for this particular entry because it's" }, { "end": 517.82, "start": 508.8, "text": " in X2 and here we derive by X1. So that's already very challenging. Second, you need" }, { "end": 524.2800000000001, "start": 517.82, "text": " to evaluate the Jacobian at that point. So first you've done it symbolically. Now you" }, { "end": 528.4000000000001, "start": 524.2800000000001, "text": " actually need to put in the numbers at the point you're interested in, which will give" }, { "end": 535.6800000000001, "start": 528.4000000000001, "text": " you this thing right here, which is a numerical matrix, whereas this was a symbolical matrix." }, { "end": 542.0600000000001, "start": 535.6800000000001, "text": " Then you need to calculate the eigenvalues, which is, I mean, you have several methods" }, { "end": 548.16, "start": 542.06, "text": " of that. You have several methods of computing the largest eigenvalue. You could do power" }, { "end": 555.4799999999999, "start": 548.16, "text": " method. You could do decomposition. There are numerous ways, but none of these is like" }, { "end": 563.7199999999999, "start": 555.4799999999999, "text": " particularly easy, right? And then lastly, you need to return the minus max of the real" }, { "end": 568.9599999999999, "start": 563.7199999999999, "text": " part, which is the speed of convergence of the system. So not only do you need to be" }, { "end": 576.08, "start": 568.96, "text": " able to tell whether it is stable, which is if this is negative, you also need to be able" }, { "end": 584.52, "start": 576.08, "text": " to say, or if this is negative, you need to be able to say whether it is locally stable" }, { "end": 593.4000000000001, "start": 584.52, "text": " or locally unstable. And since this is larger than zero, it's locally stable. And this would" }, { "end": 601.24, "start": 593.4, "text": " be the decay rate 0.441. This is what you're asking this model to output, right? Now we'll" }, { "end": 608.36, "start": 601.24, "text": " quickly go over the other things, but not as in depth. But this is in control theory" }, { "end": 612.76, "start": 608.36, "text": " where you're trying, you have almost the same thing, you have a differential equation. But" }, { "end": 617.64, "start": 612.76, "text": " now, in addition to these variables, you have these control variables, which you have power" }, { "end": 624.88, "start": 617.64, "text": " over and you're trying to decide, can I control the system with the appropriate function?" }, { "end": 629.48, "start": 624.88, "text": " And in order to do that, that's kind of the problem that we had at the beginning. I know" }, { "end": 635.96, "start": 629.48, "text": " it's not. Oh, yeah, it is. 
So what you need to do is again, differentiate the system with" }, { "end": 641.4, "start": 635.96, "text": " respect to its internal variables, differentiate the system with respect to its control variables," }, { "end": 649.64, "start": 641.4, "text": " evaluate A and B, calculate the controllability matrix with one of the functions above, calculate" }, { "end": 657.52, "start": 649.64, "text": " the rank, and optionally evaluate this equation number three that we saw before. Now, the" }, { "end": 663.76, "start": 657.52, "text": " last task is equally complicated. It relates to the stability of partial differential equations" }, { "end": 671.48, "start": 663.76, "text": " using the Fourier transform. And again, to obtain this, it is a five step intricate process" }, { "end": 679.36, "start": 671.48, "text": " where that is a mix of symbolic complex manipulation and numerical evaluation of that symbolic" }, { "end": 687.72, "start": 679.36, "text": " things. And here you need to simply output two bits, one bit says whether there exists" }, { "end": 695.44, "start": 687.72, "text": " a solution and the other bit says whether it vanishes at t to infinity. So what are" }, { "end": 700.28, "start": 695.44, "text": " they expecting here? What they're doing is they're going to build a data set that is" }, { "end": 705.2, "start": 700.28, "text": " composed of these things. And I think they do it one by one. So they take one of the" }, { "end": 710.9200000000001, "start": 705.2, "text": " tasks and they're going to build a giant data set of these things. Since they all have solutions," }, { "end": 717.44, "start": 710.9200000000001, "text": " right? You can build a data set with labels, because you can actually build a you can build" }, { "end": 723.36, "start": 717.44, "text": " software that does these steps, because you program mathematical knowledge into the software," }, { "end": 729.08, "start": 723.36, "text": " but it's custom made for that particular problem. And they're simply trying to in there, they're" }, { "end": 733.8000000000001, "start": 729.08, "text": " not trying to beat that program, they're simply trying to investigate, can a language model" }, { "end": 740.96, "start": 733.8000000000001, "text": " that knows nothing of math, do this simply by learning from data. So they're going to" }, { "end": 746.24, "start": 740.96, "text": " try to build a data set, or they are building a data set. And they do it in the same way" }, { "end": 753.5600000000001, "start": 746.24, "text": " as this previous paper, which say we generate random functions by sampling unary binary" }, { "end": 759.04, "start": 753.5600000000001, "text": " trees and randomly selecting operators, variables and integers for their internal nodes and" }, { "end": 765.04, "start": 759.04, "text": " leaves. So they use any combination of plus minus times divide x blog, and so on. So all" }, { "end": 770.5600000000001, "start": 765.04, "text": " these functions can appear in these things. And they basically they build a tree where" }, { "end": 777.88, "start": 770.56, "text": " they say, okay, we go from plus and then here is five, and then here maybe is minus sin" }, { "end": 793.28, "start": 777.88, "text": " of x, and here over is x of y. So that would be five plus sine of x minus x of y. So they" }, { "end": 799.88, "start": 793.28, "text": " build trees like this by randomly sampling. 
And then they simply feed this to their mathematical" }, { "end": 807.2, "start": 799.88, "text": " program to obtain a solution y. And then they feed all of this into that's now a training" }, { "end": 814.16, "start": 807.2, "text": " data point. This here is x and here you have y. And they generate a giant bunch of these" }, { "end": 822.96, "start": 814.16, "text": " things. And they feed them into the into the model. They say here is seek to seek models," }, { "end": 828.96, "start": 822.96, "text": " not even sure what kind of models they use. I think they just they use transformers as" }, { "end": 834.2800000000001, "start": 828.96, "text": " well. So they they use standard transformers, I believe. So yes, in all experiments, we" }, { "end": 838.6, "start": 834.2800000000001, "text": " use the transformer architecture with eight attention heads, we train our models with" }, { "end": 843.52, "start": 838.6, "text": " the atom optimizer learning rate, blah, blah, blah, we vary the dimension, and the number" }, { "end": 848.12, "start": 843.52, "text": " of layers. So that's going to be interesting to see how the size of this language model" }, { "end": 856, "start": 848.12, "text": " influences how well this model can solve these things. So as you can see here, they build" }, { "end": 861.64, "start": 856, "text": " these data sets for local stability, they include systems with two to six equations," }, { "end": 867.68, "start": 861.64, "text": " which is already fairly, fairly complicated, and would take a human quite a while to to" }, { "end": 875.28, "start": 867.68, "text": " do this. They say we generate a data set with over 50 million systems. So it's a pretty" }, { "end": 881.72, "start": 875.28, "text": " dense sampling of this space. And I feel this is one of the important components here. They" }, { "end": 886.48, "start": 881.72, "text": " do make sure that none of their tests, so they do a train test split, and they do make" }, { "end": 891.6, "start": 886.48, "text": " sure that none of their test examples is in the training data set, though they claim that" }, { "end": 897.24, "start": 891.6, "text": " they never actually have to remove anything, they just check and the search space, the" }, { "end": 904.48, "start": 897.24, "text": " space of these trees here is so large, that it never happens that the data test sample" }, { "end": 911.1600000000001, "start": 904.48, "text": " or almost never happens that a test sample is in the training data set. Alright, so they" }, { "end": 920.56, "start": 911.16, "text": " generate these things. And here are the results. So for local stability, it is trained to predict" }, { "end": 924.24, "start": 920.56, "text": " this lambda that we saw at the beginning, the largest part of the eigenvalue of the" }, { "end": 930.6, "start": 924.24, "text": " Jacobian corresponding to the convergence speed of equilibrium, we consider that predictions" }, { "end": 937.0799999999999, "start": 930.6, "text": " are correct when they fall within 10% of the ground truth. And here you can see that their" }, { "end": 945.36, "start": 937.08, "text": " best model achieves 96% if the degree of if it's two equations, so if the degree of the" }, { "end": 953.48, "start": 945.36, "text": " system is two, it achieves in 96% accuracy. So in 96% of the time, the convergence speed" }, { "end": 959.64, "start": 953.48, "text": " is within 10% of the true convergence speed. That's fairly crazy, right? 
That's pretty" }, { "end": 968.04, "start": 959.64, "text": " good. And here the exact prediction of local convergence speed to given precision. So how" }, { "end": 974.36, "start": 968.04, "text": " many digits actually match of that conversion speed. So it's not only 10% off, they also" }, { "end": 981.76, "start": 974.36, "text": " measure how many digits match. And here, you can see that as you up the degree of equation," }, { "end": 990.12, "start": 981.76, "text": " sorry, here, the performance drops off, as you can see, less and less and less. And also" }, { "end": 998.16, "start": 990.12, "text": " as you lower the number of layers in your model, or lower the dimensionality, the performance" }, { "end": 1004.64, "start": 998.16, "text": " drops quite significantly. So that means the language model sort of is doing real work" }, { "end": 1014.8, "start": 1004.64, "text": " here. And here you also see that if it's degree two, the convergence speed has pretty even" }, { "end": 1023.88, "start": 1014.8, "text": " goes to two, three or four digits often. But as you now increase the degree, this accuracy" }, { "end": 1028.94, "start": 1023.88, "text": " drops off fairly quickly. All right, let's keep that in mind. So the surprising thing" }, { "end": 1036.96, "start": 1028.94, "text": " is that it actually works. And it works in surprisingly big amount of the time. Now," }, { "end": 1041.48, "start": 1036.96, "text": " I don't know, you could bicker about this 10% and how bad or how easy this is, and so" }, { "end": 1049.24, "start": 1041.48, "text": " on. But it is fairly, it is fairly complicated problem. And to be within 10% of the solution" }, { "end": 1058.8400000000001, "start": 1049.24, "text": " seems quite remarkable. And the same things here happen with the other tasks. So here" }, { "end": 1065.56, "start": 1058.84, "text": " they say they predict controllability in the autonomous case. So in the control theory," }, { "end": 1070.32, "start": 1065.56, "text": " they predict these two things, whether it's controllable, and then they output this K" }, { "end": 1080.9199999999998, "start": 1070.32, "text": " matrix that we saw before. Yeah. So here, you can see that if they have high enough" }, { "end": 1088.36, "start": 1080.9199999999998, "text": " dimensions and high enough layers, with sample systems with three to six equations, they" }, { "end": 1094.9399999999998, "start": 1088.36, "text": " achieve again a 97% in the prediction of whether the system is controllable or not. Now remember," }, { "end": 1104.62, "start": 1094.9399999999998, "text": " this is a binary prediction, but still, it requires a good understanding of math for" }, { "end": 1111.8799999999999, "start": 1104.62, "text": " a human to solve this. Again, we see this drop off with dimension and layers. But you" }, { "end": 1126.3600000000001, "start": 1111.88, "text": " know, this number here is pretty good compared to the 50% you'd have from random guessing." }, { "end": 1133.38, "start": 1126.3600000000001, "text": " Also interesting is when they look at is this correct, sorry, is the feedback matrix correct?" }, { "end": 1139.74, "start": 1133.38, "text": " So this matrix that you optionally have to output, they find that if they analyze whether" }, { "end": 1147.56, "start": 1139.74, "text": " or not that's within 10% of the true one, they see that pretty, pretty quickly this" }, { "end": 1153.36, "start": 1147.56, "text": " accuracy drops when they up the degree. 
Of course, the matrix, as I understand it has" }, { "end": 1160.36, "start": 1153.36, "text": " more entries at degree six than at degree three. So maybe that's understandable. But" }, { "end": 1166.86, "start": 1160.36, "text": " it drops off pretty quickly. But what is true is whether they call this correct feedback" }, { "end": 1174.1, "start": 1166.86, "text": " matrix. So from the feedback matrix, from the entries in that, you can read out whether" }, { "end": 1181.08, "start": 1174.1, "text": " it's the system is controllable or not, if all of the eigenvalues, I believe, are negative" }, { "end": 1185.4599999999998, "start": 1181.08, "text": " or positive, or all the values are negative or positive. So basically, by saying whether" }, { "end": 1190.8, "start": 1185.4599999999998, "text": " or not these things are positive or negative, you can read out the controllability. And" }, { "end": 1197.6599999999999, "start": 1190.8, "text": " if they check whether that property holds, then that is is fairly well. So they argue" }, { "end": 1204.84, "start": 1197.6599999999999, "text": " here that this shows that it doesn't, it doesn't predict the the matrix they want. But the" }, { "end": 1211.6399999999999, "start": 1204.84, "text": " matrix that it predicts has the appropriate properties to solve this other task right" }, { "end": 1220.92, "start": 1211.64, "text": " here. Okay, that's experiment two. Experiment three, as you can imagine, is quite different," }, { "end": 1227, "start": 1220.92, "text": " sorry, quite similar in that they investigate these partial differential equations. In a" }, { "end": 1232.3200000000002, "start": 1227, "text": " Fourier transform, the model is given differential operator and an initial condition is trained" }, { "end": 1238.68, "start": 1232.3200000000002, "text": " to predict if a solution exists. And if so, whether it converges to zero, when t goes" }, { "end": 1244.72, "start": 1238.68, "text": " to infinity, the space, the dimensions between two and six. So the random guessing here" }, { "end": 1250.98, "start": 1244.72, "text": " would be, I guess, 25%, because it's two bits, you need to output. And this model performs" }, { "end": 1256.68, "start": 1250.98, "text": " extremely well, even up to this dimension six, there is a drop off with dimension, but" }, { "end": 1266.3600000000001, "start": 1256.68, "text": " it still does perform very, very well. Now, they go into the discussion a bit. And they" }, { "end": 1272.52, "start": 1266.36, "text": " and this, this is the part that in this paper, interests me, like, how do you interpret these" }, { "end": 1277.76, "start": 1272.52, "text": " results? Apparently, you give these mathematical things to a language model that has no clue" }, { "end": 1283.24, "start": 1277.76, "text": " of math. And just by looking at examples, it learns to produce correct solutions. And" }, { "end": 1287.84, "start": 1283.24, "text": " if you want to teach that to a human, the human would have to go through all these steps," }, { "end": 1294.6799999999998, "start": 1287.84, "text": " right? So something is happening here. And we'll want to find out what and the discussion" }, { "end": 1302.2, "start": 1294.68, "text": " is maybe they try to explain a bit why they think this happens. They say we studied five" }, { "end": 1307.4, "start": 1302.2, "text": " problems of advanced mathematics from widely research. 
In three of them, we predict qualitative" }, { "end": 1313.3200000000002, "start": 1307.4, "text": " and theoretical features. In two, we perform numerical computations. According to mathematical" }, { "end": 1319.04, "start": 1313.3200000000002, "text": " theory, solving these problems requires a combination of advanced techniques, symbolic" }, { "end": 1325.28, "start": 1319.04, "text": " and numerical that seem unlikely to be learnable from examples. Yet our model achieves more" }, { "end": 1333.36, "start": 1325.28, "text": " than 95% accuracy on all qualitative tasks. And between 65 and 85 on numerical computations," }, { "end": 1341.12, "start": 1333.36, "text": " such high performances over difficult mathematical tasks may come as a surprise. One way to generate" }, { "end": 1346.96, "start": 1341.12, "text": " data set of problems with their solutions consists in sampling the solution first and" }, { "end": 1352.16, "start": 1346.96, "text": " deriving an associated problem. For instance, pairs of functions with their integrals can" }, { "end": 1357.68, "start": 1352.16, "text": " be generated by sampling and differentiating from random functions. So here they hedge" }, { "end": 1362.26, "start": 1357.68, "text": " against there's this criticism and this was mainly a criticism of their other paper, which" }, { "end": 1367.92, "start": 1362.26, "text": " they already addressed in their other paper was if you want to find if you want to create" }, { "end": 1374, "start": 1367.92, "text": " a data set where you have the function, and then the label is the integral of the function," }, { "end": 1382.56, "start": 1374, "text": " then there is no common solution to derive these integrals. Sorry, the derive is a, there" }, { "end": 1387.68, "start": 1382.56, "text": " is no common solution to integrate functions. I mean, you can do it numerically, but there" }, { "end": 1394.44, "start": 1387.68, "text": " is no common symbolic solution to integrate any function. And that's why what you can" }, { "end": 1399.4, "start": 1394.44, "text": " do if you want to produce a data set is you start with the integral already, and then" }, { "end": 1408.72, "start": 1399.4, "text": " you differentiate that to get to get a thing. And then you know that if you integrate this," }, { "end": 1414.66, "start": 1408.72, "text": " you should get back your original function. But this biases the data set because the sampling" }, { "end": 1420.76, "start": 1414.66, "text": " is now not over these functions, but the sampling is over these functions. And that might lead" }, { "end": 1427.4, "start": 1420.76, "text": " to this distribution here being biased. So they hedge against that, which I don't care" }, { "end": 1432.76, "start": 1427.4, "text": " because it clearly in this paper, and they say in this paper, data sets for all considered" }, { "end": 1437.5400000000002, "start": 1432.76, "text": " tasks are generated using a forward approach by directly sampling as a result, potential" }, { "end": 1442.4, "start": 1437.5400000000002, "text": " bias caused by backward generative model do not apply here. And they studied problems" }, { "end": 1445.48, "start": 1442.4, "text": " from three different so they hedge against this argument that they could have a bias" }, { "end": 1452.8400000000001, "start": 1445.48, "text": " data set, which I don't think anyone reading this paper would leverage against them. 
Yeah," }, { "end": 1461.72, "start": 1452.84, "text": " so in so here, they basically say how good they are, how surprising this is all of this" }, { "end": 1466.56, "start": 1461.72, "text": " requires math, this part is irrelevant, because it hedges against an argument that I don't" }, { "end": 1472.1, "start": 1466.56, "text": " think is reasonable against the paper. And then the last thing in their discussion is" }, { "end": 1476.84, "start": 1472.1, "text": " an objection traditionally raised is that the model might memorize a very large number" }, { "end": 1481.6799999999998, "start": 1476.84, "text": " of cases and interpolate between them, which I think we know in language model happens" }, { "end": 1486.8400000000001, "start": 1481.68, "text": " often. Right? Oh, by the way, have I shown you how they encode this into the language" }, { "end": 1493.6000000000001, "start": 1486.8400000000001, "text": " model? I have not. This is the I guess this is the craziest part. They don't even put" }, { "end": 1500.8400000000001, "start": 1493.6000000000001, "text": " the numbers there. Wait, wait, they don't even put the numbers there. They actually" }, { "end": 1507.24, "start": 1500.8400000000001, "text": " put the as I understand it, they put the string tokens here, right? So they put the string" }, { "end": 1516.2, "start": 1507.24, "text": " tokens of the math, and then even like compose to the number 142, they would put as now there's" }, { "end": 1523.74, "start": 1516.2, "text": " an integer and then the token one, the token four and the token two. Okay. And the decimal" }, { "end": 1532.52, "start": 1523.74, "text": " point representation is the sequence float three dot one for e in negative one. So this" }, { "end": 1537.8799999999999, "start": 1532.52, "text": " is it's really just a string, there is no, like the model would even have to learn the" }, { "end": 1547.32, "start": 1537.8799999999999, "text": " decimal representation of numbers to get that this four here is actually not not just a" }, { "end": 1553.54, "start": 1547.32, "text": " different token than two, but it's 20 times larger because it is in the position one in" }, { "end": 1558.2, "start": 1553.54, "text": " front of two. So it's not two times larger, like four is to two, but because four is," }, { "end": 1563.6000000000001, "start": 1558.2, "text": " you know, one digit away from two, it's 20 times larger. And then this here is actually" }, { "end": 1570.04, "start": 1563.6000000000001, "text": " 50 times larger than this. So it seems like a quite inconvenient way to input data into" }, { "end": 1575.56, "start": 1570.04, "text": " the model. And yet the model is super accurate, right? And we already know that these language" }, { "end": 1580.44, "start": 1575.56, "text": " models, what they tend to do is they tend to memorize the training data or abstracted" }, { "end": 1587.76, "start": 1580.44, "text": " in a way that they can sort of interpolate between fuzzy versions of the training data." }, { "end": 1593.34, "start": 1587.76, "text": " Here they say this is unlikely, sorry, this is unlikely because first, because the size" }, { "end": 1600.28, "start": 1593.34, "text": " of our problem space is too large to be memorized. So say for all consider problems, we did not" }, { "end": 1607.12, "start": 1600.28, "text": " get a single duplicate over 50 million generated examples. 
Second, because in some of our problems," }, { "end": 1612.92, "start": 1607.12, "text": " such as non autonomous control, even a model with a one layer and 64 dimensions obtains" }, { "end": 1618.6000000000001, "start": 1612.92, "text": " a high accuracy and such a small model would never be able to memorize that many examples," }, { "end": 1623.98, "start": 1618.6000000000001, "text": " which is true, right? This is this is a fair defense against you're just interpolating" }, { "end": 1631.48, "start": 1623.98, "text": " training data. But I think the kind of broader, the broader scope of this criticism would" }, { "end": 1637.72, "start": 1631.48, "text": " be something like your model is just kind of learning the pattern regularities of the" }, { "end": 1643.64, "start": 1637.72, "text": " textual data that you feed in. It's not actually learning math, it's just learning sort of," }, { "end": 1648.92, "start": 1643.64, "text": " okay, there is like a cosine. And if there's a cosine here, followed by an exponential" }, { "end": 1657.2, "start": 1648.92, "text": " function that often leads to like a very low number of this lambda, right? And then if" }, { "end": 1662.1200000000001, "start": 1657.2, "text": " a very similar thing comes, it comes across a very similar thing in the test sample, even" }, { "end": 1666.76, "start": 1662.1200000000001, "text": " though it's not exactly the same thing, it will map it to like a similar place in the" }, { "end": 1671.72, "start": 1666.76, "text": " label space. I mean, this is literally machine learning, this is literally regression. But" }, { "end": 1681.2, "start": 1671.72, "text": " I think the more the broader scope of this criticism is that what your model might be" }, { "end": 1688.36, "start": 1681.2, "text": " doing might simply be sort of a very simple regression on these tokens or on these context" }, { "end": 1695.08, "start": 1688.36, "text": " dependent tokens, rather than this internal mathematical reasoning. And I don't, while" }, { "end": 1703.12, "start": 1695.08, "text": " it is true that it's probably not memorizing any examples, this still doesn't. And while" }, { "end": 1709.6, "start": 1703.12, "text": " it is also true that they did not get a single exact duplicate, what would be interesting" }, { "end": 1715.8, "start": 1709.6, "text": " to know is how many like approximate duplicates, so can you basically solve the problem with" }, { "end": 1720.76, "start": 1715.8, "text": " a nearest neighbor approach? That would be my question. Can you solve the problem with" }, { "end": 1726.24, "start": 1720.76, "text": " a nearest neighbor approach over their training data set? Because that means you basically" }, { "end": 1734.7, "start": 1726.24, "text": " don't need the mathematical knowledge. They say third, because for some of our problems," }, { "end": 1740.4, "start": 1734.7, "text": " we know from mathematical theory, that solutions, IEG, the real value of eigenvalues cannot" }, { "end": 1746.84, "start": 1740.4, "text": " be obtained by simple interpolation. And I mean, that is also a valid defense. But I" }, { "end": 1752.6399999999999, "start": 1746.84, "text": " think the argument goes further than just simple interpolation. What we mean by interpolation" }, { "end": 1759.32, "start": 1752.6399999999999, "text": " is not we interpolate the real values of the eigenvalues. What we mean by interpolation" }, { "end": 1765.04, "start": 1759.32, "text": " is sort of interpolation in the regression space of these tokens. 
Like we know that if" }, { "end": 1774.9199999999998, "start": 1765.04, "text": " we go from a sine to a cosine, maybe the sine of the output flips at the end. And that's" }, { "end": 1781.52, "start": 1774.92, "text": " what we mean by interpolation. Like when we see two equations that are very similar, like" }, { "end": 1793.76, "start": 1781.52, "text": " x squared plus 4x minus the sine of x, and then we see x squared plus 5x minus twice" }, { "end": 1800.26, "start": 1793.76, "text": " the sine of x. What we mean by interpolation is that we now get a test example that says" }, { "end": 1812.48, "start": 1800.26, "text": " x squared plus, let's say, 4x minus 3 times the sine of x. Then what we would interpolate" }, { "end": 1821.48, "start": 1812.48, "text": " is sort of these things. I'm making a bad example right here. Maybe I should go with" }, { "end": 1828.44, "start": 1821.48, "text": " x squared and this is x third. I know these things aren't exactly equal, but this in the" }, { "end": 1835.8400000000001, "start": 1828.44, "text": " middle would be sort of an interpolation in token space. If you train the language model," }, { "end": 1842.56, "start": 1835.8400000000001, "text": " it will recognize that maybe I can interpolate whenever the coefficient here is just different" }, { "end": 1847.5800000000002, "start": 1842.56, "text": " or I can interpolate when there's just, you know, if there's like a log x here, that doesn't" }, { "end": 1852.4, "start": 1847.5800000000002, "text": " really change anything. So I can interpolate between the two, but I might not be able to" }, { "end": 1857.54, "start": 1852.4, "text": " interpolate when the exponent here is different. So if you give a training data set, you teach" }, { "end": 1862.1599999999999, "start": 1857.54, "text": " the model where it can interpolate and where it can't. Now again, it's not able to remember" }, { "end": 1869.12, "start": 1862.1599999999999, "text": " the training data, but it will be able to sort of abstract it and store it fuzzily and" }, { "end": 1873.32, "start": 1869.12, "text": " abstract the patterns from it, which is good, right? That's machine learning. But there's" }, { "end": 1878.94, "start": 1873.32, "text": " no evidence here that this does any mathematical reasoning. So up until now, all that has built" }, { "end": 1887.3, "start": 1878.94, "text": " up is sort of if you read the abstract, can advanced mathematical computations be learned" }, { "end": 1894.56, "start": 1887.3, "text": " from examples? Neural networks can learn advanced theorems, complex computation without built" }, { "end": 1903.44, "start": 1894.56, "text": " in mathematical knowledge. All of the story here, all of this showing of, hey, look at" }, { "end": 1911.48, "start": 1903.44, "text": " what steps is required to solve these problems. And even this discussion here basically says," }, { "end": 1919.6, "start": 1911.48, "text": " hey, you need mathematical complex reasoning to arrive at the solutions. And then in the" }, { "end": 1928.84, "start": 1919.6, "text": " conclusion, in the conclusion, they say, it seems that our models have learned to solve" }, { "end": 1933, "start": 1928.84, "text": " these problems, but that does not mean they learned these techniques we use to compute" }, { "end": 1938.52, "start": 1933, "text": " their solutions. 
Problems such as non-autonomous control involve long and complex chain of" }, { "end": 1942.96, "start": 1938.52, "text": " operations, yet even small models, so means one layer transformers with 64 dimensions," }, { "end": 1948.4, "start": 1942.96, "text": " achieve high accuracy. Most probably, our models learn shortcuts that allow them to" }, { "end": 1954.96, "start": 1948.4, "text": " solve specific problems without having to learn or understand their theoretical background." }, { "end": 1964.8, "start": 1954.96, "text": " Such a situation is common in everyday life. Yada, yada, yada. So here, in this paragraph" }, { "end": 1970.12, "start": 1964.8, "text": " here, they sort of counter their whole narrative of the paper. And that's, I guess that's sort" }, { "end": 1974.72, "start": 1970.12, "text": " of to, it's fair, right? They criticize their own work, which is good for research. It's" }, { "end": 1981.2, "start": 1974.72, "text": " also to hedge against criticism, and it's to be a bit real. This, it's a good paper," }, { "end": 1985.84, "start": 1981.2, "text": " right? Because it's a nice and interesting story. And then at the end, you also say," }, { "end": 1991.52, "start": 1985.84, "text": " look, this might actually not be all that what it's made up or what it seems like to" }, { "end": 1998.28, "start": 1991.52, "text": " be. And I agree with this statement right here. It's that probably the model learns" }, { "end": 2004.08, "start": 1998.28, "text": " shortcuts and the shortcuts might be just in a way of pattern matching. The pattern" }, { "end": 2008.56, "start": 2004.08, "text": " matching of whatever patterns you extract from the training data, you pattern match" }, { "end": 2014.92, "start": 2008.56, "text": " that and you relatively simply interpolate between those matched patterns, not between" }, { "end": 2019.2, "start": 2014.92, "text": " the training data itself, but between the match patterns. And therefore, you can arrive" }, { "end": 2024.8, "start": 2019.2, "text": " at approximately good solutions. So what I would have liked to see from such a paper," }, { "end": 2030.76, "start": 2024.8, "text": " right? They say that we leave that to future research after making really kind of big claims" }, { "end": 2036.04, "start": 2030.76, "text": " in the introduction and the abstract. They have taken three different problems here," }, { "end": 2045.32, "start": 2036.04, "text": " right? There's this local stability, then there is this control theory, and then there" }, { "end": 2049.44, "start": 2045.32, "text": " is this stability. They have three different problems. And okay, they try to show that" }, { "end": 2056.2799999999997, "start": 2049.44, "text": " they can apply this to a diverse range. But what I would have expected from a paper like" }, { "end": 2064.04, "start": 2056.2799999999997, "text": " this is they even spell out, here are four things that you need to do to solve this if" }, { "end": 2070.6, "start": 2064.04, "text": " we were to teach this to a human, right? Now, if you have trained a model and you evaluated" }, { "end": 2075.44, "start": 2070.6, "text": " it, it is really good at this task for which you thought you need, you know, to do these" }, { "end": 2083.24, "start": 2075.44, "text": " four steps. 
What would be really interesting is to now introspect your model and see, can" }, { "end": 2091.52, "start": 2083.24, "text": " I somehow show that my model has in somewhere in the intermediate layers has this quantity" }, { "end": 2096.88, "start": 2091.52, "text": " right here? And it's not just nearest neighboring in some learned pattern space. That would" }, { "end": 2102.2400000000002, "start": 2096.88, "text": " be an actually interesting research question, right? So rather, in my mind, rather than" }, { "end": 2106.76, "start": 2102.2400000000002, "text": " having three different things where they all, you know, they demonstrate the same thing" }, { "end": 2111.42, "start": 2106.76, "text": " over and over and over again, that this actually works, it would be a much more interesting" }, { "end": 2116.2400000000002, "start": 2111.42, "text": " question to introspect the model and parse out can can I, for example, you can see, can" }, { "end": 2123.26, "start": 2116.2400000000002, "text": " I reconstruct this quantity from the inside of the model? When the model isn't specifically" }, { "end": 2130, "start": 2123.26, "text": " trained to give me back this quantity, because I know this quantity would be a step on these" }, { "end": 2135.2000000000003, "start": 2130, "text": " on the path of the solution, right? If I want to get the solution, I almost have to calculate" }, { "end": 2141.6400000000003, "start": 2135.2000000000003, "text": " this quantity. Can I parse this out from the middle of the model somewhere? When the model" }, { "end": 2146.44, "start": 2141.6400000000003, "text": " isn't explicitly trained to give me this, if I can, then I can really make the point" }, { "end": 2151.94, "start": 2146.44, "text": " that the model does something like this and learn something like this from data. Whereas" }, { "end": 2157.48, "start": 2151.94, "text": " if I can't, that would be more of an evidence that the model is simply sort of pattern matching," }, { "end": 2165.68, "start": 2157.48, "text": " close enough, seen examples in the training data. Right? So that's a bit of my of my criticism" }, { "end": 2172, "start": 2165.68, "text": " right here is that they they show it works, which is pretty cool. But then they, they" }, { "end": 2178.52, "start": 2172, "text": " don't do the sort of interesting experiments of these of this introspection right here," }, { "end": 2184.16, "start": 2178.52, "text": " which is a bit sad, but you know, they leave it for future research, which I guess is going" }, { "end": 2190.24, "start": 2184.16, "text": " to be themselves. And that's how you make two papers. So no, I don't want to be too" }, { "end": 2198, "start": 2190.24, "text": " critical. It's a very cool paper. And I invite you to check it out and leave a like and subscribe" }, { "end": 2203.2, "start": 2198, "text": " and leave a comment of what you think of this kind of research of this paper, and whether" }, { "end": 2207.56, "start": 2203.2, "text": " or not you think I'm totally wrong. That's entirely possible. Okay, I'll see you next" }, { "end": 2208.56, "start": 2207.56, "text": " time. Bye bye." }, { "end": 2237.56, "start": 2208.56, "text": " Transcribed by https://otter.ai" } ]
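The transcript above describes how numbers are fed to the model as plain digit tokens rather than numeric values. Here is a small Python sketch of that encoding as I understand it; the exact token vocabulary (the "INT+"/"FLOAT" markers and the fixed mantissa precision) is my assumption for illustration, not taken from the paper:

```python
def encode_number(x):
    """Sketch of the digit-level encoding described above (my reading of it):
    integers become sign + digit tokens, floats become mantissa digits plus an
    exponent token, so 0.314 -> ['FLOAT','3','.','1','4','0','0','E-1']."""
    if isinstance(x, int):
        sign = ["INT+"] if x >= 0 else ["INT-"]
        return sign + list(str(abs(x)))              # 142 -> ['INT+','1','4','2']
    mantissa, exponent = f"{x:.4e}".split("e")       # assumed 4-digit mantissa
    return ["FLOAT"] + list(mantissa) + ["E" + str(int(exponent))]

print(encode_number(142))    # ['INT+', '1', '4', '2']
print(encode_number(0.314))  # ['FLOAT', '3', '.', '1', '4', '0', '0', 'E-1']
```

Under a scheme like this, the "4" in "42" is just another token; the model has to learn positional magnitude entirely from data, which is part of what makes the reported accuracies surprising.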
ZfDZRX3WiJg
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
VirTex: Learning Visual Representations from Textual Annotations (Paper Explained)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "cnn", "visual", "resnet", "caption", "nlp", "transformer", "vasvani", "attention", "text", "coco", "imagenet", "convolutional neural network", "adaptation", "transfer learning", "quality", "unsupervised", "self-supervised" ]
Pre-training a CNN backbone for visual transfer learning has recently seen a big push into the direction of incorporating more data, at the cost of less supervision. This paper investigates the opposite: Visual transfer learning by pre-training from very few, but very high-quality samples on an image captioning task. OUTLINE: 0:00 - Intro & Overview 1:00 - Pre-Training for Visual Tasks 3:40 - Quality-Quantity Tradeoff 5:50 - Image Captioning 8:35 - VirTex Method 14:30 - Linear Classification 20:30 - Ablations 22:05 - Fine-Tuning 25:45 - Attention Visualization 27:30 - Conclusion & Remarks Paper: https://arxiv.org/abs/2006.06666 Code: https://github.com/kdexd/virtex Abstract: The de-facto approach to many vision tasks is to start from pretrained visual representations, typically learned via supervised training on ImageNet. Recent methods have explored unsupervised pretraining to scale to vast quantities of unlabeled images. In contrast, we aim to learn high-quality visual representations from fewer images. To this end, we revisit supervised pretraining, and seek data-efficient alternatives to classification-based pretraining. We propose VirTex -- a pretraining approach using semantically dense captions to learn visual representations. We train convolutional networks from scratch on COCO Captions, and transfer them to downstream recognition tasks including image classification, object detection, and instance segmentation. On all tasks, VirTex yields features that match or exceed those learned on ImageNet -- supervised or unsupervised -- despite using up to ten times fewer images. Authors: Karan Desai, Justin Johnson Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://discord.gg/4H8xxDF BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Hi there! Today we're looking at VirTex: Learning Visual Representations from Textual Annotations by Karan Desai and Justin Johnson of the University of Michigan. At its core, this paper is pretty simple. On a high level, it proposes to take the task of image captioning, where you're given an image and asked to produce a caption for it, train a model to do this, and then take the visual part of that model as a backbone to transfer learn on other visual tasks. And that appears to work surprisingly well if you don't have much data to pre-train on. Alright, as always, if you like content like this, consider sharing it out, subscribing to the channel, or telling me what you think in the comments. So, as I already said, the idea here is pretty simple. People have been looking for pre-training tasks for visual tasks. A visual task is anything where the input is an image; you usually have some sort of neural network that processes the image, and at the end you can have many things. You could have a classifier that classifies the image into one of many classes. If you know ImageNet, that's a thing: if there's a cat here, the ImageNet classifier would say cat. Or you could have an object detector that tries to predict where in the image the cat is, with a bounding box. Or semantic segmentation, where all of these pixels here are cat and maybe all of these pixels here are sky, so it labels every pixel. There are many visual tasks you can formulate, and they all sort of share the same architecture. Specifically, they all share the first part, the visual encoder, which is usually a convolutional neural network. What really differs between the tasks is mostly the last part that does the actual task; the shared part is often called the backbone. And the idea now is: if I have a bunch of these tasks, sometimes I don't have many labels for them. I don't have enough labeled images to train this big architecture from scratch, as in medical imaging or other domains where images are scarce. So couldn't I somehow come up with a method to create this backbone beforehand, from another dataset? The simplest variant is to take a big image dataset such as ImageNet and train a classifier, like we said, to predict its classes. Because ImageNet has a lot of images, this gives you a backbone. Then whenever you have a different task, you simply take the backbone, transfer it over, and continue training on the other task. That's called transfer learning. The question is: how do you get a good backbone? If you train on something like ImageNet, this is of course a supervised task with a very good learning signal, but even ImageNet has only about 1 million images. The internet, for example, has many more. So you could train on a much bigger dataset collected from the internet, let's call it "internet", but there you don't have labels. So instead of supervised learning, you have to resort to self-supervised learning, where you take an image and, for example, rotate it to the right. So here is our cat; you rotate it to the right.
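To make the shared-backbone idea concrete, here is a minimal PyTorch sketch of my own (not code from the paper): one ResNet-50 trunk whose features feed several task-specific heads. The head shapes and class counts are placeholders chosen for illustration.

```python
import torch
import torch.nn as nn
import torchvision.models as models

# Shared visual backbone: ResNet-50 without its avgpool + classification head.
resnet = models.resnet50(weights=None)
backbone = nn.Sequential(*list(resnet.children())[:-2])  # output: (B, 2048, 7, 7)

# Different task-specific heads reuse the same features.
classifier_head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                nn.Linear(2048, 1000))    # e.g. ImageNet classes
segmentation_head = nn.Conv2d(2048, 21, kernel_size=1)    # e.g. 21 Pascal VOC classes

x = torch.randn(2, 3, 224, 224)
features = backbone(x)
logits_cls = classifier_head(features)    # (2, 1000)
logits_seg = segmentation_head(features)  # (2, 21, 7, 7); upsampled in practice
```

The point is that only the heads differ per task; whatever makes the backbone produce good features benefits all of them.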
And then you have a classifier that predicts that the image was rotated to the right, and the network you trained that way becomes your backbone. These self-supervised methods work very well, and there are a number of them, for example MoCo. There are also a number of techniques that do supervised pre-training and then transfer learning; you can watch my video on Big Transfer, which is a very large attempt to pre-train a backbone for visual tasks. Alright. Now, the direction so far has been: the more data, the better. ImageNet is a big dataset, so we can train a really good backbone, but the internet is an even bigger dataset, just without labels. So there's a trade-off, but we can potentially train an even better visual backbone to then transfer learn with. This paper goes in a different direction. They say: look, if you go in that direction, you get more images, but less information per image. With ImageNet you at least have a label per image, but if you simply take photos from the internet, you don't even have that, and you have to resort to self-supervision. What if we go in the other direction and look for images with very high-quality annotations, even if we don't have as many? Can we learn good backbones by trading quantity for quality? Their high-quality annotations are descriptions: you have an image and a caption for that image. The paper places annotation types on a line from semantically sparse to semantically dense, and their task is caption generation: given an image, produce a caption. There are datasets you can train this on in a supervised fashion, and of course these are very expensive to create. If you want to create an ImageNet dataset, you have to label each image, but creating a caption dataset is even harder, because a human really needs to sit down, look at the image, and come up with an adequate description. In ImageNet everything is one class, but here the adequate description is something like "an orange and white cat near a plate and a white cake". And of course a caption is ambiguous, so you have to collect multiple captions per image and make sure the annotators do a good job. So these are very expensive datasets, but they are very high quality. Think about how much information a single label carries. ImageNet has a single label per image, say "cat", or "cake" for that matter; that's very few bits of information. But consider the text "an orange and white cat": you know there is a cat, you know it's one cat, and you know its colors. Then you know there is a white cake, so you know the other object, and you know the relation: they are near each other. Same for "a brown and white puppy": one object plus its description; there are apples, there is a green lawn, and the relations between them are also clear.
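Before moving on, here is a quick sketch of what that rotation pretext task from a moment ago looks like in code (my own illustration; the model with its 4-way head is assumed to exist):

```python
import torch
import torch.nn.functional as F

def rotation_pretext_batch(images):
    """Build a self-supervised batch: each image is rotated by 0/90/180/270
    degrees and the rotation index becomes the (free) label."""
    rotated, labels = [], []
    for k in range(4):
        rotated.append(torch.rot90(images, k, dims=(2, 3)))
        labels.append(torch.full((images.size(0),), k, dtype=torch.long))
    return torch.cat(rotated), torch.cat(labels)

# imgs: (B, 3, H, W); model: backbone + 4-way linear head (assumed defined)
# x, y = rotation_pretext_batch(imgs)
# loss = F.cross_entropy(model(x), y)
```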
The puppy is lying on the green lawn and looking at the apples. So the information in captions is much denser than plain labels. And that's the backdrop here: can't we pre-train a backbone from a maybe small dataset that carries this much information, like an image captioning dataset? Their method is nothing more than that: they train image captioning, and then they use the visual backbone for transfer learning. So this is the model. There's an image, and the image goes into this visual backbone, which is a ResNet-50, a very standard convolutional neural network. That gives you features of shape seven by seven by 2048, the standard output of a ResNet-50. From there, they do a linear projection so they can input the visual features into a language model. The language model is just a transformer, actually two transformers, both autoregressive: one transformer tries to predict the caption in the forward direction, and the other tries to predict the caption in the backward direction, on the reversed caption. If you don't know what a transformer is, I've made several videos on transformers; the first one is on Attention Is All You Need, and that's essentially the kind of transformer they use here: multi-head attention, layer normalization, the decoder blocks. Now, the difference between the original Vaswani et al. transformer and this one: in the original transformer, say for machine translation, you would have a French sentence over here and the beginning of the German sentence here, which is what you have already produced, and you ask what the next word should be. The architecture was such that an encoder transformer encodes the source sentence, a decoder transformer processes what you have produced so far, and at some point there is cross-attention: the signal from the encoder goes into the decoder, the decoder incorporates it, and at the end the model predicts what the next word will be. The only difference here is that the encoder is no longer a transformer but this ResNet-50, because now you can think of it as a translation task from images to text: the input is an image, and the conditioning signal that in the original transformer came from the source sentence now comes from the image, from these visual features. Then you simply predict the next word, and you do it in both directions. The reason you can do it in both directions, which is of course not the case for a standard autoregressive task at inference, is that here you only need training, and training can be done with teacher forcing. So you can do it bidirectionally.
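Here is a rough PyTorch sketch of this setup as I understand it (my own illustration, not the authors' code): projected ResNet-50 features act as the cross-attention memory for two shallow autoregressive decoders, one on the caption and one on its reversal, trained with teacher forcing. The vocabulary size, sequence length, and head count are placeholders.

```python
import torch
import torch.nn as nn

class CaptionDecoder(nn.Module):
    """One autoregressive transformer decoder attending to visual features.
    Width/depth roughly follow the paper's baseline (1 layer, width 1024);
    vocab size and max length are placeholders."""
    def __init__(self, vocab=10000, d_model=1024, layers=1, max_len=30):
        super().__init__()
        self.embed = nn.Embedding(vocab, d_model)
        self.pos = nn.Parameter(torch.zeros(max_len, d_model))
        layer = nn.TransformerDecoderLayer(d_model, nhead=16, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=layers)
        self.out = nn.Linear(d_model, vocab)

    def forward(self, tokens, visual):          # visual: (B, 49, d_model)
        t = tokens.size(1)
        causal = torch.triu(torch.full((t, t), float("-inf")), diagonal=1)
        h = self.embed(tokens) + self.pos[:t]
        h = self.decoder(h, visual, tgt_mask=causal)  # cross-attends to image
        return self.out(h)

# ResNet-50 features (B, 2048, 7, 7) -> flatten grid and linearly project.
proj = nn.Linear(2048, 1024)
feats = torch.randn(2, 2048, 7, 7).flatten(2).transpose(1, 2)  # (B, 49, 2048)
visual = proj(feats)                                           # (B, 49, 1024)

fwd, bwd = CaptionDecoder(), CaptionDecoder()
caption = torch.randint(0, 10000, (2, 12))
# Teacher forcing: predict token i+1 from tokens up to i, in both directions.
loss = nn.functional.cross_entropy(
    fwd(caption[:, :-1], visual).transpose(1, 2), caption[:, 1:]
) + nn.functional.cross_entropy(
    bwd(caption.flip(1)[:, :-1], visual).transpose(1, 2), caption.flip(1)[:, 1:]
)
```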
You don't need the language model at inference time. For the downstream tasks, you simply cut off the text part; what remains is your visual backbone, and its features are what you then train your task on. Sometimes you fine-tune the backbone, sometimes you keep it frozen; you can choose. Alright: a convolutional network encodes the images into visual features; those features go into two transformers that both try to predict the caption of the image, one in the forward direction and one backward; and you train the whole thing to predict, as accurately as possible, the gold-standard captions in your dataset. That's it. If you train this model well, it can produce accurate captions for these images, which means it has learned something meaningful about the image, to the degree, of course, that the original caption in your dataset was a good descriptive caption, but we're going to assume that in these datasets this is the case. An interesting detail: in their standard setup, they only have one of these transformer layers, and it's, I think, around 2048 units wide in the hidden dimension. With only one layer, the transformer is not very powerful, so you force most of the power to come from the visual encoder: the visual encoder basically has to do most of the work, and the transformer is simply a very shallow language model on top. That, of course, makes your visual backbone even better. Alright, that's the idea; there's nothing more to it. You train this from scratch, without any pre-trained weights, and then you use the backbone. In the first experiment, they simply train a linear classifier on top of the learned representation: they freeze the backbone and fit a linear classifier, and compare this to baselines. One baseline is ImageNet-supervised, where you use the same backbone architecture but train it on ImageNet in a supervised fashion and then use that backbone for transfer; it's kind of like what Big Transfer does, but just on the regular 1000-class ImageNet. Then you have the unsupervised pre-training baselines, such as MoCo and PIRL. I won't go into PIRL, but MoCo is momentum contrast, one of the self-supervised methods that has been shown to work really well. MoCo-IN is trained on ImageNet, but without the labels, because MoCo is unsupervised, and MoCo-COCO is trained on the COCO dataset. COCO, the image captioning dataset, is what this VirTex paper uses, and what's important to note is that COCO has only about 10% as many images as ImageNet, so it's considerably smaller. Now let's see how these things fare. Right here, the x-axis shows the number of images that the pre-training method trains on. Of course, some of these curves are going to be capped, because for some datasets there just aren't more images available.
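The linear evaluation protocol they use can be sketched in a few lines (again my own illustration; `backbone` is the trunk from the earlier sketch, and the class count is a placeholder):

```python
import torch
import torch.nn as nn

# Linear probe: freeze the pre-trained backbone, train only a linear
# classifier on top of its globally pooled features.
for p in backbone.parameters():
    p.requires_grad = False
backbone.eval()

probe = nn.Linear(2048, 20)               # e.g. 20 Pascal VOC classes
opt = torch.optim.SGD(probe.parameters(), lr=0.1, momentum=0.9)

def probe_step(images, labels):
    with torch.no_grad():                 # features stay fixed
        f = backbone(images).mean(dim=(2, 3))  # global average pool -> (B, 2048)
    loss = nn.functional.cross_entropy(probe(f), labels)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss
```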
So the methods training on COCO are capped here, and the ones training on ImageNet are capped there. And you can already see that VirTex outperforms the ImageNet-supervised baseline when that baseline only gets this many images. The brown curve is when you take one caption per image, but the dataset actually has more than one caption per image, and when you use more than one, you can boost performance a bit further. That works much better than supervised pre-training on ImageNet with about the same number of images. When you use all of ImageNet, you can get to a similar performance, but you need a roughly ten times bigger dataset to get there. So this already shows the advantage. Now also consider the difference to the unsupervised baselines: at the same number of images, the self-supervised baselines are even lower, though with more images they get closer to ImageNet pre-training. In their own papers there is some evidence that if you train self-supervised for long enough you can actually surpass ImageNet-supervised pre-training, but I'm not so sure that's really the case. In any case, you can see the trade-off between higher-quality information on smaller datasets versus lower-quality information on more data. And I guess if you were to pre-train these self-supervised methods on much more data, they might end up even higher than ImageNet. Now, this other graph is the same linear-classifier setup, and here the ImageNet-supervised baseline outperforms VirTex by a lot. So what's happening? This plot is on ImageNet: the task you transfer to is ImageNet itself. Before, it was a neutral task, Pascal VOC, which none of these methods had trained on; they trained on their own datasets, COCO or ImageNet, and then transferred to Pascal. But when the transfer task is ImageNet, naturally the model that was pre-trained in a supervised fashion on ImageNet has a huge advantage, because it has basically already learned the task, whereas VirTex pre-trained on COCO, not on ImageNet. And you can see that, given the same number of pre-training images, VirTex is still fairly close to the ImageNet baseline, which is pretty respectable. Again, of course, if you use more images from the very dataset you then evaluate on, the ImageNet baseline is going to outperform it. But it's pretty cool to see that in the smaller-image regime, and even an order of magnitude lower, it really shines: if you have higher-quality information and make use of it, you don't need as many images. We've known this for a long time in general, but this shows the same for visual transfer learning. So this was with the backbone frozen.
And then we trained a linear classifier on top. They also make a short excursion showing how different parts of their model affect final performance. They find, for example, that bicaptioning, which I believe means forward plus backward captioning, helps significantly compared to forward-only captioning, and that captioning significantly outperforms the other pre-training tasks they could have used. They also investigate how big the models should be. Actually, I was wrong earlier: their baseline is one transformer layer of width 1024. As you make the layer wider, performance generally improves, but I guess they decided against it because the gains are too small to be worth it; likewise, making the transformer deeper improves performance, but again the gains are marginal, so they leave it. So their baseline, as you can see, is the ResNet-50 with one transformer layer of size 1024. Now for the last setting, fine-tuning, which is what most people would do: train a backbone and then fine-tune it on a different dataset or task where you don't have many labels. Here the situation looks a bit different. If you look at tasks on COCO, and there are several, one of them being the image captioning they use for pre-training, you can see that compared to the supervised baseline, VirTex performs about the same or maybe a bit worse, but significantly better than MoCo trained only on COCO. Again, this shows that on the same dataset, higher-quality information makes it worth it. And MoCo trained on ImageNet is just not quite as good as the supervised baseline. All of them, of course, are better than a randomly initialized network trained from scratch; that's the entire point of transfer learning, that you are better than simply learning from scratch. This holds throughout the experiments, except on the LVIS masking task, where VirTex outperforms the other methods significantly. The lower absolute numbers on that task also mean it's harder than the others, so there are more gains to be made, and you could hypothesize that the higher-quality information you put in can be used to greater effect there. So maybe the complexity of a task also influences how well transfer learning works when you come from a high-quality versus a low-quality pre-training task. Lastly, they compare on Pascal VOC object detection and iNaturalist classification, which I believe is also a transfer learning task with fine-tuning. As you can see, VirTex can hold up against the supervised baseline or even outperform it at times, where the green triangles mean it outperforms by a significant margin, but on this last task it again lags behind. So I think the point of the paper isn't really to show that this is the best thing ever.
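Fine-tuning differs from the linear probe only in that the backbone is updated too, typically with a gentler learning rate. A quick sketch under the same assumptions as before (`backbone` and a `task_head` are assumed to exist; the rates are placeholders):

```python
import torch

# Fine-tuning: unfreeze the backbone and train it jointly with a new head,
# usually with a smaller learning rate for the pre-trained trunk.
for p in backbone.parameters():
    p.requires_grad = True

opt = torch.optim.SGD(
    [
        {"params": backbone.parameters(), "lr": 1e-3},   # gentle updates
        {"params": task_head.parameters(), "lr": 1e-2},  # fresh head, larger lr
    ],
    momentum=0.9,
)
# then the usual loop: loss = criterion(task_head(backbone(x)), y); ...
```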
But the point of the paper is to show something about how to go about pre-training. The common assumption is that you need more and more data for your model to learn about the domain, and they conclude: no, actually, you can do with very few data points as long as they have high-quality annotations. I think that's the point of the paper. They don't always outperform the other baselines, but they keep the performance about the same, which basically means this is a viable option. Here is a pretty cool result where they visualize the attention of their image captioning model, and you can really see that the captioning model has learned something meaningful about the image. For "a bird flying", the attention is mainly on the bird; at "over", the attention widens out over the image; at "the air", the attention sits in the sky and on the ocean; and at "the ocean", it settles on the ocean itself. They have a bunch of these images and they're pretty cool. Here, "a dog": attention focused on the dog; "riding on": the attention moves downward, because "riding on" means there's probably something below the dog; "a surfboard": now the attention is fully on the surfboard; "in": as soon as you say "in", the attention widens out again. I think that's a fairly cool demonstration that the model understands the "in" relation: if it is focused on something, and that something is in something else, it widens the attention to see what it is in, the ocean, and then focuses the attention on the ocean. So that's a pretty cool result. I guess we already knew this, because we could train image captioning models before; it just shows that it actually makes sense to use them as a pre-training task for backbones. Now, what's the future of this? In their introduction the authors claim this has a good future, because they only train on this small dataset, smaller than ImageNet, and already get the same performance as training on the whole ImageNet dataset in a supervised fashion. Of course, they're also supervised, but with ten times fewer images. And they say something to the effect that it would be pretty easy to collect more data for this task, because the internet is full of images, and mostly these images come with some text: descriptions, surrounding text, things people write about them. You could mine Twitter, and the responses when someone posts an image might tell you something about it. But this counteracts their own notion that these are very high-quality labels. Their entire point was that image captioning datasets like COCO have very high-quality annotations: the text is a genuinely descriptive account of the image that tries to capture what a human can see visually in it. As soon as you go out to the internet and collect text around images, that's not going to be the case, and the information is again going to be quite low quality.
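The visualizations themselves are mechanically simple: for each decoded word you take the cross-attention weights over the 7x7 visual grid and blow them up to image resolution. A hedged sketch (how exactly the weights are pulled out of the decoder depends on the implementation):

```python
import torch
import torch.nn.functional as F

def attention_overlay(attn_weights, image_hw=(224, 224)):
    """Turn one decoding step's cross-attention over the 7x7 visual grid
    into a full-resolution heatmap to overlay on the input image."""
    grid = attn_weights.reshape(1, 1, 7, 7)        # (49,) -> spatial grid
    heat = F.interpolate(grid, size=image_hw, mode="bilinear",
                         align_corners=False)
    return heat.squeeze()                          # (224, 224) heatmap
```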
And so I doubt that the performance here would hold up, or that the claim that you can easily create more data for this task holds up. That's my one worry about the future of this, but it's definitely cool and demonstrates the quality-quantity trade-off very well. Alright, that was my two cents on the paper. I invite you to read it, tell me in the comments what you think about it, and I'll see you next time.
[ { "end": 6.08, "start": 0, "text": " Hi there! Today we're looking at Vertex Learning Visual Representations from Textual Annotations" }, { "end": 13, "start": 6.08, "text": " by Karen Desai and Justin Johnson of the University of Michigan. So this paper at its core is pretty" }, { "end": 18.8, "start": 13, "text": " simple. On a high level it proposes to take the task of image captioning, which is where" }, { "end": 23.8, "start": 18.8, "text": " you're given an image and you're asked to produce a caption for the image, and basically" }, { "end": 30.96, "start": 23.8, "text": " train a model to do this, and then just take the visual part of it as a baseline to transfer" }, { "end": 38.38, "start": 30.96, "text": " learn on other visual tasks. And that appears to work surprisingly well if you don't have" }, { "end": 45.6, "start": 38.38, "text": " much data. So if you don't have much data to pre-train on, this appears to work very" }, { "end": 53.66, "start": 45.6, "text": " well. Alright, as always, if you like content like this, then consider sharing it out, subscribing" }, { "end": 61.64, "start": 53.66, "text": " to the channel, or tell me what you think in the comments. So as I already said, the" }, { "end": 68.84, "start": 61.64, "text": " idea here is pretty simple. So people have been looking for pre-training tasks for visual" }, { "end": 75.72, "start": 68.84, "text": " tasks. So a visual task is anything where the input is an image, and then you usually" }, { "end": 80.44, "start": 75.72, "text": " have some sort of neural network that processes the image, and then at the end you can have" }, { "end": 85.39999999999999, "start": 80.44, "text": " many things. So you could have a classifier that classifies the image into one of many" }, { "end": 93.96, "start": 85.39999999999999, "text": " classes. If you know ImageNet, that's a thing. So if there's a cat here, then the ImageNet" }, { "end": 100.92, "start": 93.96, "text": " classifier here would say cat. Or you could have something like an object detector that" }, { "end": 108.4, "start": 100.92, "text": " tries to predict on the image where the cat is, like with a bounding box. You could have" }, { "end": 115.32000000000001, "start": 108.4, "text": " a semantic segmentation where it's like all of these pixels here are cats, and maybe all" }, { "end": 122.92, "start": 115.32000000000001, "text": " of these pixels here are sky. And so it labels every pixel. There's many visual tasks that" }, { "end": 128.36, "start": 122.92, "text": " you can formulate, and they all sort of share the same architecture. And specifically, they" }, { "end": 135.08, "start": 128.36, "text": " all share this part right here. If you will, this is the visual encoder. It's usually a" }, { "end": 140.68, "start": 135.08, "text": " convolutional neural network. And what's really different between the tasks is mostly this" }, { "end": 146.68, "start": 140.68, "text": " last part here that does the actual task. But this is often called the backbone. So" }, { "end": 154.32000000000002, "start": 146.68, "text": " this is the backbone. And the idea now is, if I have a bunch of these tasks, sometimes" }, { "end": 158.28, "start": 154.32000000000002, "text": " I don't have many labels for these tasks. I don't have many labeled images so that I" }, { "end": 165.48, "start": 158.28, "text": " could train this big architecture from scratch, like in medical images or just in domains" }, { "end": 170.68, "start": 165.48, "text": " where you don't have many images. 
So couldn't I somehow come up with a method to create" }, { "end": 178.44, "start": 170.68, "text": " this backbone beforehand? So to create backbone given another dataset. And the simplest variant" }, { "end": 185.72, "start": 178.44, "text": " here is you take a big image dataset, such as ImageNet, and then you train a classifier," }, { "end": 190.44, "start": 185.72, "text": " like we said, to predict some classes on it. And then because an ImageNet has a lot of" }, { "end": 194.6, "start": 190.44, "text": " images, then this is your backbone. And then whenever you have a different task, you simply" }, { "end": 201.92, "start": 194.6, "text": " take the backbone, transfer it over, and then train the other. Basically, you continue training" }, { "end": 207.4, "start": 201.92, "text": " on the other task. That's called transfer learning. The question is, how do you get" }, { "end": 214.64, "start": 207.4, "text": " a good backbone? So if you train on something like ImageNet, then this is of course a supervised" }, { "end": 220, "start": 214.64, "text": " task. You have a very good learning signal, but even ImageNet has like 1 million images." }, { "end": 224.83999999999997, "start": 220, "text": " But for example, the internet has many more images. So what you could do is you could" }, { "end": 229.95999999999998, "start": 224.83999999999997, "text": " train on this much bigger dataset that you collected from the internet. Let's call it" }, { "end": 235.23999999999998, "start": 229.95999999999998, "text": " internet. But there you don't have labels, right? So what you'll have to resort to is" }, { "end": 240, "start": 235.23999999999998, "text": " instead of supervised learning is self supervised learning, where you have an image and maybe" }, { "end": 245.64, "start": 240, "text": " you rotate it to the right. So here is our cat. You rotate it to the right. And then" }, { "end": 252.6, "start": 245.64, "text": " you have a classifier that predicts that this image was rotated to the right. And then that" }, { "end": 259.72, "start": 252.6, "text": " will become your backbone. These self supervised methods, they work very well. There is a different" }, { "end": 265.6, "start": 259.72, "text": " number of them. For example, MoCo, things like this. And there's also a number of techniques" }, { "end": 271.28000000000003, "start": 265.6, "text": " that do supervised pre training and then transfer learning. You can maybe watch my video on" }, { "end": 277.84000000000003, "start": 271.28000000000003, "text": " big transfer, which is a very large attempt to do to pre train a backbone for visual" }, { "end": 286, "start": 277.84000000000003, "text": " tasks. All right. Now, you can see right here that the sort of direction is that the more" }, { "end": 290.88, "start": 286, "text": " data the better. So that's sort of the idea here that ImageNet is a big data set, we can" }, { "end": 295.56, "start": 290.88, "text": " train a really good backbone. But you know, the internet is an even bigger data set, we" }, { "end": 300.15999999999997, "start": 295.56, "text": " don't have labels. So there's a trade off. But we potentially can train an even better" }, { "end": 306.12, "start": 300.15999999999997, "text": " visual backbone to then transfer learn with. This paper goes into a different direction." 
}, { "end": 311.92, "start": 306.12, "text": " They say, look, if you go in this direction right here, you get more images, but you get" }, { "end": 318.2, "start": 311.92, "text": " less information per image. So with ImageNet, at least you have the label, right per image." }, { "end": 322.8, "start": 318.2, "text": " But if you simply take a photo of the internet, you don't even have to label you have to resort" }, { "end": 329.47999999999996, "start": 322.8, "text": " to self supervised. What if we go into the other direction, and we look for images that" }, { "end": 336.36, "start": 329.47999999999996, "text": " have very high quality annotations, but maybe we don't have as many? Can we can we do the" }, { "end": 343.88, "start": 336.36, "text": " same thing? Can we learn good backbones by trading off quality for quantity in this case," }, { "end": 352.68, "start": 343.88, "text": " and their quantity and quality trade off is they go for descriptions. So they'll go for" }, { "end": 359.68, "start": 352.68, "text": " something like this, where you'll have an image, and you'll have a caption for the image." }, { "end": 366.08, "start": 359.68, "text": " And so they show these on a line here, semantically dense, semantically sparse, but their task" }, { "end": 372.6, "start": 366.08, "text": " is going to be caption generation. So their back their mod, their task is given an image," }, { "end": 378.56, "start": 372.6, "text": " I want to produce a caption. And there are data sets that you can train this from in" }, { "end": 383.76000000000005, "start": 378.56, "text": " a supervised fashion, which of course, these are very expensive to create. I mean, if you" }, { "end": 389.3, "start": 383.76000000000005, "text": " want to create an ImageNet data set, then you have to label each image. But if you want" }, { "end": 394.58000000000004, "start": 389.3, "text": " to create a caption data set, that's even harder because human really needs to sit down," }, { "end": 400.36, "start": 394.58000000000004, "text": " look at the image. And in ImageNet, everything is like one class. But here you need to look" }, { "end": 404.40000000000003, "start": 400.36, "text": " at the image. And then you'll have to come up with like an adequate description. Here" }, { "end": 411.04, "start": 404.40000000000003, "text": " the adequate description is an orange, sorry, an orange and white, an orange and white cat" }, { "end": 417.84000000000003, "start": 411.04, "text": " near a plate, and the white cake. Okay. So that's, that's the caption right here. And" }, { "end": 423.52000000000004, "start": 417.84000000000003, "text": " of course, the caption is ambiguous. So you'll have to collect multiple captions per image." }, { "end": 427.52000000000004, "start": 423.52000000000004, "text": " And you'll have to make sure that the humans that do this do a good job and so on. So this" }, { "end": 432.96, "start": 427.52, "text": " these are very, very expensive data sets, but they are very high quality. If you think" }, { "end": 437.96, "start": 432.96, "text": " of what does what does a single label, let's just take ImageNet, ImageNet has a single" }, { "end": 444.52, "start": 437.96, "text": " label per class. Let's say this is cat or cake for that matter. It just sort of gives" }, { "end": 450.88, "start": 444.52, "text": " you very few bits of information. 
But if you consider the text here, an orange cat and" }, { "end": 463.52, "start": 457.6, "text": " a white cat, an orange and white cat, you know that there is a cat, right? You know" }, { "end": 469.71999999999997, "start": 463.52, "text": " that it's one cat, you know what its color is, orange and white, then you know that there" }, { "end": 476.68, "start": 469.71999999999997, "text": " is a white cake, right? So you know the other object. And you know the relation, they are" }, { "end": 483.44, "start": 476.68, "text": " near each other. Okay, same for here, a brown and white puppy. So this is one object and" }, { "end": 488.56, "start": 483.44, "text": " the description of the object. There are apples, there is a green lawn, and" }, { "end": 496.4, "start": 488.56, "text": " the relations between them are also clear. The puppy is lying on the green lawn and looking" }, { "end": 504.9, "start": 496.4, "text": " at the apples. So the information in captions is so much more dense than just labels. And" }, { "end": 510.67999999999995, "start": 504.9, "text": " that's the backdrop here to say: hey, can't we pre-train" }, { "end": 519.36, "start": 510.67999999999995, "text": " a backbone from maybe a small data set, but one that has so much information, like an" }, { "end": 523.4399999999999, "start": 519.36, "text": " image caption data set? Okay, so their method is nothing more: they train image captioning," }, { "end": 529.3, "start": 523.4399999999999, "text": " and then they use the visual backbone for transfer learning. So this is the model: there's" }, { "end": 536.3199999999999, "start": 529.3, "text": " an image, the image goes into this visual backbone right here, which is a ResNet-50." }, { "end": 543.3399999999999, "start": 536.3199999999999, "text": " So this is a very, very standard convolutional neural network. And that gives you these features." }, { "end": 550, "start": 543.3399999999999, "text": " So these features are seven by seven by 2048. This is the standard output of a ResNet-" }, { "end": 556.3199999999999, "start": 550, "text": " 50. And then from this part on, they do a linear projection, such that they can now" }, { "end": 564.34, "start": 556.32, "text": " input it into a language model. So they have visual features. And now they feed those into" }, { "end": 571.1600000000001, "start": 564.34, "text": " the language model. And the language model is just a transformer, actually two transformers." }, { "end": 576.4000000000001, "start": 571.1600000000001, "text": " They're both autoregressive: one transformer tries to predict the caption" }, { "end": 582.72, "start": 576.4000000000001, "text": " in a forward way. And the other transformer tries to predict the caption in a backward" }, { "end": 586.8000000000001, "start": 582.72, "text": " way. And that's down here. So this direction is backward, because the caption has been reversed." }, { "end": 592.96, "start": 586.8000000000001, "text": " If you don't know what a transformer is, I've made several videos on transformers. The first" }, { "end": 600.5600000000001, "start": 592.96, "text": " one is Attention Is All You Need. And that's sort of the same kind of transformer" }, { "end": 607.48, "start": 600.5600000000001, "text": " they use here. So as you can see right here, you have this multi-head attention, the layer" }, { "end": 614.24, "start": 607.48, "text": " normalization, attention from the decoder. 
Now the difference between the original Vaswani" }, { "end": 614.24, "start": 607.48, "text": " Attention-Is-All-You-Need transformer and this one is that in the original transformer," }, { "end": 619.08, "start": 614.24, "text": " you had, for example, if you had a machine translation task, you would have the French," }, { "end": 626.64, "start": 619.08, "text": " maybe a French sentence over here. And then you would have the beginnings of a German sentence" }, { "end": 630.36, "start": 626.64, "text": " here, right? This is what you have already produced. And now you're asking what should" }, { "end": 637.22, "start": 630.36, "text": " the next word be. And the architecture was such that there is a decoder transformer right" }, { "end": 644.5600000000001, "start": 637.22, "text": " here and that there is an encoder transformer that encodes whatever you already had. And" }, { "end": 649.96, "start": 644.5600000000001, "text": " then at some point there is this cross attention, right? There is the signal from the decoder" }, { "end": 657.08, "start": 649.96, "text": " going into the encoder and the encoder incorporating that. And then at the end right here, the encoder" }, { "end": 662.36, "start": 657.08, "text": " would predict, or the entire transformer would predict, what the next word will be. The only" }, { "end": 669.52, "start": 662.36, "text": " difference right here is that the decod-, sorry, I mixed this up. This is the decoder." }, { "end": 677.2, "start": 669.52, "text": " This is the encoder. The only difference right here is that this encoder is no longer a transformer," }, { "end": 685.02, "start": 677.2, "text": " but is this ResNet-50. Okay, because now you have an image as the input:" }, { "end": 690.72, "start": 685.02, "text": " you can think of it like a translation task. You want to translate from images to text." }, { "end": 696.6800000000001, "start": 690.72, "text": " Okay, so your input is going to be an image, and the signal, like it would go in" }, { "end": 701.88, "start": 696.6800000000001, "text": " the original transformer into the decoder, it would come from the image. So from these" }, { "end": 711.6800000000001, "start": 701.88, "text": " visual features it goes here. So in this drawing, this thing is going in here. And then you" }, { "end": 716.6600000000001, "start": 711.6800000000001, "text": " simply predict the next word, and you do it in both directions. And the reason you can" }, { "end": 724, "start": 716.66, "text": " do it in both directions here, which is not the case, of course, if you have a" }, { "end": 728.16, "start": 724, "text": " decoder in a standard transformer task, is because you don't need to do inference here," }, { "end": 734.0799999999999, "start": 728.16, "text": " you just need to do training. And training you can do using teacher forcing. And so you" }, { "end": 740.28, "start": 734.0799999999999, "text": " can do this in a bidirectional way. You don't need this at inference time." }, { "end": 748, "start": 740.28, "text": " So at inference time, you simply cut off this part right here. That's your visual backbone." }, { "end": 753.72, "start": 748, "text": " Okay. And these features here, those are going to be the features that you then train your" }, { "end": 758.88, "start": 753.72, "text": " task on. And sometimes you fine tune this, or sometimes you keep it frozen, you can choose" }, { "end": 766.06, "start": 758.88, "text": " that. 
Alright, so: convolutional network to encode the images, that gives you features," }, { "end": 771.4799999999999, "start": 766.06, "text": " visual features. Those visual features go into two transformers, both try to predict" }, { "end": 778.3199999999999, "start": 771.4799999999999, "text": " the caption of the image, one in a forward motion, one in a backward motion. And you" }, { "end": 784.8199999999999, "start": 778.3199999999999, "text": " train it to predict as accurately as possible the gold standard captions that you have in" }, { "end": 790.1999999999999, "start": 784.8199999999999, "text": " your data set. That's it. If you train this model well, that means the model can produce" }, { "end": 796.2, "start": 790.2, "text": " accurate captions for these images, which means that it has learned something meaningful" }, { "end": 800.9200000000001, "start": 796.2, "text": " about the image, to the degree, of course, that the original caption that was in your data" }, { "end": 806.7800000000001, "start": 800.9200000000001, "text": " set was a good descriptive caption. But we're just going to assume that in these" }, { "end": 813.6400000000001, "start": 806.7800000000001, "text": " data sets, this is the case. Alright, that's what they do. Now, an interesting thing here" }, { "end": 820.4399999999999, "start": 813.64, "text": " is that in their standard setup, they only have one" }, { "end": 825.96, "start": 820.4399999999999, "text": " of these transformer layers. So of these things right here, they only have one. And that's," }, { "end": 832.92, "start": 825.96, "text": " I think, 2000 units wide, or sorry, the hidden dimension is 2000 units" }, { "end": 838.8, "start": 832.92, "text": " or 2048. But they only have one layer. So what that means is that this transformer is" }, { "end": 846.68, "start": 838.8, "text": " not very powerful. So you force most of the power to come from the visual" }, { "end": 852.4, "start": 846.68, "text": " encoder; the visual encoder basically has to do most of the work. And then the transformer" }, { "end": 861.12, "start": 852.4, "text": " is going to simply be a very shallow language model on top of that. And that of course makes" }, { "end": 867.8, "start": 861.12, "text": " your visual backbone even better. Alright, we can pretty much skip the rest. That's the" }, { "end": 871.5999999999999, "start": 867.8, "text": " idea. There's nothing more to it. You train this from the beginning, you don't" }, { "end": 877.64, "start": 871.5999999999999, "text": " use anything pre-trained, you train this from scratch. And then you use this. And then in" }, { "end": 882.9599999999999, "start": 877.64, "text": " the first experiment, they simply train a linear classifier on top of that representation." }, { "end": 888, "start": 882.9599999999999, "text": " So they freeze the backbone, and then they use a linear classifier. And they compare" }, { "end": 893.88, "start": 888, "text": " this to baselines. So one of the baselines is ImageNet-supervised, where you use the" }, { "end": 900.28, "start": 893.88, "text": " same backbone, but you train it on ImageNet in a supervised fashion. Okay, and then you" }, { "end": 904.2, "start": 900.28, "text": " use that backbone to transfer-learn, it's kind of like what Big Transfer does," }, { "end": 914.24, "start": 904.2, "text": " but just on the regular 1000-class ImageNet baseline. 
Then you have the sort of the unsupervised" }, { "end": 922.6, "start": 914.24, "text": " pre-training ones: so MoCo, PIRL and so on. I don't want to go into PIRL, but MoCo is" }, { "end": 927.32, "start": 922.6, "text": " this momentum contrast, which is one of these self-supervised methods that has been shown to" }, { "end": 935.36, "start": 927.32, "text": " work really, really well. And this MoCo-IN is trained on ImageNet, but now without" }, { "end": 941.6800000000001, "start": 935.36, "text": " the labels, because MoCo is unsupervised. And MoCo-COCO is trained on the COCO data" }, { "end": 949.32, "start": 941.6800000000001, "text": " set. And the COCO data set is what this paper here, the VirTex paper, uses. COCO is this image" }, { "end": 957.5600000000001, "start": 949.32, "text": " captioning data set. Now what's important to note is that COCO has only about 10% of" }, { "end": 968.12, "start": 957.5600000000001, "text": " the images of ImageNet. So it's considerably smaller. Now let's see how these things fare." }, { "end": 974.72, "start": 968.12, "text": " Right here, you can see on the x axis the number of images, okay, the number of images" }, { "end": 980.4, "start": 974.72, "text": " that the data set, or that the pre-training method, trains on. Now, of course, some of" }, { "end": 985.0400000000001, "start": 980.4, "text": " these are going to be capped, because for some data sets, there are just not more images" }, { "end": 990.9200000000001, "start": 985.0400000000001, "text": " available, right? So they're going to be capped here, the ones that are training on COCO," }, { "end": 994.5600000000001, "start": 990.9200000000001, "text": " and the ones that are training on ImageNet are going to be capped here. And you can already" }, { "end": 1004.26, "start": 994.5600000000001, "text": " see that VirTex outperforms the ImageNet-supervised baseline by quite a bit when" }, { "end": 1010, "start": 1004.26, "text": " you only give it this many images. Okay, so the way you do it is, in this case, you simply" }, { "end": 1017.08, "start": 1010, "text": " train these models. Now the brown one is when you take one caption per image, but the data" }, { "end": 1021.6, "start": 1017.08, "text": " set actually has more than one caption per image. So when you use more than one," }, { "end": 1029.22, "start": 1021.6, "text": " you can still boost your performance a bit. And that works way better than when you do" }, { "end": 1035.6000000000001, "start": 1029.22, "text": " the supervised pre-training on ImageNet, which would get you here with about the same" }, { "end": 1040.64, "start": 1035.6000000000001, "text": " amount of images. Now, when you use all of ImageNet, you can see here you can get to" }, { "end": 1046.56, "start": 1040.64, "text": " a similar performance right here, but you have to use a 10 times bigger data set to" }, { "end": 1054.52, "start": 1046.56, "text": " get there. Right, so this already shows you sort of the advantage here. Now also consider" }, { "end": 1060.24, "start": 1054.52, "text": " the difference to the unsupervised ones. So if you look at the same amount of images," }, { "end": 1068.68, "start": 1060.24, "text": " the unsupervised, self-supervised baselines are even lower. But if you go to more images," }, { "end": 1073.8, "start": 1068.68, "text": " they sort of get closer to ImageNet. 
And in their own papers, there is some" }, { "end": 1080.84, "start": 1073.8, "text": " evidence that if you self-supervised train for long enough, you can actually surpass" }, { "end": 1089.08, "start": 1080.84, "text": " ImageNet-supervised pre-training, but I'm not so sure that that's really the case. But" }, { "end": 1098.56, "start": 1089.08, "text": " you can see here the trade-off between higher quality information, but smaller data sets," }, { "end": 1108.84, "start": 1098.56, "text": " versus lower quality information, but more data per data set. And yeah, I guess if" }, { "end": 1114.6799999999998, "start": 1108.84, "text": " you were to pre-train these self-supervised methods with lots more data" }, { "end": 1122.52, "start": 1114.6799999999998, "text": " in a self-supervised manner, they would maybe end up even higher than ImageNet. Now this" }, { "end": 1126.9599999999998, "start": 1122.52, "text": " graph here is sort of the same thing, where they also train a linear classifier. And you" }, { "end": 1132.24, "start": 1126.9599999999998, "text": " can see right here that now the ImageNet-supervised baseline is outperforming VirTex" }, { "end": 1137.28, "start": 1132.24, "text": " by a lot. So what's happening here? Now, this here is actually on ImageNet. So" }, { "end": 1143.6399999999999, "start": 1137.28, "text": " the task here that you transfer-learn is ImageNet. Here it was a neutral task, Pascal" }, { "end": 1149.6399999999999, "start": 1143.6399999999999, "text": " VOC. None of these methods have trained on Pascal. They simply have trained on their" }, { "end": 1153.76, "start": 1149.6399999999999, "text": " own data set. These have trained on COCO. This has trained on ImageNet. And then they" }, { "end": 1159.8, "start": 1153.76, "text": " have transfer-learned to Pascal. Now here, the transfer learning task" }, { "end": 1167.68, "start": 1159.8, "text": " is ImageNet. So naturally, the thing that was pre-trained in a supervised fashion" }, { "end": 1173, "start": 1167.68, "text": " on ImageNet is going to have a huge advantage in this task, because it basically has already" }, { "end": 1180.32, "start": 1173, "text": " learned the task beforehand, whereas VirTex has pre-trained on COCO, not on Image-" }, { "end": 1186.44, "start": 1180.32, "text": " Net. And you can see, if you give it the same amount of images for pre-training, it's" }, { "end": 1191.96, "start": 1186.44, "text": " actually fairly close to the ImageNet baseline. So that's pretty respectable" }, { "end": 1196.68, "start": 1191.96, "text": " right there. Now, again, of course, if you use more images on the same data set that" }, { "end": 1201.6000000000001, "start": 1196.68, "text": " you then train for, then of course the ImageNet baseline is going to outperform" }, { "end": 1210.06, "start": 1201.6000000000001, "text": " it. But it's pretty cool to see here that in this smaller image regime, and also consider" }, { "end": 1215.98, "start": 1210.06, "text": " this down here, if you go even an order of magnitude lower, it's really shining that" }, { "end": 1221.98, "start": 1215.98, "text": " if you have higher quality information, and you make use of it, you don't need as many" }, { "end": 1229.2, "start": 1221.98, "text": " images. And now, we knew this for a long time. But this now is showing the same for transfer" }, { "end": 1238.46, "start": 1229.2, "text": " learning, for visual transfer learning. 
So this was when we froze the backbone. And then" }, { "end": 1246.88, "start": 1238.46, "text": " we trained a linear classifier on top. They make a short excursion here and show" }, { "end": 1253.2, "start": 1246.88, "text": " how different parts of their model affect their final performance. And they find that," }, { "end": 1261.52, "start": 1253.2, "text": " for example, the bicaptioning, which I believe is the forward and backward captioning," }, { "end": 1268.7, "start": 1261.52, "text": " significantly helps, for example, compared to only forward captioning. And they also" }, { "end": 1274.6399999999999, "start": 1268.7, "text": " find that it significantly outperforms other pre-training tasks that they could do. And" }, { "end": 1280.44, "start": 1274.6399999999999, "text": " they also investigate how big their models should be. So here, this is their baseline" }, { "end": 1291.94, "start": 1280.44, "text": " model. Oh, I was wrong, actually: it's one layer of width 1024. You can see," }, { "end": 1297.68, "start": 1291.94, "text": " as you make the layer bigger and bigger, that generally helps. But I guess they decided" }, { "end": 1304.8200000000002, "start": 1297.68, "text": " against it because the gains are too little to make it worth it. And also, if" }, { "end": 1311.08, "start": 1304.82, "text": " you make the network deeper here, you make the transformer have more layers, the performance" }, { "end": 1315.24, "start": 1311.08, "text": " goes up. But again, the gains are marginal. So I guess they're going to leave it away." }, { "end": 1325.48, "start": 1315.24, "text": " So their baseline, as you can see, is this ResNet-50 with the one layer of size 1024." }, { "end": 1333.4199999999998, "start": 1325.48, "text": " So this is now the last task, it's the fine-tuning task. So this is what most people would" }, { "end": 1338.76, "start": 1333.42, "text": " do: they would train a backbone, and then they would fine-tune it on a different data" }, { "end": 1344.66, "start": 1338.76, "text": " set, or on a different task where they don't have many labels. And here the situation looks" }, { "end": 1352.1000000000001, "start": 1344.66, "text": " a bit different. So if you look at, for example, a task on COCO, so there are several tasks" }, { "end": 1358.14, "start": 1352.1000000000001, "text": " on COCO, one of them is image captioning, which they use for pre-training." }, { "end": 1365.92, "start": 1358.14, "text": " If you do other tasks on COCO, you can see right here that compared to the supervised" }, { "end": 1374.6000000000001, "start": 1365.92, "text": " baseline, this VirTex performs about the same or maybe a bit worse. But what you can" }, { "end": 1382.68, "start": 1374.6000000000001, "text": " see is that it performs significantly better than, for example, MoCo that was only trained on" }, { "end": 1388.42, "start": 1382.68, "text": " COCO. So again, this shows that if you have the same data set, higher quality information" }, { "end": 1394.6000000000001, "start": 1388.42, "text": " makes it worth it. And it's even better, as you can see: MoCo that was trained on Image-" }, { "end": 1400.78, "start": 1394.6000000000001, "text": " Net is just not quite as good as the supervised baseline. But all of them, of course, are" }, { "end": 1405.48, "start": 1400.78, "text": " better than just a randomly initialized network that is trained from scratch. 
I mean, that's" }, { "end": 1411.3400000000001, "start": 1405.48, "text": " the entire point of transfer learning, that you are better than simply learning from scratch." }, { "end": 1419.3799999999999, "start": 1411.34, "text": " And this shows throughout this experiment, except in this LVIS masking task, where they" }, { "end": 1427.22, "start": 1419.3799999999999, "text": " do outperform the other methods significantly. Now the lower" }, { "end": 1433.6, "start": 1427.22, "text": " numbers on this task also mean that the task is harder than these tasks right" }, { "end": 1438.74, "start": 1433.6, "text": " here. And therefore, there are more gains to be made. And therefore, you could hypothesize" }, { "end": 1446.16, "start": 1438.74, "text": " that the higher-quality information that you input can be used in a better way." }, { "end": 1451.76, "start": 1446.16, "text": " So maybe also how complex a task is might have an influence on" }, { "end": 1457.3, "start": 1451.76, "text": " how well the transfer learning works, if you come from a high quality transfer learning" }, { "end": 1462.06, "start": 1457.3, "text": " task versus a low quality transfer learning task." }, { "end": 1473.26, "start": 1462.06, "text": " Yeah, so lastly they compare here again with Pascal VOC object detection, and" }, { "end": 1479.46, "start": 1473.26, "text": " this iNaturalist classification, where I believe this is also a transfer learning task" }, { "end": 1487.6399999999999, "start": 1479.46, "text": " with fine-tuning. And as you can see, they can also hold up against the supervised baseline," }, { "end": 1493.0400000000002, "start": 1487.64, "text": " or sometimes even outperform it; the green triangles mean that they outperform it by" }, { "end": 1499.3000000000002, "start": 1493.0400000000002, "text": " a significant margin. But then on this task right here, they again lag behind. So I think" }, { "end": 1507.3400000000001, "start": 1499.3000000000002, "text": " the point of the paper isn't really to show that this is the best thing ever. But" }, { "end": 1513.7, "start": 1507.3400000000001, "text": " the point of the paper is to show how you can go about pre-training. Basically, the" }, { "end": 1518.42, "start": 1513.7, "text": " common assumption is that you need more and more and more data for your model" }, { "end": 1525.66, "start": 1518.42, "text": " to learn about the data set. And they conclude here: no, actually, you can do with very" }, { "end": 1532.76, "start": 1525.66, "text": " few data points, as long as they have high quality annotations. Okay, so I think that's" }, { "end": 1537.7, "start": 1532.76, "text": " the point of the paper. And they don't always outperform the other baselines" }, { "end": 1544.14, "start": 1537.7, "text": " and whatnot. But they keep the performance the same, which basically means" }, { "end": 1550.38, "start": 1544.14, "text": " that this is an option. Here is a pretty cool result where they visualize the attention" }, { "end": 1555.06, "start": 1550.38, "text": " of their image captioning model, because they train an image captioning model. And you can" }, { "end": 1559.98, "start": 1555.06, "text": " really see that the image captioning model learns something meaningful about the image."
}, { "end": 1566.74, "start": 1559.98, "text": " So when it's a bird flying, the attention is mainly on the bird, as you can see, then" }, { "end": 1573.66, "start": 1566.74, "text": " over the the attention widens out over the image, air. So over the air, the attention" }, { "end": 1580.02, "start": 1573.66, "text": " is here in the sky and on the on the ocean. And then it goes near the ocean. And then" }, { "end": 1586.54, "start": 1580.02, "text": " the attention is on the ocean itself. As you can see, so they have a bunch of these images" }, { "end": 1592.6200000000001, "start": 1586.54, "text": " and they're they're pretty cool here a dog, so focused on the dog riding on and then you" }, { "end": 1600.06, "start": 1592.62, "text": " can see the attention going down because on is riding on means probably there's something" }, { "end": 1608.3, "start": 1600.06, "text": " below the dog. A surfboard. Now the attention is fully on the surfboard in. So as soon as" }, { "end": 1614.4199999999998, "start": 1608.3, "text": " you say in the attention, as you can see, it widens out. So I think that's that's fairly" }, { "end": 1620.6599999999999, "start": 1614.4199999999998, "text": " cool, fairly cool demonstration that the model understands sort of the the in relation, namely," }, { "end": 1627.0600000000002, "start": 1620.66, "text": " if it is focused on something, and that something is in something else, then it widens the attention" }, { "end": 1634.3400000000001, "start": 1627.0600000000002, "text": " out to see what it is in, okay, the ocean, and then it focuses the attention on the ocean." }, { "end": 1638.8600000000001, "start": 1634.3400000000001, "text": " So that's, that's a pretty, that's a pretty cool result. I guess we already knew this" }, { "end": 1644.4, "start": 1638.8600000000001, "text": " because we could train image captioning models before. It's just to show that it actually" }, { "end": 1651.74, "start": 1644.4, "text": " makes sense to use them as a pre training task for backbones. Now, what's the future" }, { "end": 1657.9, "start": 1651.74, "text": " of this, the authors here in their introduction, they make a claim that this has a good future" }, { "end": 1663.74, "start": 1657.9, "text": " because they here they only train on this small data set, right, it's smaller than image" }, { "end": 1669.18, "start": 1663.74, "text": " net, as you can see here, and they already get the same performance as if you train on" }, { "end": 1674.5800000000002, "start": 1669.18, "text": " the whole image net data set in a supervised fashion. Of course, they're also supervised," }, { "end": 1681.38, "start": 1674.5800000000002, "text": " but they have 10 times less images. And they they say something to the effect of you do" }, { "end": 1686.38, "start": 1681.38, "text": " you know, it would be pretty easy to collect more data for this task, because the internet" }, { "end": 1694.78, "start": 1686.38, "text": " is full of images. And mostly these images have like some text with them. They, you know," }, { "end": 1698.18, "start": 1694.78, "text": " they have these descriptions or they have text around it, people write something about" }, { "end": 1703.18, "start": 1698.18, "text": " the images, you could like mine Twitter, and then the responses when someone posts an image" }, { "end": 1709.98, "start": 1703.18, "text": " might tell you something about the image. 
But this" }, { "end": 1715.3400000000001, "start": 1709.98, "text": " definitely counteracts their notion that these are very high quality labels, right? Their" }, { "end": 1721.8, "start": 1715.3400000000001, "text": " entire point here was that these annotations, these data sets, these image" }, { "end": 1727.18, "start": 1721.8, "text": " captioning data sets like COCO, they have very, very high quality annotations. So this" }, { "end": 1733.5800000000002, "start": 1727.18, "text": " text here is very high quality, is really a descriptive text of the image that tries" }, { "end": 1741.14, "start": 1733.5800000000002, "text": " to capture what a human can see visually in the image. And as soon as you go out to the" }, { "end": 1746.54, "start": 1741.14, "text": " internet and collect the text around images, that's not going to be the case; that information" }, { "end": 1752.04, "start": 1746.54, "text": " is again going to be quite low quality. And so I doubt that the performance here would" }, { "end": 1758.98, "start": 1752.04, "text": " hold up, or that the claim that you can easily create more data" }, { "end": 1764.52, "start": 1758.98, "text": " for this task holds up. So that's a bit my worry about the future of this, but it's definitely" }, { "end": 1772.02, "start": 1764.52, "text": " cool and definitely shows this quality-quantity trade-off very well. Alright, that was my" }, { "end": 1778.34, "start": 1772.02, "text": " two cents on the paper. I invite you to read it and tell me in the comments what you think" }, { "end": 1782.34, "start": 1778.34, "text": " about it and I'll see you next time." } ]
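Editor's note: to make the pre-training setup discussed in the transcript above concrete, here is a minimal PyTorch sketch of the idea (a ResNet-50 backbone, a linear projection, and a shallow autoregressive decoder trained on captions). This is my reconstruction, not the paper's code; it uses a single forward decoder for brevity where the paper uses a forward and a backward one, and all names and sizes are illustrative.

```python
import torch
import torch.nn as nn
import torchvision

class CaptionPretrainer(nn.Module):
    """Sketch of caption-based visual pre-training (illustrative only)."""
    def __init__(self, vocab_size, d_model=512, nhead=8):
        super().__init__()
        resnet = torchvision.models.resnet50(weights=None)
        # drop avgpool + fc to keep the (B, 2048, 7, 7) feature map
        self.backbone = nn.Sequential(*list(resnet.children())[:-2])
        self.proj = nn.Linear(2048, d_model)   # linear projection of visual features
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerDecoderLayer(d_model, nhead, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=1)  # shallow on purpose
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, images, captions):
        feats = self.backbone(images)              # (B, 2048, 7, 7)
        feats = feats.flatten(2).transpose(1, 2)   # (B, 49, 2048) visual tokens
        memory = self.proj(feats)                  # memory for cross-attention
        tgt = self.embed(captions)
        mask = nn.Transformer.generate_square_subsequent_mask(captions.size(1))
        out = self.decoder(tgt, memory, tgt_mask=mask)  # teacher forcing
        return self.lm_head(out)                   # logits over the vocabulary

model = CaptionPretrainer(vocab_size=10000)
logits = model(torch.randn(2, 3, 224, 224), torch.randint(0, 10000, (2, 12)))
print(logits.shape)  # torch.Size([2, 12, 10000])
```

After pre-training, one would discard everything except `model.backbone` and either train a linear classifier on its frozen features or fine-tune it, which is exactly the evaluation protocol the video walks through.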
-_2AF9Lhweo
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Linformer: Self-Attention with Linear Complexity (Paper Explained)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "facebook", "linear", "quadratic", "transformer", "attention", "self-attention", "multi-head attention", "t2t", "vasvani", "bert", "devlin", "roberta", "glue", "language modeling", "perplexity", "dot product", "johnson", "lindenstrauss", "random projection" ]
Transformers are notoriously resource-intensive because their self-attention mechanism requires a squared number of memory and computations in the length of the input sequence. The Linformer Model gets around that by using the fact that often, the actual information in the attention matrix is of lower rank and can be approximated. OUTLINE: 0:00 - Intro & Overview 1:40 - The Complexity of Self-Attention 4:50 - Embedding Dimension & Multiple Heads 8:45 - Formal Attention 10:30 - Empirical Investigation into RoBERTa 20:00 - Theorem: Self-Attention is Low Rank 28:10 - Linear Self-Attention Method 36:15 - Theorem: Linear Self-Attention 44:10 - Language Modeling 46:40 - NLP Benchmarks 47:50 - Compute Time & Memory Gains 48:20 - Broader Impact Statement 49:55 - Conclusion Paper: https://arxiv.org/abs/2006.04768 Abstract: Large transformer models have shown extraordinary success in achieving state-of-the-art results in many natural language processing applications. However, training and deploying these models can be prohibitively costly for long sequences, as the standard self-attention mechanism of the Transformer uses O(n2) time and space with respect to sequence length. In this paper, we demonstrate that the self-attention mechanism can be approximated by a low-rank matrix. We further exploit this finding to propose a new self-attention mechanism, which reduces the overall self-attention complexity from O(n2) to O(n) in both time and space. The resulting linear transformer, the \textit{Linformer}, performs on par with standard Transformer models, while being much more memory- and time-efficient. Authors: Sinong Wang, Belinda Z. Li, Madian Khabsa, Han Fang, Hao Ma Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://discord.gg/4H8xxDF BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Hi there! Today we're going to look at Linformer: self-attention with linear complexity, by Sinong Wang, Belinda Li, Madian Khabsa, Han Fang and Hao Ma of Facebook AI. So on a high level this paper observes that often, the way we build transformers, the self-attention matrix is low rank and can be approximated by first projecting the signal to a lower dimensional space and then performing these inner products that are responsible for attention in there. And thereby you save a lot of the complexity of multiplying full sequence length by full sequence length matrices, but instead do these operations in the lower dimensional space. And they achieve a linear scaling of the transformer attention, and we'll figure out how that is. As always, if you like content like this, consider subscribing, sharing, liking and commenting if you feel like it. Okay, let's dive in. They say large transformer models have shown extraordinary success in achieving state-of-the-art results in many natural language processing applications. Okay, so, if you don't know what a transformer model is, you can watch my video on the paper Attention Is All You Need. That was sort of the beginning of these transformers, and it introduces the attention mechanism that we're going to look at today. If you don't know what an attention mechanism is, you're not going to have a fun time in this paper. They say, however, training and deploying these models can be prohibitively costly for long sequences, as the standard self-attention mechanism of the transformer uses n squared time and space with respect to the sequence length. Now why is that? So, really shortly, to recap the attention mechanism: these transformers, they transform, for the basics let's say, one sequence into another. So here we have five tokens, and the next layer will output five tokens. Okay, so five tokens in, five tokens out, and the question is how do you route information between these five tokens from the first layer to produce the next layer. In a feed-forward network you would simply connect everything to everything and sort of learn the weights of these connections. That's not what we do here. In a convolutional network you would simply connect each node to its immediate neighbors, like this. But this is also not what we do here. What we do here is we route the information according to the information itself. So according to the incoming information right here, we route the information that goes out, and we do that by expressing queries and keys. So this incoming information is transformed, first of all, into what are called keys. Now keys are simply vectors, so each node is going to expose a vector right here, and each node in the higher layer. Now these are produced from the same information down here, but I'm going to draw it conceptually on the higher layer. So each node here is going to expose a query, and the query is sort of calling out for what kind of information it wants from the lower layer, and the key is sort of exposing what type of information this node contains right now. Now the information is simply routed by looking at the inner products of the keys and the queries. So this information right here would probably be routed to this node right here, whereas this one would probably be routed here. This one would be routed here. In fact this is a soft assignment, so it's not like a hard routing, it's a soft routing.
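Editor's note: written out, that mechanism is only a few lines of code. A minimal sketch (mine, not from the paper) follows; note the n-by-n score matrix in the middle, which is exactly where the quadratic cost in the sequence length comes from.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)  # subtract max for numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Plain scaled dot-product self-attention.  X: (n, d)."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])  # (n, n): quadratic in sequence length
    P = softmax(scores, axis=-1)             # each row is a soft routing distribution
    return P @ V                             # route the values

n, d = 512, 64
X = np.random.randn(n, d)
Wq, Wk, Wv = (np.random.randn(d, d) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)   # (512, 64)
```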
Everything is routed to everything with different weights but the majority goes to the place where the inner product is high and this one is again routed here. So you can see this is the attention mechanism. In order to do this we need to compute the inner product of every single one of these queries with every single one of these keys. And this if our sequence length here is of length n is going to require n squared operations. Now here is another parameter we need to pay attention. These vectors here they have a certain dimension and the certain dimension we're going to call D. The inner the embedding dimension of the vectors. Now in modern transformers you can think of n as something like maybe 512 tokens go into a transformer like this. And the hidden dimension here also is in the same order of magnitude. So you can also imagine this to be something like 512. Now if you think of these matrices if you multiply the keys by the queries however you want to let's do it like this then you have the keys are n by D and the queries are D by n. Now since n and D in this case are the same dimension this matrix is of rank 512. It doesn't have to be but it's a pretty good bet that it's of rank 512. Maybe it's approximately lower rank but now this isn't actually the modern way of transformers as such because usually what we have is multi-head attention which means that we're going to split this inner dimension right here. We're going to split these vectors into many many lower dimensional vectors and then have attention mechanism on these lower dimensional vectors. And that's such that you don't only have one attention mechanism you have multiple attention mechanisms so you can route different kinds of information with these multiple attention heads. Now sometimes you would split this you could split this in a modern transformer up to like 16 different heads but here we're going to let's say we're going to split this into four subvectors each of 128 dimensions. So we're going to split this up and now if this product here is only computed on these lower dimensional vectors so all of a sudden you no longer have n by D but you have like n by D over 4 and now this is 512 still but this now is 128 so the rank of this matrix is going to be 128. Mind it's still the thing that comes out is still a 512 by 512 matrix but it is of rank 128 and that means even though this matrix contains vectors that are of size 512 they could be they could be represented accurately by a matrix that's just 128 dimensions. Okay so these these 512 dimensions actually only contain information that is 128 dimensional in nature. It's just distributed over 512 dimensions but most of these are redundant. So in fact in these modern transformers these thing here this matrix here is low rank and therefore that's what this paper sort of exploits we could we could approximate this by 128 dimensions. Okay this is our starting point. They go on and they say in this paper we demonstrate that the self-attention mechanism can be approximated by a low rank matrix. We further exploit this finding to propose a new self-attention mechanism which reduces the overall self-attention complexity from n squared to n in both time and space. The resulting linear transformer the Lin former performs on par with standard transformer models while being much more memory and time efficient. Alright so let's dive into their thing. This is how they formulate the attention mechanism. So right here the attention has queries and keys as you can see here. 
Now these W matrices you can largely ignore. The W simply maps the queries to so this is these are simply d by d matrices that are a linear transformation of the queries. You can sort of overlook them for the arguments in this paper. So these are the keys and the queries we talked about. The values here this is the actual information that's being routed. So what we want to do is we want to compute this product between queries and keys right here and scale it appropriately. But ultimately this is this product. Then run this through a softmax operation. That means we normalize it such that it sums to one, the distribution sums to one. And then we want to route this information according to that distribution. So that's how they formulate an attention mechanism. Now notice something. This thing in here is what they call the matrix A and this is what I've demonstrated to be low rank. Now the actual thing that you would need to be low rank for their paper to hold is the matrix P which is different because this is after the softmax right. So if the matrix P is low rank then you have a legitimate claim of approximating this routing via a low rank matrix. However if P is not low rank you don't. Okay all right now the first thing they're going to show is that this is in fact low rank. So self-attention is low rank. And for that they make an empirical investigation into Roberta. So Roberta is a model that's based on BERT and I have made videos of both BERT and Roberta I believe. If sorry if you want to go look those up. But it is one of these transformer models and they take two data sets wiki103 and IMDB and they run them through this model and they look at this P matrix. So they look at how this information routing matrix is built and then they calculate the eigenvalues of that. So you calculate the eigenvalues and by looking at the eigenvalues you can look at the rank of a matrix broadly speaking. So if you list the eigenvalues in order of their size then a matrix that is sort of high dimensional has a high rank would have sort of a slope like this and that means as you go as you go to the next and next and next eigenvalue they drop like if you order a set of uniformly distributed numbers if you order them then it would look like this right so there is no particular dimension that's that's better than any or has much more information than any other. However if the matrix is approximately low rank you would look something like this and that would mean that most of the information is concentrated in very few dimensions and those are the ones with very high eigenvalues and most of the dimensions have no information. The thing you see here is simply the cumulative sum of these things so if you calculate the cumulative sum of this you'll get that over here. So if this is very high rank you would expect a curve that goes like this sort of slanted but not very. If this is very low rank you would expect a curve that goes very much into the corner right here and they show that the general shape here is such that there is this kind of a kink to it as you can see here. Now also notice that the axis here starts at 0.4 so actually this comes from down here somewhere and goes up and then goes like this. 
So they have, I feel they have a legitimate claim here that these matrices are approximately low rank. And here they look at, I don't actually know at which layer this is, or if this is over all of the layers or something like this, but they look at how this develops inside the layers. So they look always at the 128th eigenvalue, and they discover that as they go deeper and deeper into the network, this cumulative eigenvalue is higher and higher. That means that the network puts more and more information into fewer and fewer dimensions in this routing as you go up the layers. It gets more and more skewed as you go up the layers, it gets more and more into this corner right here, so their claim appears to be more and more true. Now I have sort of thought about this a little, and I've tried it out a bit myself, and I invite you to just follow me here shortly. So right here I have a matrix that is just a random Gaussian matrix of size 512 by 512. If we look at the eigenspectrum of that, so I have this function SVD, it simply gives me the eigenspectrum of that, then you can see that it sort of falls off uniformly, and that will result in this cumulative sum being a pretty much flat, or slowly ascending, curve like this. Now if we actually have a low-rank matrix, this would look different, this would have this sort of typical kink in it, and we can demonstrate that by making a lower-dimensional matrix. So let's just go 512 by 128 for this lower-dimensional M, and let's look at that: now this only goes to 128, because we only get back 128 singular values. So let's make a lower-dimensional matrix that's actually 512 by 512: if we multiply M by its own transpose, this will construct a 512 by 512 matrix, but one that is only of rank 128, right, and you can see that at the 128th singular value, or eigenvalue, this snaps right to the one. So it's sort of like what they have. Okay, so we've seen the difference between a, let's say, higher-rank matrix and a low-rank matrix in this cumulative sum plot. Now I want to go back to the original matrix right here. Of course, the matrices they look at, these routing matrices, they're not Gaussian, they're not sort of distributed with mean zero and a nice variance, they are the result of a softmax operation, and in particular that means they're all positive, and that means that their mean is not zero. So if you look at a data set whose mean is not zero and you calculate the eigenvalues, or in this case the principal components, you will find that the first one will be very strong, because that must account for the fact that the mean is not at the center, or the first few will be like this. So maybe we can replicate this right here. So let's say we'll put M through, let's first go with the absolute value of M. Okay, not much of a change, but you already see that this axis doesn't start at zero. So how do we do this, xlim, right, xlim zero, None, so, haha, okay. So the first one you simply have to imagine, or I can do even something more, we can just put a zero in front here and that should do the trick. No, yes, oh, that's x, I meant y, comma, None, never mind, this will work as well. So you already get this sort of kink, and let's put it into the softmax. So we'll put a softmax, and that gives you also this kink. Now you might think, wait, this kink looks a lot smaller than the other kink. But if we simply modify, let's modify the standard deviation of this random matrix, and run this again.
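Editor's note: a rough reconstruction of the experiment being narrated here (my sketch, not the actual notebook from the video; the dimensions 512 and 128 match the narration, everything else is a guess at the setup):

```python
import numpy as np
import matplotlib.pyplot as plt

def cumulative_spectrum(M):
    s = np.linalg.svd(M, compute_uv=False)     # singular values, descending
    return np.cumsum(s) / s.sum()

def softmax_rows(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

M_full = np.random.randn(512, 512)             # "high rank" Gaussian matrix
M_half = np.random.randn(512, 128)
M_low = M_half @ M_half.T                      # rank-128 matrix of size 512x512

for name, M in [("gaussian", M_full),
                ("rank-128", M_low),
                ("softmax, std 1", softmax_rows(M_full)),
                ("softmax, std 0.1", softmax_rows(0.1 * M_full))]:
    plt.plot(cumulative_spectrum(M), label=name)
plt.xlim(0, None)
plt.legend()
plt.show()  # the softmax of a small-std matrix shows the same kink
```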
And you can see that this spectrum immediately changes, right, because of the interaction now between the softmax and the standard deviation. If I only were to change the standard deviation on the normal M matrix, and we can actually try this right here, that wouldn't do much, that would still look pretty much the same, it's just differently scaled. But in the interaction with the softmax, now this changes the spectrum dramatically. And here, as you know, these transformers always have sort of layer normalization and so on, so probably the standard deviation, if these are sort of Gaussian, the standard deviation before the softmax would be a lot smaller. So let's go something like this, so smaller than one, and can we run this please? And you can see that this kink immediately appears. Now it's not the same thing as this here, because this is a lot smoother, as you can see right here, but still I feel that this might not actually be a result of the fact that this is an attention mechanism, but it simply might be the result of the fact that you apply a softmax. Now still, that doesn't change the fact that it is approximately a lower-rank matrix, everything they say holds, but yeah, maybe one should also look into why exactly that happens. But in fact it is low rank, okay, it is approximately low rank, they've demonstrated this, and now they go to their first theorem. Below we provide a theoretical analysis of the above spectrum results, okay. So, the theoretical analysis: theorem one is, self-attention is low rank. And we're going to go through this, just glance at it for now. They say, for any of these query, key values and these matrices, which of course you can ignore for now, for any column vector w of the matrix VW, and w here, that's the information that needs to be routed, there exists a low-rank matrix P tilde. So this P tilde here is going to be their low-rank approximation of the P matrix. You can see it's still n by n, but it's going to be low rank, in fact it's going to be of the order of the logarithm of the rank of the full matrix, or, well, of the rank that the full matrix could have. As we have already seen, the full matrix doesn't have full rank, but yeah, okay. So this is the type of guarantee you get. So what do we see here? It basically means that this distance here is smaller than this, and this here, this is just the norm of one of these vectors projected, times this error coefficient epsilon. So all it says is that the distance on the left is smaller than something, and that occurs with high probability, okay. So the entire guarantee here, the entire formula, just basically means that this thing is small, this norm is small. What's this norm? This norm is the distance between these two things. Now what are these two things? This is the information that we want to route, and this is the routing matrix. And that simply means that if I route my information using the P tilde, this approximation, then I won't be too far away as if I had routed my information using the original P matrix, okay. That's it, that's what the theorem says: if I route my information using this approximation, then I am not too far away as had I routed my information using the original routing matrix. Note that they don't say how they're going to construct it, they simply say there exists a low-rank matrix like this. And the proof of this, and it's sort of worth looking at the proof of it, uses the Johnson-Lindenstrauss lemma, this thing here, or the JL for short.
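Editor's note: a quick numerical illustration of the lemma, before it is walked through verbally below. This is my sketch, with a hypothetical Gaussian projection of the kind the lemma licenses; the key observation is that pairwise distances survive the projection up to a small distortion.

```python
import numpy as np

n_points, high_d, k = 100, 512, 64
X = np.random.randn(n_points, high_d)          # high-dimensional data points
R = np.random.randn(high_d, k) / np.sqrt(k)    # JL-style random Gaussian projection
Y = X @ R                                      # projected down to k dimensions

def pairwise(Z):
    diff = Z[:, None, :] - Z[None, :, :]
    return np.linalg.norm(diff, axis=-1)

i, j = np.triu_indices(n_points, 1)            # all distinct pairs
ratio = pairwise(Y)[i, j] / pairwise(X)[i, j]  # projected / original distance
print(ratio.min(), ratio.max())                # clustered around 1: distances preserved
```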
And they're going to get this out of the JL. Now the Johnson-Lindenstrauss lemma, in a classic sense, says something like this: if I have data in a high-dimensional space, here in a three-dimensional space, okay, I have data distributed, and I use a certain kind of projection matrix, and there are a number, so the JL gives conditions on what these projections can be, but, for example, a randomly sampled matrix with zero-mean Gaussian entries and 1 over k standard deviation, where k is the dimension you project into, can do the trick. So if I project my data in a certain way into a lower dimension, here dimension 2, then the projected data is related to the original data by the fact that the distances between the points in the original space will not be distorted too much. So the distances between these points are approximately preserved through this projection, okay. So that's the Johnson-Lindenstrauss lemma. Now you'll notice here there is no reference to the fact that this data is or isn't low rank, it's simply high-dimensional data projected to a lower dimension, and the distances are approximately preserved. And this theorem here, and I've looked at it for a while now, they simply define, okay, they define this P matrix as this attention mechanism, and here you can see the A matrix we've discussed before, which is actually low rank, but we don't know yet if the softmax is. They write it as this form right here, of the exponential of each entry of A divided by this diagonal right here, so in the softmax, of course, you have the exponential of each entry divided by the sum of the entries, and they write this simply as two matrices, but ultimately this is a matrix right here. And all they do is, they take this P matrix and they apply the Johnson-Lindenstrauss lemma by having this projection matrix R, and R has entries from this Gaussian, as I said, so this is the special type of projection that the JL addresses. And then it simply says, if you project in this manner and obtain P tilde, and then you use P tilde instead of P, then this is going to be very close. In fact, you can reformulate the JL into different variants, such that it gives you things like this, things like saying that the distance between this projected version and this unprojected version is going to be smaller than a constant times the norms of the unprojected version, and that is equivalent to saying that the distances are preserved. Now you can see right here, nowhere in this theorem is the fact that this is self-attention, and nowhere in the theorem appears the fact that this inner matrix A is low rank, or even that this matrix A exists. You can do this with any matrix P, right, the JL doesn't concern itself with the nature of this matrix P. It says: any matrix, any sort of high-dimensional data, you can project to low-dimensional data, and this holds if you choose the projection correctly, which they do right here. So to claim that this theorem proves that self-attention is low rank, to me, is a bit of a statement that is not warranted. Like, this here should read something like "the Johnson-Lindenstrauss lemma exists", or something like this. I'm not sure, like, convince me otherwise, but yeah. So they go with this. So they say, given the low-rank property of the context mapping matrix P, and again I disagree that this has been shown, except empirically, one straightforward idea is to use singular value decomposition to approximate P with a low-rank matrix P-low, as follows.
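Editor's note: as a concrete sketch of that straightforward-but-rejected SVD idea (my toy example, with hypothetical dimensions, not their method):

```python
import numpy as np

def low_rank_approx(P, k):
    """Best rank-k approximation of P via truncated SVD (Eckart-Young)."""
    U, s, Vt = np.linalg.svd(P, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k, :]

# toy routing matrix: softmax of a score product with head dimension 64
Q, K = np.random.randn(512, 64), np.random.randn(512, 64)
A = Q @ K.T / 8.0                               # scale by sqrt(d) = 8
P = np.exp(A - A.max(axis=-1, keepdims=True))
P /= P.sum(axis=-1, keepdims=True)

P_low = low_rank_approx(P, k=128)
print(np.linalg.norm(P - P_low) / np.linalg.norm(P))  # small relative error
```

The catch, as the transcript says next, is that doing an SVD inside every attention layer is itself expensive, which is why the paper goes another way.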
So what you could do is, you could simply learn these low-rank matrices and approximate P through it, or you can decompose P as such and then have these easier inner products in dimension k. But they say: however, this approach requires performing an SVD in each self-attention matrix, which adds additional complexity. Therefore we propose another approach for a low-rank approximation that avoids this added complexity. Okay, so they now come up with their model, and their model goes as follows. So here on the left you see a classic attention mechanism with their projections built in. What they're proposing is, they say, let's project the matrix K using one of these random projections, and then this attention routing, if you now multiply, so you multiply K and Q right here, K times Q, and then you put it into the softmax, and then you use it to route this W. So they say, if we build in this projection matrix that will project K to a lower dimension, then we won't have as expensive inner products. Now the important part to see here is that, if you think of this lower projection, the first thing you think is that you project this inner, this hidden dimension d, right, to a lower dimension. And that's not the case here, you actually project the n. So, in a conceptual framework, so you can see right here, forget about this, this is this W matrix, in a conceptual framework, what you see here is this n by d matrix, which are the keys, so n is the sequence length and d is the dimension, and what you want to do is, you want to project that by this matrix, which is k by n. So you want to reduce the sequence length. You can see in this matrix right here why that might work: because n is much larger than d, and that means this matrix can be at most rank d, right. So you should not lose too much, you should sort of be able to preserve the information if you project this n to a k, where the k, if the k is still larger than the d, or approximately in the same order of magnitude, you should be able to preserve that information, if you do it in a smart way. So conceptually, if we have our five-token sequence like here, and the next layer produces five tokens again, what we first do is, we say: we know that the information we want is not five-dimensional, it's actually two-dimensional, because, okay, let's say this inner dimension d is two as well, so we have two-dimensional vectors, each thing exposes two-dimensional vectors. So we first project the sequence of length five to a sequence of length two, and we simply do that in a random manner. So we have a random Gaussian matrix that assigns weights to mix these five into these two. And again, because the JL works for any sort of data, but in my argumentation, if you think that this here is low rank, it's of rank two, then you shouldn't lose too much information by projecting it to a sequence length two. And now we do this attention mechanism: so now we expose the keys, and now we expose the queries up here, and now you can see, instead of routing five things with five things, you only have to route five things with two things. And so instead of having O(n squared) you now have O(n k), if k is the number right here, okay. So this is the idea: you project the sequence length, and it comes from the fact that the sequence length is much larger than the dimensionality, and therefore you can sort of preserve the information if you project in a smart way. They build this in this fashion right here. So the attention mechanism, now, before, we saw it was between the queries and the keys right here.
They build in this projection matrix here that projects the keys into a lower-dimensional sequence, such that this will result in an n by k attention matrix, as we saw over here. You don't need to route n by n things, you need to route n by k, so this routing table in here is now n by k. Now the next layer, as you can see here, actually needs to produce a sequence of length five again, right, so we always transform a sequence of length five into a sequence of length five. But now this n corresponds to the next layer, and this k corresponds to the down-projected sequence of the last layer. And in order for that to fit, we of course also need to down-project the information that we're routing. So if we down-project the routing table, we also need to down-project the information that we're routing, and we do this by a similar matrix F that is also sampled in this way, in this special way, and that gives us a k by d. So we have projected the sequence to size k, and if we multiply these two things, again, of course, we'll get out an n by d matrix, which is the signal for the next layer, okay. So an n by d signal comes in down here, it's projected down to sequence length k, and it's routed up again to sequence length n, and you have again an n by d matrix here. Cool. So that's how they do it, and they build this into the transformer. Now, as I understand it, these projection matrices, again, they're not learned, they are built up in this JL-prescribed way, they are not learned, they are fixed once, and then that's that, at least that's how I understand it. So there are no more learnable parameters there. Okay, so here they have a demonstration where they up the sequence length, and you can see the batch size decreases, but that's just to sort of keep the total amount of flops to be done the same: you up the sequence length and down the batch size. As the sequence length increases, the standard transformer's requirement in inference time goes up, and this here, as you can see, is not a linear scale, it's a log scale, log 2. So this goes up with the sequence length, and it should go up quadratically, right. And you can also see that the Linformer stays fairly constant for the same k. Now of course, as you increase the k of the Linformer, the inference time will go up, because now it's dependent on n times k and not on n times n, okay. So let's look a bit further at how you have to choose that k. Up here in the first theorem there was already a hint to it: in the first theorem you had to choose k by 5 log n, and this is a problem, so here you have log n, and that means O(n k) is equal to O(n log n). Now that's not linear, that's actually the same as the Reformer. But they want to get to a linear place, and theorem 2 now shows how you can make self-attention linear, okay. They show, again, blah blah blah blah, now you have to choose k at the minimum of these two things, and you can see right here that one of them is independent of n. So that means, as n grows, of course, the minimum is no longer going to be this here, the minimum is actually going to be the thing on the left, and that is dependent on just d, okay. So you have d log d in here, and that makes sense, because in the very beginning we said: hey, d is actually much smaller than n, and that means the information that is contained in these matrices is at most rank d. So if we down-project to k, we should adjust k to what d is, right. If we adjust k to about the same thing as d, we're guaranteed to not lose too much information.
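Editor's note: putting the pieces together, here is a minimal sketch of that projected attention as described (my reading of the scheme, not the official implementation; the real model shares and places these projections in configurable ways). E projects the keys and F projects the values, both along the sequence dimension.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def linformer_attention(Q, K, V, E, F):
    """Q, K, V: (n, d).  E, F: (k, n) fixed Gaussian projections of the
    *sequence* dimension, so the score matrix is (n, k) instead of (n, n)."""
    K_proj = E @ K                                 # (k, d): sequence length n -> k
    V_proj = F @ V                                 # (k, d)
    scores = Q @ K_proj.T / np.sqrt(Q.shape[-1])   # (n, k)
    return softmax(scores) @ V_proj                # (n, d): O(n*k), not O(n^2)

n, d, k = 4096, 64, 256
Q, K, V = (np.random.randn(n, d) for _ in range(3))
E = np.random.randn(k, n) / np.sqrt(k)             # sampled once, then kept fixed
F = np.random.randn(k, n) / np.sqrt(k)
print(linformer_attention(Q, K, V, E, F).shape)    # (4096, 64)
```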
So now we choose k according to d instead of according to n, and therefore the computation is linear in n: n times k is like n times d log d, so it scales linearly with the sequence length and otherwise depends only on d. How do we get there? The first thing they do is make these Johnson-Lindenstrauss statements again, but now, instead of the general statement, they plug in their actual modified attention mechanism. So here they have a bound on the distance: if I route my information (this is the information to be routed, right) using the original softmax, where this in here is the matrix A of the original attention mechanism, I won't be too far away from what I get if I route my information using this modified attention mechanism. Now the tricky part here, mathematically, I believe, is exactly the softmax, what I alluded to before. This softmax is the tricky part, because if the softmax weren't here, this would simply be a projection down and a projection up, and the lemma would apply almost as it is written right there; you wouldn't have to actually do anything. But the question is: if the thing inside the softmax is low rank, can you claim that the entire softmax output is also low rank? And it's not entirely clear, because... oh yes, we've done this. You can see right here that we have actually taken the softmax of a low-rank matrix. We have already seen the low-rank matrix itself and how its cumulative spectrum immediately snaps to the upper axis after 128. Now if we do the same thing for the softmax of that, and we probably have to take away the first few dimensions, so let's go to dimension 100 and look from there: okay, same thing. Okay, that's pretty good; I did not expect that.

Hi there, this is Yannic from the future. I've realized I've been an idiot in how I constructed these low-rank matrices right here, by multiplying M transpose by itself. Of course, a better way to do it is to construct two independent 128-dimensional matrices, like these two sub-slices of M right here, multiply those together, and look at the SVD (a small replication sketch follows at the end of this passage). As you can see right here, the softmax of this is now not of this super low rank anymore. It's still low rank, but it's not hard low rank. If I just look at the matrix without the softmax, you can see it has a very distinct peak at 128, which gives us the indication that it's actually of rank 128, which we already knew. But if we now introduce the softmax, you can see that this peak vanishes, it's no longer exactly 128-dimensional, and it's only approximately low rank, as you can see. All right, back to Yannic in the past, who is wholly surprised that multiplying M transpose by itself gives back the exact same spectrum.

All right, so the mathematical difficulty still remains, and their main argument goes as follows. They have a first version where they pretty much plug things into the JL lemma again, and they get out that k needs to scale with log n. But they say this result does not utilize the low-rank property of matrix A, and the resulting k has a dependency on the sequence length n. Then, in the appendix, they finally go through the math to show that if they choose E and F in a particular way, they can actually pull this through and show that the k, there you have it, is independent of n. And I think the main step in this proof is step (b) here, where they use the fact that the exponential function is Lipschitz continuous.
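If you want to replicate the corrected spectrum experiment from above, here is a small sketch. I am assuming the plotted quantity is the normalized cumulative sum of singular values, and the scaling constant is my own choice to keep the softmax inputs in a reasonable range.

import numpy as np

rng = np.random.default_rng(0)
n, r = 512, 128

# two independent factors give a genuinely rank-128 matrix
# (multiplying a slice of M by its own transpose does not, as noted above)
M1 = rng.normal(size=(n, r))
M2 = rng.normal(size=(r, n))
A = (M1 @ M2) / np.sqrt(r)                  # n x n, rank r, entries roughly N(0, 1)

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def cumulative_spectrum(X):
    s = np.linalg.svd(X, compute_uv=False)
    return np.cumsum(s) / s.sum()           # normalized cumulative singular values

print(cumulative_spectrum(A)[r - 1])            # ~1.0: hard rank 128
print(cumulative_spectrum(softmax(A))[r - 1])   # < 1.0: only approximately low rank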
Because the exponential function is Lipschitz continuous in a compact region, we can choose a small enough delta such that, as you can see here, this directly relates the projection matrix inside the exponential function to the projection matrix outside the exponential function. So you can basically say: if I project first and then apply the exponential function, that's not too different from first applying the exponential function and then projecting. Okay, so that's the catch here. Now, they only do this for the exponential function, not the actual softmax. As you can see, throughout they work with the exponential function, and also here in their statements. The softmax isn't just the exponential function; the softmax is the exponential function divided by the sum of the exponential functions. But I believe that this generalizes straightforwardly. All right, so for given choices of delta and k, they have shown that the Linformer can in fact do in a linear fashion what a transformer does in a quadratic fashion, and not be too far off. Okay, that's their point right here.

Now to the results on these benchmarks. Sorry, let's first go to the perplexities in language modeling. They show right here that they can pretty much keep up with the standard transformer, as you can see. Now remember that the computation is n times k, so something like this Linformer with k = 256, where instead of n by n it's n times k, won't save you too much at this sequence length (see the quick arithmetic sketch at the end of this passage). But it's not too surprising that you in fact get the same performance, because the standard transformer is probably distributed over multiple heads, so the information necessarily has a lower dimensionality than 256 anyway. One thing I want to draw attention to, though, is that you can see the training isn't really done yet here, and the standard transformer sort of surpasses all of these models towards the end. I wonder what happens after that; I wouldn't be surprised if they end up at roughly the same place, but I wonder if these diverge even more right here after that. They also compare with a higher sequence length, and there the standard transformer outperforms the Linformer, but of course the point is that the Linformer is much, much faster and can roughly keep up. Also note the scale here of the perplexity axis: these are percentage points in perplexity, and I can't actually tell if that matters or not. I think in the original transformer paper the perplexities hovered between three point something and five point something, so these might actually be significant differences; I'm not sure. They investigate different methods of sharing the weights of these projections, and it seems like they don't find real differences, but I don't want to go into that because this video is already really long. Then they look at what happens if they increase the sequence length that they feed into the Linformer, and you can see that the Linformer can deal with higher sequence lengths and arrive at the same perplexities, though again I don't know how much of a difference that is, and the scale here is larger than before.
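To put the earlier remark, that k = 256 doesn't buy you much at moderate sequence lengths, into numbers, here is a quick back-of-the-envelope sketch that only counts the entries of the routing table (a deliberate simplification that ignores all other costs):

# n*n entries for standard attention versus n*k for the Linformer
k = 256
for n in [512, 1024, 4096, 16384, 65536]:
    print(f"n={n:6d}  standard={n * n:>13,}  linformer={n * k:>13,}  ratio={n / k:6.1f}x")

At n = 512 the ratio is only 2x, but at n = 65,536 it is 256x, which is the regime where the gains they report become dramatic.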
So how does this fare on these benchmarks, where you first pre-train a transformer with language modeling and then use it for certain NLP tasks? Here you can see that the Linformer is on par with the original transformer in some of these tasks, but you can also see a pattern of pretty wild results: sometimes the Linformer will be better than the original, but then variants of the Linformer will be worse, and even worse than the original; sometimes this Linformer is good, and sometimes the original model is the best. So this sort of points to the general claim that the Linformer doesn't destroy your gains, but it's also not a better model. It's simply a faster model that in some tasks can keep up with the original model. And they show, and of course this is the real deal here, that as you go up in sequence length, the speed and the memory gains that you get from the Linformer are dramatic. The longer you go, and the lower the dimension you project to, the larger these gains are, but of course the more performance you potentially lose.

Hello again, Yannic from the future. I just wanted to draw your attention to this beautiful broader impact statement in this paper, saying: our work focuses on making transformers more efficient. Everything cool. Potential positive impacts of efficient transformers: that's pretty cool. It also has potential impact on training transformers on images, since we can support very long sequences: very cool. Furthermore, there are positive environmental benefits: very cool. I mean, these are all very cool things. They say: as such, we see no immediate negative ethical or societal impacts of our work beyond what applies to the core building blocks of deep learning. Do better! Now, honestly, I agree with them. I completely agree with them that this is sort of a good thing: you might trade off some accuracy, make some approximations, but you will get a much faster model. And any model can be used for things, and the authors would have to pull some contrived chain of five intermediate steps out of thin air to argue that this could be used for bad; it just seems ridiculous. So good on them for defying the "please also think about negative impacts" convention right here. All right, back to past Yannic.

All right, this was the Linformer paper. I hope this somewhat made sense to you; I had to read it multiple times for it to make sense to me. But ultimately it's all about the fact that you have these multiple heads, and therefore your information is probably lower-dimensional, and you can abuse that to just calculate in this lower dimension. All right, I'll see you next time. Bye bye.
[ { "end": 4.96, "start": 0, "text": " Hi there! Today we're going to look at Linformer self-attention with linear" }, { "end": 11.84, "start": 4.96, "text": " complexity by Sinon Wang, Belinda Li, Madian Kabsa, Han Feng and Hao Ma of" }, { "end": 17.94, "start": 11.84, "text": " Facebook AI. So on a high level this paper observes that often the way we" }, { "end": 24, "start": 17.94, "text": " build transformers the self-attention matrix is low rank and can be" }, { "end": 29.080000000000002, "start": 24, "text": " approximated by first projecting the signal to a lower dimensional space and" }, { "end": 33.48, "start": 29.08, "text": " then performing these inner products that are responsible for attention in" }, { "end": 40.76, "start": 33.48, "text": " there. And thereby you save a lot of the complexity of multiplying full sequence" }, { "end": 46.84, "start": 40.76, "text": " length by full sequence length matrices but instead do these" }, { "end": 53.76, "start": 46.84, "text": " operations in the lower dimensional space. And they achieve a linear scaling" }, { "end": 60.64, "start": 53.76, "text": " of the transformer attention and we'll figure out how that is. As always if you" }, { "end": 65.48, "start": 60.64, "text": " like content like this consider subscribing, sharing, liking and" }, { "end": 73.4, "start": 65.48, "text": " commenting if you feel like it. Okay let's dive in. They say large transformer" }, { "end": 77.8, "start": 73.4, "text": " models have shown extraordinary success in achieving state-of-the-art results in" }, { "end": 83.44, "start": 77.8, "text": " many natural language processing applications. Okay so these, if you don't" }, { "end": 87.36, "start": 83.44, "text": " know what a transformer model is you can watch my video on the paper" }, { "end": 92.44, "start": 87.36, "text": " Attention is all you need. That was sort of the beginning of these" }, { "end": 96.75999999999999, "start": 92.44, "text": " transformers and it introduces the attention mechanism that we're going to" }, { "end": 100.72, "start": 96.75999999999999, "text": " look at today. If you don't know what an attention mechanism is you're not going" }, { "end": 106.96, "start": 100.72, "text": " to have a fun time in this paper. They say however training and deploying these" }, { "end": 111.92, "start": 106.96, "text": " models can be prohibitively costly for long sequences as the standard self" }, { "end": 117.12, "start": 111.92, "text": " attention mechanism of the transformer uses n squared time and space with" }, { "end": 122.44, "start": 117.12, "text": " respect to the sequence length. Now why is that? So really shortly to recap" }, { "end": 128.72, "start": 122.44, "text": " recap this the attention mechanism. These attention, these transformers they" }, { "end": 134.44, "start": 128.72, "text": " transform for basics let's say they transform one sequence into another. So" }, { "end": 141.28, "start": 134.44, "text": " here we have five tokens and the next layer will output five tokens. Okay so" }, { "end": 146.2, "start": 141.28, "text": " five tokens in five tokens out and the question is how do you route" }, { "end": 152.48, "start": 146.2, "text": " information between these five tokens from the first layer to produce the next" }, { "end": 156.92000000000002, "start": 152.48, "text": " layer. 
In a feed-forward network you would simply connect everything to" }, { "end": 162.56, "start": 156.92000000000002, "text": " everything and sort of learn the weights of these of these connections. That's not" }, { "end": 166.12, "start": 162.56, "text": " what we do here. In a convolutional network you would simply connect each" }, { "end": 172.36, "start": 166.12, "text": " node to its immediate neighbors like this. But this is also not what we do here." }, { "end": 176.98000000000002, "start": 172.36, "text": " What we do here is we route the information according to the information" }, { "end": 181.36, "start": 176.98000000000002, "text": " itself. So according to the incoming information right here we route the" }, { "end": 186.64000000000001, "start": 181.36, "text": " information that goes out and we do that by expressing" }, { "end": 192.32, "start": 186.64000000000001, "text": " queries and keys. So this incoming information is transformed first of all" }, { "end": 197.28, "start": 192.32, "text": " into what are called keys. Now keys are simply vectors so each node is going to" }, { "end": 203.23999999999998, "start": 197.28, "text": " expose a vector right here and each node in the higher layer. Now these are" }, { "end": 207.44, "start": 203.23999999999998, "text": " produced by the same from the same information down here but I'm going to" }, { "end": 212.24, "start": 207.44, "text": " draw it conceptually on the higher layer. So each node here is going to expose a" }, { "end": 217.64, "start": 212.24, "text": " query which is sort of like calling the query is calling for what kind of" }, { "end": 221.6, "start": 217.64, "text": " information do you want from the lower layer and the key is sort of exposing" }, { "end": 229.12, "start": 221.6, "text": " what type of information this node contains right now. Now the information" }, { "end": 234, "start": 229.12, "text": " is simply routed by looking at the inner products of the keys and" }, { "end": 238.72, "start": 234, "text": " the queries. So this information right here would probably be routed to this" }, { "end": 244.35999999999999, "start": 238.72, "text": " node right here whereas this one would probably be routed here. This one would" }, { "end": 248.28, "start": 244.35999999999999, "text": " be routed here. In fact this is a soft assignment so it's not like a hard" }, { "end": 252.56, "start": 248.28, "text": " routing it's a soft routing. Everything is routed to everything with different" }, { "end": 257.56, "start": 252.56, "text": " weights but the majority goes to the place where the inner product is high" }, { "end": 262.24, "start": 257.56, "text": " and this one is again routed here. So you can see this is the attention" }, { "end": 268.04, "start": 262.24, "text": " mechanism. In order to do this we need to compute the inner product of every" }, { "end": 273.96, "start": 268.04, "text": " single one of these queries with every single one of these keys. And this if" }, { "end": 281.28, "start": 273.96, "text": " our sequence length here is of length n is going to require n squared" }, { "end": 288.56, "start": 281.28, "text": " operations. Now here is another parameter we need to pay attention." }, { "end": 293.64, "start": 288.56, "text": " These vectors here they have a certain dimension and the certain dimension" }, { "end": 300.35999999999996, "start": 293.64, "text": " we're going to call D. The inner the embedding dimension of the vectors. 
Now" }, { "end": 308.40000000000003, "start": 300.36, "text": " in modern transformers you can think of n as something like maybe 512 tokens go" }, { "end": 313.40000000000003, "start": 308.40000000000003, "text": " into a transformer like this. And the hidden dimension here also is in the" }, { "end": 318.44, "start": 313.40000000000003, "text": " same order of magnitude. So you can also imagine this to be something like 512." }, { "end": 323.96000000000004, "start": 318.44, "text": " Now if you think of these matrices if you multiply the keys by the queries" }, { "end": 330.32, "start": 323.96, "text": " however you want to let's do it like this then you have the keys are n by D and" }, { "end": 336.56, "start": 330.32, "text": " the queries are D by n. Now since n and D in this case are the same" }, { "end": 342.79999999999995, "start": 336.56, "text": " dimension this matrix is of rank 512. It doesn't have to be but it's" }, { "end": 348.67999999999995, "start": 342.79999999999995, "text": " a pretty good bet that it's of rank 512. Maybe it's approximately lower rank but" }, { "end": 357.08, "start": 348.68, "text": " now this isn't actually the modern way of transformers as such because usually" }, { "end": 361.6, "start": 357.08, "text": " what we have is multi-head attention which means that we're going to split" }, { "end": 366.2, "start": 361.6, "text": " this inner dimension right here. We're going to split these vectors into many" }, { "end": 370.88, "start": 366.2, "text": " many lower dimensional vectors and then have attention mechanism on these lower" }, { "end": 376.36, "start": 370.88, "text": " dimensional vectors. And that's such that you don't only have one attention" }, { "end": 380.62, "start": 376.36, "text": " mechanism you have multiple attention mechanisms so you can route different" }, { "end": 386.88, "start": 380.62, "text": " kinds of information with these multiple attention heads. Now sometimes you would" }, { "end": 391.44, "start": 386.88, "text": " split this you could split this in a modern transformer up to like 16" }, { "end": 395.72, "start": 391.44, "text": " different heads but here we're going to let's say we're going to split this into" }, { "end": 403.12, "start": 395.72, "text": " four subvectors each of 128 dimensions. So we're going to split this up" }, { "end": 408.16, "start": 403.12, "text": " and now if this product here is only computed on these lower" }, { "end": 412.84000000000003, "start": 408.16, "text": " dimensional vectors so all of a sudden you no longer have n by D but you have" }, { "end": 421.7, "start": 412.84000000000003, "text": " like n by D over 4 and now this is 512 still but this now is 128 so the rank of" }, { "end": 427.88, "start": 421.7, "text": " this matrix is going to be 128. Mind it's still the thing that comes out is" }, { "end": 437.15999999999997, "start": 427.88, "text": " still a 512 by 512 matrix but it is of rank 128 and that means even though this" }, { "end": 444.56, "start": 437.15999999999997, "text": " matrix contains vectors that are of size 512 they could be they could be" }, { "end": 453.6, "start": 444.56, "text": " represented accurately by a matrix that's just 128 dimensions. Okay so these" }, { "end": 459.72, "start": 453.6, "text": " these 512 dimensions actually only contain information that is 128" }, { "end": 466.08000000000004, "start": 459.72, "text": " dimensional in nature. 
It's just distributed over 512 dimensions but most" }, { "end": 472.88, "start": 466.08000000000004, "text": " of these are redundant. So in fact in these modern transformers these thing" }, { "end": 478.84000000000003, "start": 472.88, "text": " here this matrix here is low rank and therefore that's what this paper sort" }, { "end": 488.88, "start": 478.84, "text": " of exploits we could we could approximate this by 128 dimensions. Okay" }, { "end": 494.56, "start": 488.88, "text": " this is our starting point. They go on and they say in this paper we" }, { "end": 499.79999999999995, "start": 494.56, "text": " demonstrate that the self-attention mechanism can be approximated by a low" }, { "end": 504.79999999999995, "start": 499.79999999999995, "text": " rank matrix. We further exploit this finding to propose a new self-attention" }, { "end": 510.16, "start": 504.8, "text": " mechanism which reduces the overall self-attention complexity from n squared" }, { "end": 515.08, "start": 510.16, "text": " to n in both time and space. The resulting linear transformer the Lin" }, { "end": 520.28, "start": 515.08, "text": " former performs on par with standard transformer models while being much more" }, { "end": 527.92, "start": 520.28, "text": " memory and time efficient. Alright so let's dive into their thing. This is how" }, { "end": 534.64, "start": 527.92, "text": " they formulate the attention mechanism. So right here the attention has queries" }, { "end": 540.16, "start": 534.64, "text": " and keys as you can see here. Now these W matrices you can largely ignore. The W" }, { "end": 545.84, "start": 540.16, "text": " simply maps the queries to so this is these are simply d by d matrices that" }, { "end": 550.72, "start": 545.84, "text": " are a linear transformation of the queries. You can sort of overlook them" }, { "end": 556.24, "start": 550.72, "text": " for the arguments in this paper. So these are the keys and the queries we" }, { "end": 561.12, "start": 556.24, "text": " talked about. The values here this is the actual information that's being routed." }, { "end": 564.88, "start": 561.12, "text": " So what we want to do is we want to compute this product between queries and" }, { "end": 570, "start": 564.88, "text": " keys right here and scale it appropriately. But ultimately this is" }, { "end": 577.44, "start": 570, "text": " this product. Then run this through a softmax operation. That means we" }, { "end": 583.48, "start": 577.44, "text": " normalize it such that it sums to one, the distribution sums to one. And then we" }, { "end": 591.5600000000001, "start": 583.48, "text": " want to route this information according to that distribution. So that's" }, { "end": 595.6, "start": 591.5600000000001, "text": " how they formulate an attention mechanism. Now notice something. This thing in here" }, { "end": 601.32, "start": 595.6, "text": " is what they call the matrix A and this is what I've demonstrated to be low rank." }, { "end": 608.44, "start": 601.32, "text": " Now the actual thing that you would need to be low rank for their paper to hold" }, { "end": 614.44, "start": 608.44, "text": " is the matrix P which is different because this is after the softmax right." }, { "end": 620.08, "start": 614.44, "text": " So if the matrix P is low rank then you have a legitimate claim of approximating" }, { "end": 626.6800000000001, "start": 620.08, "text": " this routing via a low rank matrix. However if P is not low rank you don't." 
}, { "end": 634.84, "start": 626.6800000000001, "text": " Okay all right now the first thing they're going to show is that this is in" }, { "end": 639.72, "start": 634.84, "text": " fact low rank. So self-attention is low rank. And for that they make an" }, { "end": 647.08, "start": 639.72, "text": " empirical investigation into Roberta. So Roberta is a model that's based on BERT" }, { "end": 653.32, "start": 647.08, "text": " and I have made videos of both BERT and Roberta I believe. If sorry if you want" }, { "end": 658.9200000000001, "start": 653.32, "text": " to go look those up. But it is one of these transformer models and they take" }, { "end": 665.4, "start": 658.92, "text": " two data sets wiki103 and IMDB and they run them through this model and they" }, { "end": 671.8, "start": 665.4, "text": " look at this P matrix. So they look at how this information routing matrix is" }, { "end": 679.4, "start": 671.8, "text": " built and then they calculate the eigenvalues of that. So you calculate" }, { "end": 684.24, "start": 679.4, "text": " the eigenvalues and by looking at the eigenvalues you can look at the rank of" }, { "end": 692.12, "start": 684.24, "text": " a matrix broadly speaking. So if you list the eigenvalues in order of their size" }, { "end": 698.76, "start": 692.12, "text": " then a matrix that is sort of high dimensional has a high rank would have" }, { "end": 707.12, "start": 698.76, "text": " sort of a slope like this and that means as you go as you go to the next and" }, { "end": 712.2, "start": 707.12, "text": " next and next eigenvalue they drop like if you order a set of uniformly" }, { "end": 717.9200000000001, "start": 712.2, "text": " distributed numbers if you order them then it would look like this right so" }, { "end": 723.12, "start": 717.9200000000001, "text": " there is no particular dimension that's that's better than any or has much more" }, { "end": 728.4000000000001, "start": 723.12, "text": " information than any other. However if the matrix is approximately low rank you" }, { "end": 732, "start": 728.4000000000001, "text": " would look something like this and that would mean that most of the information" }, { "end": 736.74, "start": 732, "text": " is concentrated in very few dimensions and those are the ones with very high" }, { "end": 742.96, "start": 736.74, "text": " eigenvalues and most of the dimensions have no information. The thing you see" }, { "end": 747.6800000000001, "start": 742.96, "text": " here is simply the cumulative sum of these things so if you calculate the" }, { "end": 754, "start": 747.6800000000001, "text": " cumulative sum of this you'll get that over here. So if this is very high rank" }, { "end": 760.72, "start": 754, "text": " you would expect a curve that goes like this sort of slanted but not very. If" }, { "end": 765.4, "start": 760.72, "text": " this is very low rank you would expect a curve that goes very much into the" }, { "end": 773.84, "start": 765.4, "text": " corner right here and they show that the general shape here is such that there is" }, { "end": 780.88, "start": 773.84, "text": " this kind of a kink to it as you can see here. Now also notice that the axis here" }, { "end": 785.1999999999999, "start": 780.88, "text": " starts at 0.4 so actually this comes from down here somewhere and goes up and" }, { "end": 790.12, "start": 785.1999999999999, "text": " then goes like this. 
So they have a I feel they have a legitimate claim here" }, { "end": 795.92, "start": 790.12, "text": " that these matrices are approximately low rank and here they look at I don't" }, { "end": 800.68, "start": 795.92, "text": " actually know at which layer this is or if this is in all of the layers overall" }, { "end": 806.24, "start": 800.68, "text": " or something like this but they look at how this develops inside the layers so" }, { "end": 813.48, "start": 806.24, "text": " they look at the always the 128th eigenvalue and they discover that as" }, { "end": 818.94, "start": 813.48, "text": " they go deeper and deeper into the network this cumulative eigenvalue is" }, { "end": 823.2800000000001, "start": 818.94, "text": " higher and higher that means that network puts more and more information" }, { "end": 829.48, "start": 823.2800000000001, "text": " into fewer and fewer dimension in this routing as you go up the layers so it" }, { "end": 834.44, "start": 829.48, "text": " gets more and more skewed as you go up the layers it gets more and more into" }, { "end": 840.48, "start": 834.44, "text": " this corner right here so their claim appears to be more and more true. Now I" }, { "end": 846.44, "start": 840.48, "text": " have sort of thought about this a little and I've tried it out a bit myself and I" }, { "end": 852.24, "start": 846.44, "text": " invite you to just follow me here shortly. So right here I have a matrix" }, { "end": 860.6400000000001, "start": 852.24, "text": " that is just a random Gaussian matrix of size 512 by 512. If we look at the" }, { "end": 864.6400000000001, "start": 860.6400000000001, "text": " eigen spectrum of that so I have this function SVD it simply gives me the" }, { "end": 871.6400000000001, "start": 864.6400000000001, "text": " eigen spectrum of that then you can see that it sort of falls off uniformly and" }, { "end": 882.8, "start": 871.64, "text": " that will result in a in this cumulative sum of pretty much flat curve or slowly" }, { "end": 890.4, "start": 882.8, "text": " ascending curve like this. 
Now if we actually have a low rank matrix this" }, { "end": 894.8199999999999, "start": 890.4, "text": " would look different this would have this sort of typical kink in it and we" }, { "end": 899.48, "start": 894.8199999999999, "text": " can demonstrate that by making a lower dimensional matrix so let's just take" }, { "end": 910.12, "start": 899.48, "text": " let's just go 512 by 128 of this lower dimensional and and let's look at the MT" }, { "end": 917.48, "start": 910.12, "text": " now this only goes to 128 because we only get back 128 singular values so" }, { "end": 923.44, "start": 917.48, "text": " let's make a lower dimensional matrix that's actually 512 by 512 so if we do" }, { "end": 931.72, "start": 923.44, "text": " this this is sort of what they're doing in this this will construct a" }, { "end": 941.44, "start": 931.72, "text": " 512 by 512 matrix but that is only of rank 128 right and you can see that at" }, { "end": 948.84, "start": 941.44, "text": " the 128 singular or eigenvalue this snaps right at the at the one so it's" }, { "end": 955.08, "start": 948.84, "text": " sort of like what they what they have okay so we've seen the difference between" }, { "end": 960.64, "start": 955.08, "text": " a let's say higher rank matrix and the low rank matrix in this cumulative sum" }, { "end": 966.64, "start": 960.64, "text": " plot now I want to go back to the original matrix right here of course" }, { "end": 971.5600000000001, "start": 966.64, "text": " there the matrices they look at these routing matrices they're not Gaussian" }, { "end": 977, "start": 971.5600000000001, "text": " they're not sort of distributed with mean zero and the nice variance they are" }, { "end": 981.36, "start": 977, "text": " the result of a softmax operation and in particular that means they're all" }, { "end": 986.96, "start": 981.36, "text": " positive and that means that their mean is not zero so if you look at a data" }, { "end": 992.24, "start": 986.96, "text": " set and it's mean it's not zero and you calculate like the the eigenvalues or in" }, { "end": 998.64, "start": 992.24, "text": " this case the principal component you will find that the first one will be" }, { "end": 1003.48, "start": 998.64, "text": " very strong because that must account for the fact that the mean is not at the" }, { "end": 1012.24, "start": 1003.48, "text": " center or the first few will be like this so it is sort of maybe we can" }, { "end": 1018, "start": 1012.24, "text": " replicate this right here so let's say we'll put M through let's first go with" }, { "end": 1027.6, "start": 1018, "text": " the absolute value of M okay not much of a change but you already see that this" }, { "end": 1037.1999999999998, "start": 1027.6, "text": " axis doesn't start at zero so let's go let's actually how do we do this xlim" }, { "end": 1050.36, "start": 1037.1999999999998, "text": " right xlim zero none so haha okay so the first one you simply have to imagine or" }, { "end": 1055.7199999999998, "start": 1050.36, "text": " I can do even something something more we can just put a zero in front here and" }, { "end": 1067.96, "start": 1055.72, "text": " that should do the trick no yes oh that's X I meant Y calm and dumb never" }, { "end": 1072.28, "start": 1067.96, "text": " mind this will work as well so you already get this sort of of kink and" }, { "end": 1082.32, "start": 1072.28, "text": " let's put it into the softmax so we'll put a softmax and that gives you also" }, { "end": 1087, "start": 1082.32, "text": " this kink now 
you might think that wait this is that this kink looks a lot" }, { "end": 1093.76, "start": 1087, "text": " smaller than the other kink so but if we simply modify let's modify the standard" }, { "end": 1098.12, "start": 1093.76, "text": " deviation of this random matrix and you can see that this spectrum immediately" }, { "end": 1103.1599999999999, "start": 1098.12, "text": " changes right because of the interaction now between the softmax and the" }, { "end": 1108.04, "start": 1103.1599999999999, "text": " standard deviation if I only were to change the standard deviation on the" }, { "end": 1115.1599999999999, "start": 1108.04, "text": " normal M matrix and we can actually try this right here that wouldn't do much" }, { "end": 1119.3999999999999, "start": 1115.1599999999999, "text": " that would still look pretty much the same it's just differently scaled but in" }, { "end": 1124.56, "start": 1119.3999999999999, "text": " the interaction with the softmax now this changes the spectrum dramatically" }, { "end": 1129.04, "start": 1124.56, "text": " and here as you know these these transformers have always sort of like" }, { "end": 1134, "start": 1129.04, "text": " layer normalization and so on so probably the standard deviation if we" }, { "end": 1139.76, "start": 1134, "text": " if if these are sort of Gaussian the standard deviation before the softmax" }, { "end": 1147.72, "start": 1139.76, "text": " would be a lot smaller so let's go something like this so smaller than one" }, { "end": 1155.44, "start": 1147.72, "text": " and can we run this please and you can see that this kink immediately appears" }, { "end": 1162.56, "start": 1155.44, "text": " now it's not it's it's it's not the same thing as this other as this here because" }, { "end": 1168.56, "start": 1162.56, "text": " this is a lot smoother as you can see right here but still I feel that this" }, { "end": 1173.36, "start": 1168.56, "text": " might not actually be a result of the you know the fact that this is an" }, { "end": 1178.72, "start": 1173.36, "text": " attention mechanism but it simply might be the result of that you apply a softmax" }, { "end": 1185.08, "start": 1178.72, "text": " now still that doesn't change the fact that it is approximately a lower rank" }, { "end": 1192.8799999999999, "start": 1185.08, "text": " matrix everything they say holds but yeah maybe maybe one should also look" }, { "end": 1198.6799999999998, "start": 1192.8799999999999, "text": " into why exactly that happens but in fact it is low rank okay it is" }, { "end": 1202, "start": 1198.6799999999998, "text": " approximately low rank they've demonstrated this and now they go to" }, { "end": 1208.4399999999998, "start": 1202, "text": " their first first theory below we provide a theoretical answer a" }, { "end": 1214.56, "start": 1208.4399999999998, "text": " theoretical analysis of the above spectrum results okay so the theoretical" }, { "end": 1220.32, "start": 1214.56, "text": " analysis theorem one is self-attention is low rank and we're going to go" }, { "end": 1227.56, "start": 1220.32, "text": " through this just glance at it for now they say for any of these query key" }, { "end": 1233.1599999999999, "start": 1227.56, "text": " values and these matrices which of course you can ignore for now for any" }, { "end": 1239.6399999999999, "start": 1233.1599999999999, "text": " column vector W of matrix V W and W here that's the information that needs to be" }, { "end": 1247, "start": 1239.64, "text": " routed there exists a low rank matrix P 
tilde so this P tilde here is going to" }, { "end": 1254.8000000000002, "start": 1247, "text": " be their low rank approximation of the P matrix you can see it's still n by n but" }, { "end": 1259.68, "start": 1254.8000000000002, "text": " it's going to be low rank in fact it's going to be of the order of the" }, { "end": 1267.0400000000002, "start": 1259.68, "text": " logarithm of the rank of the full matrix or well the full matrix of the rank that" }, { "end": 1271.76, "start": 1267.04, "text": " the full matrix could have as we have already seen the full matrix doesn't" }, { "end": 1279.44, "start": 1271.76, "text": " have full rank but yeah okay so if you use and this is the type of guarantee" }, { "end": 1284.84, "start": 1279.44, "text": " you get so what do we see here it basically means that this distance here" }, { "end": 1291.2, "start": 1284.84, "text": " is smaller than this and this here this is just the norm of one of these vectors" }, { "end": 1297.3600000000001, "start": 1291.2, "text": " projected times this error coefficient epsilon so all it says is that the" }, { "end": 1302.32, "start": 1297.3600000000001, "text": " distance on the left is smaller than something and that occurs with high" }, { "end": 1307.64, "start": 1302.32, "text": " probability okay so the entire guarantee here the entire formula just basically" }, { "end": 1314.04, "start": 1307.64, "text": " means that this thing is small this norm is small what's this norm this norm is" }, { "end": 1319.8400000000001, "start": 1314.04, "text": " the distance between these two things now what are these two things this is" }, { "end": 1324.9199999999998, "start": 1319.84, "text": " the information that we want to route and this is the routing matrix and that" }, { "end": 1331.08, "start": 1324.9199999999998, "text": " simply means that if I route my information using the P tilde this" }, { "end": 1337.8799999999999, "start": 1331.08, "text": " approximation then I won't be too far away as if I had routed my information" }, { "end": 1344.12, "start": 1337.8799999999999, "text": " using the original P matrix okay that's that's it that's what the theorem says" }, { "end": 1348.04, "start": 1344.12, "text": " the theorem says if I route my information using this approximation" }, { "end": 1354.32, "start": 1348.04, "text": " then I am not too far away as it had I routed my information using the original" }, { "end": 1358.92, "start": 1354.32, "text": " routing matrix that I don't say how they're going to construct they simply" }, { "end": 1367.08, "start": 1358.92, "text": " say there exists a low rank matrix like this and the proof of this and it's sort" }, { "end": 1371.92, "start": 1367.08, "text": " of worth looking at the proof of it it uses the Johnson Linden Strauss lemma" }, { "end": 1381.3200000000002, "start": 1371.92, "text": " this thing here or the JL for short and they're going to get this out of the JL" }, { "end": 1385.96, "start": 1381.3200000000002, "text": " now the Johnson Linden Strauss lemma in a classic sense says something like this" }, { "end": 1390.52, "start": 1385.96, "text": " if I have data in a high dimensional space here in a three dimensional space" }, { "end": 1398, "start": 1390.52, "text": " okay I have data distributed and I use a certain kind of projection matrix and" }, { "end": 1403.48, "start": 1398, "text": " there are a number so the the JL gives conditions on what these projections can" }, { "end": 1410.12, "start": 1403.48, "text": " be but for example a randomly sampled 
matrix with zero mean Gaussian entries" }, { "end": 1418.04, "start": 1410.12, "text": " and 1 over K standard deviation where K is the dimension you project into can do" }, { "end": 1424.08, "start": 1418.04, "text": " the trick so if I project my data in a certain way into a lower dimension here" }, { "end": 1431.04, "start": 1424.08, "text": " dimension 2 then the projected data is related to the original data by the fact" }, { "end": 1438.32, "start": 1431.04, "text": " that the distances between the points in the original space will not be distorted" }, { "end": 1443.8799999999999, "start": 1438.32, "text": " too much so the distances between these points are approximately preserved" }, { "end": 1450.6799999999998, "start": 1443.8799999999999, "text": " through this projection okay so that's that's the that's the Johnson Linden" }, { "end": 1455.92, "start": 1450.68, "text": " Strauss lemma now you'll notice here there is no reference to the fact that" }, { "end": 1462.72, "start": 1455.92, "text": " this data is or isn't low rank it's simply high dimensional data projected" }, { "end": 1468.1200000000001, "start": 1462.72, "text": " to lower dimension and the distances are approximately preserved and this theory" }, { "end": 1474.2, "start": 1468.1200000000001, "text": " here and I've looked at it for a while now they simply define okay they define" }, { "end": 1478.64, "start": 1474.2, "text": " this P matrix as this attention mechanism and here you can see the A" }, { "end": 1482.44, "start": 1478.64, "text": " matrix we've discussed before which is actually low rank but we don't know yet" }, { "end": 1489.3600000000001, "start": 1482.44, "text": " if the softmax is they write it as this form right here of the exponential of" }, { "end": 1496.8000000000002, "start": 1489.3600000000001, "text": " each entry of a divided by this diagonal right here so in the softmax of course" }, { "end": 1500.68, "start": 1496.8000000000002, "text": " you have the exponential of each entry divided by the sum of the entries and" }, { "end": 1504.8400000000001, "start": 1500.68, "text": " they write this simply as two matrices but ultimately this is a matrix right" }, { "end": 1510.8799999999999, "start": 1504.84, "text": " here and all they do is they take this P matrix and they apply the Johnson" }, { "end": 1519.12, "start": 1510.8799999999999, "text": " Linden Strauss lemma by having this projection matrix R and R is entries" }, { "end": 1523.8, "start": 1519.12, "text": " from this Gaussian as I said so this is the special type of projection that the" }, { "end": 1530, "start": 1523.8, "text": " JL addresses and then simply says if you pull if you this here is going to be" }, { "end": 1536.96, "start": 1530, "text": " your P tilde so if you project R in this manner and obtain P tilde and then you" }, { "end": 1544.84, "start": 1536.96, "text": " use P tilde instead of P then this this is going to be very close in fact you" }, { "end": 1548.52, "start": 1544.84, "text": " can reformulate the JL into different variants such that it gives you things" }, { "end": 1553.8, "start": 1548.52, "text": " like this things like saying that the distance between this projected version" }, { "end": 1558.36, "start": 1553.8, "text": " and this unprojected version is going to be a constant smaller than a constant" }, { "end": 1562.9199999999998, "start": 1558.36, "text": " time the norms of the unprojected version that is equivalent to saying that" }, { "end": 1568, "start": 1562.9199999999998, "text": " the 
distances are preserved now you can see right here nowhere in this theorem" }, { "end": 1575.6399999999999, "start": 1568, "text": " is the fact that this is self-attention and nowhere in the theorem appears the" }, { "end": 1581.12, "start": 1575.6399999999999, "text": " fact that this inner matrix A is low rank or even that this matrix A exists" }, { "end": 1586.36, "start": 1581.12, "text": " it's you can do this with any matrix P right the JL doesn't concern itself with" }, { "end": 1591.7199999999998, "start": 1586.36, "text": " the nature of this matrix P it says any matrix any sort of high dimensional data" }, { "end": 1596, "start": 1591.7199999999998, "text": " you can project to low dimensional data and this holds if you choose the" }, { "end": 1601.7199999999998, "start": 1596, "text": " projection correctly which they do right here so to claim that this theorem" }, { "end": 1611.52, "start": 1601.7199999999998, "text": " proves that self-attention is low rank to me is a bit it's a bit of a statement" }, { "end": 1618.4, "start": 1611.52, "text": " that is not warranted like this here should read something like the Johnson" }, { "end": 1626.52, "start": 1618.4, "text": " Lindenstrout's lemma exists or something like this it I'm not I'm not sure like" }, { "end": 1634.84, "start": 1626.52, "text": " convince me otherwise but yeah so they go with this so they say given the low" }, { "end": 1640.68, "start": 1634.84, "text": " rank property of the context mapping matrix P now again I disagree that this" }, { "end": 1646.44, "start": 1640.68, "text": " has been shown except empirically one straightforward idea is to use" }, { "end": 1650.4, "start": 1646.44, "text": " singularality composition to approximate P with a low rank matrix P low as" }, { "end": 1654.44, "start": 1650.4, "text": " follows so what you could do is you could simply learn these low rank" }, { "end": 1660.2, "start": 1654.44, "text": " matrices and approximate P through it or you can decompose P as such and then" }, { "end": 1670, "start": 1660.2, "text": " have these easier inner products in dimension K but they say however this" }, { "end": 1674.48, "start": 1670, "text": " approach requires performing an SVD decomposition in each self-attention" }, { "end": 1679.16, "start": 1674.48, "text": " matrix which adds additional complexity therefore we propose another approach" }, { "end": 1687.32, "start": 1679.16, "text": " for a low rank approximation that avoids this added complexity okay so they now" }, { "end": 1691.72, "start": 1687.32, "text": " come up with their model and their model goes as follows so here on the left you" }, { "end": 1696.28, "start": 1691.72, "text": " see a classic attention mechanism with their projections built in what they're" }, { "end": 1704.32, "start": 1696.28, "text": " proposing is they say let's project the matrix K using one of these random" }, { "end": 1710.6, "start": 1704.32, "text": " projections and then this attention routing if you route if you now" }, { "end": 1717.32, "start": 1710.6, "text": " multiply so you multiply K and Q right here K times Q and then you put it into" }, { "end": 1722.48, "start": 1717.32, "text": " the softmax and then you use it to route this W so they say if we build in this" }, { "end": 1728.3600000000001, "start": 1722.48, "text": " projection matrix that will project K to a lower dimension and then we won't have" }, { "end": 1733.68, "start": 1728.3600000000001, "text": " as expensive of inner products now the important part to see here 
is that if" }, { "end": 1737.32, "start": 1733.68, "text": " you think of this lower projection the first thing you think is that you" }, { "end": 1743.6, "start": 1737.32, "text": " project this inner this hidden dimension D right to a lower dimension and that's" }, { "end": 1751.08, "start": 1743.6, "text": " not the case here you actually project the N so in in a conceptual framework so" }, { "end": 1755.4399999999998, "start": 1751.08, "text": " you can see right here forget about this this is this W matrix in a conceptual" }, { "end": 1760.1999999999998, "start": 1755.4399999999998, "text": " framework you see here is this N by D matrix which are the keys so N is the" }, { "end": 1766.1599999999999, "start": 1760.1999999999998, "text": " sequence length and D is the dimensions and what you want to do is you want to" }, { "end": 1771.6399999999999, "start": 1766.1599999999999, "text": " project that by this matrix which is K by N so you want to reduce the sequence" }, { "end": 1776.6, "start": 1771.6399999999999, "text": " length you can see in this matrix right here why that might work because N is" }, { "end": 1784.1599999999999, "start": 1776.6, "text": " much larger than D and that means this matrix can be at most rank D right so" }, { "end": 1788.84, "start": 1784.1599999999999, "text": " you should not lose too much you should sort of be able to preserve the" }, { "end": 1795.56, "start": 1788.84, "text": " information if you project this N to a K where the K if the K is still larger" }, { "end": 1799.9599999999998, "start": 1795.56, "text": " than the D or approximately in the same order of magnitude you should be able to" }, { "end": 1803.8999999999999, "start": 1799.9599999999998, "text": " preserve that information if you do it in a smart way so conceptually if we" }, { "end": 1810.26, "start": 1803.9, "text": " have our five token sequence like here and the next layer produces five tokens" }, { "end": 1817.16, "start": 1810.26, "text": " again what we first do is we say we know we know that the information we want is" }, { "end": 1825.4, "start": 1817.16, "text": " not five dimensional it's actually two dimensional because okay let's say these" }, { "end": 1832.52, "start": 1825.4, "text": " this inner dimension D is is two as well so we have two dimensional vectors each" }, { "end": 1838, "start": 1832.52, "text": " thing exposes two dimensional vectors so we first project the sequence of length" }, { "end": 1843.8799999999999, "start": 1838, "text": " five to a sequence of length two and we simply do that in a random manner so we" }, { "end": 1850.1, "start": 1843.8799999999999, "text": " have a random Gaussian matrix that assigns weights to mix these five into" }, { "end": 1856.48, "start": 1850.1, "text": " these two and again because the JL works for any sort of data but in my" }, { "end": 1862.72, "start": 1856.48, "text": " argumentation if you you know think that this here is low rank it's of rank two" }, { "end": 1867.04, "start": 1862.72, "text": " then you shouldn't lose too much information by projecting it to a" }, { "end": 1872.84, "start": 1867.04, "text": " sequence length two and now we do this attention mechanism so now we expose the" }, { "end": 1879.88, "start": 1872.84, "text": " keys and now we expose the queries up here and now you can see instead of" }, { "end": 1885.08, "start": 1879.88, "text": " routing five things with five things you only have to route five things with two" }, { "end": 1894.3999999999999, "start": 1885.08, "text": " things and so 
instead of having O and squared you now have O N K if K K is the" }, { "end": 1901.8, "start": 1894.3999999999999, "text": " number right here okay so this is the idea you project the sequence length and" }, { "end": 1907.62, "start": 1901.8, "text": " it comes from the fact that the sequence length is much larger than the" }, { "end": 1913.02, "start": 1907.62, "text": " dimensionality and therefore you can sort of preserve the information if you" }, { "end": 1920.52, "start": 1913.02, "text": " project in a smart way they build this in this fashion right here so the" }, { "end": 1926.32, "start": 1920.52, "text": " attention mechanism now before we saw it was between the queries and the keys" }, { "end": 1933.16, "start": 1926.32, "text": " right here they built now this projection matrix here that projects the" }, { "end": 1940.76, "start": 1933.16, "text": " keys into a lower dimensional sequence and the now such that this will result" }, { "end": 1946.64, "start": 1940.76, "text": " in an N by K attention matrix we saw over here you don't need to route N by" }, { "end": 1953.08, "start": 1946.64, "text": " N things you need to route N by K so this this routing table in here is now N" }, { "end": 1959.24, "start": 1953.08, "text": " by K now the next layer as you can see here it actually needs to produce a" }, { "end": 1963.8799999999999, "start": 1959.24, "text": " sequence of length five again right so we always transform sequence of length" }, { "end": 1972.5200000000002, "start": 1963.88, "text": " five into sequence of length five but now we have we have this N corresponds" }, { "end": 1976.7600000000002, "start": 1972.5200000000002, "text": " to the sorry corresponds to the next layer and this K corresponds to the" }, { "end": 1983.3200000000002, "start": 1976.7600000000002, "text": " down projected sequence of the last layer and in order for that to fit we of" }, { "end": 1987.68, "start": 1983.3200000000002, "text": " course also need to down project the information that we're routing so if we" }, { "end": 1991.5600000000002, "start": 1987.68, "text": " don't project the routing table we also need to down project the information" }, { "end": 1998, "start": 1991.56, "text": " that we're routing that's we do this by a similar matrix F that is also sampled" }, { "end": 2006.96, "start": 1998, "text": " in this way in this special way and that gives us a K by D so we have projected" }, { "end": 2011.72, "start": 2006.96, "text": " the sequence to size K and if we multiply these two things again of" }, { "end": 2018.6, "start": 2011.72, "text": " course we'll get out an N by D matrix which is the signal for the next layer" }, { "end": 2027.32, "start": 2018.6, "text": " okay so an N by D signal comes in down here it's projected down to K sequence" }, { "end": 2032.84, "start": 2027.32, "text": " length it's and it's routed up again to N sequence length and you have again an" }, { "end": 2041.36, "start": 2032.84, "text": " N by D matrix here cool so that's how they do it and they build this into the" }, { "end": 2046.6, "start": 2041.36, "text": " transformer now as I understand it these projection matrices again they're not" }, { "end": 2054.56, "start": 2046.6, "text": " learned they are built up in this JL conscribed way they are not" }, { "end": 2061.56, "start": 2054.56, "text": " learned they are fixed once and then that's that's that at least that's how I" }, { "end": 2070.2799999999997, "start": 2061.56, "text": " understand it so there are no more learnable parameters okay 
so here they" }, { "end": 2075.44, "start": 2070.2799999999997, "text": " have a demonstration where they up the sequence length and you can see the" }, { "end": 2080.2400000000002, "start": 2075.44, "text": " batch size decreases but that's just to sort of keep the total amount of flops" }, { "end": 2085, "start": 2080.2400000000002, "text": " to be done the same you up the sequence length and down the batch size as the" }, { "end": 2090.68, "start": 2085, "text": " sequence length increases the standard transformers requirement in inference" }, { "end": 2095.32, "start": 2090.68, "text": " time goes up and this here as you can see this is not a linear scale it's a" }, { "end": 2103.04, "start": 2095.32, "text": " log scale log 2 so this goes up with the sequence length and it should go up" }, { "end": 2108.64, "start": 2103.04, "text": " quadratically right and you can also see that the Lin former keeps fairly" }, { "end": 2114.72, "start": 2108.64, "text": " constant for the same K now of course as you increase the K of the Lin former" }, { "end": 2121.84, "start": 2114.72, "text": " the inference time will go up because now it's dependent on N times K and not" }, { "end": 2129.84, "start": 2121.84, "text": " on N times N okay so let's look a bit further of how you have to choose that" }, { "end": 2136.6800000000003, "start": 2129.84, "text": " K up here in the first theorem we there was a already a hint to it in the first" }, { "end": 2145.2000000000003, "start": 2136.6800000000003, "text": " theorem you had to choose K by 5 log N and this is a problem so here you have" }, { "end": 2155.1600000000003, "start": 2145.2000000000003, "text": " log N that means it's not so O of N K is equal to O of N log N now that's not" }, { "end": 2159.82, "start": 2155.1600000000003, "text": " linear that's actually that's the same as the reformer but they want to get" }, { "end": 2169.6800000000003, "start": 2159.82, "text": " to a linear place and theorem 2 explains goes now to a linear shows how you can" }, { "end": 2178.92, "start": 2169.6800000000003, "text": " make self-attention linear okay they show again blah blah blah blah now you" }, { "end": 2184.44, "start": 2178.92, "text": " have to choose K at the minimum of these two things and you can see right here" }, { "end": 2191.04, "start": 2184.44, "text": " that one of them is independent of N so that means as N grows of course the" }, { "end": 2194.52, "start": 2191.04, "text": " minimum is no longer going to be this here the minimum is actually going to" }, { "end": 2201.16, "start": 2194.52, "text": " be the thing on the left and that is dependent on just D okay so you have D" }, { "end": 2207.6, "start": 2201.16, "text": " log D in here and that makes sense because in the very beginning we said" }, { "end": 2216.08, "start": 2207.6, "text": " hey D is actually much smaller than N and that means the information that is" }, { "end": 2222.72, "start": 2216.08, "text": " contained in these matrices is at most rank D so if we down project to K we" }, { "end": 2228.7599999999998, "start": 2222.72, "text": " should adjust K to what D is right if we adjust K to about the same thing as D" }, { "end": 2236.7599999999998, "start": 2228.7599999999998, "text": " we're guaranteed to not lose too much information so now we choose K" }, { "end": 2242.36, "start": 2236.76, "text": " according to D instead of according to N and therefore the computation is linear" }, { "end": 2251, "start": 2242.36, "text": " in N and N times K is like N times D to log D so 
it's linear in K and linear in" }, { "end": 2259, "start": 2251, "text": " D how do we get there so the first thing they do is they make these sort of" }, { "end": 2266.1600000000003, "start": 2259, "text": " Johnson-Lindenstrout statements again but now instead of the general statement" }, { "end": 2270.92, "start": 2266.16, "text": " they plug in their actual modified attention mechanism so here they have a" }, { "end": 2277.24, "start": 2270.92, "text": " bound on the distance between if I route my this is the information to be routed" }, { "end": 2283.52, "start": 2277.24, "text": " right if I route my information using the original softmax and this in here" }, { "end": 2291.12, "start": 2283.52, "text": " is the matrix A if the original tension mechanism I won't be too far away as if" }, { "end": 2298.68, "start": 2291.12, "text": " I were to route my information using this modified attention mechanism now" }, { "end": 2308.2, "start": 2298.68, "text": " the tricky part here mathematically I believe is that is is exactly the softmax" }, { "end": 2314.2, "start": 2308.2, "text": " what what I alluded to right so this softmax is the tricky part because if" }, { "end": 2318.56, "start": 2314.2, "text": " this weren't a softmax so if the softmax weren't here this would simply be a" }, { "end": 2324.12, "start": 2318.56, "text": " projection down and a projection up and the dilemma would almost apply as it is" }, { "end": 2328.7599999999998, "start": 2324.12, "text": " written right there you wouldn't have to actually do anything but the question is" }, { "end": 2336.16, "start": 2328.7599999999998, "text": " if this inside the softmax is is low rank can you make a claim that the" }, { "end": 2344.7599999999998, "start": 2336.16, "text": " entire softmax then is also low rank and it's not entirely clear because because" }, { "end": 2351.6000000000004, "start": 2344.76, "text": " oh yes we've done this so you can see right here that the softmax we have" }, { "end": 2355.32, "start": 2351.6000000000004, "text": " actually done the softmax of a low rank matrix so we have already seen the low" }, { "end": 2360.76, "start": 2355.32, "text": " rank matrix itself and how it immediately snaps to the to the upper" }, { "end": 2371.2400000000002, "start": 2360.76, "text": " axis after 128 now if we do the same thing for the softmax of that and we" }, { "end": 2379.8799999999997, "start": 2371.24, "text": " probably have to take away some of these dimensions the first few let's go with" }, { "end": 2388.72, "start": 2379.8799999999997, "text": " let's go to dimension 100 and look from there okay same thing okay that's pretty" }, { "end": 2399.68, "start": 2388.72, "text": " good I did not expect that hi there so this is Yannick from the future I've" }, { "end": 2403.8799999999997, "start": 2399.68, "text": " realized I've been an idiot in how I constructed these low rank matrices" }, { "end": 2410, "start": 2403.8799999999997, "text": " right here by multiplying MT by itself of course what's a better way to do it" }, { "end": 2416.3999999999996, "start": 2410, "text": " is to construct two independent 128 dimensional matrices like these two" }, { "end": 2421.2799999999997, "start": 2416.3999999999996, "text": " sub slices of M right here and then multiplying those together and looking" }, { "end": 2429.3599999999997, "start": 2421.2799999999997, "text": " at the SVD and you as you can see right here so the softmax of this is now not" }, { "end": 2436.56, "start": 2429.36, "text": " of this super low 
rank anymore it's still low rank but it's not not very it's" }, { "end": 2442.92, "start": 2436.56, "text": " not like hard low rank so if I just look at the matrix without the softmax then" }, { "end": 2448.6400000000003, "start": 2442.92, "text": " you can see it has a very peak that by at the 128 which gives us the indication" }, { "end": 2455.92, "start": 2448.6400000000003, "text": " it's actually 128 rank which we already knew but if we now introduce the softmax" }, { "end": 2463.28, "start": 2455.92, "text": " then you can see that this vanishes and it's no longer 128 dimensional and it's" }, { "end": 2469.64, "start": 2463.28, "text": " only approximately low rank as you can see all right back to Yannick in the" }, { "end": 2475.84, "start": 2469.64, "text": " past who is wholly surprised that the two that if you multiply MT by itself" }, { "end": 2483.32, "start": 2475.84, "text": " that that will give you back the the exact same thing all right so did we try" }, { "end": 2489.76, "start": 2483.32, "text": " this before maybe we did okay but the mathematical difficulty still remains" }, { "end": 2495.04, "start": 2489.76, "text": " and their main thing here is so they have a first first version where they" }, { "end": 2502.1600000000003, "start": 2495.04, "text": " pretty much plug it into the JL again and they they get out this K is the K" }, { "end": 2506.92, "start": 2502.1600000000003, "text": " needs to be by log n but they say this result does not utilize the low rank" }, { "end": 2511.88, "start": 2506.92, "text": " property of matrix A and the resultant K has a dependency on sequence length n" }, { "end": 2522.6400000000003, "start": 2511.88, "text": " and then in the appendix they finally go through the math to show that now if" }, { "end": 2532.2400000000002, "start": 2522.6400000000003, "text": " they choose E and F like this they can actually pull out this and show that" }, { "end": 2542.9199999999996, "start": 2532.24, "text": " the K is where you have it the decay is independent of n like this and I think" }, { "end": 2551.8399999999997, "start": 2542.9199999999996, "text": " the main the main step in this proof is the step B here where they say uses the" }, { "end": 2556.12, "start": 2551.8399999999997, "text": " fact that the exponential function is Lipschitz continuous in a compact" }, { "end": 2562.6, "start": 2556.12, "text": " region then we can choose a small enough Delta such that the as you can see here" }, { "end": 2568.3199999999997, "start": 2562.6, "text": " this now directly relates to this projection matrix within the exponential" }, { "end": 2572.92, "start": 2568.3199999999997, "text": " function to the projection matrix out of the exponential function so you can" }, { "end": 2577.8399999999997, "start": 2572.92, "text": " basically say that if I project first and then use the exponential function" }, { "end": 2582.52, "start": 2577.8399999999997, "text": " that's not too different than if I first use the exponential function and then" }, { "end": 2591.52, "start": 2582.52, "text": " project okay so that's the that's the sort of of of catch here now they only" }, { "end": 2596.36, "start": 2591.52, "text": " do this for the exponential function not the actual softmax as you can see here" }, { "end": 2600.96, "start": 2596.36, "text": " throughout they do it to the exponential function and also here in their" }, { "end": 2607.04, "start": 2600.96, "text": " statements the softmax isn't the exponential function the softmax is the" }, { "end": 2611.92, 
"start": 2607.04, "text": " exponential function divided by the sum of the exponential functions but I" }, { "end": 2615.88, "start": 2611.92, "text": " believe that this generalizes straightforwardly" }, { "end": 2623.8, "start": 2615.88, "text": " alright so for given choices of Delta and K they have shown that the Lin" }, { "end": 2629.52, "start": 2623.8, "text": " former in fact can do in a linear fashion what a transformer can do in a" }, { "end": 2634, "start": 2629.52, "text": " quadratic fashion and they are not too far off" }, { "end": 2640.7200000000003, "start": 2634, "text": " ok that's that's their point right here the results on these benchmarks" }, { "end": 2645.52, "start": 2640.72, "text": " sorry let's first go to the perplexities in language modeling so they show right" }, { "end": 2650.08, "start": 2645.52, "text": " here that they pretty much can keep up with the standard transformer as you can" }, { "end": 2655.3599999999997, "start": 2650.08, "text": " see here so with the standard transformer they can keep up here now" }, { "end": 2663.24, "start": 2655.3599999999997, "text": " think that this the the computation is n times K ok so something like this Lin" }, { "end": 2670.56, "start": 2663.24, "text": " former with K cost 256 will only so instead of n by n it's n times K it" }, { "end": 2678.64, "start": 2670.56, "text": " won't save you too much in that case but it's it's not too surprising that in" }, { "end": 2683.4799999999996, "start": 2678.64, "text": " fact you have the same performance because probably the standard transformer" }, { "end": 2688.3599999999997, "start": 2683.4799999999996, "text": " is distributed over more heads than two so the information necessarily has a" }, { "end": 2694.2000000000003, "start": 2688.36, "text": " lower dimensionality 10 to 56 one thing I want to draw attention to though here" }, { "end": 2700.96, "start": 2694.2000000000003, "text": " is that you can see that here it's not really done learning yet and as you can" }, { "end": 2707.1600000000003, "start": 2700.96, "text": " see the standard transformer sort of surpasses all of these models towards" }, { "end": 2713.6400000000003, "start": 2707.1600000000003, "text": " the end I wonder I wonder what happens I wouldn't be surprised if they end up" }, { "end": 2719.04, "start": 2713.64, "text": " sort of at the same place but I wonder if these diverge even more right here" }, { "end": 2727.52, "start": 2719.04, "text": " after that they also compare with a higher sequence length and the standard" }, { "end": 2731.2799999999997, "start": 2727.52, "text": " transformer outperforms the Lin former but of course the point here is that the" }, { "end": 2739.72, "start": 2731.2799999999997, "text": " Lin former is much much much faster and can keep up now also the scale here of" }, { "end": 2745.7599999999998, "start": 2739.72, "text": " the perplexity you see these are percentage points in perplexity but I" }, { "end": 2751.52, "start": 2745.7599999999998, "text": " can't actually tell if that matters or not I think I think in the original" }, { "end": 2755.7999999999997, "start": 2751.52, "text": " transformer paper the perplexities hovered between like three point" }, { "end": 2762.12, "start": 2755.7999999999997, "text": " something and five point something so this might actually be sort of" }, { "end": 2768.04, "start": 2762.12, "text": " significant differences and I'm not sure they investigate different methods of" }, { "end": 2773.24, "start": 2768.04, "text": " sharing 
these weights of these of these projections and they seems like they" }, { "end": 2776.36, "start": 2773.24, "text": " don't find real differences but I don't want to go into that because this video" }, { "end": 2781.92, "start": 2776.36, "text": " is already really long and then they look at what happens if they up the" }, { "end": 2786.94, "start": 2781.92, "text": " sequence length that they put into the Lin former and you can see that the Lin" }, { "end": 2792.88, "start": 2786.94, "text": " former can deal with higher sequence lengths and arrive at the same" }, { "end": 2798.6400000000003, "start": 2792.88, "text": " perplexities though again I don't know how much different that is and" }, { "end": 2806.08, "start": 2798.6400000000003, "text": " the scale here is larger than before but yeah so how does this fare on these" }, { "end": 2811.32, "start": 2806.08, "text": " benchmarks where you first train a transformer with pre training with" }, { "end": 2817.2000000000003, "start": 2811.32, "text": " language modeling and then you use it to do certain NLP tasks and here you can" }, { "end": 2822.46, "start": 2817.2000000000003, "text": " see that the Lin former is on par in some of these tasks with the original" }, { "end": 2829.32, "start": 2822.46, "text": " transformer but also you can see like a pattern where you can see pretty wild" }, { "end": 2836.4, "start": 2829.32, "text": " results in that you know sometimes the the Lin former here will be better than" }, { "end": 2842.44, "start": 2836.4, "text": " this but then also variants of the Lin former will be worse and they'll even be" }, { "end": 2846.64, "start": 2842.44, "text": " worse than this and sometimes they'll be better sometimes this Lin former is good" }, { "end": 2854.2799999999997, "start": 2846.64, "text": " and sometimes the original model is the best so this sort of points to you can" }, { "end": 2860.7999999999997, "start": 2854.2799999999997, "text": " make the general claim that the Lin former doesn't destroy your your gains" }, { "end": 2867.2, "start": 2860.7999999999997, "text": " but also it's not like a a better model it's simply a faster model that in some" }, { "end": 2873.04, "start": 2867.2, "text": " tasks can keep up with the original model and they show that of course this" }, { "end": 2879.7599999999998, "start": 2873.04, "text": " is the real deal here that as you go up in length the performance gains and also" }, { "end": 2886.7599999999998, "start": 2879.7599999999998, "text": " sorry this this way the performance gains and the memory gains that you get" }, { "end": 2892.08, "start": 2886.7599999999998, "text": " by the Lin former are dramatic of course the longer and you go and to the lower" }, { "end": 2896.2799999999997, "start": 2892.08, "text": " dimension you project the more these gains are but of course the more" }, { "end": 2900.92, "start": 2896.2799999999997, "text": " performance you're going to lose potentially hello again Yannick from the" }, { "end": 2904.56, "start": 2900.92, "text": " future just wanted to draw your attention on this beautiful broader" }, { "end": 2911.28, "start": 2904.56, "text": " impact statement in this paper saying our work focuses on making transformers" }, { "end": 2915.7200000000003, "start": 2911.28, "text": " more efficient everything cool potential positive in spec impacts of efficient" }, { "end": 2919.48, "start": 2915.7200000000003, "text": " transformers that's pretty cool it also has potential impact on training" }, { "end": 2924.12, "start": 
2919.48, "text": " transformers on images since we can support very long sequences very cool" }, { "end": 2929.2400000000002, "start": 2924.12, "text": " furthermore there are positive environmental benefits very cool I mean" }, { "end": 2934.9199999999996, "start": 2929.24, "text": " these are all very cool things they say as such we see no immediate negative" }, { "end": 2940.04, "start": 2934.9199999999996, "text": " ethical or societal impacts of our work beyond what applies to the core building" }, { "end": 2947.7599999999998, "start": 2940.04, "text": " blocks of deep learning do better now this this honestly I agree with them" }, { "end": 2953.4799999999996, "start": 2947.7599999999998, "text": " right I completely agree with them that this is sort of a good thing you might" }, { "end": 2957.52, "start": 2953.4799999999996, "text": " trade off you know some accuracy might some make some approximations but you" }, { "end": 2963.36, "start": 2957.52, "text": " will get a much faster model and this model has any model can be used you" }, { "end": 2969.2, "start": 2963.36, "text": " know for things and and that they now have to pull out of there out of their" }, { "end": 2980.16, "start": 2969.2, "text": " but some way in in in over five steps of intermediate layers this could be used" }, { "end": 2987.96, "start": 2980.16, "text": " for bad it just seems ridiculous so good on them for defying the please also" }, { "end": 2994, "start": 2987.96, "text": " think about negative impacts right here all right back to back back to past" }, { "end": 3000.24, "start": 2994, "text": " Yannick all right this was the Lin former paper I hope this somewhat makes" }, { "end": 3007.16, "start": 3000.24, "text": " sense made sense to you I had to read it multiple times for it to make sense to" }, { "end": 3011.3999999999996, "start": 3007.16, "text": " me but ultimately it's all about the fact that you have these multiple heads" }, { "end": 3016, "start": 3011.3999999999996, "text": " and therefore your information is probably lower dimensional and you can" }, { "end": 3022, "start": 3016, "text": " abuse that and to just calculate in this lower dimension all right I'll see you" }, { "end": 3037.84, "start": 3022, "text": " next time bye bye" } ]
WTB2p4bqtXU
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
End-to-End Adversarial Text-to-Speech (Paper Explained)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "tts", "text-to-speech", "aligner", "convolutions", "spectrogram", "mel", "alignment", "phonemes", "deepmind", "deep mind", "dynamic time warping", "gaussian kernel", "adversarial", "gan", "discriminator", "tokens", "sound wave", "speech" ]
Text-to-speech engines are usually multi-stage pipelines that transform the signal into many intermediate representations and require supervision at each step. When trying to train TTS end-to-end, the alignment problem arises: Which text corresponds to which piece of sound? This paper uses an alignment module to tackle this problem and produces astonishingly good sound. OUTLINE: 0:00 - Intro & Overview 1:55 - Problems with Text-to-Speech 3:55 - Adversarial Training 5:20 - End-to-End Training 7:20 - Discriminator Architecture 10:40 - Generator Architecture 12:20 - The Alignment Problem 14:40 - Aligner Architecture 24:00 - Spectrogram Prediction Loss 32:30 - Dynamic Time Warping 38:30 - Conclusion Paper: https://arxiv.org/abs/2006.03575 Website: https://deepmind.com/research/publications/End-to-End-Adversarial-Text-to-Speech Abstract: Modern text-to-speech synthesis pipelines typically involve multiple processing stages, each of which is designed or learnt independently from the rest. In this work, we take on the challenging task of learning to synthesise speech from normalised text or phonemes in an end-to-end manner, resulting in models which operate directly on character or phoneme input sequences and produce raw speech audio outputs. Our proposed generator is feed-forward and thus efficient for both training and inference, using a differentiable monotonic interpolation scheme to predict the duration of each input token. It learns to produce high fidelity audio through a combination of adversarial feedback and prediction losses constraining the generated audio to roughly match the ground truth in terms of its total duration and mel-spectrogram. To allow the model to capture temporal variation in the generated audio, we employ soft dynamic time warping in the spectrogram-based prediction loss. The resulting model achieves a mean opinion score exceeding 4 on a 5 point scale, which is comparable to the state-of-the-art models relying on multi-stage training and additional supervision. Authors: Jeff Donahue, Sander Dieleman, Mikołaj Bińkowski, Erich Elsen, Karen Simonyan Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://discord.gg/4H8xxDF BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
In this work, we take on the challenging task of learning to synthesize speech from normalized text or phonemes in an end-to-end manner, resulting in models which operate directly on character or phoneme input sequences and produce raw speech audio outputs. Okay, that wasn't the real model. I just thought it sounded really funny. This is a text-to-speech model, and it actually sounds like this. In this work, we take on the challenging task of learning to synthesize speech from normalized text or phonemes in an end-to-end manner, resulting in models which operate directly on character or phoneme input sequences and produce raw speech audio outputs. Okay, now you've probably, if you have listened to the text and not just how the text sounds, gotten what this paper is about. So the paper is called End-to-End Adversarial Text-to-Speech by Jeff Donahue, Sander Dieleman, Mikołaj Bińkowski, Erich Elsen and Karen Simonyan of, I believe, mostly DeepMind. And this paper, on a high level, produces speech, so the sound waves of speech, directly from text, or from what they call normalized text or phoneme text. And it does so without any intermediate supervised representations. And that's a challenging task. The main problems here are the alignment problem that they have to solve, and actually making this work in an adversarial manner. So we're going to look at this paper. As always, if you like work like this, consider subscribing and sharing it out. And if you have any comments, leave them in the comment section. Okay, so what's the problem with text-to-speech? Text-to-speech is basically: you take a piece of text like this one, "modern text-to-speech synthesis pipelines typically involve" blah, blah, blah. And you want to make a model that takes this and outputs sound waves, as if a human would say it, right? So you do "modern text to speech" and so on. Now you have multiple problems when doing this. First of all, the text here is words. Let's say we can tokenize the text into words. So you have "modern", "text", "to", "speech". Those are four tokens. However, sound waves are, of course, much, much more densely sampled. These sound waves are typically sampled at something like 24 kilohertz. So the ratio of one token to output samples is super high. One token will produce many, many thousand samples in the speech. So that's the first problem. The second problem is that if you have training data, so you have data that has a piece of text and you have the sound wave where, you know, a human read that particular piece of text, you still don't know which word exactly corresponds to which portion of that sound. You simply know the entire text corresponds to the entire sound wave. You don't know, for this word "text" right here, where it starts and where it ends in this sound wave. And the last problem you obviously have is that you want to make this in a way that it generalizes, that it sounds like a human, but also generalizes to some other text. And this paper here solves all of these problems jointly by doing an adversarial approach to learning. And it does it end to end. Now adversarial simply means that you have a generator that takes in the piece of text right here and generates this sound wave. And then you have a discriminator that looks at this sound wave and it looks at the real sound wave. So let's say this over here is real. And this is what the generator has produced.
The discriminator tries to discriminate between the two. Now this is not entirely the same thing as a supervised loss. Usually in a GAN, you do not have the corresponding samples, right? You simply input a real sample here and the generator produces a generated sample here. You do not necessarily have, in a classic GAN, the corresponding sample. Here you assume that you have the corresponding real sample, but it's still different from supervised learning in that both go through a discriminator, and the discriminator, which is a neural network, tries to tell which one is real and which one is generated. In fact, the discriminator is a set of neural networks. We're going to go into that shortly. So it's adversarial in the sense that there is a generator and a discriminator. And it is end to end in the sense that usually what these pipelines do is they take the text. And we've looked at this, for example, in the video about Facebook's text-to-speech system. So they take the text, and the first thing they do is produce a set of whatever they would call features, like textual features. So these are sort of intermediate features for the text to be produced. And then another model would take these and produce something like spectrograms. And then another model would take the spectrograms and finally produce sound or speech. So usually in these systems you have intermediate representations, and each of these models right here can be trained by itself. That's an advantage: you can train, for example, a model that goes from a spectrogram to a sound wave by itself, and you simply need sound for that. The computation from sound to spectrogram is super easy, so you can generate your own training data. So you can go from spectrogram to sound. You can train a model like this. So in these pipelines, usually there are multiple stages and each of these models has to be trained by itself. This paper tries to do this end to end. That means you input the text and you get out the sound wave. And there is nothing in between. I mean, of course, there are latent representations, but you train it end to end in one go. Okay, so let's look at the different systems they employ. First of all, let's look at the discriminators, because that's the easiest. So they have these discriminators, and these are adopted from the GAN-TTS paper. Now, as we already said, the discriminators try to differentiate between real and fake sound. If they were to just look at the entire sound wave, then it would just basically reduce to comparing the two. But instead, the discriminators operate on very small windows. Specifically, they have five different discriminators, and each of the five different discriminators takes a different window length. But all of these windows are super short. So one discriminator might take windows this long, another discriminator might take a bit longer windows, and another discriminator might take a bit shorter windows, from the real and from the fake, and the discriminators simply try to discriminate, only in these windows, whether it's real or fake. And now we're a bit more in the GAN setting where, you know, you simply have one data point of the real and one data point of the fake. And you have to compare them.
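To make the multi-window idea concrete, here is a rough sketch of how such a bank of discriminators could be wired up. The window sizes and the tiny convolutional stack are made-up placeholders, not the architecture from the GAN-TTS paper; only the pattern of several discriminators, each judging a random short window at its own scale, follows the description above.

import torch
import torch.nn as nn

WINDOW_SIZES = [240, 480, 960, 1920, 3600]  # five scales, values assumed

class WindowDiscriminator(nn.Module):
    def __init__(self):
        super().__init__()
        # A stand-in conv stack; the real one is much more elaborate.
        self.net = nn.Sequential(
            nn.Conv1d(1, 64, kernel_size=15, stride=4), nn.LeakyReLU(0.2),
            nn.Conv1d(64, 128, kernel_size=15, stride=4), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(128, 1),
        )

    def forward(self, window):   # window: (batch, 1, window_length)
        return self.net(window)  # one real/fake logit per window

discriminators = nn.ModuleList(WindowDiscriminator() for _ in WINDOW_SIZES)

def multi_scale_logits(audio):  # audio: (batch, 1, num_samples), longer than any window
    logits = []
    for disc, win in zip(discriminators, WINDOW_SIZES):
        start = torch.randint(0, audio.shape[-1] - win + 1, (1,)).item()
        logits.append(disc(audio[..., start:start + win]))  # random short window
    return logits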
And this here, I believe, is one of the keys why this model generalizes: the discriminators basically try to assess whether a short sequence sounds real or sounds fake, and whether the two samples sound alike, at different scales of time. And by only doing this on these short scales, the loss sort of generalizes. Otherwise, if you compare it on the entire sound wave, it would just reduce to comparing point by point right here. And if the generator produces something that's not exactly aligned, then, of course, every point would be wrong, and you'd run into all sorts of problems. So this is a set of five discriminators; each takes a different length of sub-sound of this wave and tries to discriminate real from fake. That's the discriminator loss. They have an additional discriminator loss where they compute spectrograms of these things. So you compute the spectrogram of this and the spectrogram of this. And they have a discriminator, another neural network, that tries to distinguish which one is real and which one is fake. Note that this is not the same as down here, where the spectrogram is an intermediate representation. Here, the sound is the output, and from the sound you compute the spectrogram, and then you compare the two. So the spectrogram is simply a different feature space for the discriminator to compute the loss. It is not an intermediate representation on the way to produce the sound itself. That's the difference here between the classic approach and this approach. So it's end-to-end adversarial. So we got the discriminators. The discriminators simply try to differentiate the sound waves at short scales, as well as the spectrograms. Now, the second part is, of course, the generator. How do we even produce sound? And that's this diagram right here. So you have this GAN-TTS generator. This is a generator that takes in a hidden representation. It takes in tokens. Let's say token one, token two. Let's go from the one before. Think of a sentence: "Hello there." OK, so it takes these tokens, and of course it takes like hidden representations of the tokens, and for each one, or for this joint sequence of hidden one, hidden two, it would output the sound wave. OK. And this has been a paper before, the GAN-TTS. And you also condition it on the speaker and on latent variables like how you want the pitch to be and so on. That's not really important for us right here. The generator can simply take these token embeddings and produce sound. The problem is, in the original paper you had an alignment. You knew which token corresponded to which piece of sound, and therefore you sort of knew, after that, what to compare the generator output with. You knew which token corresponded to which piece of sound right here. So the generator knew what it had to produce from each token, how long it should be, and so on. So this is generally the alignment problem, or what I call the alignment problem. So take a piece of text like this entire paragraph right here. Let's look at this paragraph. This paragraph, to read it out, takes like 30 to 60 seconds. You can't really train models that output this long of sound. It would be too big of a sample. You want to train ideally on segments. They train on segments that are, I believe, two-second windows from each example.
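Sampling those training windows is the easy part; the important detail for later is that you keep the offset around. A sketch, assuming 24 kHz audio loaded as a 1-D tensor; the two-second window length follows the description above, the rest is illustrative:

import torch

SAMPLE_RATE = 24000
WINDOW_SECONDS = 2.0

def sample_training_window(audio):  # audio: (num_samples,) full utterance
    win = int(WINDOW_SECONDS * SAMPLE_RATE)
    start = torch.randint(0, audio.shape[0] - win + 1, (1,)).item()
    # Return the clip *and* its offset: the aligner later needs to know
    # where in the full utterance this two-second fragment came from.
    return audio[start:start + win], start / SAMPLE_RATE  # (clip, offset in seconds)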
So, as they say, if we trained on 20 seconds, that would just be wasteful and prohibitively expensive. Now, the problem, of course, is if I simply take a window here, like this one of two seconds, and I have my human that has read this entire paragraph in one go, I have no clue, again, which part of the entire sound wave of this paragraph this subsequence corresponds to. A good guess would be to go like, well, this is about 50 percent in, so maybe here to here. Maybe. Who knows. And even within that, and that's what we discussed before, you have no clue how long this word here is going to take up within this piece of sound. And that's the general alignment problem right here: in this entire sound wave, where is the piece, and how do these words distribute across the wave of sound? The original model had, as I understand it, such alignments, and therefore that generator could work really well, because you had these alignments. Without these alignments, it doesn't work as well. And on their website that I've shown you initially, you can listen to samples where they disable each of these things. So this generator is really good at producing sound when it has these alignments. So the challenging task here is: how do you compute these alignments? How do you compute this thing if you don't have it in your training data? It needs to be part of the loss. So that's what this entire architecture down here is. So the text is down here. It goes in. And the first thing they do is they normalize the text and transform it into phonemes, which you can do in a deterministic fashion. There are scripts that do this. This is the only preprocessing they do, and they can also leave it away; they have an ablation on their website. So this is like phoneme text: "cat sat on the mat". Now, this phoneme text goes through this big block of dilated convolutions, and this outputs a 200 hertz representation, the token lengths, the alignment. OK, I should specify this. So for each token here, it outputs a length. So this thing predicts the length of each of the tokens. All of this thing here is there to embed the tokens in hidden space and then predict their lengths. You can see that right here. OK, so first we use F to take X. F is a stack of dilated convolutions, and it takes X and outputs a hidden representation, H. So first X goes to H, and then H is used to predict the length of each of the tokens. H is used to predict L, and L is the length of that token. So we embed this into a hidden representation with this right here, and then we use this stack to predict the length of each token. So this could say something like: this "cat" token right here is 20 milliseconds long. Or, instead of milliseconds, you would use something like frames or data points. Maybe this is 200 data points long. And then "sat" is a bit shorter, so this is 100 long. And "on" is really short, so this is 50 long. So for each token, it predicts the length. All right. So now, if we have the length of each, we can sort of calculate where the starting point is. If we know that here is the beginning of the sentence (they conservatively give some silence buffer here), then roughly you can assume that the beginning of the speech corresponds to the first token. Right. You can simply trace the waveform, and whenever it goes up, that's where the first token starts.
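A minimal sketch of that length-prediction head, assuming the hidden representations already come out of the dilated-convolution stack. The layer sizes and the softplus nonlinearity are assumptions; only the contract of one non-negative length per token comes from the description above.

import torch
import torch.nn as nn
import torch.nn.functional as F

class LengthPredictor(nn.Module):
    def __init__(self, hidden_dim=256):
        super().__init__()
        self.proj = nn.Linear(hidden_dim, 1)  # one scalar per token

    def forward(self, h):  # h: (batch, num_tokens, hidden_dim) from the conv stack
        # softplus keeps every predicted length positive
        return F.softplus(self.proj(h)).squeeze(-1)  # (batch, num_tokens), in frames

lengths = LengthPredictor()(torch.randn(1, 5, 256))  # 5 tokens -> 5 lengths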
And then, since we know that's where the first token starts, if we could predict the length of each one correctly, we could simply sum those up to figure out where each word starts. So if we want to know where "on" starts, we simply go from the beginning and go 200 plus 100 milliseconds. So, in data points, 200 plus 100. Here is where "on" starts. OK. And if we want to figure out the middle of "on", we simply add half of its own length. So plus 25 gets you to the middle. So this here is the center of the token "on". So for each token, we predict the length like this. And thereby, we can calculate for each one, by summing up from the beginning and then adding half of its own length, where the center of that token in the entire sequence is. And now we do this. We said we take random two-second audio, but we do this procedure for the entire text. OK, for every single token in the text, in the 20-second text, we do this, because then for each token we'll get a token center. And now the aligner's job here is to align that to the actual sound. So what we also give the generator here is the offset. Let's say we have these 20 seconds of speech and we randomly sampled these two seconds, and that's maybe five seconds from the beginning. We also tell it: this is five seconds, right here. So what we can now do is calculate back, sort of, and say, OK, I first need to discard five seconds of my signal, and I have a prediction of how long each token is. So I can just cross out tokens until I have basically used up five seconds. And then I know, OK, from here to wherever these things sum up to two seconds, those are my two seconds that I want to look at. Now, this is how I figure out where in the big sound wave my fragment is: I have this offset where I sampled it, and I simply use this and the predicted lengths to figure it out. I still need to figure out how the tokens that are actually in the span distribute. And that's what this aligner here does. Since we've already predicted the token centers, we simply assume that these are correct. So if this is, let's say, one second long, I assume that the middle is after point five seconds. So this is one second; the middle is at point five seconds. So I think that this token is aligned right here. This is the center of the token. Now, we want to be a little bit fuzzy with respect to that. So what they do is they use a Gaussian kernel right here. For each token, as you can see here, each token has a center, which is here. So the y axis is the time in sound and the x axis is the token. And for each token, we say, well, it doesn't have to be exactly there. So they put a Gaussian kernel, like this. OK, if you imagine this kernel popping out of the frame, they say this is about where the center is. And for this token right here, they say, well, it's probably here in the middle, but it could also be here or here or here. And we weigh this like this. So these are the weights. And then you simply sum up the weights with these embeddings. So for each token, out of this dilated convolution block, you get a hidden embedding. And by using this alignment matrix that you computed by predicting the lengths, and therefore predicting the centers of the tokens, you can then sort of shift.
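In code, the pieces just described fit in a few lines: centers from a cumulative sum over the predicted lengths, then a Gaussian bump around each center, normalized across tokens, then a weighted sum of token embeddings per output frame. A minimal sketch; the numbers reproduce the cat/sat/on example from above, while the kernel width sigma and the softmax normalization are assumptions for illustration:

import torch

def token_centers(lengths):  # lengths: (num_tokens,) in output frames
    ends = torch.cumsum(lengths, dim=0)  # where each token ends
    return ends - 0.5 * lengths          # sum of predecessors plus half its own

def align(h, centers, num_frames, sigma=10.0):
    # h: (num_tokens, dim) token embeddings from the dilated-conv stack
    t = torch.arange(num_frames, dtype=torch.float32)  # output time grid
    # (num_frames, num_tokens): Gaussian weight of each token at each frame
    logits = -((t[:, None] - centers[None, :]) ** 2) / (2.0 * sigma ** 2)
    weights = torch.softmax(logits, dim=1)  # soft, differentiable alignment matrix
    return weights @ h                      # (num_frames, dim) aligned features

lengths = torch.tensor([200.0, 100.0, 50.0])  # "cat", "sat", "on"
centers = token_centers(lengths)              # tensor([100., 250., 325.])
aligned = align(torch.randn(3, 256), centers, 350)

Because every step is differentiable, the gradient of whatever loss sits on top flows through the alignment weights back into the centers, and therefore into the length predictor.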
So first, you assume that h1, h2, h3, if you were to do nothing, would just all take up like a third of the time. And now, by multiplying with this matrix, you have the opportunity, because you predicted a longer length for the first token, to shift that a bit to the right, maybe shorten the second token a bit, and then the third token goes until the end. OK, that's what this aligner thing is. This is not a model by itself. All that this takes in is the computation right here of the token lengths. This estimates these token lengths for each of the tokens, and the rest is deterministic. It's simply saying: OK, how much is the offset? Cool. That's how we know where in the sound wave we are. And then, where is each of the centers? We simply get that by summing up the predicted token lengths. And then we use a Gaussian kernel, with like a set hyperparameter, to be a little bit fuzzy with respect to these lengths right here, so as to be differentiable, basically. And that will ultimately train this model right here that computes the token lengths. Right. So we sum up these embeddings in a weighted fashion, and that's what goes into the generator. So now we have embeddings, and we have the alignments for the embeddings, which are these pieces of where in the sound wave these are. And from that, the generator can now produce the sound wave itself. OK. And that's basically just an upsampling here. I think that's just up-convolutions, upsampling from a 200 hertz signal to a 24 kilohertz signal. Cool. So that's that. Now, they discover this doesn't work. And why doesn't it work? It's because, at the beginning of training, these token length predictions here are pretty crappy. And that means, I guess especially for this part, where you say, well, where in the sound wave of my 20 seconds do I even need to cut out to compare with the discriminator? Right. So if you sample this piece here and that's what you give to the discriminator, but your length predictions are so far off that the generator is trying to produce this particular piece, because it thinks, oh, instead of producing this token here, which is what the discriminator looks at, it produces these tokens here. Of course, you have no chance, no matter how good your adversarial loss is. Remember, these length predictions are used to determine, basically, which of these tokens the generator needs to produce the sound for, and how they're aligned. So they have an additional loss right here. What they do is they go via the spectrograms with this spectrogram prediction loss. So they say: we discovered that adversarial feedback is insufficient to learn alignment. At the start of training, the aligner does not produce an accurate alignment, so the information in the input tokens is incorrectly temporally distributed. This encourages the decoder to ignore the aligner output. The unconditional discriminators provide no useful signal to correct this. Oh, yeah, I should have mentioned this: the discriminators here, since you don't know which tokens you should produce, the discriminators are unconditional. They don't know which text is produced. You don't give them the tokens. You simply give them the sound waves. That's something I find particularly interesting here.
Now, of course, this wouldn't work in a traditional GAN, because you simply have a data sample here and a data sample right here. But in this case, you of course have the corresponding sound samples. But still, they are, you know, cut down to a subsequence, so you don't know which text you're producing. So you have to make the discriminators unconditional. And therefore, they are going to discriminate, as we said, potentially between two completely non-overlapping pieces of the sound wave, which, of course, doesn't help you. And then the aligner can also not learn anything, because there is no learning signal, because everything just says: this is not the same. OK. And that's what they say here. We face a different problem: we do not have aligned ground truth. Conditional discriminators, which they don't have, need an aligner module, which cannot function correctly at the start of training, effectively turning them into unconditional discriminators. So even if they were to input the text, it would still be the wrong text, because their aligner is wrong at the beginning. Although it should be possible in theory to train the discriminator's aligner module adversarially, we find that this does not work in practice, and training gets stuck. So what do they do? They say: instead, we propose to guide learning by using an explicit prediction loss in the spectrogram domain. We minimize the L1 loss between the log-scaled mel spectrograms of the generator output and the corresponding ground truth training window. This helps learning to take off and renders conditional discriminators unnecessary, simplifying the model. So they take the spectrogram of the generator output and the corresponding ground truth training window, and they simply calculate the L1 difference of the spectrograms. Now this, as I understand it, is different from the earlier thing, because, as we said, they also have a discriminator on the spectrograms. This is different from that. This is even in addition to that. So here somewhere we had this. This was the discriminator on the spectrograms. And I think this is even different. The discriminator simply decides: does the spectrogram look real or fake? Now they also take the spectrograms and compare them with the L1 loss. So this is exactly what they said they wouldn't do, right here. Now, it's still the case that they don't use spectrograms as intermediate representations, but they now do have a supervised loss on the spectrograms. And one of the motivations to do this end to end is saying, you know, maybe these auxiliary losses and supervised losses, they sort of distract. They're good to guide the training, but they sort of distract. And now they see: OK, maybe we have to introduce this one right here in order to make the training start, because this is a real signal. But again, you run into a problem. First of all, this is not a discriminator anymore. This is a true L1 loss. So we potentially run into the problem of the generator simply copying the input, because you always tell it what the correct output is. This is now a supervised loss that we guide the training with. And, what was I going to say, yeah: so you take the generator output, you transform it into a spectrogram; you take the real output, transform it into a spectrogram; compare the L1 loss.
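The prediction loss itself is then just an L1 distance in log-mel space. A sketch assuming torchaudio; the mel parameters are typical placeholder values, not the paper's exact settings:

import torch
import torchaudio

to_mel = torchaudio.transforms.MelSpectrogram(
    sample_rate=24000, n_fft=1024, hop_length=256, n_mels=80  # assumed values
)

def spectrogram_l1(generated, real):  # both: (batch, num_samples), same window
    log_mel_g = torch.log(to_mel(generated) + 1e-5)  # epsilon avoids log(0)
    log_mel_r = torch.log(to_mel(real) + 1e-5)
    return (log_mel_g - log_mel_r).abs().mean()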
Now, you sort of run into the same problem, in that if these are completely not aligned, then this is not going to work. But since you have a supervised loss, it gives you a much stronger learning signal of what the generator should produce. So at the beginning of training, you're counting on sort of a reverse learning process, in that the real sound will go into a spectrogram, and the generator will go here, and then that learning signal will sort of travel to make the generator produce more of whatever the real sound is. And, almost, if you think that the aligner is so bad that we have even non-overlapping fragments, basically you teach the generator to ignore the inputs that it gets from down here, that it gets from its entire backbone. You teach it to sort of ignore all of that, if that makes any sense. It simply produces the sound according to this supervised loss. Now, of course, it doesn't ignore it. It still takes the features, but it ignores this whole alignment thing. And now, once the generator gets a better signal of what it should produce, that signal can travel back to the aligner module, to this length estimation module, and guide that one to make better predictions about the lengths. Okay, so that's how, at the beginning of training, you sort of rely on this path of learning to initialize this module, the aligner. And then, once these length predictors are better, the loss can travel in its intended path, where you forward-produce these aligned sound waves, and then these discriminators take over. I don't exactly know if they trade this off during training, or if they simply set it to a number such that it helps them at the beginning. But it's a good idea, and a good trick, to introduce here a supervised portion to make the beginning easier. But of course, you'd run into the same problem as I said, in that if you have two spectrograms, they don't necessarily align. And here they use this dynamic time warping loss. Now, this looks very, very similar to the aligner, but it is something different. Because now, the difference here is, you have two things that you know should match. Right. You have this thing and you have this thing, and they both have the same number of entries. So they both have a, b, c, d, e. This has an a, a b, a c, a d, and an e slot, and this also has an a, a b, a c, a d, and an e slot. And you assume that the beginnings and the ends match. This is not true, of course, because they could be completely unaligned. But they say, in practice, this works. So you assume that, sort of, at least a little bit, these are aligned. Right. And, by the way, there's so much to this paper: they have an auxiliary loss on the produced lengths (I don't remember where that is exactly), where all the lengths that these length predictors produce must add up to the total length of the sound, which in our case, I guess, is the two seconds. OK. So, really quickly, the least thing these length predictions can do is all predict something like L over N.
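That auxiliary length loss is a one-liner: the predicted per-token lengths should sum to the known total length of the utterance. A sketch; the squared-error form is an assumption about how one might write it, not necessarily the paper's exact formulation:

import torch

def total_length_loss(pred_lengths, num_frames):
    # pred_lengths: (num_tokens,) predicted lengths; num_frames: known total length
    return (pred_lengths.sum() - num_frames) ** 2

loss = total_length_loss(torch.tensor([200.0, 100.0, 50.0]), 400.0)  # (350 - 400)^2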
And that will give you a sort of rough alignment, such that it kind of makes sense to do this dynamic time warping and to assume that the beginnings and the endings align. All right, so we have two things that have the same number of slots. We know the beginnings and ends align, or we assume that. How do we find out which slots align to which? And this is dynamic programming. They formulate this as a dynamic programming problem, which you might know from algorithms and data structures courses and so on, where you can figure out which of these align. So if you go a step here, that means you go one step in each of the sequences. And if you go a step here, that means only this one advances, and this one still corresponds to this one right here. And OK, I formulated it wrong at the beginning: you don't have ABCDE; I guess you would actually have all of these slots, and you would figure out which ones correspond to which. And we have the same problem here, and we have the same problem again, where we have a different selection. Yeah, but I hope you recognize these sorts of problems, where here you align them again. So these are classic dynamic programming alignment problems, and they align it like this. And there is a penalty we give. So they give a penalty with respect to how much this path deviates. So here you can see how much the spectrogram of the generated sound aligns with the spectrogram of the ground truth, and here is a penalty for each time that the two spectrograms don't align correctly. So they align in a soft way. They consider every single possible path right here, and you can again do this using dynamic programming. And the entire catch here is that the alignment must be monotonic, because no matter how long or short the sequences are, they always follow one after another in both of the spectrograms and both of the sounds. That's why you can optimize it in this way. So over all the possible paths along which you can align them, you weigh these paths by the score that you give them here, and then you calculate the loss across all these different paths. And what that gives you is sort of a fuzzy loss. You don't compare the spectrograms directly, but you compare them and you sort of forgive them for not aligning too well. The more they don't align, the more of a penalty you give. And that's how you force the generator to produce things that are aligned: you produce these length predictions that make the spectrograms closer to each other. So that's how you calculate the spectrogram loss. This is entirely deterministic. There are no learned weights right here. Okay, cool. The last thing they say is that they use this phonemizer. That's at the very beginning, but they also ablate that. So in the results, they do a lot, lot of ablation studies, which I don't want to go into right now. I've already shown you some. And I think they do a human evaluation. Do they do a human evaluation? I know this might have been in another paper. But as you have heard from the examples, this sounds extremely realistic. I'll link the website with the samples in the video description for sure. So I think we've gone over everything. The generator starts off with text, puts that into normalized text, calculates hidden features right here.
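(A side note on the dynamic time warping just described, before the recap continues: written out, it is a small dynamic program with monotonic moves, a soft minimum, and an extra penalty for the warping steps. Here is a sketch of such a soft-DTW loss over two spectrograms; gamma and the warp penalty are assumed hyperparameters, and the L1 frame distance is a guess at a reasonable cost, not necessarily the paper's choice.)

import torch

def soft_dtw(S_gen, S_real, gamma=0.1, warp_penalty=1.0):
    # S_gen, S_real: (frames, n_mels) spectrograms to be softly aligned
    T1, T2 = S_gen.shape[0], S_real.shape[0]
    cost = torch.cdist(S_gen, S_real, p=1)  # pairwise L1 distance between frames
    r = torch.full((T1 + 1, T2 + 1), float("inf"))
    r[0, 0] = 0.0  # paths start at the (assumed) aligned beginnings
    for i in range(1, T1 + 1):
        for j in range(1, T2 + 1):
            # Monotonic moves only; non-diagonal steps pay the warp penalty.
            steps = torch.stack([
                r[i - 1, j - 1],             # advance both spectrograms
                r[i - 1, j] + warp_penalty,  # advance generated only
                r[i, j - 1] + warp_penalty,  # advance ground truth only
            ])
            soft_min = -gamma * torch.logsumexp(-steps / gamma, dim=0)
            r[i, j] = cost[i - 1, j - 1] + soft_min
    return r[T1, T2]  # soft cost over all monotonic alignment paths

loss = soft_dtw(torch.randn(20, 80), torch.randn(20, 80))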
These hidden features, on one hand, are used to predict the lengths of each of the tokens in the sound, and are also used as an input to the generator here. Now, they can only be used as an input to the generator if the generator knows how to align them in time, and how to align them in time is predicted from these predicted lengths right here, via this aligner algorithm. The lengths are the only thing that is predicted; everything after that is deterministic. The aligner is simply a Gaussian kernel over the predicted locations on the time axis. The Gaussian kernel is there to make this alignment, this prediction, a bit fuzzy. You perform a weighted sum with these features, and then the generator knows where to put the tokens. Finally, the generator can upsample the now-aligned tokens into sound. This goes into the discriminator. The discriminator is actually five different discriminators, which each try to discriminate the original from the... sorry, the generated from the real, at different time scales. In addition to that, you have a discriminator on the spectrograms, and you also have an L1 loss on the spectrograms, which helps especially at the beginning of training. For the L1 loss on the spectrograms, you have to again compute an alignment, but you do this in a deterministic way, by this thing down here, this dynamic time warping, where you simply assume that they are aligned and forgive them for not being aligned, with a soft penalty and not a hard zero score. All right, this was the paper. Again, if you like this, leave a like, a comment, share it out, subscribe, and have a good day. Bye bye.
[ { "end": 14, "start": 0, "text": " In this work, we take on the challenging task of learning to synthesize speech from normalized text or phonemes in an entwined manner, resulting in models which operate directly on character or phoneme input sequences and produce raw speech audio outputs." }, { "end": 26, "start": 16, "text": " Okay, that wasn't the real model. I just thought it sounded really funny. This is a text to speech model, and it actually sounds like this." }, { "end": 40, "start": 26, "text": " In this work, we take on the challenging task of learning to synthesize speech from normalized text or phonemes in an entwined manner, resulting in models which operate directly on character or phoneme input sequences and produce raw speech audio outputs." }, { "end": 68, "start": 40, "text": " Okay, now you've probably, if you have listened to the text and not just how the text sounds, you have gotten what this paper is about. So the paper is called End to End Adversarial Text to Speech by Jeff Donahue, Sander Dieleman, Mikolai Binkowski, Eric Elson and Karen Simonian of, I believe, of mostly deep mind." }, { "end": 83, "start": 70, "text": " And this paper on a high level, it produces speech, so the sound waves of speech directly from text or from what they call normalized text or phoneme text." }, { "end": 100, "start": 83, "text": " And it does so without any intermediate supervised representations. And that's a challenging task. And the main problems here are the alignment problem that they have to solve and actually making this work in an adversarial manner." }, { "end": 111, "start": 101, "text": " So we're going to look at this paper. As always, if you like work like this, consider subscribing and sharing it out. And if you have any comments, leave them in the comment section." }, { "end": 122, "start": 111, "text": " Okay, so what's the problem with text to speech? Text to speech is basically you take a piece of text like this one, modern text to speech synthesis pipelines typically involve blah, blah, blah." }, { "end": 136, "start": 123, "text": " And you want to make a model that takes this and outputs sound waves, as if a human would say it right so you do modern text to speech and so on." }, { "end": 146, "start": 136, "text": " Now you have multiple problems when doing this. First of all, the text here is words. Let's say we can tokenize the text into words." }, { "end": 156, "start": 147, "text": " So you have modern text to speech. Those are four tokens. However, sound waves are, of course, much, much densely sampled." }, { "end": 171, "start": 156, "text": " So these sound waves, they are typically in the order of something like 24 kilohertz sampled. So that's the the ratio of one token to output samples is super high." }, { "end": 179, "start": 172, "text": " So one token will produce many, many thousand samples in the speech. So that's the first problem." }, { "end": 192, "start": 179, "text": " The second problem is that if you have training data, so you have data that has a piece of text and you have the sound wave that a human you know the human read that particular piece of text," }, { "end": 202, "start": 193, "text": " you still don't know which word exactly corresponds to which portion of that text. You simply know the entire text corresponds to the entire sound wave." }, { "end": 209, "start": 202, "text": " You don't know this word text right here. You don't know. You don't know where it starts and where it ends in this sound wave." 
}, { "end": 223, "start": 210, "text": " And the last problem you obviously have is that you want to make this in a way that it generalizes, that it sounds like a human, but also generalizes to some other text." }, { "end": 231, "start": 223, "text": " And this paper here solves all of these problems jointly by doing an adversarial approach to learning." }, { "end": 244, "start": 232, "text": " And it does it end to end. Now adversarial simply means that you have a generator that takes in the piece of text right here and generates this sound wave." }, { "end": 254, "start": 244, "text": " And then you have a discriminator that looks at this sound wave and it looks at the real sound wave. So the real, okay, the real sense." }, { "end": 263, "start": 255, "text": " So let's say this over here is real. And this is what the generator has produced. The discriminator tries to discriminate between the two." }, { "end": 274, "start": 263, "text": " Now this is not entirely the same thing as a supervised loss. Usually in a GAN, you do not have the corresponding samples, right?" }, { "end": 281, "start": 275, "text": " You simply input a real sample here and the generator produces a generated sample here." }, { "end": 290, "start": 282, "text": " You do not necessarily have in a classic GAN the corresponding sample. Here you assume that you have the corresponding real sample," }, { "end": 300, "start": 290, "text": " but it's still different than supervised learning in that both go through a discriminator and the discriminator tries to tell, the discriminator is a neural network," }, { "end": 311, "start": 301, "text": " tries to tell which one is real and which one is generated. In fact, the discriminator is a set of neural networks. We're going to go into that shortly." }, { "end": 323, "start": 311, "text": " So it's adversarial in the sense that there is a generator and a discriminator. And it is end to end in the sense that usually what these pipelines do is they take the text." }, { "end": 330, "start": 324, "text": " And we've looked at this, for example, in the video about this Facebook's text to speech system. So they take the text." }, { "end": 344, "start": 330, "text": " And the first thing they do is they would produce a set of whatever they would call features like textual features." }, { "end": 352, "start": 345, "text": " So these are these are sort of intermediate features for the text to be produced." }, { "end": 361, "start": 352, "text": " And then another model would take these and it would produce something like spectrograms spectrograms." }, { "end": 368, "start": 362, "text": " And then another model would take the spectrograms and finally produce sound or speech." }, { "end": 374, "start": 369, "text": " So you have usually in these systems, you have intermediate representation." }, { "end": 381, "start": 374, "text": " And each of these models right here can be trained by itself. So that's an advantage that you can train." }, { "end": 391, "start": 382, "text": " For example, you can train a model that goes from a spectrogram to a sound wave by itself. And you don't need you simply need sound for that." }, { "end": 398, "start": 392, "text": " The computation from sound to spectrogram is super easy. So you can generate your own training data." }, { "end": 403, "start": 398, "text": " So you can go from spectrogram to sound. You can train a model like this." 
}, { "end": 410, "start": 404, "text": " So in these pipelines, usually there are multiple stages and each of these models has to be trained by itself." }, { "end": 420, "start": 411, "text": " This paper tries to do this end to end. That means you input the text and you get out the sound wave." }, { "end": 432, "start": 420, "text": " And there is nothing in between. I mean, of course, there are latent representations, but you train it end to end in one go." }, { "end": 437, "start": 433, "text": " Okay, so let's look at the different systems they employ." }, { "end": 443, "start": 438, "text": " First of all, let's look at the discriminators, because that's the easiest." }, { "end": 449, "start": 443, "text": " So they have these discriminators and these are adopted from this GAN TTS paper." }, { "end": 457, "start": 450, "text": " Now, as we already said, the discriminators try to differentiate between real and fake sound." }, { "end": 468, "start": 458, "text": " And they do it in a particular way. If they were to just look at the entire sound wave, then it would just basically reduce to comparing the two." }, { "end": 472, "start": 468, "text": " But instead, the discriminators, they operate on very small windows." }, { "end": 480, "start": 473, "text": " Specifically, they have five different discriminators and each of the five discriminators takes a different window length." }, { "end": 486, "start": 481, "text": " But all of these windows are super short. So one discriminator might take windows this long." }, { "end": 493, "start": 487, "text": " Another discriminator might take a bit longer windows and another discriminator might take a bit shorter windows." }, { "end": 502, "start": 493, "text": " And that, of course, from the real and from the fake, and the discriminators simply try to discriminate, only in these windows, whether it's real or fake." }, { "end": 511, "start": 503, "text": " And now we're a bit more into the GAN setting where, you know, you simply have one data point of the real and one data point of the fake." }, { "end": 520, "start": 512, "text": " And you have to compare them. And this here, I believe, is one of the keys why this model generalizes, because the discriminators, basically, they try to assess" }, { "end": 531, "start": 520, "text": " whether a short sequence sounds real or sounds fake, and whether the two samples sound alike at different scales of time." }, { "end": 537, "start": 532, "text": " And by only doing this on these short scales, the loss sort of generalizes." }, { "end": 547, "start": 538, "text": " Otherwise, if you compare it on the entire sound wave, it would just reduce to comparing it point by point right here." }, { "end": 554, "start": 547, "text": " And if the generator produces something that's not exactly aligned, then, of course, every point would be wrong." }, { "end": 569, "start": 555, "text": " And you'd run into all sorts of problems. So this is a set of five discriminators, where each takes a different length of sub-sound of this wave and tries to discriminate real from fake." }, { "end": 577, "start": 569, "text": " That's the discriminator loss. They have an additional discriminator loss where they compute spectrograms of these things." }, { "end": 584, "start": 578, "text": " So they compute the spectrogram of this and the spectrogram of that."
}, { "end": 592, "start": 585, "text": " And they have a discriminator, another neural network here that tries to distinguish which one is real and which one is fake." }, { "end": 598, "start": 592, "text": " Note that this is not the same as down here where the spectrogram is an intermediate representation." }, { "end": 605, "start": 599, "text": " Here, the sound is the output and from the sound you compute the spectrogram and then you compare the two." }, { "end": 612, "start": 606, "text": " So the spectrogram is simply a different feature space for the discriminator to compute the loss." }, { "end": 620, "start": 613, "text": " It is not an intermediate representation on the way to produce the sound itself. So that's the difference." }, { "end": 626, "start": 620, "text": " That's the difference here between the classic approach and this approach." }, { "end": 630, "start": 627, "text": " So it's end to end adversarial. So we got the discriminator." }, { "end": 636, "start": 631, "text": " The discriminators simply try to differentiate the sound waves at short scales as well as the spectrograms." }, { "end": 642, "start": 637, "text": " Now, the second part is, of course, the generator. How do we even produce sound?" }, { "end": 645, "start": 643, "text": " And that's this diagram right here." }, { "end": 655, "start": 645, "text": " So you have this GAN-TTS generator. This is a generator that takes in a hidden representation." }, { "end": 659, "start": 656, "text": " It takes in tokens. Let's say token one, token two." }, { "end": 667, "start": 660, "text": " Let's go from the one before. Think of a sentence. Hello there." }, { "end": 676, "start": 667, "text": " OK, so it takes these tokens and of course it takes like hidden representations of the tokens." }, { "end": 683, "start": 677, "text": " And it will output, for each one or for this joint sequence," }, { "end": 688, "start": 684, "text": " hidden one, hidden two, the sound wave. OK." }, { "end": 700, "start": 688, "text": " And this has been a paper before, the GAN-TTS. And you also condition it on the speaker and on latent variables like how you want the pitch to be and so on." }, { "end": 708, "start": 701, "text": " That's not really important for us right here. The generator can simply take these token embeddings and produce sound." }, { "end": 718, "start": 708, "text": " The problem is in the original paper, you had an alignment. You knew which token corresponded to which piece of sound." }, { "end": 725, "start": 719, "text": " And therefore you sort of knew what you needed to compare to the generator output." }, { "end": 730, "start": 726, "text": " And you knew which token corresponded to which piece of sound right here." }, { "end": 736, "start": 731, "text": " So the generator knew what it had to produce from each token, how long it should be and so on." }, { "end": 741, "start": 736, "text": " So this is generally the alignment problem, or what I call the alignment problem." }, { "end": 746, "start": 742, "text": " So if you take a piece of text like this entire paragraph right here." }, { "end": 753, "start": 747, "text": " Let's look at this paragraph. To read it out takes like 30 to 60 seconds." }, { "end": 761, "start": 754, "text": " You can't really train models that output this long a stretch of sound. It would be too big of a sample." }, { "end": 770, "start": 761, "text": " You want to train ideally on segments.
They train on segments that are, I believe here, two-second windows from each example." }, { "end": 777, "start": 771, "text": " Because they say if we train on 20 seconds, that would just be wasteful and prohibitively expensive." }, { "end": 790, "start": 777, "text": " Now, the problem, of course, is if I simply take a window here like this one of two seconds and I have my human that has read this entire paragraph in one go," }, { "end": 800, "start": 791, "text": " I have no clue again which part of the entire sound wave of this paragraph this subsequence corresponds to." }, { "end": 808, "start": 800, "text": " A good guess would be to go like, well, this is about 50 percent in. So maybe here to here. Maybe. Who knows." }, { "end": 818, "start": 809, "text": " And even within that, and that's what we discussed before, you have no clue how long this word here is going to take up within this piece of sound." }, { "end": 830, "start": 818, "text": " And that's the general alignment problem right here. So in this entire sound wave, where is the piece and how do these words distribute across the wave of sound?" }, { "end": 841, "start": 831, "text": " The original model had, as I understand it, such alignments and therefore this generator could work really well, because you had these alignments. Without these alignments," }, { "end": 849, "start": 841, "text": " it doesn't work as well. And on their website that I've shown you initially, you can listen to samples where they disable each of these things." }, { "end": 857, "start": 850, "text": " So this generator is really good at producing sound when it has these alignments." }, { "end": 862, "start": 857, "text": " So the challenging task here is how do you compute these alignments?" }, { "end": 864, "start": 862, "text": " How do you compute this thing?" }, { "end": 871, "start": 864, "text": " If you don't have it in your training data, it needs to be part of the loss." }, { "end": 874, "start": 871, "text": " So that's what this entire architecture down here is." }, { "end": 878, "start": 875, "text": " So the text is down here." }, { "end": 888, "start": 878, "text": " It goes in. And the first thing they do is they normalize the text and they transform it into phonemes, which you can do in a deterministic fashion." }, { "end": 890, "start": 888, "text": " There are scripts that do this." }, { "end": 894, "start": 890, "text": " This is the only preprocessing they do." }, { "end": 898, "start": 894, "text": " And they can also leave it away; they have an ablation for that on their website." }, { "end": 902, "start": 898, "text": " So this is like phoneme text. Cat sat on the mat." }, { "end": 910, "start": 902, "text": " Now, this phoneme text goes through this big block of convolutions and dilated convolutions." }, { "end": 916, "start": 910, "text": " And this outputs a 200 hertz representation and token lengths." }, { "end": 920, "start": 916, "text": " Alignment. OK, I should specify this." }, { "end": 926, "start": 920, "text": " So for each token here, it outputs a length." }, { "end": 930, "start": 926, "text": " So this thing predicts the length of each of the tokens." }, { "end": 939, "start": 930, "text": " All of this thing here is to embed the tokens in hidden space and then predict their lengths." }, { "end": 946, "start": 939, "text": " You can see that right here." }, { "end": 954, "start": 946, "text": " OK, so first we use F to take X."
}, { "end": 961, "start": 954, "text": " So F is a stack of dilated convolutions and it takes X and outputs a hidden representation." }, { "end": 968, "start": 961, "text": " So H. So first X goes to H and then H is used to predict the length of each of the tokens." }, { "end": 976, "start": 968, "text": " H is used to predict L and L is the length of that token." }, { "end": 981, "start": 976, "text": " So we embed this into a hidden representation with this right here." }, { "end": 985, "start": 981, "text": " And then we use this stack to predict the length of each token." }, { "end": 994, "start": 985, "text": " So this could be this could say something like this cat token right here is 20 milliseconds long." }, { "end": 999, "start": 994, "text": " Or instead of milliseconds, you would use something like frames or data points." }, { "end": 1005, "start": 999, "text": " Maybe this is 200 data points long." }, { "end": 1008, "start": 1005, "text": " And then SAT is a bit shorter. So this is 100 long." }, { "end": 1012, "start": 1008, "text": " And ON is really short. So this is 50 long." }, { "end": 1015, "start": 1012, "text": " So for each token, it predicts the length." }, { "end": 1022, "start": 1015, "text": " All right. So now if we have the length of each, we can sort of calculate where the starting point is." }, { "end": 1027, "start": 1022, "text": " So if we want to know if we know that here is the beginning and the beginning of the sentence," }, { "end": 1034, "start": 1027, "text": " we we conservatively assume that there is so they give some silence buffer here." }, { "end": 1041, "start": 1034, "text": " But roughly, you can assume that the beginning of the speech corresponds to the first the first token." }, { "end": 1044, "start": 1041, "text": " Right. You can simply trace the waveform." }, { "end": 1048, "start": 1044, "text": " And whenever it goes up, that's where the first token starts." }, { "end": 1052, "start": 1048, "text": " And then since we know that's where the first token starts," }, { "end": 1056, "start": 1052, "text": " and if we could predict the length of each one correctly," }, { "end": 1060, "start": 1056, "text": " we could simply sum those up to figure out where our word starts." }, { "end": 1066, "start": 1060, "text": " So if we want to know where on starts, we simply go from the beginning and go 200 plus 100 milliseconds." }, { "end": 1073, "start": 1066, "text": " So our data points 200 plus 100. Here is where on starts." }, { "end": 1081, "start": 1073, "text": " OK. And if we want to figure out the middle of on, we simply add the half of this number." }, { "end": 1084, "start": 1081, "text": " So plus 25 gets you to the middle." }, { "end": 1092, "start": 1084, "text": " So this is this here is the center of the token on." }, { "end": 1095, "start": 1092, "text": " So for each token, we predict the length like this." }, { "end": 1101, "start": 1095, "text": " And thereby, we can just calculate for each one by summing up from the beginning" }, { "end": 1108, "start": 1101, "text": " and then adding half of its own length where the center of that token in the entire sequence is." }, { "end": 1113, "start": 1108, "text": " And now we do this. We said we take random two second audio," }, { "end": 1119, "start": 1113, "text": " but we do this procedure for the entire for the entire text." 
}, { "end": 1125, "start": 1119, "text": " OK, for the for every single token in the text that we look at in the 20 second text," }, { "end": 1133, "start": 1125, "text": " we do this because then for each token, we'll get a token center." }, { "end": 1141, "start": 1133, "text": " And now the aligners job here is to align that to the actual sound." }, { "end": 1145, "start": 1141, "text": " So what we also give the generators here, the offset." }, { "end": 1152, "start": 1145, "text": " So let's say we have this 20 second of speech and we randomly sampled these two seconds." }, { "end": 1155, "start": 1152, "text": " And that's maybe five seconds from the beginning." }, { "end": 1160, "start": 1155, "text": " We also tell it this is five seconds right here." }, { "end": 1170, "start": 1160, "text": " So what we can now do is we can calculate back sort of and say, OK, here I have" }, { "end": 1176, "start": 1170, "text": " I first need to discard five seconds of my signal and I have a prediction how long each token is." }, { "end": 1182, "start": 1176, "text": " So I can just cross out tokens until I have basically wasted five seconds." }, { "end": 1190, "start": 1182, "text": " And then I know, OK, from here to wherever these things sum up to two seconds from here to that." }, { "end": 1194, "start": 1190, "text": " Those are my two seconds that I want to look at." }, { "end": 1201, "start": 1194, "text": " Now, this is how I figure out where in the big sound wave my fragment is." }, { "end": 1209, "start": 1201, "text": " Because I have this offset where I sampled it and I simply add use this and the predicted lengths to figure it out." }, { "end": 1213, "start": 1209, "text": " I still need to figure out these tokens that are actually in the span." }, { "end": 1218, "start": 1213, "text": " How do they distribute? And that's what this aligner here does." }, { "end": 1225, "start": 1218, "text": " Since we've already predicted the token centers, we simply assume that if these are correct, right," }, { "end": 1234, "start": 1225, "text": " then if this is, let's say if this is one second long, I assume that the middle is after point five seconds." }, { "end": 1237, "start": 1234, "text": " So this is one second. The middle is point five seconds." }, { "end": 1241, "start": 1237, "text": " So I think that this token is aligned right here." }, { "end": 1243, "start": 1241, "text": " This is the center of the token." }, { "end": 1251, "start": 1243, "text": " Now, we want to be a little bit a little bit fuzzy with respect to that." }, { "end": 1256, "start": 1251, "text": " So what they do is they sort of use a Gaussian kernel right here." }, { "end": 1263, "start": 1256, "text": " So for each token, as you can see here, each token has a center, which is here." }, { "end": 1267, "start": 1263, "text": " So the y axis is the time in sound and the x axis is the token." }, { "end": 1271, "start": 1267, "text": " And for each token, we say, well, it doesn't have to be exactly there." }, { "end": 1276, "start": 1271, "text": " It can be so they put a Gaussian kernel like this." }, { "end": 1283, "start": 1276, "text": " OK, if you imagine this kernel popping out of the frame, they say this is about where the center is." }, { "end": 1290, "start": 1283, "text": " And for this token, right for this token right here, they say, well, it's it's probably here in the middle," }, { "end": 1293, "start": 1290, "text": " but it could also be here or here or here or here." 
}, { "end": 1297, "start": 1293, "text": " And we weigh this like this." }, { "end": 1300, "start": 1297, "text": " So these are these are the weights." }, { "end": 1305, "start": 1300, "text": " And then you simply sum up the weights with these embeddings." }, { "end": 1310, "start": 1305, "text": " So for each token out of this dilated convolution block, you get a hidden embedding." }, { "end": 1316, "start": 1310, "text": " And by using this alignment matrix that you computed by predicting the lengths" }, { "end": 1322, "start": 1316, "text": " and therefore predicting the centers of the tokens, you can then sort of shift." }, { "end": 1328, "start": 1322, "text": " So first, you assume that h1, h2, h3, if you were to do nothing," }, { "end": 1332, "start": 1328, "text": " these would just all take up like a third of the time." }, { "end": 1340, "start": 1332, "text": " And now by multiplying with this matrix, you have the opportunity because you predicted a longer length for the first token." }, { "end": 1348, "start": 1340, "text": " You have the opportunity to shift that a bit to the right and maybe shorten the second token a bit." }, { "end": 1351, "start": 1348, "text": " And then the third token goes until the end." }, { "end": 1353, "start": 1351, "text": " OK, that's what this aligner thing is." }, { "end": 1355, "start": 1353, "text": " This is not a model by itself." }, { "end": 1360, "start": 1355, "text": " All that this takes in is the computation right here of the token lengths." }, { "end": 1364, "start": 1360, "text": " This estimates these token lengths for each of the tokens." }, { "end": 1366, "start": 1364, "text": " And the rest is deterministic." }, { "end": 1369, "start": 1366, "text": " It's simply saying, OK, how much is the offset?" }, { "end": 1370, "start": 1369, "text": " Cool." }, { "end": 1372, "start": 1370, "text": " That's how we know where in the sound wave we are." }, { "end": 1375, "start": 1372, "text": " And then where is each of the centers?" }, { "end": 1379, "start": 1375, "text": " And we simply do that by summing up the predicted token lengths." }, { "end": 1387, "start": 1379, "text": " And then we use a Gaussian kernel with like a set hyperparameter to be a little bit fuzzy with respect to these lengths right here." }, { "end": 1390, "start": 1387, "text": " So to be differentiable, basically." }, { "end": 1398, "start": 1390, "text": " And that will that will ultimately train this loss, this model right here that computes the token lengths." }, { "end": 1403, "start": 1398, "text": " Right. So we sum up in a weighted fashion these embeddings right here." }, { "end": 1405, "start": 1403, "text": " And that's what goes into the generator." }, { "end": 1416, "start": 1405, "text": " So now we have embeddings and we have the alignments for the embeddings, which are these pieces of where in the sound wave these are." }, { "end": 1421, "start": 1416, "text": " And from that, the generator can now produce the sound wave itself." }, { "end": 1422, "start": 1421, "text": " OK." }, { "end": 1424, "start": 1422, "text": " And that's basically that's just an up sampling here." }, { "end": 1437, "start": 1424, "text": " I think that's just an up convolution up sampling from 200 hertz signal to a 24 kilohertz signal." }, { "end": 1438, "start": 1437, "text": " Cool." }, { "end": 1441, "start": 1438, "text": " So that's that." }, { "end": 1444, "start": 1441, "text": " Now they discover this doesn't work." 
}, { "end": 1446, "start": 1444, "text": " And why doesn't it work?" }, { "end": 1453, "start": 1446, "text": " It's because at the beginning of training, these token length predictions here are pretty crappy." }, { "end": 1468, "start": 1453, "text": " And so that means — I guess especially this part — even where you say, well, where in the sound wave of my 20 seconds do I even need to cut out to compare with the discriminator?" }, { "end": 1469, "start": 1468, "text": " Right." }, { "end": 1491, "start": 1469, "text": " So if you sample this piece here and that's what you give to the discriminator, but your length predictions are so far off, then the generator is trying to produce this particular piece, because it thinks, oh, instead of producing this token here, which is what the discriminator looks at, it produces these tokens here." }, { "end": 1497, "start": 1491, "text": " Of course, you have no chance, no matter how good your adversarial loss is." }, { "end": 1510, "start": 1497, "text": " Remember, these length predictions are basically used to see which of these tokens the generator needs to produce the sound for and how they're aligned." }, { "end": 1515, "start": 1510, "text": " So they have an additional loss right here." }, { "end": 1525, "start": 1515, "text": " What they do is, again, they go via the spectrograms with this spectrogram prediction loss." }, { "end": 1531, "start": 1525, "text": " So they say we discovered that adversarial feedback is insufficient to learn alignment." }, { "end": 1535, "start": 1531, "text": " At the start of training, the aligner does not produce an accurate alignment." }, { "end": 1540, "start": 1535, "text": " So the information in the input tokens is incorrectly temporally distributed." }, { "end": 1545, "start": 1540, "text": " This encourages the decoder to ignore the aligner output." }, { "end": 1549, "start": 1545, "text": " The unconditional discriminators provide no useful signal to correct this." }, { "end": 1550, "start": 1549, "text": " Oh, yeah, I should have mentioned this." }, { "end": 1556, "start": 1550, "text": " The discriminators here, since you don't know which tokens you should produce," }, { "end": 1558, "start": 1556, "text": " the discriminators are unconditional." }, { "end": 1561, "start": 1558, "text": " They don't know which text is produced." }, { "end": 1562, "start": 1561, "text": " You don't give them the tokens." }, { "end": 1564, "start": 1562, "text": " You simply give them the sound waves." }, { "end": 1567, "start": 1564, "text": " That's something I find particularly interesting here." }, { "end": 1577, "start": 1567, "text": " Now, of course, this wouldn't work in a traditional GAN, because you simply have a data sample here and a data sample right here." }, { "end": 1582, "start": 1577, "text": " But in this case, you of course have the corresponding sound samples." }, { "end": 1586, "start": 1582, "text": " But still, they are, you know, cut down to a subsequence." }, { "end": 1588, "start": 1586, "text": " So you don't know which text you're producing." }, { "end": 1591, "start": 1588, "text": " So you have to make the discriminators unconditional." }, { "end": 1603, "start": 1591, "text": " And therefore, they are going to discriminate, as we said, potentially between two completely non-overlapping pieces of the sound wave, which, of course, doesn't help you."
}, { "end": 1611, "start": 1603, "text": " And then the aligner can also not learn anything, because there is no learning signal, because everything just says this is not the same." }, { "end": 1613, "start": 1611, "text": " OK." }, { "end": 1615, "start": 1613, "text": " And that's what they say here." }, { "end": 1616, "start": 1615, "text": " We face a different problem." }, { "end": 1620, "start": 1616, "text": " We do not have aligned ground truth." }, { "end": 1630, "start": 1620, "text": " Conditional discriminators, which they don't have, need an aligner module, which cannot function correctly at the start of training, effectively turning them into unconditional discriminators." }, { "end": 1638, "start": 1630, "text": " So even if they were to input the text, it would still be the wrong text because their aligner is wrong at the beginning." }, { "end": 1647, "start": 1638, "text": " Although it should be possible in theory to train the discriminator's aligner module adversarially, we find that this does not work in practice and training gets stuck." }, { "end": 1649, "start": 1647, "text": " So what do they do?" }, { "end": 1656, "start": 1649, "text": " They say instead we propose to guide learning by using an explicit prediction loss in the spectrogram domain." }, { "end": 1666, "start": 1656, "text": " We minimize the L1 loss between the log-scale mel spectrograms of the generator output and the corresponding ground truth training window." }, { "end": 1674, "start": 1666, "text": " This helps learning to take off and renders conditional discriminators unnecessary, simplifying the model." }, { "end": 1688, "start": 1674, "text": " So they take the spectrogram of the generator output and the corresponding ground truth training window, and they simply calculate the L1 difference of the spectrograms." }, { "end": 1700, "start": 1688, "text": " Now this, as I understand it, is different — because we said they also have a discriminator on the spectrograms." }, { "end": 1703, "start": 1700, "text": " This is different from that." }, { "end": 1705, "start": 1703, "text": " This is even in addition to that." }, { "end": 1707, "start": 1705, "text": " So here somewhere we had this." }, { "end": 1711, "start": 1707, "text": " This was the discriminator on the spectrograms." }, { "end": 1713, "start": 1711, "text": " And I think this is even different." }, { "end": 1723, "start": 1713, "text": " So what they're doing is: the discriminator simply decides, do the spectrograms look real or fake?" }, { "end": 1725, "start": 1723, "text": " Does the spectrogram look real or fake?" }, { "end": 1733, "start": 1725, "text": " Now they also take the spectrograms and compare them with the L1 loss." }, { "end": 1738, "start": 1733, "text": " So this is exactly what they said they wouldn't do right here." }, { "end": 1740, "start": 1738, "text": " Now it's still the case, right?" }, { "end": 1750, "start": 1740, "text": " It's still the case that they don't use spectrograms as intermediate representations, but they now do have a supervised loss on the spectrograms." }, { "end": 1760, "start": 1750, "text": " And one of the motivations to do this end to end is saying, you know, maybe these auxiliary losses and supervised losses, they sort of distract." }, { "end": 1762, "start": 1760, "text": " They're good to guide the training, but they sort of distract."
}, { "end": 1773, "start": 1762, "text": " And now they see, OK, maybe we have to introduce this one right here in order to make the training start, because this is a real signal." }, { "end": 1780, "start": 1773, "text": " But again, you run into a problem, namely, if you produce something with the generator." }, { "end": 1786, "start": 1780, "text": " And so first of all, this is not a discriminator anymore." }, { "end": 1788, "start": 1786, "text": " This is a true L one loss." }, { "end": 1792, "start": 1788, "text": " So we potentially run into this problem, right?" }, { "end": 1799, "start": 1792, "text": " Of the of the generator simply copying the input because you always tell it what the correct input is." }, { "end": 1803, "start": 1799, "text": " This is now a supervised loss that we guide the training with." }, { "end": 1809, "start": 1803, "text": " And what was I going to say?" }, { "end": 1812, "start": 1809, "text": " Yeah, so you take the generator output, you transform it into a spectrogram." }, { "end": 1816, "start": 1812, "text": " You take the real output, transform it into a spectrogram, compare the L one loss." }, { "end": 1824, "start": 1816, "text": " Now, you sort of run into the same problem in that if these are completely not aligned, then this is not going to work." }, { "end": 1833, "start": 1824, "text": " But since you have a supervised loss, this it can it gives you a much stronger learning signal of what the generator should produce." }, { "end": 1846, "start": 1833, "text": " So you're kind of counting at the beginning of training, you're counting on sort of a reverse reverse learning process in that the real the real sound will go into a spectrogram." }, { "end": 1857, "start": 1846, "text": " And the generator will go here. And then that learning signal will sort of travel to make the generator produce more of whatever the real sound is." }, { "end": 1875, "start": 1857, "text": " And that almost like if you think that the aligner is so bad that we have even non overlapping fragments, basically you teach the generator to ignore the inputs that it gets from down here, that it gets from its entire backbone." }, { "end": 1882, "start": 1875, "text": " You teach it to sort of ignore all of that. If if that makes any sense." }, { "end": 1886, "start": 1882, "text": " It simply produces the sound according to this supervised loss." }, { "end": 1893, "start": 1886, "text": " Now, of course, it doesn't ignore it. It still takes the features, but it ignores the this whole alignment thing." }, { "end": 1904, "start": 1893, "text": " And now once the generator gets a better signal of what it should produce, that signal can travel back to the aligner module to this length estimation module." }, { "end": 1909, "start": 1904, "text": " And guide that one to make better predictions about the lengths." }, { "end": 1919, "start": 1909, "text": " Okay, so that's how you at the beginning of training, you sort of rely on this path of learning to make to initialize this module of the aligner." }, { "end": 1930, "start": 1919, "text": " And then once these length predictors are better, then the the loss can travel in its intended path where you forward produce these aligned sound waves." }, { "end": 1941, "start": 1930, "text": " And then these discriminators take over. I don't exactly know if they trade this off during training or they simply set it to a number such that it helps them at the beginning." 
}, { "end": 1953, "start": 1941, "text": " But it's a good idea. And it's a good trick to introduce here a supervised portion to make the beginning easier." }, { "end": 1966, "start": 1953, "text": " But of course, you'd run into the same problem, as I said, in that if you have two spectrograms, they don't necessarily align." }, { "end": 1977, "start": 1966, "text": " And here they use this dynamic time warping loss. Now, this looks very, very similar to the aligner, but it is something different." }, { "end": 1984, "start": 1977, "text": " Because now — the difference here is — you have two things that you know should match." }, { "end": 1990, "start": 1984, "text": " Right. You have this thing and you have this thing and they both have the same amount of entries." }, { "end": 1998, "start": 1990, "text": " So they both have a, b, c, d, e. This has an a, a b, a c, a d and an e slot." }, { "end": 2002, "start": 1998, "text": " And this also has an a, a b, a c, a d and an e slot." }, { "end": 2010, "start": 2002, "text": " And here is something you assume: you assume that the beginnings and the ends match." }, { "end": 2017, "start": 2010, "text": " This is not always true, of course, because they could be completely unaligned. But they say in practice, this works." }, { "end": 2024, "start": 2017, "text": " So you assume that, sort of, at least a little bit, these are aligned." }, { "end": 2035, "start": 2024, "text": " Right. So — there's so much to this paper, by the way — they have an auxiliary loss on the produced lengths," }, { "end": 2043, "start": 2035, "text": " all the lengths that this length prediction module produces — I don't remember where that is," }, { "end": 2048, "start": 2043, "text": " but they have an auxiliary loss where all the lengths must add up right here." }, { "end": 2056, "start": 2048, "text": " All the lengths that these length predictors produce must add up to the total length of the sound, which in our case, I guess, is the two seconds." }, { "end": 2071, "start": 2056, "text": " OK, so really quickly: the least thing these length predictions can do is all predict something like L over N." }, { "end": 2083, "start": 2071, "text": " And that will give you a sort of rough alignment, such that it kind of makes sense, for this dynamic time warping, to assume that the beginnings and the endings align." }, { "end": 2088, "start": 2083, "text": " All right, so we have two things that have the same amount of slots." }, { "end": 2091, "start": 2088, "text": " We know the beginnings and ends align, or we assume that." }, { "end": 2099, "start": 2091, "text": " How do we find out which slots align to which?" }, { "end": 2101, "start": 2099, "text": " And this is dynamic programming." }, { "end": 2114, "start": 2101, "text": " They formulate this as a dynamic programming problem that you might know — these are often taught in algorithms and data structures courses and so on —" }, { "end": 2119, "start": 2114, "text": " where you can figure out which of these align." }, { "end": 2135, "start": 2119, "text": " So if you go a step here, that means that you go one step in each of the sequences. And then if you go a step here, that means only this one advances and this one still corresponds to this one right here."
}, { "end": 2138, "start": 2135, "text": " And OK, I formulated this wrong at the beginning." }, { "end": 2141, "start": 2138, "text": " You don't have ABCDE." }, { "end": 2146, "start": 2141, "text": " I guess you would actually have all of these slots and you would figure out which ones correspond to which." }, { "end": 2149, "start": 2146, "text": " And we have the same problem here." }, { "end": 2153, "start": 2149, "text": " And we have the same problem again, where we have a different selection." }, { "end": 2160, "start": 2153, "text": " Yeah, but I hope you recognize these sorts of problems — and here you align them again." }, { "end": 2167, "start": 2160, "text": " So these are classic dynamic programming alignment problems and they align it like this." }, { "end": 2176, "start": 2167, "text": " And the more this path deviates, the larger the penalty we give. So they give a penalty with respect to how much this path deviates." }, { "end": 2186, "start": 2176, "text": " So here you can see how much the spectrogram of the generated sound aligns with the spectrogram of the ground truth." }, { "end": 2193, "start": 2186, "text": " And here is a penalty for each time that the two spectrograms don't align correctly." }, { "end": 2198, "start": 2193, "text": " So they align in a soft way. So they consider every single possible path right here." }, { "end": 2201, "start": 2198, "text": " And you can again do this using dynamic programming." }, { "end": 2211, "start": 2201, "text": " And the entire catch here is that the alignment must be monotonic, because no matter how long or short the sequences are," }, { "end": 2217, "start": 2211, "text": " they always follow one after another in both of the spectrograms and both of the sounds." }, { "end": 2219, "start": 2217, "text": " So that's why you can optimize it in this way." }, { "end": 2229, "start": 2219, "text": " So over all the possible paths along which you can align them, you weigh these paths by the score that you give them here." }, { "end": 2236, "start": 2229, "text": " And then you calculate the loss across all these different paths." }, { "end": 2240, "start": 2236, "text": " And that gives you sort of a fuzzy loss." }, { "end": 2248, "start": 2240, "text": " So you don't compare the spectrograms directly; you compare them and sort of forgive them for not aligning too well." }, { "end": 2251, "start": 2248, "text": " But the more they don't align, the more of a penalty you give." }, { "end": 2254, "start": 2251, "text": " And that's how you sort of force the generator." }, { "end": 2258, "start": 2254, "text": " Again, you force the generator to produce things that are aligned." }, { "end": 2265, "start": 2258, "text": " You produce these length predictions that make the spectrograms closer to each other." }, { "end": 2268, "start": 2265, "text": " So that's how you calculate the spectrogram loss." }, { "end": 2270, "start": 2268, "text": " This is entirely deterministic." }, { "end": 2273, "start": 2270, "text": " There are no learned weights right here." }, { "end": 2276, "start": 2273, "text": " Okay, cool." }, { "end": 2280, "start": 2276, "text": " Last thing they say is that they use this phonemizer." }, { "end": 2284, "start": 2280, "text": " That's at the very beginning, but they also ablate that." }, { "end": 2292, "start": 2284, "text": " So in the results, they do a lot of ablation studies, which I don't want to go into right now." }, { "end": 2294, "start": 2292, "text": " I've already shown you some."
}, { "end": 2298, "start": 2294, "text": " And I think they even do a human evaluation." }, { "end": 2300, "start": 2298, "text": " Do they do a human evaluation?" }, { "end": 2303, "start": 2300, "text": " I know this might have been in another paper." }, { "end": 2308, "start": 2303, "text": " But as you have heard from the examples, this sounds extremely realistic." }, { "end": 2315, "start": 2308, "text": " I'll link the website with the samples in the video description for sure." }, { "end": 2317, "start": 2315, "text": " So I think we've gone over everything." }, { "end": 2325, "start": 2317, "text": " The generator starts off with text, puts that into normalized text, calculates hidden features right here." }, { "end": 2331, "start": 2325, "text": " These hidden features on one hand are used to predict the lengths of each of the tokens in the sound" }, { "end": 2337, "start": 2331, "text": " and are also used as an input to the generator here." }, { "end": 2344, "start": 2337, "text": " Now, they can only be used as an input to the generator if the generator knows how to align them in time" }, { "end": 2352, "start": 2344, "text": " and how to align them in time is computed from these predicted lengths right here via this aligner algorithm." }, { "end": 2356, "start": 2352, "text": " The lengths are the only thing that is predicted." }, { "end": 2358, "start": 2356, "text": " Everything then is deterministic." }, { "end": 2367, "start": 2358, "text": " The aligner is simply a Gaussian kernel over the predicted locations on the time axis." }, { "end": 2374, "start": 2367, "text": " The Gaussian kernel is there to make this alignment a bit fuzzy, to make this prediction fuzzy." }, { "end": 2382, "start": 2374, "text": " You perform a weighted sum with these features and then the generator knows where to put the tokens." }, { "end": 2388, "start": 2382, "text": " Finally, the generator can upsample the now aligned tokens into sound." }, { "end": 2390, "start": 2388, "text": " This goes into the discriminator." }, { "end": 2398, "start": 2390, "text": " The discriminator is actually five different discriminators, which each try to discriminate the original from the real." }, { "end": 2402, "start": 2398, "text": " Sorry, the generated from the real, at different time scales." }, { "end": 2411, "start": 2402, "text": " In addition to that, you have a discriminator on the spectrograms and you also have an L1 loss on the spectrograms," }, { "end": 2417, "start": 2411, "text": " which helps especially at the beginning of training. For the L1 loss on the spectrograms," }, { "end": 2424, "start": 2417, "text": " you have to again compute an alignment, but you do this in a deterministic way by this thing down here." }, { "end": 2433, "start": 2424, "text": " This dynamic time warping, where you simply assume that they are aligned and forgive them for not being aligned, with a" }, { "end": 2440, "start": 2433, "text": " soft penalty and not a hard zero score." }, { "end": 2442, "start": 2440, "text": " All right, this was the paper." }, { "end": 2448, "start": 2442, "text": " Again, if you like this, leave a like or a comment, share it out, subscribe and have a good day." }, { "end": 2471, "start": 2448, "text": " Bye bye." } ]
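To make the aligner described in the transcript above concrete, here is a minimal sketch in Python of its deterministic part: predicted per-token lengths are turned into token centers by a cumulative sum, a Gaussian kernel around each center gives fuzzy (and therefore differentiable) weights over the output frames, and a weighted sum spreads the token embeddings across those frames. All names, the frame count, and the sigma value are illustrative assumptions of mine, not the paper's exact settings.

```python
import numpy as np

def align(token_embeddings, lengths, num_frames, sigma=10.0):
    """Monotonic interpolation: spread token embeddings over output frames
    according to predicted token lengths. Everything after the length
    prediction is deterministic."""
    lengths = np.asarray(lengths, dtype=float)
    centers = np.cumsum(lengths) - 0.5 * lengths        # center of each token
    t = np.arange(num_frames)[:, None]                  # (frames, 1)
    logits = -((t - centers[None, :]) ** 2) / (2 * sigma ** 2)  # Gaussian kernel
    w = np.exp(logits)
    w /= w.sum(axis=1, keepdims=True)                   # softmax over tokens per frame
    return w @ token_embeddings                         # (frames, embed_dim)

emb = np.random.randn(3, 8)                  # embeddings for "cat", "sat", "on"
frames = align(emb, lengths=[200, 100, 50], num_frames=350)
print(frames.shape)                          # (350, 8): one vector per output frame
```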
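The spectrogram loss with dynamic time warping can be sketched in the same spirit. Below is the classic hard-minimum DTW with a fixed warp penalty, a simplification of the paper's soft version (the soft loss aggregates over all monotonic paths instead of taking only the best one); the L1 frame distance and the penalty value are assumptions for illustration.

```python
import numpy as np

def dtw_cost(A, B, warp_penalty=1.0):
    """Monotonic alignment cost between two spectrograms A (n x bins) and
    B (m x bins): frames must follow one after another, and every step where
    only one side advances pays an extra warp penalty."""
    n, m = len(A), len(B)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0                                    # beginnings assumed aligned
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.abs(A[i - 1] - B[j - 1]).sum()         # L1 frame distance
            D[i, j] = cost + min(D[i - 1, j - 1],            # both advance
                                 D[i - 1, j] + warp_penalty, # only A advances
                                 D[i, j - 1] + warp_penalty) # only B advances
    return D[n, m]                                   # ends assumed aligned too

gen = np.random.randn(40, 80)     # generated mel spectrogram (frames x bins)
real = np.random.randn(40, 80)    # ground-truth mel spectrogram
print(dtw_cost(gen, real))
```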
xTzFJIknh7E
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
TransCoder: Unsupervised Translation of Programming Languages (Paper Explained)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper" ]
Code migration between languages is an expensive and laborious task. To translate from one language to the other, one needs to be an expert at both. Current automatic tools often produce illegible and complicated code. This paper applies unsupervised neural machine translation to source code of Python, C++, and Java and is able to translate between them, without ever being trained in a supervised fashion. OUTLINE: 0:00 - Intro & Overview 1:15 - The Transcompiling Problem 5:55 - Neural Machine Translation 8:45 - Unsupervised NMT 12:55 - Shared Embeddings via Token Overlap 20:45 - MLM Objective 25:30 - Denoising Objective 30:10 - Back-Translation Objective 33:00 - Evaluation Dataset 37:25 - Results 41:45 - Tokenization 42:40 - Shared Embeddings 43:30 - Human-Aware Translation 47:25 - Failure Cases 48:05 - Conclusion Paper: https://arxiv.org/abs/2006.03511 Abstract: A transcompiler, also known as source-to-source translator, is a system that converts source code from a high-level programming language (such as C++ or Python) to another. Transcompilers are primarily used for interoperability, and to port codebases written in an obsolete or deprecated language (e.g. COBOL, Python 2) to a modern one. They typically rely on handcrafted rewrite rules, applied to the source code abstract syntax tree. Unfortunately, the resulting translations often lack readability, fail to respect the target language conventions, and require manual modifications in order to work properly. The overall translation process is timeconsuming and requires expertise in both the source and target languages, making code-translation projects expensive. Although neural models significantly outperform their rule-based counterparts in the context of natural language translation, their applications to transcompilation have been limited due to the scarcity of parallel data in this domain. In this paper, we propose to leverage recent approaches in unsupervised machine translation to train a fully unsupervised neural transcompiler. We train our model on source code from open source GitHub projects, and show that it can translate functions between C++, Java, and Python with high accuracy. Our method relies exclusively on monolingual source code, requires no expertise in the source or target languages, and can easily be generalized to other programming languages. We also build and release a test set composed of 852 parallel functions, along with unit tests to check the correctness of translations. We show that our model outperforms rule-based commercial baselines by a significant margin. Authors: Marie-Anne Lachaux, Baptiste Roziere, Lowik Chanussot, Guillaume Lample Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://discord.gg/4H8xxDF BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Hi there! So the paper we're looking at today can take the code on the left, which is written in Python, and can output the code on the right, which is written in C++. Now the point here is that the code on the right does the same thing as the code on the left, so it is implementing the same function. The surprising thing here is that this model that takes the Python as an input has never been explicitly trained to output C++. So this is an unsupervised translation model. And the cool thing about this paper is that by having no target, having no supervised signal at translating source code languages into one another, it can perform pretty well at the task nonetheless. So we're going to look at this paper. It's called Unsupervised Translation of Programming Languages by Marie-Anne Lachaux, Baptiste Roziere, Lowik Chanussot and Guillaume Lample at Facebook AI Research. As always, if you like content like this, consider sharing it out and leaving a like, and also leaving a comment if you have something to say about it. They say a transcompiler, also known as a source-to-source translator, is a system that converts source code from a high-level programming language such as C++ or Python to another. They say transcompilers are primarily used for interoperability and to port code bases written in an obsolete or deprecated language such as COBOL or Python 2 to a modern one. So for Python 2, you might know this tool that's called 2to3. So 2to3 is a tool that ships with the Python 3 standard library, I believe, that allows you to take Python 2 code and produce Python 3 code. And that is to kind of push people to convert their old code bases of Python 2 to the modern Python 3. Now 2to3 is a handwritten program. It has specific rules built in that the programmers know: if we modify Python 2 like this, Python 3 comes out. For example, the print statement in Python 2 requires no brackets, so we make a rule that whenever there's a print statement with no brackets, we'll add the brackets such that it's Python 3 compliant. Most of this code will transform the source code first into an abstract syntax tree, modify that, apply specific rules to that, and then output the language from the abstract syntax tree. Now the problem here is manifold. First of all, there can only be as much translation as there are rules. So every one of these rules has to be coded as a modification to the abstract syntax tree, and every one of these rules is handcrafted and therefore needs sort of human ingenuity. Humans need to go and write these rules of how to transform one language into another. And oftentimes, even though you can write these rules, whatever comes out is sort of a bit of a cryptic source code, because you kind of have to make sure that your rules cover all the possible things, and the source code that comes out is oftentimes very cryptic and a bit hard to understand, because it's been sort of expanded and formalized to make sure that it still does the same thing as the original source code. Now for Python 2 to Python 3, this is still easy, right? These languages are extremely similar, because it's not that big of a step to Python 3, except if you use very low level language constructs or language features which have been obsoleted with Python 3.
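As a concrete illustration of such a handcrafted rewrite rule, here is a sketch of how one real 2to3 fixer, d.has_key(k) becoming k in d, could be expressed as a transformation of the abstract syntax tree. This uses Python's standard ast module for brevity; the actual 2to3 tool is built on different machinery (lib2to3), so this is an analogy, not its real source.

```python
import ast

class HasKeyFixer(ast.NodeTransformer):
    """Rewrite `d.has_key(k)` into `k in d` on the abstract syntax tree."""
    def visit_Call(self, node):
        self.generic_visit(node)  # rewrite nested calls first
        if (isinstance(node.func, ast.Attribute)
                and node.func.attr == "has_key"
                and len(node.args) == 1):
            return ast.Compare(left=node.args[0],
                               ops=[ast.In()],
                               comparators=[node.func.value])
        return node

tree = ast.parse("if d.has_key(x): print(x)")
new_tree = ast.fix_missing_locations(HasKeyFixer().visit(tree))
print(ast.unparse(new_tree))  # -> if x in d: print(x)
```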
On the other hand, if there's something like COBOL — so a lot of this old banking code or insurance code or government agency code or whatnot is written in these really old programming languages, and they've been kept alive by these old-school programmers that are slowly but surely all retiring now, and there are just not many new programmers around that can support these languages, and the languages themselves aren't really updated that much anymore — so you would like to transform COBOL into something like... I don't want to call Java a modern programming language, but it is used in modern times. I'd rather not call it modern per se. Java itself is a beast that's been sort of supported since forever, but in any case, you would like to transform something like COBOL to something like Java, where you have a lot of programmers that can develop and further develop your code. This is much harder. COBOL and Java are much further away from each other than are Python 2 and Python 3. So what you would like to do is you would like to have a tool that is like 2to3, but without the humans, because if you want a tool like this, you need someone that's really proficient in COBOL and Java in order to write a tool like this, and you need lots of them, and they need to invest lots of time. What you would rather like to do is you would like to learn a system that translates from one language into another, such that the meaning is conserved. And this of course is exactly the domain of natural language machine translation, except it's source code. So we all know, we've all realized in the last years, that things like Google Translate have become extremely good at translating. So they say right here, although neural models significantly outperform their rule-based counterparts in the context of natural language translation — Google Translate is all learned right now — their applications to transcompilation have been limited due to the scarcity of parallel data in this domain. So what's the problem with just going and saying, oh, we can build really good neural machine translation models, let's just apply them to source code? The problem is if you build a neural machine translation model, say something that transforms English to German — so you have the word hello and you output hallo — you can do it not with just one word, but with entire sentences and so on. These models, in the classical sense, what they usually need are parallel corpora, which means that you have documents that are written in many languages and you can guarantee that they mean the same thing. So this is a supervised signal. One example of this is, let's say, press releases of the United Nations. So the United Nations will make some press release and they will then have professional translators translate that press release into all of the different, or into many different, languages. And so you can pretty much guarantee that these mean the same thing. So these pairs of documents or triplets or whatnot, they are supervised training data for a machine translation model that translates from one language into the other. And the neural machine translation models rely heavily on these parallel corpora. For source code, you just don't have that as much. You don't have big code bases in great numbers where the exact same thing is implemented in one language and in the other language. There's just not that much data available.
It is the case that sometimes — let's say in the case of Torch — it started as Lua and then it went to PyTorch, and the developers had to translate the code from Torch to Python. But in the same step, they've also made improvements. They sort of re-engineered and reinvented the framework and made it better. And so you can't really say these are the same things. And likewise, there's not a lot of code available where the same thing is implemented in two languages. So we just don't have these parallel corpora for the source code translation. So rather, what this paper does is it goes into unsupervised machine translation. Now what does unsupervised machine translation mean? In unsupervised machine translation, you imagine I have just a big database of documents. And these documents, I know they're all in English. And I have this other big database and I know that they are all in German. I just know the documents are in German. But they don't correspond to each other. They're just German documents. And over here, they're just English documents. I don't say that these two here are somehow the same. No, I just have a bunch of German, a bunch of English. They don't even have to correspond. They're just text. And now what I want to do is I want to learn a shared embedding space. I sort of want to learn a shared space of embeddings for these two languages, such that similar things are mapped to a similar place. So if these two documents just happen to talk about the same thing, I want them to be mapped to similar spaces in this shared embedding space. So I'm gonna have one model, a single model, where I input the text and it goes into this shared embedding space. Okay, now this is unusual, because usually in machine translation, if you translate from here, from English to German, then you'll have your dedicated model that takes English as an input and German as an output. And that would be a different model than one that takes German as an input and English as an output, or French as an input and German as an output. In this case, we have this process right here, and this process right here is the same model. And then the decoder that translates this — of course, now we have the encoder embedding — and the decoder that actually translates to a language is also going to be the same model. So it's the same model that translates to English, and the same that translates to German. So first of all, how do we make the same model? Let's say we have the perfect encoder, right? This is E, the encoder, the same encoder for all languages. Let's say we have the perfect encoder, and whenever a sentence means the same thing in different languages, we can completely map it to the same point in embedding space, irrespective of the language it comes from. Now, how do we tell the decoder, which is also the same model — how does it know what to do? This is a little trick where you basically take this embedding, so you take the input, you put it into your model, and then your output is going to be autoregressive, right? So you decode one token at a time. So you decode this token and then you feed it back into the model, and then you decode this token, you feed that back into the model, and so on. This is an autoregressive language model. And the trick here is that the very first token is a special token that describes the language you want to output.
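A minimal sketch of that trick: one shared encoder, one shared autoregressive decoder, and the only thing that selects the output language is the special first token we seed the loop with. The ToyModel and all token ids here are stand-ins I made up so the snippet runs; the real system is a trained transformer.

```python
LANG = {"english": 0, "german": 1, "french": 2}   # hypothetical language-tag ids
EOS = 3

class ToyModel:
    """Stand-in for a trained shared encoder/decoder (not a real model)."""
    def encode(self, src_ids):
        return sum(src_ids)                        # fake shared-space "memory"
    def decode_step(self, memory, prefix):
        # fake logits over an 8-token vocabulary: favour one token per step
        favoured = EOS if len(prefix) > 4 else (memory + len(prefix)) % 8
        return [1.0 if i == favoured else 0.0 for i in range(8)]

def greedy_decode(model, src_ids, target_lang, max_len=32):
    memory = model.encode(src_ids)                 # same encoder for every language
    out = [LANG[target_lang]]                      # very first token picks the language
    for _ in range(max_len):
        logits = model.decode_step(memory, out)
        nxt = max(range(len(logits)), key=logits.__getitem__)
        if nxt == EOS:                             # stop at end-of-sequence
            break
        out.append(nxt)                            # feed the token back in
    return out[1:]                                 # strip the language tag

print(greedy_decode(ToyModel(), [5, 6, 4], "german"))
```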
So here you say, I want German, and then you let the model decode its thing. And by conditioning on this token right here, it knows it should now produce German. During training, you will simply put the token here, and if it produces something other than German, that's a loss, right? So it will learn to produce German after you produce this tag. Alright, so what we need is an encoder and a decoder, such that in the encoder we can put in text of any language, and it will map the things that mean the same things to the same space, and we need a decoder that can produce any language given this first token. Now the decoder should be fairly easy, right? If we have a shared vocabulary between the languages, and we always put this token, the decoder is not a problem. You can just learn the decoder in a straightforward way, but the encoder is going to be a problem. So how does the encoder map the different languages to the same space, such that the same things are ending up in the same place? It seems a bit counterintuitive, right? Because it doesn't know which things correspond to which things. Now the first thing you need is a shared vocabulary here. Since we are in a shared space right here, what you need is a shared vocabulary. So you tokenize all of the text with a shared vocabulary. And this vocabulary is going to consist of sort of word pieces. Now if you don't know what word pieces are: in a word piece tokenization, what you would do is you would split words into so-called word pieces. So for example hello right here might be split into two word pieces. The first word piece might be he, and the second word piece might be LLO. There is usually some kind of indicator here that this is the end of a word and so on, but we'll simplify. And hallo right here would be HA for the first token and then LLO for the second token. And these kinds of word piece encodings, since the smallest units are going to be the characters themselves, ensure that everything is always in vocabulary. You have no out-of-vocabulary tokens. But here you can already see that if we tokenize the languages like this and then we use the same encoder, so the same encoder will pop them into this shared space, that means that to the model this and this look like the same thing. It is the same thing, right? It's the same input token in different languages. Now as you can see this comes from the same word, the LLO at the end in English and the LLO at the end in German. It comes from the same word. So you know it's fair to assume that since it's the same input, it's going to be mapped into the same embedding space right here. Or since these things are usually context-dependent, we can say in a similar embedding space or a close embedding space, but certainly the initial vectors are the same. That is already half the task, right? So by tokenizing in this way, we have already mapped part of our languages — even though they're different languages — we have mapped the same word to the same space. And this relies on the fact that, in this case, for example English and German, and for example French, have significant overlap in their words as such. So the word hello and the word hallo, they are almost the same word, as letters, as word pieces. And these shared embedding techniques abuse sort of the fact that these languages are close. There will be some word pieces that are going to be the same in these languages, and naturally, because they're the same, they'll end up in the same place in embedding space.
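A toy version of that shared tokenization — the vocabulary below is hand-picked for the example, whereas the real one would be learned (for instance with BPE) on the joint corpus of all languages:

```python
VOCAB = {"he": 0, "ha": 1, "llo": 2}   # toy shared word-piece vocabulary

def wordpiece(word, vocab):
    """Greedy longest-match-first split of a word into known pieces."""
    pieces, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):      # try the longest piece first
            if word[i:j] in vocab:
                pieces.append(word[i:j])
                i = j
                break
        else:
            raise ValueError("out-of-vocabulary fragment")
    return pieces

print(wordpiece("hello", VOCAB))   # ['he', 'llo']
print(wordpiece("hallo", VOCAB))   # ['ha', 'llo']  -> shares the id of 'llo'
```

Both words end in the exact same piece llo with the exact same id, so from the very first layer they share that embedding row — which is the partial overlap the explanation above relies on.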
And because of that — what these embedding techniques then do is simply figure out the statistical relations between the word pieces. If two things often appear together in the same context, they'll be mapped to the same region as well. So the model would notice: there are a lot of places where "ha" or "he" appears right in front of this "llo" thing, so I should probably map "ha" and "he" to the same location in embedding space — they have the same relation to "llo", so they end up in the same place. So you see, even though these word pieces are different — they get different IDs — they'll be mapped to the same place in embedding space, because their relation to "llo" is the same, and the "llo"s themselves are mapped to the same place because they actually are the same token. So this partial overlap between word pieces in the different languages, combined with the shared embedding pre-training of these tokens across the languages, results in an alignment of the embeddings: naturally, the things that mean the same end up in the same places in embedding space, either because they are the same, or because their statistical relation to the things that are the same is the same. It is as if "ha" and "he" were synonyms in this jumbled shared language: if you throw all the English and German text together, the model treats "ha" and "he" as synonyms and therefore maps them to the same space. Exactly the same thing happens with two true synonyms within a single language. Alright, so now we have different languages, we have a single encoder where we can input any of those languages, mapping it to the shared space, and the decoder can be trained by simply giving it this indicator token to decode into the appropriate language. So the question is: how exactly do we train this such that this happens? There is one caveat: for programming languages, of course, we still have to check whether this kind of overlap holds — and it does. In a lot of programming languages, for example, the keyword "if" is the same: if you tokenize Java or Python or C++, the token "if" is identical, and likewise there is a lot of overlap between the different programming languages, and that is exactly the correspondence here. These models use the parts that overlap — either the tokens themselves, or grammatical constructs, which can also overlap and, in higher layers, induce the same effect — and the result is that similar things in these languages are mapped to similar places in embedding space. Now, this makes the approach a bit weaker in general, because it means the method would work exceptionally well for something like Python 2 to Python 3 — they of course have a lot of overlap in syntax, keywords and constructs — whereas for something like COBOL to Java it is, let's say, more doubtful that it will work so well. In this paper they've chosen C++, Python and Java, which do have significant overlap; but especially between Python and Java, or Python and C++, there are of course a lot of differences — Python is not statically typed, for one — so you can see a bit of the difficulty already in this paper. You have to be aware that this works less and less the smaller the shared overlap is.
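As a toy illustration of the same effect in code — using naive whitespace splitting, which is of course not the paper's tokenizer — the token streams of equivalent Python and C++ snippets already share their anchors:

```python
python_src = "if n % 2 == 0 : return 0"
cpp_src    = "if ( n % 2 == 0 ) return 0 ;"

shared = set(python_src.split()) & set(cpp_src.split())
print(sorted(shared))  # ['%', '0', '2', '==', 'if', 'n', 'return']
```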
Alright, so how do you train these models? Remember, we don't have parallel corpora; we simply rely on having big repositories of Python code, C++ code and Java code, and they don't correspond to each other. As I understand it, you can do these things in parallel, but there are three different objectives that achieve three different things in these models. The first objective is cross-lingual masked language model pre-training. The models here are transformer models with encoders and decoders — that comes from the "Attention Is All You Need" paper and various other papers like it; I've done videos on those if you want to see that. The masked language model pre-training, however, is from the BERT paper — if you don't know what BERT is, I've also done a video on that. This simply trains the encoder. What you do in masked language modeling is input text, say the tokens of "hello there"; you then mask some of the tokens, for example the "llo" and maybe the entire word "there" — you scrap those. You put the result through your encoder, the transformer model like BERT, and BERT is supposed to reconstruct the masked tokens: it doesn't see them, you ask it what you crossed out, and it needs to reconstruct that. So you train the model to reconstruct these masked tokens, and the research on BERT and related models has shown that if you train with this objective, the encoder learns about the structure of the input: it learns which tokens and which constructs often appear together, and therefore whatever comes out at the top is a good and meaningful embedding that tells you something about the statistical co-occurrence of tokens. And of course we do this with all the languages: the Python goes in there, the C++ goes in there, the Java goes in there — without telling the model which is which; you just tokenize and throw it in. You see an example right here: this right here is C++, but in Python this token would also be "if", and since the model needs to learn a single encoder for all of these languages, and since the tokens overlap partially, this is going to result in exactly what we want — namely a shared embedding space where similar things are mapped to similar places, even though the inputs come from different languages. So, the masked language model pre-training very quickly: you take a piece of code like here on the left, you mask out some of the tokens — you can see them in this mask — and you simply ask the encoder to reconstruct them. This is just for the encoder; as far as I understand it, the encoder doesn't see the original back here, it simply sees the masked input, and you tell it: please reconstruct, tell me which tokens I clipped out. It's supposed to tell you the first one is "if", the second one is "int", and the third one is the "i". Now consider what the encoder has to do here. For the first one, you can pretty clearly guess that it's an "if" — of course not with a hundred percent certainty, but this is just pre-training, so you train it to output "if" here.
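A minimal sketch of just the masking part of this objective — the transformer and the cross-entropy loss over the masked positions are omitted, and BERT's actual 15% rate and its random/keep variants are simplified away:

```python
import random

MASK = "<mask>"

def mask_tokens(tokens, p=0.3, rng=random):
    corrupted, targets = [], {}
    for i, tok in enumerate(tokens):
        if rng.random() < p:
            targets[i] = tok          # what the encoder must reconstruct
            corrupted.append(MASK)    # what the encoder actually sees
        else:
            corrupted.append(tok)
    return corrupted, targets

random.seed(0)
code = "if ( n % 2 == 0 ) return 0 ;".split()
masked, targets = mask_tokens(code)
print(masked)   # e.g. ['if', '(', 'n', '<mask>', '2', '==', ...]
print(targets)  # position -> original token: the training targets
```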
For the second masked token, you have to do a little more inference: maybe you've seen this for-construct a bunch of times, and you can see that the variable is compared here and added to there, so probably it's an integer. And the last one is even more complicated: if you don't see that the "i" is here, you somehow have to guess what it is. It's not clear, but you can guess: there's a local variable "i" right here, and it's probably going to be used somewhere in this block; this token here isn't "i", and "i" doesn't appear anywhere else, so probably "i" goes in here — which makes sense, because it's an integer, "prime" is an array, and integers index arrays, and so on. Okay, so that is the first thing the model does. The second thing is that we need to train the decoder somehow. How do we train the decoder? In a very similar way: we make the decoder do denoising auto-encoding. Before, we just asked the encoder to reconstruct single tokens — the encoder is this box right here, and the actual part that predicts those words during pre-training is just one classification layer on top that predicts the individual token at each position. That is just for pre-training; afterwards you scrap that layer and attach whatever comes out of the encoder to a decoder, and the decoder outputs tokens in an autoregressive way, one after another: it outputs a token, feeds that token back in — okay, here's what I've produced so far, now produce the next one — and so on. Now, as I said, I'm not exactly sure, but I think they do all of these objectives at the same time — the pre-training head would still be here, and the information would just be routed in different ways — or maybe they do them one after another; it doesn't really matter. What matters is that in this objective you train the decoder: you train it jointly with the encoder, but now the decoder is involved. And you do this by something very similar: you corrupt a piece of code to get corrupted code. Part of the corruption is masking, like before, but part of it is scrambling — you jumble some of the tokens around a bit — and you also drop a token, as you can see the "1" is dropped here. You input this corrupted code into the encoder and then ask the decoder to give you back the original code, without ever showing it the original. So the task for the entire encoder-decoder model is: here is corrupted code, I have corrupted it in various ways, please tell me what I originally had. For the masked parts, it does the same as before and infers them; but for the rest, it has to recognize on its own that, say, this token here probably isn't correct — you don't even tell it where the errors are. Before, with the masking, you at least told it where the errors were; now you don't. It needs to recognize: this here is probably correct, this isn't, I'm going to rewrite this to that. And it does this one token at a time, and it needs to output the correct thing. I hope the difference to the masked language modeling, which did not involve the decoder, is clear.
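Here is a sketch of such a corruption function, assuming the standard noise recipe from the unsupervised-MT literature (masking, dropping, and a local shuffle implemented by jittering positions); the exact rates here are made up:

```python
import random

def corrupt(tokens, rng, p_drop=0.1, p_mask=0.1, shuffle_window=3):
    out = []
    for tok in tokens:
        r = rng.random()
        if r < p_drop:
            continue                                   # drop this token
        out.append("<mask>" if r < p_drop + p_mask else tok)
    # local shuffle: jitter each position by up to `shuffle_window`, re-sort
    keys = [i + rng.uniform(0, shuffle_window) for i in range(len(out))]
    return [tok for _, tok in sorted(zip(keys, out), key=lambda kv: kv[0])]

rng = random.Random(0)
src = "int i = 0 ; while ( i < n ) i ++ ;".split()
print(corrupt(src, rng))  # a masked, jumbled version with tokens missing
```

The decoder, fed the encoder's view of this noisy sequence, is trained to emit `src` exactly.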
This denoising objective is also the first time where you prepend this Java token. As you can see, it still goes from a language to the same language, but this is where you train the decoder to output a given language. Again, it's the same decoder for all three languages; the only difference is that every time, you simply provide it with the special token at the beginning to tell it which language it should decode right now. So now we have an encoder that maps all the languages to a shared space, and we have a decoder that, conditioned on a token like this, can output valid code in that language, assuming the input was corrupted code. Since the encoder is shared, it should map the same kind of corrupted code from the different languages to the same place in the embedding space, and therefore this would already be enough to get the model we desire: we can input some code — it doesn't actually have to be corrupted — we can just input code in one language and ask the decoder to output another language. And this works, but it doesn't work super well. So here the authors go for another idea from the unsupervised machine translation literature, which is back-translation. Back-translation is a technique with which you can tune an unsupervised machine translation model the way you would tune a supervised one — but of course you don't have supervised data, so the plan is to produce the data yourself, using your own model. The plan is pretty simple; it's actually contained in the name "back-translation". If you have a piece of code, you first use your model to translate it to another language of your choice. Now, you have no clue whether this translation is correct, and no way of assessing it, because you don't have ground truth. But what you can do is use your model again — or actually a second model that you train in parallel; I believe in this case they could use the same model, though that could be unstable — in any case, you use your system again to translate it back to the original language. And for whatever comes out of that, you do know the ground truth: it's whatever you started with. So now you can compare the output to what you started with. The difficulty, of course, is that if there is a mistake, you don't know which of the two models made it: it could be the original translation model, or it could be the back-translation model. You have to find a loss function that punishes both equally, or you simply keep one of them constant and loss-free and train the other — and note that there is also going to be a sample where C++ is the input and Python is the intermediate language, so all of the models get trained once as a source-to-target translator and once as a target-to-source translator. I hope the objective is clear from the name: with back-translation, you actually train the models to go from one language to another language, and that is the final goal, even though you do it without supervised data. You now have a model that can encode things into a shared space, that can decode into any of the languages, and that is attuned to translating from one language into another. So that's how this all works.
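In pseudocode, one back-translation step looks roughly like this — `translate` and `train_on_pair` are assumed stand-ins for the shared encoder-decoder and its update step, not the paper's actual API:

```python
def back_translation_step(translate, train_on_pair, src_code, src_lang, tgt_lang):
    # 1) produce a synthetic translation with the current model; it is
    #    treated as fixed input data, so no gradient flows through it
    noisy_translation = translate(src_code, source=src_lang, target=tgt_lang)
    # 2) train the reverse direction on the synthetic pair: the gold output
    #    for translating noisy_translation back is exactly what we started with
    train_on_pair(source=noisy_translation, gold=src_code,
                  source_lang=tgt_lang, target_lang=src_lang)
```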
Now for evaluation: the question, of course, is how do you evaluate models like this? They go to this website called GeeksforGeeks, which is an online platform with computer science and programming articles; it gathers many coding problems and presents solutions in several programming languages. So this is a website that teaches you to code: it poses an exercise — please implement this — and then provides solutions in the different languages. Now why is that cool? They show an example right here. It's cool because not only can you be relatively sure that these different functions do the same thing, you can also be relatively sure that they are implemented in a similar way. What this website is trying to do is teach people how to code up an algorithm they thought up in their head, and therefore the solutions are not only correct and equivalent, they are implemented in the same way, as you can see here: this if-construct is everywhere, the else-if is everywhere. So even though some of the languages might have special idioms for implementing some algorithms, these are really the same expression of algorithmic thought in the different languages. That is a perfect parallel data set. The problem, of course, is that there is not that much of it, so it is good enough as a test set but not as a training set. Given that, you can use these as a test set: you input the C++ and see whether or not the right Java comes out. The problem is that, even though this data is very clean, there are still many variations of how you can implement and express the same algorithmic thought. So metrics from natural language processing like BLEU just aren't going to be very good, because they look at n-gram overlap, and you can write this function with very different n-grams and still be perfectly valid and correct; and exact match is not going to be the gold standard here either. So what they do is create a set of unit tests: for each of these functions, they check the input types, randomly generate a set of inputs, and record what comes out; if the same thing comes out of all of the reference implementations, they consider this a good unit test for that function. So whenever your model produces a translation — say you input Python and it produces C++ — you simply run these unit tests through the produced C++ function, and if it produces the same outputs as the original Python function on the same inputs, you consider the unit tests to succeed and the function to be correct. Now, this isn't the super-duper gold standard, especially with random inputs — usually what you want to test are corner cases — but it's better than anything else so far. I've long been a skeptic of unit tests, honestly, because whenever a human writes a unit test, they have already implemented the function itself, so they're probably going to make the same mistakes, or they're just going to replicate the thinking of the function in the unit test itself, and then it doesn't really get you anything. I guess in large organizations you write unit tests so that someone else doesn't screw up your code. But in this case it would actually be cool: as a human, you could simply write a bunch of unit tests and then let your trans-compiler do the heavy lifting, and you simply check whether or not the output is good.
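A toy version of this evaluation loop — my own sketch, not their harness — would look like this: run the reference function and the translated one on identical random inputs and demand agreement everywhere:

```python
import random

def reference_gcd(a, b):       # stands in for the original (gold) function
    while b:
        a, b = b, a % b
    return a

def translated_gcd(a, b):      # stands in for the model's translation,
    return a if b == 0 else translated_gcd(b, a % b)   # assumed runnable here

def passes_unit_tests(f, g, n_trials=100, seed=0):
    rng = random.Random(seed)
    for _ in range(n_trials):
        a, b = rng.randint(1, 10**6), rng.randint(1, 10**6)
        if f(a, b) != g(a, b):  # any disagreement fails the translation
            return False
    return True

print(passes_unit_tests(reference_gcd, translated_gcd))  # True
```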
Alright, so how does this do? You can see they have some baselines: the C++-to-Java one, as I understand it, is a commercial system, and the Java-to-Python one is an open-source system. Both are rule-based systems hand-built by human experts for translating code into other languages. What the paper reports is "TransCoder Beam 1", which means a beam size of one. If you don't know what beam search is, very shortly: when you decode from your language model, you can either always take the next token with the highest probability — that is greedy decoding, or a beam size of one — or you can keep the top n hypotheses for the most likely output. You keep, say, the top five in memory and always decode all five, sort of like a mini-batch of five sequences, always retaining the best five. At the end of decoding, you have five different variants of the decoded output, and you can then decide which one you like best. Usually you output the one with the highest overall probability, which is not the same as greedy, because sometimes one next token looks very good in a greedy way, but you'd better take the second most likely, since the token after that makes the entire sequence more likely. So a larger beam size basically means keeping more hypotheses of the output in memory until the end. Now, if you just do greedy decoding, you see you already get fairly close to these baselines — very, very cool — and if you increase the beam size, you surpass them. However, the way they increase the beam size here I find a bit, let's call it, cheaty: when they say "Beam 5", what they mean is that they keep the five hypotheses, and at the end, as I understand it, if any of the five passes all (or the most) unit tests, they count that one. So basically they give themselves the freedom to say: whichever of the five outputs is best, that's the one we count. And of course that's not really a fair match against the commercial or baseline system, which can only output one thing. It may be a good practical application — you input a function, the human gets five options to choose from and picks the one they like best — but it is a bit wonky. What I like more is the "Beam 10 - Top 1" number: this is what you would actually do — keep 10 hypotheses during decoding and at the end output the most likely one. As you can see, that is better than greedy, but worse than giving yourself the freedom to output multiple candidates. They do note, though, that most of the errors this top-1 variant makes are compilation errors when the target language is Java or C++, suggesting that the beam-and-top-1 metric could easily be improved, and they leave this to future work — which, again, I find valid: if your method is "keep the top 10 hypotheses until the end, then go from the top, compile them, and output the first one that compiles", that's not cheating, that's a valid method. So in that way I can understand what they're saying right here.
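For completeness, here is a compact beam search over a toy next-token distribution (`toy_next_probs` is a stand-in for the decoder); it shows exactly the case described above, where greedy commits to a good-looking first token and loses to a beam:

```python
import math

def beam_search(next_probs, beam_size, max_len, eos="<eos>"):
    beams = [((), 0.0)]                       # (token tuple, log-probability)
    for _ in range(max_len):
        candidates = []
        for seq, logp in beams:
            if seq and seq[-1] == eos:        # finished beams survive as-is
                candidates.append((seq, logp))
                continue
            for tok, p in next_probs(seq).items():
                candidates.append((seq + (tok,), logp + math.log(p)))
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:beam_size]
    return beams                              # "top 1" is beams[0]

def toy_next_probs(prefix):
    table = {(): {"a": 0.6, "b": 0.4},        # greedy picks "a" first...
             ("a",): {"x": 0.5, "z": 0.5},    # ...but "a" continues weakly,
             ("b",): {"y": 0.95, "z": 0.05}}  # while "b y" is strong overall
    return table.get(prefix, {"<eos>": 1.0})

best, logp = beam_search(toy_next_probs, beam_size=2, max_len=3)[0]
print(best, round(math.exp(logp), 2))  # ('b', 'y', '<eos>') 0.38, beats greedy's 'a x' at 0.3
```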
Okay, so they give some examples, some of which I find very interesting. First — oh yes, by the way, I said earlier that the tokenizer between the natural languages is shared; they make a little tweak here in that they tokenize the different languages with their language-respective tokenizers. That will still tokenize most things the same way — well, the print statement in Python is "print" and in Java it's "println" and so on, but all the if-statements will still tokenize into the same token; it's simply not viable to parse Python with a C++ parser. Okay, so we've looked at the results. Next they look at their shared embedding space, with a t-SNE plot — a 2D projection of this shared embedding space — and you can see that this alignment is actually happening: "null", "NULL" and "None" are mapped to similar locations, and "println" and "cout" are mapped to similar locations in this space. This is exactly what we want; it's a verification that this method of embedding the different languages into the same space really turns out such that whatever means the same thing is mapped to the same place. You can see that "catch" and "except" — two very different tokens — are mapped to the same place, simply because they're used in the same sort of constructs across the languages. Very cool. One of these examples is quite impressive and shows the difference between this and rule-based translation. In this function right here, you have a C++ function that takes a character pointer called "str" as an input. In C++ — at least in older versions of C++ — strings are handled as character arrays, so a string is indistinguishable from a character array; and usually you don't pass the array itself, because that would cause a copy — you pass a pointer to the first element, and that defines the string. So the type of this parameter is simply a character pointer. If you translate this with the TransCoder system into Java — and Java has a native type called String, which is handled somewhat specially in the JVM, I believe, but in any case the type exists — the system recognizes: ah, you mean a string, so I'm going to use a String here, and it uses all the String methods, like length() and charAt() and so on, whereas in C++ this is just an array and you just have array accesses. Now they take the same C++ function and change only one thing: the name of the parameter. Everything else is the same, but now the character array is called "arr". They put it through the same system, and the system now outputs a function that takes in a character array called "arr" instead of a String, and it uses the array length property and array accesses instead of the charAt method. So, simply by changing the name! And this is somewhere I believe these systems can have an advantage over rule-based systems, because what this does is say: I've seen a lot of humans in my code base use "str" as a variable name, and that usually means the constructs here are like the constructs in Java where people use Strings.
And I've seen other places where people use names like "arr", and those are usually used in the same contexts as where Java people use character arrays. In programming, it's not only important what the code actually does — a lot of programming goes via the naming of things. Other programmers will read your code, and by reading "str" they will assume this is a string, whereas if they read "arr" they will assume you're referring to a character array, and they will treat the code as meaning something different. These neural machine translation systems can actually pick up on that, because they do statistical inference on code that humans wrote. If you change the name back again, it goes back to a String and uses all the String functions. That's fairly impressive in my mind, and definitely an advantage over rule-based systems. Of course, the disadvantage compared to rule-based systems is that with rule-based systems you can sometimes even guarantee that the output code does the same thing; here you can't. They also give some examples of failed translations. Here you run into the problem that the min function in Python is overloaded: it can either give you the minimum of a sequence or the minimum of two things. This is translated to Java as Math.min, but Math.min is not overloaded in Java — it only gives you the minimum of two things, not the minimum of an array — and the system still outputs it. Given enough data, it could probably learn this, because these things are all context-dependent, but this is one of the failure cases of these models. Alright, so this was the paper. I've read that the code and the unit tests will be put online at some point; they are not right now. If I hear about it, I can link to it or let you know. Let me know what you think of this paper in the comments, share it out, and subscribe if you haven't yet. Bye bye!
this" }, { "end": 1556.96, "start": 1551.64, "text": " colors this box is the encoder and the actual part that's going to predict" }, { "end": 1563.1200000000001, "start": 1556.96, "text": " these words is going to be one sort of one classification layer on top that is" }, { "end": 1568.8, "start": 1563.12, "text": " going to predict for each position the individual word now did this is just for" }, { "end": 1573.28, "start": 1568.8, "text": " pre-training after the pre-training you scrap that and you attach it to a" }, { "end": 1580, "start": 1573.28, "text": " decoder so you attach whatever you got out of the encoder to a decoder and the" }, { "end": 1586.4799999999998, "start": 1580, "text": " decoder will output in an autoregressive way one token after another I did I put" }, { "end": 1591.4799999999998, "start": 1586.4799999999998, "text": " a it output a token right here and it feed that token back into the decoder" }, { "end": 1595.72, "start": 1591.48, "text": " saying okay here's what I've produced now produce the next thing you produce" }, { "end": 1603.24, "start": 1595.72, "text": " the next token and so on so it would produce token after token the output and" }, { "end": 1607.56, "start": 1603.24, "text": " now as I said I'm not exactly sure I think they're doing all of these things" }, { "end": 1612.52, "start": 1607.56, "text": " at the same time so this would still be here but the information would just be" }, { "end": 1617.32, "start": 1612.52, "text": " routed in two different ways or maybe they do it one after another it doesn't" }, { "end": 1623.04, "start": 1617.32, "text": " really matter but what matters is in this thing here you now train the decoder" }, { "end": 1628.4399999999998, "start": 1623.04, "text": " I mean you train it jointly with the encoder but you also involve the decoder" }, { "end": 1635.12, "start": 1628.4399999999998, "text": " and you do this by doing something very similar you corrupt a piece of code and" }, { "end": 1641.12, "start": 1635.12, "text": " you get corrupted code now you can see part of this corruption is masking like" }, { "end": 1645.6, "start": 1641.12, "text": " you did before but also part of the corruption is like here you scramble" }, { "end": 1650.8, "start": 1645.6, "text": " some of the tokens right this was it was this over here you just jumble some of" }, { "end": 1656, "start": 1650.8, "text": " them around a bit and then you here you also drop a token as you can see that" }, { "end": 1661.6, "start": 1656, "text": " the one is dropped and you simply so you don't show this to the encoder or the" }, { "end": 1667.48, "start": 1661.6, "text": " decoder you input this corrupted code into the decode into the first the" }, { "end": 1674.6399999999999, "start": 1667.48, "text": " encoder and then you ask the decoder to give you back the original code without" }, { "end": 1679.44, "start": 1674.64, "text": " showing it the original code so the the task for the decoder for the encoder" }, { "end": 1684.2800000000002, "start": 1679.44, "text": " decoder for the entire model here is if you're if you see this here is corrupt" }, { "end": 1691.2800000000002, "start": 1684.2800000000002, "text": " the code I have corrupted it in various ways please tell me what I originally" }, { "end": 1698.96, "start": 1691.2800000000002, "text": " had now it can the masking it does the same as before it sort of infers it this" }, { "end": 1705.64, "start": 1698.96, "text": " thing here it says well probably I probably this isn't really correct 
you" }, { "end": 1709.08, "start": 1705.64, "text": " don't even tell it where the errors are right before with the masking you at" }, { "end": 1712.8400000000001, "start": 1709.08, "text": " least told it where the errors are now you don't even tell it where the where" }, { "end": 1717.08, "start": 1712.8400000000001, "text": " the errors are so it needs to recognize this here is probably correct this isn't" }, { "end": 1723.48, "start": 1717.08, "text": " this I'm gonna rewrite this to that okay and it does this one token at a time so" }, { "end": 1729.32, "start": 1723.48, "text": " it first goes into the dirt and it needs to output the correct thing this is I" }, { "end": 1733.92, "start": 1729.32, "text": " hope the difference is clear to the masked language modeling which involved" }, { "end": 1740.48, "start": 1733.92, "text": " the involved the decoder and here also is the first time where in the encoder" }, { "end": 1745.98, "start": 1740.48, "text": " you you prepend this Java token now this as you can see it still goes from the" }, { "end": 1750.48, "start": 1745.98, "text": " same language to the same language but this is where you train the decoder to" }, { "end": 1757.52, "start": 1750.48, "text": " output a given language so here with the token again this is the same decoder for" }, { "end": 1761.52, "start": 1757.52, "text": " all the three languages the only difference here is every time you simply" }, { "end": 1765.96, "start": 1761.52, "text": " provided with the special token at the beginning to tell it which language it" }, { "end": 1772.6, "start": 1765.96, "text": " should decode right now so this this now we have an encoder that maps all the" }, { "end": 1776.52, "start": 1772.6, "text": " languages to a shared space and we have a decoder that conditioned on a token" }, { "end": 1783.48, "start": 1776.52, "text": " like this can output a valid code in that thing assuming this here was" }, { "end": 1790, "start": 1783.48, "text": " corrupted code now since the encoder is shared it should map the same kind of" }, { "end": 1793.6, "start": 1790, "text": " corrupted code of the different languages to the same place in the" }, { "end": 1798, "start": 1793.6, "text": " embedding space and therefore this would also this would already be enough to" }, { "end": 1803.2, "start": 1798, "text": " have this model that we desire we can input some code it doesn't actually have" }, { "end": 1807.16, "start": 1803.2, "text": " to be corrupted right we can just input some code in one language and ask the" }, { "end": 1812.1200000000001, "start": 1807.16, "text": " decoder to output the other language and this works but it doesn't work super" }, { "end": 1817.68, "start": 1812.1200000000001, "text": " well and here the authors go for another idea from the unsupervised machine" }, { "end": 1822.48, "start": 1817.68, "text": " translation literature which is back translation so back translation is a" }, { "end": 1827.98, "start": 1822.48, "text": " technique where you can tune an unsupervised machine translation model" }, { "end": 1832.8, "start": 1827.98, "text": " in a way that you would tune a supervised one but of course you don't" }, { "end": 1838.6399999999999, "start": 1832.8, "text": " have supervised data so what's the plan you will produce the data yourself using" }, { "end": 1843.24, "start": 1838.6399999999999, "text": " your own model so the plan is pretty simple it's actually contained in the" }, { "end": 1848.6399999999999, "start": 1843.24, "text": " back translation 
name so if you have a piece of code what you would do is you" }, { "end": 1853.8799999999999, "start": 1848.6399999999999, "text": " would first use your model to translate this to another language any of your" }, { "end": 1858.6399999999999, "start": 1853.8799999999999, "text": " choice now you have no clue whether this thing here is correct or not you have no" }, { "end": 1861.96, "start": 1858.6399999999999, "text": " clue and you have no way of assessing it because you don't have ground truth" }, { "end": 1867.64, "start": 1861.96, "text": " what you can do is use your model again or actually use a second model that you" }, { "end": 1873.16, "start": 1867.64, "text": " train in parallel now I believe in this case they could use the same model but" }, { "end": 1878.72, "start": 1873.16, "text": " you can that could be instable and so on but in any case you can use your system" }, { "end": 1884.04, "start": 1878.72, "text": " again to translate it back to your original language your system can do" }, { "end": 1889.3600000000001, "start": 1884.04, "text": " that right and here whatever you get as an output you know the ground truth it's" }, { "end": 1893.7199999999998, "start": 1889.36, "text": " whatever you started with so now you can compare what comes out to what you" }, { "end": 1898.36, "start": 1893.7199999999998, "text": " started with the difficulty of course is if there is a mistake you don't know" }, { "end": 1906.6399999999999, "start": 1898.36, "text": " which of the two models made a mistake and you so it could be could be that" }, { "end": 1910.76, "start": 1906.6399999999999, "text": " your original translation model made a mistake and or it could be that your" }, { "end": 1917.12, "start": 1910.76, "text": " back translation model made a mistake and you have to find a loss function that" }, { "end": 1924.04, "start": 1917.12, "text": " kind of punishes both equally or you simply keep one sort of constant and" }, { "end": 1929.28, "start": 1924.04, "text": " loss free and train the other one because there there's going to be a" }, { "end": 1932.9599999999998, "start": 1929.28, "text": " sample where you have C++ as an input and then the intermediate language is" }, { "end": 1938.8799999999999, "start": 1932.9599999999998, "text": " Python so all of the models sort of get trained once as an as a source to target" }, { "end": 1944, "start": 1938.8799999999999, "text": " translator and once as a target to source translator but I hope the the" }, { "end": 1947.68, "start": 1944, "text": " objective is clear from the back translation so now with the back" }, { "end": 1954.8, "start": 1947.68, "text": " translation you actually you train the models to go from one language to" }, { "end": 1959.92, "start": 1954.8, "text": " another language okay and that's the that's the final goal even though you do" }, { "end": 1966.28, "start": 1959.92, "text": " it without supervised data you now have a model that can encode things into a" }, { "end": 1970.28, "start": 1966.28, "text": " shared space that can decode into a language and that is attuned to" }, { "end": 1978.2, "start": 1970.28, "text": " translating from one language to another language so that's that's it how this is" }, { "end": 1984.24, "start": 1978.2, "text": " all how does this work now for evaluation the question is of course how" }, { "end": 1989.3999999999999, "start": 1984.24, "text": " do you evaluate models like this for evaluation they go to this website" }, { "end": 1997.6399999999999, "start": 1989.3999999999999, 
"text": " called geeks for geeks and this is a an online platform with computer science" }, { "end": 2002.5200000000002, "start": 1997.64, "text": " and programming articles it gathers many coding problems and presents solution in" }, { "end": 2007.48, "start": 2002.5200000000002, "text": " several programming languages okay so this is a website that teaches you to" }, { "end": 2012.68, "start": 2007.48, "text": " code and it will have like an exercise please do this and then it will provide" }, { "end": 2017.6200000000001, "start": 2012.68, "text": " solutions in the different languages now why is that cool and they have an" }, { "end": 2025.5200000000002, "start": 2017.6200000000001, "text": " example they have an example right here why is that cool because not only can" }, { "end": 2031.36, "start": 2025.52, "text": " you be relatively sure that these different functions that you have here" }, { "end": 2036.24, "start": 2031.36, "text": " do the same thing but you can also relatively be relatively sure that they" }, { "end": 2041.4, "start": 2036.24, "text": " are implemented in the similar way right because you what this website is trying" }, { "end": 2048.6, "start": 2041.4, "text": " to do is it's trying to teach the people how to how to code up an algorithm that" }, { "end": 2051.84, "start": 2048.6, "text": " they think up in their head and therefore not only is the solution" }, { "end": 2056.4, "start": 2051.84, "text": " correct and the same it is implemented in the in the same way as you can see" }, { "end": 2061, "start": 2056.4, "text": " here the construct there's this if construct is everywhere the else if is" }, { "end": 2066.36, "start": 2061, "text": " everywhere so even though some of the languages might have specialty things" }, { "end": 2071.96, "start": 2066.36, "text": " for implementing some algorithms these are really the same algorithmic the same" }, { "end": 2076.32, "start": 2071.96, "text": " expression of algorithmic thought in the different languages so that is a perfect" }, { "end": 2081.08, "start": 2076.32, "text": " parallel data set the problem of course is that there is not that many so it is" }, { "end": 2087.16, "start": 2081.08, "text": " good enough as a test set it is not good enough as a training set but given that" }, { "end": 2092.96, "start": 2087.16, "text": " it's a test set you can just have these as test set and then you can input the" }, { "end": 2098.2, "start": 2092.96, "text": " C++ and see whether or not the Java comes out the problem here of course is" }, { "end": 2103.44, "start": 2098.2, "text": " that even though this is very clear there are still you know sort of many" }, { "end": 2107.92, "start": 2103.44, "text": " variations of how you can implement that to even express the same algorithmic" }, { "end": 2114.2000000000003, "start": 2107.92, "text": " thought so metrics from natural language processing like blue just aren't going" }, { "end": 2118.16, "start": 2114.2000000000003, "text": " to be very good because they look at n-gram overlap and you can write this" }, { "end": 2124.76, "start": 2118.16, "text": " function with very different n-grams and still be very very valid and correct and" }, { "end": 2130.36, "start": 2124.76, "text": " also exact match is not going to be really the the gold standard here so" }, { "end": 2135.16, "start": 2130.36, "text": " what they do is they create a set of unit tests where for each of these" }, { "end": 2141.52, "start": 2135.16, "text": " functions they go they check their input types 
they randomly generate input" }, { "end": 2147.7999999999997, "start": 2141.52, "text": " randomly generate a set of inputs look whatever comes out and if the same thing" }, { "end": 2153.3199999999997, "start": 2147.7999999999997, "text": " comes out in all of their test functions that they consider this a good unit test" }, { "end": 2158.3199999999997, "start": 2153.3199999999997, "text": " for that function so whenever you your model now produces let's say you input" }, { "end": 2164, "start": 2158.3199999999997, "text": " Python it produces a C++ you simply put these unit tests through the C++" }, { "end": 2169.16, "start": 2164, "text": " function you produce and if they produce the same output as the Python the" }, { "end": 2175, "start": 2169.16, "text": " original Python function when on the same inputs then you consider the unit" }, { "end": 2180.96, "start": 2175, "text": " test to succeed and you consider the function to be correct that this of" }, { "end": 2186.44, "start": 2180.96, "text": " course this isn't this isn't the super duper gold standard especially with" }, { "end": 2191.8, "start": 2186.44, "text": " random inputs because usually what you want to do is test sort of corner cases" }, { "end": 2197.36, "start": 2191.8, "text": " but it's better than anything else so far I've been a long" }, { "end": 2202, "start": 2197.36, "text": " dis-advocate of unit tests honestly because I think whenever a human writes" }, { "end": 2207.5600000000004, "start": 2202, "text": " a unit test then they're probably since they have already implemented the" }, { "end": 2212.7200000000003, "start": 2207.5600000000004, "text": " function itself they're probably going to make the same mistakes or they're" }, { "end": 2216.88, "start": 2212.7200000000003, "text": " probably just going to replicate the code and thinking of the function in the" }, { "end": 2222.44, "start": 2216.88, "text": " unit test itself and therefore it doesn't really get you anything I guess" }, { "end": 2226.32, "start": 2222.44, "text": " in large organizations you write unit tests so that someone else doesn't screw" }, { "end": 2232.2400000000002, "start": 2226.32, "text": " up your code but in this case it would actually be cool because now as a human" }, { "end": 2237.6400000000003, "start": 2232.2400000000002, "text": " you could simply write a bunch of unit tests and then let your let your trans" }, { "end": 2241.92, "start": 2237.6400000000003, "text": " compiler do the heavy lifting and you simply check whether or not the" }, { "end": 2249.12, "start": 2241.92, "text": " output is good alright so how does this do here you can see they have some" }, { "end": 2253.52, "start": 2249.12, "text": " baselines the C++ to Java as I understand it is a commercial system and" }, { "end": 2259.52, "start": 2253.52, "text": " the Java to Python is an open source system both are human experts that make" }, { "end": 2265, "start": 2259.52, "text": " up these rule-based systems on how to trans on how to translate code into" }, { "end": 2271.4, "start": 2265, "text": " other languages now the if you do what they have here is trans coder beam one" }, { "end": 2275.76, "start": 2271.4, "text": " which means a beam size of one so if you don't know a beam search is very shortly" }, { "end": 2280.88, "start": 2275.76, "text": " beam search is like if you decode from your language model you can either" }, { "end": 2285.2000000000003, "start": 2280.88, "text": " always take the next token that has the highest probability this would 
be greedy" }, { "end": 2291.08, "start": 2285.2000000000003, "text": " decoding or a beam size of one or you can sort of always keep the top n" }, { "end": 2297.2000000000003, "start": 2291.08, "text": " hypotheses of what the of what the most likely output is as you can keep that as" }, { "end": 2304.96, "start": 2297.2, "text": " a you can keep the top five in memory and always decode these five on sort of" }, { "end": 2309.48, "start": 2304.96, "text": " like you have a mini batch of five sequences and you always keep the top" }, { "end": 2314.4399999999996, "start": 2309.48, "text": " five in memory so at the end of the decoding you're going to have five" }, { "end": 2320.08, "start": 2314.4399999999996, "text": " different variants of the same sentence or of the same decoded output and you" }, { "end": 2324.8799999999997, "start": 2320.08, "text": " can then decide which one you like best and usually what you do is you then" }, { "end": 2328.88, "start": 2324.88, "text": " output the one that has the highest probability which is not the same as the" }, { "end": 2334.48, "start": 2328.88, "text": " greedy because sometimes the next token will be will look one next token will" }, { "end": 2340.28, "start": 2334.48, "text": " look very good in a greedy way but you'd better take the second most likely" }, { "end": 2345.96, "start": 2340.28, "text": " because the next to next token is going to sort of make up for that to make the" }, { "end": 2352.76, "start": 2345.96, "text": " entire sequence even more likely so more beam size basically means you can keep" }, { "end": 2358.76, "start": 2352.76, "text": " more hypotheses of the output in memory until the end so if you just do the" }, { "end": 2363.32, "start": 2358.76, "text": " greedy decoding you see you already get fairly close to these baselines it's" }, { "end": 2370.44, "start": 2363.32, "text": " very very cool very interesting and if you up the beam size you surpass these" }, { "end": 2376.4, "start": 2370.44, "text": " baselines now the way they up the beam size here I find to be a bit let's call" }, { "end": 2380.0400000000004, "start": 2376.4, "text": " it a bit cheaty because when they say beam five what they mean is they keep" }, { "end": 2385.72, "start": 2380.04, "text": " the five hypotheses and then at the end I as I understand it if any of the five" }, { "end": 2392.6, "start": 2385.72, "text": " hypotheses passes all the unit tests or the most they keep it right so basically" }, { "end": 2398, "start": 2392.6, "text": " they give themselves the freedom to say whichever one of the five we output is" }, { "end": 2403.4, "start": 2398, "text": " the best that's the one we count and of course that's not really a match to the" }, { "end": 2409.52, "start": 2403.4, "text": " commercial or to the baseline system because it can output one thing now" }, { "end": 2415.08, "start": 2409.52, "text": " it is maybe a good practical application to give the human that you know you input" }, { "end": 2419.4, "start": 2415.08, "text": " a function you give the human five options to choose from and it can choose" }, { "end": 2427.48, "start": 2419.4, "text": " and thereby decide which one the human likes best but it is sort of it is a" }, { "end": 2432.48, "start": 2427.48, "text": " wonky what I like more is this here the beam 10 top one this is what you would" }, { "end": 2436.84, "start": 2432.48, "text": " actually do so we could keep 10 hypotheses during decoding and that the" }, { "end": 2443.1200000000003, "start": 2436.84, "text": 
" end output the top one the top likely one and as you can see that is better" }, { "end": 2447.2400000000002, "start": 2443.1200000000003, "text": " than greedy but it is worse than where you you know give yourself the freedom" }, { "end": 2452.7200000000003, "start": 2447.2400000000002, "text": " to output multiple ones of course though they say that most of the errors that" }, { "end": 2458.28, "start": 2452.7200000000003, "text": " this top one makes come from compilation errors when the target language is Java" }, { "end": 2464.44, "start": 2458.28, "text": " or C++ it suggests that the beam and top one metric could easily be improved we" }, { "end": 2469.56, "start": 2464.44, "text": " leave this to future work which this again I find valid right so if you if" }, { "end": 2474.68, "start": 2469.56, "text": " your method is I'm going to keep the top 10 hypothesis until the end and" }, { "end": 2480.16, "start": 2474.68, "text": " then I'm going from the top and I simply compile them and I output the first one" }, { "end": 2489.04, "start": 2480.16, "text": " that compiles that that's not cheating right that's a valid thing again yeah so" }, { "end": 2496.96, "start": 2489.04, "text": " in that way I can I can understand what they're saying right here okay so they" }, { "end": 2503, "start": 2496.96, "text": " give some examples some of which I find very interesting so the first thing here" }, { "end": 2509.7599999999998, "start": 2503, "text": " is that oh yeah by the way I've said in the I've said that the tokenizer between" }, { "end": 2513.72, "start": 2509.7599999999998, "text": " the natural languages is shared they make a little tweak here in that they" }, { "end": 2518.2, "start": 2513.72, "text": " tokenize the different languages with their language respective tokenizers" }, { "end": 2524.16, "start": 2518.2, "text": " which will still end up tokenizing pretty much you know this the print" }, { "end": 2531.2799999999997, "start": 2524.16, "text": " statement in C++ or in Java no actually the print statement in Python is print" }, { "end": 2534.7999999999997, "start": 2531.2799999999997, "text": " and in Java it's println and so on but it will still like the all the if" }, { "end": 2542.7599999999998, "start": 2534.7999999999997, "text": " statements it will still tokenize into the same into the same word but it's" }, { "end": 2554.92, "start": 2542.76, "text": " simply not viable to to parse Python with a C++ parser okay so we have looked" }, { "end": 2560.4, "start": 2554.92, "text": " at this the results this is one of the results they look at their shared" }, { "end": 2564.44, "start": 2560.4, "text": " embedding space and this is a t-sneak plot so a 2d projection of this shared" }, { "end": 2568.4, "start": 2564.44, "text": " embedding space and you can see that this is actually happening so the" }, { "end": 2574.28, "start": 2568.4, "text": " different so null null and none are mapped to similar locations println" }, { "end": 2580.88, "start": 2574.28, "text": " and cout are mapped to similar locations in this space so this is exactly what we" }, { "end": 2587, "start": 2580.88, "text": " want this is sort of a verification that this method of embedding the different" }, { "end": 2592.1600000000003, "start": 2587, "text": " languages into the same space really turns out such that whatever means the" }, { "end": 2596.76, "start": 2592.1600000000003, "text": " same thing is mapped to the same place you can see here catch and accept two" }, { "end": 2600.48, "start": 
2596.76, "text": " very different tokens are mapped to the same place simply because they're used" }, { "end": 2607, "start": 2600.48, "text": " in the same sort of constructs across the languages very cool one of these" }, { "end": 2611.96, "start": 2607, "text": " examples here is quite impressive and kind of shows the difference between" }, { "end": 2619.2400000000002, "start": 2611.96, "text": " this and and rule-based translation in this function right here you have a C" }, { "end": 2624.36, "start": 2619.2400000000002, "text": " plus plus function that takes a character pointer to that is called str" }, { "end": 2632.08, "start": 2624.36, "text": " str in as an input now in C++ strings are at least in old versions of C++" }, { "end": 2637.4, "start": 2632.08, "text": " strings are handled as character arrays so a string is indistinguishable from a" }, { "end": 2643.52, "start": 2637.4, "text": " character array and in this case usually what you do is you don't input the array" }, { "end": 2649.6400000000003, "start": 2643.52, "text": " because that will cause a copy you input a pointer to the first to you input a" }, { "end": 2656.7999999999997, "start": 2649.64, "text": " pointer to the array and that would define the string okay so if you" }, { "end": 2662.24, "start": 2656.7999999999997, "text": " translate this again this the type of this is simply character array if you" }, { "end": 2667, "start": 2662.24, "text": " translate this with this transcoder system that they've built into Java in" }, { "end": 2672.4, "start": 2667, "text": " Java there is a type called string right there's a native type called string and" }, { "end": 2677.92, "start": 2672.4, "text": " is that true I think oh yeah that's and then that's handled really weirdly in" }, { "end": 2685.8, "start": 2677.92, "text": " the JVM I think yes so there is it at least there is a type called string so" }, { "end": 2690.32, "start": 2685.8, "text": " it would map that it would recognize are you mean a string therefore I'm going to" }, { "end": 2694.76, "start": 2690.32, "text": " put a string here and it uses all the string method like string length string" }, { "end": 2700.4, "start": 2694.76, "text": " character at and so on where in C++ this is just an array and you just have array" }, { "end": 2706.28, "start": 2700.4, "text": " accesses now they take this same C++ function and only change one thing they" }, { "end": 2711.44, "start": 2706.28, "text": " change the name of the parameter everything else is the same but now the" }, { "end": 2717.32, "start": 2711.44, "text": " character array is called our okay and they put it through the same system and" }, { "end": 2723.88, "start": 2717.32, "text": " that system now outputs a function that takes in a character array called our" }, { "end": 2730.2400000000002, "start": 2723.88, "text": " instead of a string and it uses you know here the property length it uses array" }, { "end": 2735.6400000000003, "start": 2730.2400000000002, "text": " access instead of this car character at method so simply by changing the name" }, { "end": 2741.3599999999997, "start": 2735.64, "text": " and this is something where I believe the rule-based systems can this can be" }, { "end": 2746.72, "start": 2741.3599999999997, "text": " an advantage over rule-based system because what this here does is it simply" }, { "end": 2753.64, "start": 2746.72, "text": " says oh I've seen a lot of humans in my code base that use this use like stir as" }, { "end": 2759.96, "start": 2753.64, "text": " a as 
a variable name and that usually means that the constructs here are like" }, { "end": 2766.32, "start": 2759.96, "text": " the constructs in Java where people use strings and I've seen other places where" }, { "end": 2771.56, "start": 2766.32, "text": " people use you know names like this right here and usually that is used in" }, { "end": 2777.2400000000002, "start": 2771.56, "text": " the same context as in Java people use character arrays right so it in" }, { "end": 2782.28, "start": 2777.2400000000002, "text": " programming it's not only important what the the code actually does but a lot of" }, { "end": 2786.7200000000003, "start": 2782.28, "text": " programming goes via naming of things like other programmers will read your" }, { "end": 2791.08, "start": 2786.72, "text": " code and by reading stir right here they will sort of assume that this is a" }, { "end": 2796.8799999999997, "start": 2791.08, "text": " string whereas if they read our right here they will assume you're a pirate" }, { "end": 2802.8799999999997, "start": 2796.8799999999997, "text": " and you are referring to a character array and they will treat the code the" }, { "end": 2807.9399999999996, "start": 2802.8799999999997, "text": " code means something different and these systems right here these neural machine" }, { "end": 2812.56, "start": 2807.9399999999996, "text": " translation systems can actually understand that part because they do" }, { "end": 2817.7599999999998, "start": 2812.56, "text": " statistical inference on code that humans wrote if you change this back to" }, { "end": 2823.7999999999997, "start": 2817.7599999999998, "text": " say input then again it goes back to a string and uses all the string functions" }, { "end": 2830.44, "start": 2823.7999999999997, "text": " so that's fairly impressive in my mind and it yeah definitely an advantage over" }, { "end": 2835.32, "start": 2830.44, "text": " rule-based systems of course the disadvantage over rule-based systems is" }, { "end": 2839.04, "start": 2835.32, "text": " that in rule-based systems you can almost get on like sometimes you can" }, { "end": 2844, "start": 2839.04, "text": " even guarantee that the code does the same thing here you can't they give some" }, { "end": 2850.48, "start": 2844, "text": " examples of failed translations where so now you get you run into this problem" }, { "end": 2855.2, "start": 2850.48, "text": " where the min function in Python is overloaded it can either give you the" }, { "end": 2860.7599999999998, "start": 2855.2, "text": " minimum of a sequence or it can give you the minimum of two things now this is" }, { "end": 2867.56, "start": 2860.7599999999998, "text": " translated to Java right here and math dot min is not overloaded in Java it" }, { "end": 2873.44, "start": 2867.56, "text": " only gives you the minimum of two things and not the minimum of an array and it" }, { "end": 2877.92, "start": 2873.44, "text": " still outputs that now given enough data probably could learn because these" }, { "end": 2882.84, "start": 2877.92, "text": " things are all context dependent but this is one of the this is one of the" }, { "end": 2891.6, "start": 2882.84, "text": " failure cases of these models of course all right so this was this paper I" }, { "end": 2898.24, "start": 2891.6, "text": " I've read that the code of this and the unit tests will be output will be put" }, { "end": 2905.5, "start": 2898.24, "text": " online at some times they are not right now if I if I hear about it I can link to" }, { "end": 2909.92, "start": 
2905.5, "text": " it or let you know about it let me know what you think of this paper in the" }, { "end": 2922.4, "start": 2909.92, "text": " comments share it out and subscribe if you haven't yet and bye bye" } ]
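To make the denoising objective described in this transcript concrete, here is a minimal sketch of such a corruption function. This is not the authors' code; the function name, noise probabilities and shuffle window are illustrative assumptions, not values from the paper.

```python
# Toy sketch of denoising corruption: mask some tokens, drop some,
# and locally scramble the rest (not the authors' implementation).
import random

MASK = "<mask>"

def corrupt(tokens, p_mask=0.15, p_drop=0.1, shuffle_window=3):
    """Return a corrupted copy of `tokens` for denoising auto-encoding."""
    kept = []
    for tok in tokens:
        r = random.random()
        if r < p_mask:
            kept.append(MASK)   # mask: replace the token with a mask symbol
        elif r < p_mask + p_drop:
            continue            # drop: remove the token entirely
        else:
            kept.append(tok)
    # local scramble: each token may move by at most ~shuffle_window positions
    keys = [i + random.uniform(0, shuffle_window) for i in range(len(kept))]
    return [tok for _, tok in sorted(zip(keys, kept))]

source = "if ( prime [ i ] == 1 ) sum ++ ;".split()
noisy = corrupt(source)
# training pair: the encoder reads `noisy`, the decoder must emit `source`
print(noisy)
```

With p_drop set to 0 and shuffle_window set to 0, this reduces to plain masking, which is essentially the shape of the first, BERT-style objective; the denoising auto-encoding objective just applies the harsher noise and asks the decoder for the clean sequence.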
cvkeWwDQr0A
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
JOIN ME for the NeurIPS 2020 Flatland Multi-Agent RL Challenge!
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper" ]
Join me to solve the NeurIPS 2020 challenge on multi-agent reinforcement learning in the Flatland environment. This challenge has participants optimize a complex train scheduling system, subject to accidents, delays and re-routing. The plan is to solve this as a community with no expectations of winning and fully in the open. Discord: https://discord.gg/4H8xxDF Community GitHub Repo: https://github.com/yk/youtube-flatland NeurIPS 2020 Flatland Challenge: https://www.aicrowd.com/challenges/neurips-2020-flatland-challenge Flatland Environment: https://gitlab.aicrowd.com/flatland/flatland OUTLINE: 0:00 - Intro 1:00 - The Flatland Environment 2:00 - The NeurIPS 2020 Flatland Challenge 3:20 - Let's do this as a Community 4:10 - Ground Rules 6:15 - Conclusion Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://discord.gg/4H8xxDF BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Hi there, today I want to talk to you about something that's very near and dear to my heart and that is the Flatland environment. Now the Flatland environment is a train simulator that has been developed by the Swiss Train Company and I ride the trains every day. So when I heard that there is a NeurIPS challenge to use the Flatland environment to make the train system in my country better, I of course was very excited to do that. So out of purely egotistical reasons I'm going to present to you the Flatland environment and I invite you to join me in solving this as a group together. So the plan is basically that we as a community sort of do this challenge and completely in the open with absolutely no aspirations of winning or doing well or getting any of the prizes just for the fun of it and we'll see how far we'll come together. Okay so let me demonstrate the environment itself. So as you can see here this is a visualization of the environments. There are these agents in the environments and they have to reach certain goals and of course they can't crash. If you look here to the left there's a bunch of them crashing right now which is not good and your task is this is a multi-agent reinforcement learning problem. All of these agents have to reach their goal and as fast as possible without any collisions along these tracks right here. So you basically have to specify for every single agent what their next action at each time step is. Now this simulator is completely given to you. You can use it and basically it's a planning problem for multiple agents. So at each step you have to decide does the agent move up, down, left or right depending on whether they can do so, depending on the tracks and whether something is in their way or is not in their way. And you know every agent should reach their goal at the closest possible or at the shortest possible time of course. Now there is this NeurIPS 2020 Flatland challenge and basically you can submit your solutions to their evaluator and there's a leaderboard and everything and I thought it would be fun to participate in this. Now I don't exactly know what's the exact connection to NeurIPS and so on but I don't care honestly and this hasn't started yet. The timeline isn't really open yet but it will start soon but I think we can already start working on it. So the plan here is basically to just you know kind of I have no idea of traffic scheduling. No idea, absolutely clueless. But I know a lot about reinforcement learning and even though they say the challenge has already existed last year in a very in a slightly different form. I think it was just one agent instead of multi-agent and they said usually you have to combine the reinforcement learning with like some traditional stuff in order to perform really well. Like screw that. No I'm totally up for that but it would be fun to just blast it off with RL and go there. So here's my proposition. I have opened a discord server for you to join where you can join in and basically people can discuss solutions to this problem. I'll make a github repository in public where people can submit pull requests to and I'll be sort of the merger and whatnot of these. And we together sort of develop solutions. Now my idea is that people would sort of independently try things and then kind of suggest things and if they work we can merge them and whatnot. And there's just a lot of discussion in the discord server. I myself will not be like super active on the server.
It's meant for the community basically together to discuss things and whoever wants to do that. So I just want to make some things clear from the beginning. I will be the dictator of this project. The 100% authoritarian no compromises dictator. If anything is supposed to be decided I may elect to hold the vote and I may not. If we win something I'll decide what to do with it. So just this because otherwise there's just trouble right. Are we going to win? Probably not because anyone could just come to our github repo clone it and then tune it a little bit more. Right so I have no aspirations of winning right here. Also as I already said I'm not going to be super active in this discord. It's meant as a method for the community among itself to communicate. Third if you decide to put in work don't expect others to do so. Expect nothing. If the project doesn't work out we scrap it. If people get tired of it we scrap it. If there's some other problem we scrap it. No expectations. Never get mad at anyone else for not doing as much work or anything like this. This is purely you participate because you yourself want to learn something, want to have fun and if someone else does the same thing that's all the better okay. I will have a mainly supervisory role in this in that I will look at things that are happening and advise and occasionally I of course will participate myself. So I hope the framing of this is clear. This is not me throwing a hundred percent at this. I just thought it would be cool to do something as a community together and this challenge it seems like you know there are other challenges like MineRL where everyone needs like a billion GPUs to even get competitive. This seems like small enough that we could actually make a difference here and hopefully do something very cool. All right if you still want to participate even though I really really really try to talk you out of this right now I will leave a link to the discord somewhere in the description and a link to the git repo as well and I hope that some of you will be motivated enough to come join and have some fun. All right I'll see you there bye bye.
[ { "end": 4.64, "start": 0, "text": " Hi there, today I want to talk to you about something that's very near and dear to my heart" }, { "end": 10.88, "start": 4.64, "text": " and that is the Flatland environment. Now the Flatland environment is a train simulator that" }, { "end": 16.96, "start": 10.88, "text": " has been developed by the Swiss Train Company and I ride the trains every day. So when I heard that" }, { "end": 23.12, "start": 16.96, "text": " there is a NeurIPS challenge to use the Flatland environment to make the train system in my country" }, { "end": 29.92, "start": 23.12, "text": " better, I of course was very excited to do that. So out of purely egotistical reasons I'm going to" }, { "end": 36.72, "start": 29.92, "text": " present to you the Flatland environment and I invite you to join me in solving this as a group" }, { "end": 44.24, "start": 36.72, "text": " together. So the plan is basically that we as a community sort of do this challenge and completely" }, { "end": 50.88, "start": 44.24, "text": " in the open with absolutely no aspirations of winning or doing well or getting any of the" }, { "end": 59.2, "start": 50.88, "text": " prizes just for the fun of it and we'll see how far we'll come together. Okay so let me demonstrate" }, { "end": 63.760000000000005, "start": 59.2, "text": " the environment itself. So as you can see here this is a visualization of the environments. There" }, { "end": 68.64, "start": 63.760000000000005, "text": " are these agents in the environments and they have to reach certain goals and of course they" }, { "end": 74, "start": 68.64, "text": " can't crash. If you look here to the left there's a bunch of them crashing right now which is not" }, { "end": 79.76, "start": 74, "text": " good and your task is this is a multi-agent reinforcement learning problem. All of these" }, { "end": 86.80000000000001, "start": 79.76, "text": " agents have to reach their goal and as fast as possible without any collisions along these tracks" }, { "end": 94.24, "start": 86.8, "text": " right here. So you basically have to specify for every single agent what their next action at each" }, { "end": 102.39999999999999, "start": 94.24, "text": " time step is. Now this simulator is completely given to you. You can use it and basically it's" }, { "end": 107.67999999999999, "start": 102.39999999999999, "text": " a planning problem for multiple agents. So at each step you have to decide does the agent move up," }, { "end": 113.12, "start": 107.67999999999999, "text": " down, left or right depending on whether they can do so, depending on the tracks and whether something" }, { "end": 119.28, "start": 113.12, "text": " is in their way or is not in their way. And you know every agent should reach their goal at the" }, { "end": 127.28, "start": 119.92, "text": " closest possible or at the shortest possible time of course. Now there is this NURIPS 2020" }, { "end": 134.88, "start": 127.28, "text": " Flatland challenge and basically you can submit your solutions to their evaluator and there's a" }, { "end": 141.6, "start": 134.88, "text": " leaderboard and everything and I thought it would be fun to participate in this. Now I don't exactly" }, { "end": 149.6, "start": 141.6, "text": " know what's the exact connection to NURIPS and so on but I don't care honestly and this hasn't" }, { "end": 158.07999999999998, "start": 149.6, "text": " started yet. 
The timeline isn't really open yet but it will start soon but I think we can already" }, { "end": 164.64, "start": 158.07999999999998, "text": " start working on it. So the plan here is basically to just you know kind of I have no idea of traffic" }, { "end": 172.16, "start": 164.64, "text": " scheduling. No idea, absolutely clue less. But I know a lot about reinforcement learning and even" }, { "end": 177.83999999999997, "start": 172.16, "text": " though they say the challenge has already existed last year in a very in a slightly different form." }, { "end": 184.32, "start": 177.83999999999997, "text": " I think it was just one agent instead of multi-agent and they said usually you have to combine the" }, { "end": 189.11999999999998, "start": 184.32, "text": " reinforcement learning with like some traditional stuff in order to perform really well. Like screw" }, { "end": 196.4, "start": 189.12, "text": " that. No I'm totally up for that but it would be fun to just blast it off with RL and go there." }, { "end": 206.48000000000002, "start": 197.92000000000002, "text": " So here's my proposition. I have opened a discord server for you to join where you can join in and" }, { "end": 213.04000000000002, "start": 206.48000000000002, "text": " basically people can discuss solutions to this problem. I'll make a github repository in public" }, { "end": 220.79999999999998, "start": 213.04, "text": " where people can submit poll requests to and I'll be sort of the merger and whatnot of these. And" }, { "end": 228.32, "start": 220.79999999999998, "text": " we together sort of develop solutions. Now my idea is that people would sort of independently" }, { "end": 233.68, "start": 228.32, "text": " try things and then kind of suggest things and if they work we can merge them and whatnot. And" }, { "end": 240.64, "start": 233.68, "text": " there's just a lot of discussion in the discord server. I myself will not be like super active on" }, { "end": 247.44, "start": 240.64, "text": " the server. It's meant for the community basically together to discuss things and whoever wants to do" }, { "end": 255.35999999999999, "start": 247.44, "text": " that. So I just want to make some things clear from the beginning. I will be the dictator of this" }, { "end": 266, "start": 255.35999999999999, "text": " project. The 100% authoritarian no compromises dictator. If anything is supposed to be decided" }, { "end": 271.92, "start": 266, "text": " I may elect to hold the vote and I may not. If we win something I'll decide what to do with it." }, { "end": 279.6, "start": 273.36, "text": " So just this because otherwise there's just trouble right. Are we going to win? Probably" }, { "end": 284.08, "start": 279.6, "text": " not because anyone could just come to our github repo clone it and then tune it a little bit more." }, { "end": 292.8, "start": 284.8, "text": " Right so I have no aspirations of winning right here. Also as I already said I'm not going to be" }, { "end": 300.32, "start": 292.8, "text": " super active in this discord. It's meant as a method for the community among itself to" }, { "end": 307.2, "start": 300.32, "text": " communicate. Third if you decide to put in work don't expect others to do so. Expect nothing. If" }, { "end": 313.04, "start": 307.2, "text": " the project doesn't work out we scrap it. If people get tired of it we scrap it. If there's some other" }, { "end": 321.44, "start": 313.04, "text": " problem we scrap it. No expectations. 
Never get mad at anyone else for not doing as much work or" }, { "end": 327.2, "start": 321.44, "text": " anything like this. This is purely you participate because you yourself want to learn something," }, { "end": 333.28, "start": 327.2, "text": " want to have fun and if someone else does the same thing that's all the better okay. I will have a" }, { "end": 341.2, "start": 333.28, "text": " mainly supervisory role in this in that I will look at things that are happening and advise and" }, { "end": 348.4, "start": 341.2, "text": " occasionally I of course will participate myself. So I hope the framing of this is clear. This is" }, { "end": 355.67999999999995, "start": 348.4, "text": " not me throwing a hundred percent at this. I just thought it would be cool to do something as a" }, { "end": 360.32, "start": 355.67999999999995, "text": " community together and this challenge it seems like you know there are other challenges like" }, { "end": 367.12, "start": 360.32, "text": " mine RL where everyone needs like a billion GPUs to even get competitive. This seems like small" }, { "end": 373.44, "start": 367.12, "text": " enough that we could actually make a difference here and hopefully do something very cool." }, { "end": 378.71999999999997, "start": 373.44, "text": " All right if you still want to participate even though I really really really try to talk you out" }, { "end": 386.32, "start": 378.71999999999997, "text": " of this right now I will leave a link to the discord somewhere in the description and a link" }, { "end": 394.24, "start": 386.32, "text": " to the git repo as well and I hope that some of you will be motivated enough to come join and have" }, { "end": 407.12, "start": 394.24, "text": " some fun. All right I'll see you there bye bye." } ]
rl4nUngiR2k
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
BLEURT: Learning Robust Metrics for Text Generation (Paper Explained)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "nlp", "natural language processing", "mt", "machine translation", "transformer", "bert", "lstm", "attention", "wmt", "wikipedia", "backtranslation", "bleu", "rouge", "ngrams", "score", "metric", "comparison", "human raters", "google", "google research", "automatic", "overlap", "distribution shift" ]
Proper evaluation of text generation models, such as machine translation systems, requires expensive and slow human assessment. As these models have gotten better in previous years, proxy-scores, like BLEU, are becoming less and less useful. This paper proposes to learn a proxy score and demonstrates that it correlates well with human raters, even as the data distribution shifts. OUTLINE: 0:00 - Intro & High-Level Overview 1:00 - The Problem with Evaluating Machine Translation 5:10 - Task Evaluation as a Learning Problem 10:45 - Naive Fine-Tuning BERT 13:25 - Pre-Training on Synthetic Data 16:50 - Generating the Synthetic Data 18:30 - Priming via Auxiliary Tasks 23:35 - Experiments & Distribution Shifts 27:00 - Concerns & Conclusion Paper: https://arxiv.org/abs/2004.04696 Code: https://github.com/google-research/bleurt Abstract: Text generation has made significant advances in the last few years. Yet, evaluation metrics have lagged behind, as the most popular choices (e.g., BLEU and ROUGE) may correlate poorly with human judgments. We propose BLEURT, a learned evaluation metric based on BERT that can model human judgments with a few thousand possibly biased training examples. A key aspect of our approach is a novel pre-training scheme that uses millions of synthetic examples to help the model generalize. BLEURT provides state-of-the-art results on the last three years of the WMT Metrics shared task and the WebNLG Competition dataset. In contrast to a vanilla BERT-based approach, it yields superior results even when the training data is scarce and out-of-distribution. Authors: Thibault Sellam, Dipanjan Das, Ankur P. Parikh Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Hello there! Today we'll look at BLEURT: Learning Robust Metrics for Text Generation by Thibault Sellam, Dipanjan Das and Ankur P. Parikh. On a high level, this paper proposes a new metric for text generation tasks, such as machine translation, by leveraging a BERT model to produce an automated quality metric. They make this BERT model robust by pre-training it on a very wide array of tasks for which they can generate synthetic training data. As a result, the model and the score it produces are robust to shifts in distribution, and they advocate that this could be used in the future to assess text generation systems. Alright, as always, if you like content like this, consider subscribing and sharing it out and leaving a like. Tell YouTube that it's good content. Of course, only if you agree. So what's the problem with evaluation for text generation? If you know the machine translation community, basically what they do is they have these datasets where they translate from one language into another, let's say English to French, and they have a training dataset that is reasonably large. Then they somehow need to evaluate this. So you have a test dataset, but all you can really do is calculate the perplexity of the language model or translation model that you produce. There isn't really a reliable automatic metric for translation, so the gold standard is to give it to humans. You train on this dataset and produce a program; this is your machine translation system that you produce from the data. You let it run on your evaluation dataset and give the results to a bunch of human raters. These could be regular people, or they could be linguists who are expert translators in both languages. They will score each of the outputs of the machine translation systems, and at the end you get a number, like eight: your system is eight good. The problem, of course, is that this process is very, very slow. The machine translation community does this every year, and it's quite slow and quite expensive, as it requires these humans to assess all of the systems' outputs, and you want a sizable sample of each machine translation system's output. So this is not really satisfactory, but an automated score like perplexity is also not satisfactory. What people have done is come up with proxy scores for the humans, and two of those scores are called ROUGE and BLEU. Specifically, BLEU is one of these metrics that people use, and it works with n-grams in the sentences. N-grams are snippets of, let's say, four consecutive words. You take the snippets that the machine translation system produces, then go into the validation dataset and look at the gold standard translation, which was also produced by humans, look at its snippets of size four, and assess how many of the snippets overlap. Of course, the machine translation system has never seen the label, the gold standard, for that particular input, otherwise it wouldn't be fair. But you basically compare n-grams of the output against some gold standard; you can have multiple gold standards and so on.
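To make that n-gram overlap idea concrete, here is a minimal sketch of a modified n-gram precision in Python. This is my own simplification, not the exact BLEU definition, which additionally combines several n-gram orders geometrically and applies a brevity penalty:

```python
from collections import Counter

def ngrams(tokens, n):
    # All contiguous n-grams of a token list.
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def ngram_precision(candidate, reference, n=2):
    # Fraction of candidate n-grams that also occur in the reference,
    # with counts clipped to the reference counts.
    cand_counts = Counter(ngrams(candidate.split(), n))
    ref_counts = Counter(ngrams(reference.split(), n))
    overlap = sum(min(c, ref_counts[g]) for g, c in cand_counts.items())
    total = sum(cand_counts.values())
    return overlap / total if total > 0 else 0.0

reference = "the cat sat on the mat"
candidate = "the cat sat on a mat"
print(ngram_precision(candidate, reference, n=2))  # 0.6
```

The failure mode discussed next falls right out of this definition: two perfectly good paraphrases can share almost no four-grams, so a strong system can be punished for legitimate variation.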
So this BLEU metric is more of a heuristic, and it had been found to correlate fairly well with the humans — up until recently, of course, with the explosion of neural machine translation, especially transformer-based machine translation, but also systems that use LSTMs with attention. These systems have become extremely good. I don't know if you've noticed, but Google Translate has been getting better and better really fast. I remember the first years of Google Translate, when people still made fun of it. I don't think many people make fun of it now; at least it's not a meme anymore. The better these systems became, the more metrics like BLEU and ROUGE diverged from the humans, and they're not really reliable anymore, especially if you compare really high-skill systems to each other. BLEU tends to not correlate well with humans there, and therefore we're looking for a new metric: one that correlates well with humans but can be evaluated automatically. This paper proposes BLEURT. Can we just stop with the variants on BERT? We get it, you use BERT for everything, but you know. They say it's a learned evaluation metric based on BERT that can model human judgments with a few thousand possibly biased training examples. What happens in these cases is that the creation of a metric becomes a machine learning task itself. You'll have a dataset of gold standard translations produced by humans together with the outputs of machine translation systems. So you have the gold standard sentence, which would be the optimal translation, you have whatever the machine translation system produced, and then you have a human look at the pair and create a score, like this 8 right here: these two sentences match 8 good, maybe out of 10, meaning the bottom sentence is a very good match for the top one. The human assesses the quality of the sample. Now you have a training dataset: call the gold standard x, the machine output x tilde, and the human rating y, the label. Your task is now: given x and x tilde, predict whatever the human would say about this pair. If you can collect a few of these samples from different machine translation systems, then you can make a dataset out of this and formulate a machine learning task. That's exactly what these competitions have done. It's like a meta-competition: a competition for designing the best metrics for the other competitions. Of course, the difficulty here is that the dataset isn't static, because if you come up with a metric such as BLEU — let's say you come up with a better BLEU — you would want these other tasks to use it in the next years as well. The thing about metrics is that you need to be able to compare to previous years and so on. You would want a metric that is still valid for other years, for other slightly different tasks, and for other machine translation systems. If you just learn on data from this year's competitions, then in five years all of these models will have become so much better and will produce different output, and that's the difficulty: your metric should still be valid at that point. This paper basically deals with the question: can you learn such a metric from data that exists at one point in time, such that it will be robust to shifts in distribution? In five years the machine translation systems are better.
They maybe use different language constructs to translate certain things because they assess those as better. Can you still make a good judgment about which of these systems is better than the other? Can you still assess how humans would rate these systems? The authors say they have found a method to do this. This BLEURT, as they said, not only works, it only needs a few thousand possibly biased training examples, and they achieve this via a new pre-training scheme. They say a key aspect of our approach is a novel pre-training scheme that uses millions of synthetic examples to help the model generalize. Why is it important that it only uses a few thousand training examples? Because these are generated by humans, and humans are expensive. It's not like ImageNet, where you do it once and have it for 20 years. This is done year after year, and you need real experts, like translation experts. This is expensive, so the fewer of these actual training examples the method needs to be effective, the better. They circumvent this by using millions of synthetic examples to help the model generalize, and they do this in a pre-training step. BLEURT provides state-of-the-art results on the last three years of the WMT Metrics shared task — this is the meta-task where you're asked to come up with a metric for the other tasks — and on the WebNLG competition dataset. In contrast to a vanilla BERT-based approach, it yields superior results even when the training data is scarce and out of distribution. Let's have a look at what they do. They ask: what do we need to do to fine-tune BERT for quality evaluation? If you don't know what BERT is: it's basically a model, a transformer, that takes in a bunch of text — I've made a video on it if you want to check that out. As outputs you get a sequence of vectors, but most of the time only the first one matters, the CLS output, which you then use in a subsequent task. For example, if you want to do classification, you would put a classification layer on top to classify it into certain classes; for regression you can do other things with these outputs, but for this particular paper only the CLS output is relevant. You would input this pair of gold standard and machine output, and the output would be a value Y, either a number or a class label — in this case a number. You input these two things, out comes the whole sequence, you take the CLS output vector, put it through a linear layer, weights and bias, and that outputs a number Y. You train the number Y to be as close as possible to what the human would say about this pair X, the gold standard, and X tilde, the output of the system. This is what you would do if you were simply going about it naively: just take BERT, which is really good at language, and train it on this dataset. However, fine-tuning BERT requires a sizable amount of IID data, which we don't have in these tasks, and which is less than ideal for a metric that should generalize to a variety of tasks and model drift. The problem with just applying BERT here is that you don't have enough data, and it won't be a robust solution; it will only work for this particular dataset that you train it on. The solution, they say, is to pre-train on synthetic data. What does that mean? They say the key aspect of our approach is a pre-training technique that we use to warm up BERT before fine-tuning on the rating data.
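Before we get to that pre-training, here is a minimal sketch of what the naive fine-tuning setup just described might look like, assuming the Hugging Face transformers library; the checkpoint name, learning rate and toy data are placeholders, not the paper's exact configuration:

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")
head = torch.nn.Linear(encoder.config.hidden_size, 1)  # the linear layer: weights and bias

optimizer = torch.optim.AdamW(
    list(encoder.parameters()) + list(head.parameters()), lr=2e-5)

def predict_rating(reference, candidate):
    # Feed the pair (x, x tilde) as one sequence; the tokenizer inserts [SEP].
    enc = tokenizer(reference, candidate, return_tensors="pt",
                    truncation=True, padding=True)
    cls = encoder(**enc).last_hidden_state[:, 0]  # the CLS vector
    return head(cls).squeeze()                    # predicted score y hat

# Toy training triples: (gold standard x, machine output x tilde, human rating y).
data = [("the cat sat on the mat", "a cat sat on the mat", 0.9)]
for x, x_tilde, y in data:
    loss = (predict_rating(x, x_tilde) - torch.tensor(y)) ** 2  # simple MSE regression
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```

This is exactly the setup whose lack of robustness motivates the synthetic pre-training step.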
You might know BERT's original training, where you do this masked language model pre-training: given a piece of text, you drop out a couple of words and ask BERT to reconstruct them, like a denoising autoencoder. That way BERT learns about language. They're not saying you should replace that. What they're saying is: first you do this masked language model pre-training, second you do their synthetic pre-training, and third you do the fine-tuning. In the naive approach you would skip step two; their claim is that by introducing step two you become a lot better and a lot more robust, because in this step you're already exposed to information that makes you more robust to distribution shifts in the fine-tuning data later. I've advocated for this step to be called priming, because otherwise you always have to say: I want to pre-train BERT, but I don't mean pre-pre-training — the first step is already called pre-training, and I mean training after pre-training but before fine-tuning. So I just vote for this to be called priming. I probably didn't invent the term — if you come up with stuff like this, you've probably heard it somewhere — but it is a good-sounding word and it sort of fits. They say: we generate a large number of synthetic reference-candidate pairs. What they're going to do is take a bunch of text, in their case Wikipedia. From each Wikipedia article they draw sentences or paragraphs; these are going to be your Z. Then they perturb them a bit, make them a bit different, turning them into Z tilde. This simulates the difference between what the machine translation system outputs and the gold standard sentence: the two are usually not exactly the same, since there are many ways to translate a sentence. Their goal is to produce a dataset of sentences and perturbed versions of those sentences — perturbed not randomly, but in a language-knowledgeable way. How do they do this? They have three different ways. First of all, mask filling with BERT: they take a pre-trained BERT that can do language modeling, take a text, drop out two words or so, and fill them in again with BERT. BERT might produce the same words or it might produce slightly different words; depending on how many words you drop out, you can choose how much you perturb these sentences. The second way is back-translation. Here they use a machine translation model — it doesn't matter which one — to take a sentence, map it to another language, say from English to French (this is Z French), and then map it back again; Z tilde is then the French-to-English translation. You need two translation models: first you translate to French, then you translate back. That will sometimes give you the same sentence, but often it will give you a paraphrase of the sentence you started with. That is the second way to make pairs of sentences that are similar. The third way is just to drop out words; they simply found this to help.
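Here is a rough sketch of what those three perturbations could look like in code. The checkpoint names are just examples of publicly available models, and the masking and sampling choices are my guesses, not necessarily what the paper uses:

```python
import random
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")
en_fr = pipeline("translation", model="Helsinki-NLP/opus-mt-en-fr")
fr_en = pipeline("translation", model="Helsinki-NLP/opus-mt-fr-en")

def mask_fill_perturb(sentence, n_masks=2):
    # Method 1: drop a few words and let a masked LM fill them back in.
    tokens = sentence.split()
    for i in random.sample(range(len(tokens)), k=min(n_masks, len(tokens))):
        tokens[i] = fill_mask.tokenizer.mask_token
        tokens[i] = fill_mask(" ".join(tokens))[0]["token_str"]  # top prediction
    return " ".join(tokens)

def backtranslate_perturb(sentence):
    # Method 2: English -> French -> English often yields a paraphrase.
    french = en_fr(sentence)[0]["translation_text"]
    return fr_en(french)[0]["translation_text"]

def word_dropout_perturb(sentence, p=0.15):
    # Method 3: randomly delete words.
    kept = [w for w in sentence.split() if random.random() > p]
    return " ".join(kept) if kept else sentence

z = "the quick brown fox jumps over the lazy dog"
z_tilde = mask_fill_perturb(z)  # one synthetic (z, z tilde) pair
```

Note that the masked LM may well reproduce the original word, which matches the point above: the perturbation strength is controlled by how many positions you mask.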
Now they have a giant dataset of sentences and perturbed versions of those sentences. What are they going to do with it? The answer is: they take this Z and Z tilde and put the pair into BERT, into the model they now prime. This is the priming stage; the model was pre-trained on masked language modeling, and now they want to prime it. Again they take the CLS vector, but of course this is not the final task, and we don't have final labels for these pairs, so they need to come up with their own tasks and labels. They decide on a whole bunch of tasks — six or so different tasks — constructed from these two sentences. These are signals like BLEU, ROUGE, or this BERTscore: you simply calculate, say, the n-gram overlap between Z and Z tilde, and that would be one of the scores. Another is the back-translation likelihood, which is how likely a back-translation model rates the sentence. The catch here is this: with BLEU, for example, you would take the pair and calculate the BLEU score between the two things, but you wouldn't input that score into BERT. You would ask BERT to predict the BLEU score: BERT outputs B hat, B is the actual BLEU score, and you train BERT to predict the BLEU score of this particular pair of inputs, one taken as the candidate and the other as the reference. Likewise you ask it to predict the ROUGE score, and so on for all of these signals — you ask the same model to predict all of these scores for the two sentences. You can calculate all of these scores because either, like BLEU, they are a script you run, or you have some other model, like a pre-trained translation model, that you ask how good the pair is in terms of that particular task, say back-translation, and then you try to predict that score. It's important that you're not training the model to perform these tasks; for each task you already have another model that's specialized to it, and you simply ask that model to score the input. For instance, you have an entailment model that outputs by how much the second sentence entails the first, which basically means: does the second sentence follow from the first? Of course this is not actually proper input data for that task, but you can still ask the model, and if these sentences are good translations of each other, if they match, then the second one should probably follow fairly well from the first. At the very least, if you make BERT predict that, it will learn something useful about the relation between the two sentences. So the entire name of the game here is to come up with tasks such that, if BERT learns to predict the scores of these tasks on those inputs — pretending one is the input and the other is the output, or just treating them as two inputs — then BERT will learn something useful.
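Schematically, this priming stage could look like the following sketch. The signal values, hidden size and equal loss weighting are placeholders for illustration; the actual BLEURT pre-training defines specific losses and weightings per task:

```python
import torch

hidden_size = 768  # e.g. bert-base

# Hypothetical pre-computed supervision signals for one (z, z_tilde) pair.
# Each score comes from an *existing* scorer (a BLEU script, a ROUGE script,
# an entailment model, a translation model's likelihood, ...), not from BERT.
signals = {"bleu": 0.41, "rouge": 0.55, "entailment": 0.87,
           "backtrans_likelihood": -2.3}

# One small regression head per pre-training signal, all sharing the encoder.
heads = torch.nn.ModuleDict(
    {name: torch.nn.Linear(hidden_size, 1) for name in signals})

def priming_loss(cls_vector, signals):
    # Sum of per-task regression losses: BERT must *predict* each score.
    loss = torch.tensor(0.0)
    for name, target in signals.items():
        pred = heads[name](cls_vector).squeeze()
        loss = loss + (pred - torch.tensor(float(target))) ** 2
    return loss

# cls_vector would be encoder(z, z_tilde).last_hidden_state[:, 0] as before;
# a random vector stands in here just to make the sketch runnable.
print(priming_loss(torch.randn(1, hidden_size), signals))
```

The design choice worth noticing is that all heads share one encoder, so the encoder is pushed to represent whatever the union of these signals cares about.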
So that's the trick here: come up with these pre-training tasks, and train them all at the same time. By doing it all at the same time, on many different perturbations and many different tasks, you hope that your model becomes attuned to the variations that language can have and to what it needs to pay attention to. Then, if you take this model and do step three, fine-tuning on the actual data you have, you would guess that it becomes very good at that data, but also retains all of these abilities and generalizes better to distribution shifts. On these metric learning tasks they do outperform all other models. What I find interesting is down here, where they test for the distribution shift. What they're saying is: so far everything is on data where we train on training data and evaluate on testing data that are essentially the same — they come from the same year, from the same machine translation models — and we don't really know whether our scores still hold when next year's machine translation models are different. They try to simulate this by splitting the data, and they introduce this skew factor. Usually, the distributions of the human ratings in the training data and in the test data almost overlap; you can see that the overlap between the test and train ratings is very close. Now, they say, we can skew that: we can filter the data such that the training data contains only very badly rated sentences and the test data contains only very well rated ones. This simulates the situation where we train our metric on the previous year's data and then evaluate it on next year's data, where all the systems have become better. In the results, the bottom axis is the test skew and the color is the training skew, so what we're interested in is to the right and down the color scale. As the skew increases, the quality of the metric — its correlation with the human ratings — decreases, but it still remains fairly high. The training skew hurts especially: if you make the training examples really bad, so to say, the score just drops. And they can show pretty well that if you add this pre-training, then except in the most extreme case, the score remains relatively high for all of these settings, and in particular remains above the BLEU score, which is always sort of worse. So this is pretty neat and shows the power of this pre-training. That's the robustness-to-quality-drift experiment, and they have a bunch of other metrics right here where they ablate and so on, but I don't want to go too much into that.
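As an aside, the skew construction itself is easy to emulate. Here is a toy version of the idea — my own construction, not the paper's exact sampling procedure, which defines the skew factor via specific sampling weights:

```python
import numpy as np

def skewed_split(examples, ratings, train_skew=1.0, test_skew=1.0):
    # Toy skew split: keep training examples with probability decreasing in
    # their human rating and test examples with probability increasing in it,
    # so the train data is "bad" and the test data is "good".
    r = np.asarray(ratings, dtype=float)
    r = (r - r.min()) / (r.max() - r.min() + 1e-9)  # normalize ratings to [0, 1]
    rng = np.random.default_rng(0)
    train = [e for e, q in zip(examples, (1.0 - r) ** train_skew)
             if rng.random() < q]
    test = [e for e, q in zip(examples, r ** test_skew)
            if rng.random() < q]
    return train, test
```

The higher the skew factors, the less the two rating distributions overlap, which is exactly the stress test in the figure.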
I more want to make some comments on this work. First of all, in a paper like this, what I would like to see is the extrapolation: if and where this curve ever crosses the BLEU score. Because okay, a skew of three seems like a big number, but who knows if three is actually a big number? We can't assess that. What we need to see is where the crossover point between the models lies, to assess where the metric is no longer valid, and so on. The second part is my problem with this method of splitting the data. Yes, okay, you split the bad from the good, but it's not only that these systems are getting better. Right now everyone is using transformers, everyone is using BERT for everything, and BERT is a specific architecture that is going to be good at specific things, at specific grammatical constructs in specific languages, so the mistakes it makes are very systematic. If in one or two years a new model pops up — I don't know, say someone discovers that graph neural networks are really good at machine translation — those models are going to be attuned to a very different set of constructs. They might be better overall, but they're going to make a different sort of mistake. So I think just assessing the skew by dividing the data into bad and good ratings doesn't cover the distribution shifts that they set out to cover. What I would have expected — because these tasks are repeated year after year — is for them to, for example, train on 2017 and then evaluate on 2019, or evaluate separately on 2017, 2018 and 2019. There we would have a much better assessment of a distribution shift over the years. So it is not super convincing to me. And what is most worrisome is if you look at their pre-training tasks: okay, there are BLEU and ROUGE, but there is BERTscore, there is entailment, which is also a BERT model, and the back-translation model is probably either a transformer or an LSTM with an attention mechanism — and the attention mechanism is the basis for transformers. So all of these things are going to make the same sort of biased mistakes. It's not like there is Gaussian noise on top of these signals; they are all going to be weak in the same sorts of assessments and will have systematic errors with respect to predicting the human scores. And the systems we evaluate are using the same types of models, so they're going to fall prey to the same types of mistakes. If we then switch over to systems that use different techniques next year, they may be not bad in these particular things but bad in other things, and then this metric will output a systematically biased assessment. It's sort of like those images of plugging the power strip into itself to get infinite power: it seems very dangerous to me to have such an overlap between the architectures and methods used to evaluate systems and those used in the systems themselves. But I hope this will be regularly checked against human scores, to assess how far these learned metrics drift out of sync with humans. All right, this was it for me for BLEURT. Check it out — the code is available, the metric is available — evaluate your stuff with it, and bye bye.
[ { "end": 6.08, "start": 0, "text": " Hello there! Today we'll look at BLERT, Learning Robust Metrics for Text Generation by" }, { "end": 12.6, "start": 6.08, "text": " Thibaut Salam, Tipanjan Das and Ankur P. Parikh. So this paper on a high level" }, { "end": 18, "start": 12.6, "text": " proposes a new metric for text generation tasks such as machine translation by" }, { "end": 24.240000000000002, "start": 18, "text": " leveraging a BERT model to produce like an automated metric, an automated quality" }, { "end": 30.479999999999997, "start": 24.24, "text": " metric. And they make this BERT model robust by pre-training it on a very wide" }, { "end": 36.32, "start": 30.479999999999997, "text": " array of tasks that they can use synthetic data to train it. And therefore" }, { "end": 43.16, "start": 36.32, "text": " the model and the resulting score is very robust to shifts in distribution and" }, { "end": 47.599999999999994, "start": 43.16, "text": " they advocate that this could be used in the future to assess text generation" }, { "end": 53, "start": 47.599999999999994, "text": " systems. Alright, as always, if you like content like this, consider subscribing" }, { "end": 59.96, "start": 53, "text": " and sharing it out and leaving a like. Tell YouTube that it's good content. Of" }, { "end": 66.08, "start": 59.96, "text": " course only if you agree. So what's the problem with evaluation for text" }, { "end": 71.56, "start": 66.08, "text": " generation? So if you know the machine translation community, basically what they" }, { "end": 75.76, "start": 71.56, "text": " do is they have these datasets where they translate from one language into" }, { "end": 82.56, "start": 75.76, "text": " another. Let's say English to French. And they have a training dataset that is" }, { "end": 89.88, "start": 82.56, "text": " fairly okay-ishly large. And then they somehow need to evaluate this. So" }, { "end": 94.8, "start": 89.88, "text": " you have like a test dataset, but all you can really do is sort of calculate the" }, { "end": 98.84, "start": 94.8, "text": " perplexity of a language model that you produce or of a translation model that" }, { "end": 103.84, "start": 98.84, "text": " you produce. There's not really a metric for translation, so the gold standard is" }, { "end": 108.6, "start": 103.84, "text": " to get it to humans. So you train on this dataset, you produce a program." }, { "end": 113.36, "start": 108.6, "text": " This is your machine translation program that you produce from the data. And you" }, { "end": 118.6, "start": 113.36, "text": " let this run on your evaluation dataset and you give the results to a" }, { "end": 123.32, "start": 118.6, "text": " bunch of human raters. These could be regular people, these could be linguists" }, { "end": 129.76, "start": 123.32, "text": " that are experts in translation in both languages. And they will score each" }, { "end": 134.07999999999998, "start": 129.76, "text": " of the outputs of the machine translation systems and at the end you" }, { "end": 139.32000000000002, "start": 134.08, "text": " will get like a number, like eight. Your system is eight good. The problem of" }, { "end": 144.20000000000002, "start": 139.32000000000002, "text": " course is this process is very very slow. 
So the machine translation community does" }, { "end": 148.56, "start": 144.20000000000002, "text": " this every year and it's quite slow and it's quite expensive as you know" }, { "end": 153.36, "start": 148.56, "text": " it requires these humans here to assess all of these systems output. And you want" }, { "end": 157.84, "start": 153.36, "text": " a sort of a sizable output, right? Because you want sort of a good sample" }, { "end": 165.16, "start": 157.84, "text": " of the machine translation system. So this is not really satisfactory but like" }, { "end": 169.52, "start": 165.16, "text": " an automated score like perplexity is also not satisfactory. What people have" }, { "end": 173.92000000000002, "start": 169.52, "text": " done is they've come up with proxy scores for the humans and two of those" }, { "end": 180.92000000000002, "start": 173.92000000000002, "text": " scores are called rouge and blue. And specifically blue is one of these" }, { "end": 187.08, "start": 180.92000000000002, "text": " metrics that people use and it kind of takes n-grams in the sentences. So" }, { "end": 191.8, "start": 187.08, "text": " n-grams would be like snippets of like let's say four words after one another" }, { "end": 196.16000000000003, "start": 191.8, "text": " and there would be these snippets and that the machine translation system" }, { "end": 201.12, "start": 196.16000000000003, "text": " produces and then it would go into the validation data set and look at the gold" }, { "end": 205.24, "start": 201.12, "text": " standard translation that it was also produced by humans. And it would also" }, { "end": 210.08, "start": 205.24, "text": " look at these snippets of size four and it would just kind of assess how many of" }, { "end": 214.36, "start": 210.08, "text": " the snippets overlap. Of course the machine translation system has never" }, { "end": 219.36, "start": 214.36, "text": " seen the label like the gold standard for that particular set otherwise it" }, { "end": 226.24, "start": 219.36, "text": " wouldn't you know be fair. But you basically compare n-grams of output and" }, { "end": 230.56, "start": 226.24, "text": " gold and some gold standard. You can have multiple gold standards and so on. So" }, { "end": 235.60000000000002, "start": 230.56, "text": " this blue metric is more of like a heuristic and it has been found to" }, { "end": 240.44000000000003, "start": 235.60000000000002, "text": " correlate fairly well with the humans up until recently of course with the" }, { "end": 245.28, "start": 240.44, "text": " explosion of neural machine translation and especially transformer based machine" }, { "end": 250.56, "start": 245.28, "text": " translation I guess and but also the their system that use LSTM with" }, { "end": 254.96, "start": 250.56, "text": " attention. These systems have become extremely extremely good. I don't know if" }, { "end": 261.12, "start": 254.96, "text": " you notice but Google Translate has been getting better and better really fast. I" }, { "end": 265.72, "start": 261.12, "text": " remember the the first years of Google Translate when people still made fun of" }, { "end": 271.12, "start": 265.72, "text": " it. I don't think many people make fun of it now. At least it's not a meme anymore." 
}, { "end": 278.16, "start": 271.12, "text": " So the better and better these systems were the more these" }, { "end": 285.32000000000005, "start": 278.16, "text": " metrics like BLÖ and RÖS have diverged from the humans and they're not really" }, { "end": 290.40000000000003, "start": 285.32000000000005, "text": " reliable anymore especially if you compare really high skill systems to" }, { "end": 296.76, "start": 290.4, "text": " each other. BLÖ tends to not correlate well with humans and therefore we're" }, { "end": 301.52, "start": 296.76, "text": " looking for a new metric. A new metric that correlates well with humans but can" }, { "end": 311.79999999999995, "start": 301.52, "text": " be evaluated automatically. This paper here proposes this BLÖRT. Can we" }, { "end": 316.76, "start": 311.79999999999995, "text": " just stop with the variance on BÖRT? We get it, you use BÖRT for everything but you" }, { "end": 324.2, "start": 316.76, "text": " know. They say it's a learned evaluation metric based on BÖRT that can" }, { "end": 331.71999999999997, "start": 324.2, "text": " model human judgments with a few thousand possibly biased training examples." }, { "end": 341.59999999999997, "start": 331.71999999999997, "text": " What you would do in these cases is now the creation of a metric becomes" }, { "end": 350.16, "start": 341.6, "text": " a machine learning task itself. What you'll have is a data set of" }, { "end": 356.96000000000004, "start": 350.16, "text": " things that are gold standard translations by humans. You will have the" }, { "end": 361.64000000000004, "start": 356.96000000000004, "text": " output of the machine translation system. You put them together so you have" }, { "end": 365.48, "start": 361.64000000000004, "text": " the gold standard sentence. This would be the optimal translation." }, { "end": 368.96000000000004, "start": 365.48, "text": " You'll have whatever the machine translation produced and then you'll" }, { "end": 374.84, "start": 368.96, "text": " have a human look at it and create a score like this 8 right here. It says" }, { "end": 382.59999999999997, "start": 374.84, "text": " these two sentences match 8 good. So 8 maybe it's out of 10." }, { "end": 387.56, "start": 382.59999999999997, "text": " This bottom thing is a very good translation for the top thing," }, { "end": 393, "start": 387.56, "text": " to match the top thing. The human assesses the quality of the sample." }, { "end": 402.68, "start": 393, "text": " Now you have a training data set. You have a z and z tilde or something or y." }, { "end": 410.76, "start": 402.68, "text": " They call this y which is the gold standard label. This is y tilde," }, { "end": 418.28, "start": 410.76, "text": " whatever the machine produced and or x tilde and then y is the label." }, { "end": 424.44, "start": 418.28, "text": " Your task is now given x and x tilde predict whatever the human would say" }, { "end": 431, "start": 424.44, "text": " about this. If you can collect a few of these samples right here of" }, { "end": 436.47999999999996, "start": 431, "text": " different machine translation systems then you can formulate, you can make a" }, { "end": 442.11999999999995, "start": 436.47999999999996, "text": " data set out of this and formulate a machine learning task. That's" }, { "end": 447.11999999999995, "start": 442.11999999999995, "text": " exactly what these competitions have done. It's like a meta competition." 
}, { "end": 452, "start": 447.12, "text": " It's a competition for designing the best metrics of the other" }, { "end": 457.68, "start": 452, "text": " competitions. Of course the difficulty here is that the data set" }, { "end": 462.52, "start": 457.68, "text": " isn't static because if you come up with a metric such as blue, let's say you come" }, { "end": 468.2, "start": 462.52, "text": " up with a better blue, you would want these other tasks to use it in" }, { "end": 472.24, "start": 468.2, "text": " the next years as well. The thing about metrics is you need to be able" }, { "end": 476.72, "start": 472.24, "text": " to compare to previous years and so on. You would want a metric that is" }, { "end": 481.84000000000003, "start": 476.72, "text": " still valid for other years and other slightly different" }, { "end": 486.8, "start": 481.84000000000003, "text": " tasks and also for other machine translation systems. If you just learn" }, { "end": 495.48, "start": 486.8, "text": " on data from this year's competitions and in five years all of" }, { "end": 498.8, "start": 495.48, "text": " these models will have become so much better and they'll produce different" }, { "end": 504.20000000000005, "start": 498.8, "text": " output and that's the difficulty. Your metric should still be valid at that" }, { "end": 511, "start": 504.2, "text": " point. This paper basically deals with the fact that can you" }, { "end": 516.84, "start": 511, "text": " learn such a metric from data that exists at one point in time that will" }, { "end": 522.72, "start": 516.84, "text": " be robust to shifts in distribution. In five years the machine translation" }, { "end": 526.72, "start": 522.72, "text": " systems are better. They maybe use different language constructs to" }, { "end": 531.4, "start": 526.72, "text": " translate certain things because they assess that better. Can you still" }, { "end": 535.68, "start": 531.4, "text": " make a good judgment about which of these systems is better than" }, { "end": 540.84, "start": 535.68, "text": " the other system? Can you still assess how humans would rate these systems?" }, { "end": 549.24, "start": 540.84, "text": " They're saying that they found the method to do this. This blurt, as" }, { "end": 554.48, "start": 549.24, "text": " they said, not only have they found the method but their method only uses a" }, { "end": 561.12, "start": 554.48, "text": " few thousand possibly biased training examples. They do this via a new" }, { "end": 565.6, "start": 561.12, "text": " pre-training scheme. They say a key aspect of our approach is a novel" }, { "end": 570.84, "start": 565.6, "text": " pre-training scheme that uses millions of synthetic examples to help the model" }, { "end": 575.4, "start": 570.84, "text": " generalize. Why is it important that it only uses a few thousand training" }, { "end": 581.2, "start": 575.4, "text": " examples? Because these are generated by humans and humans are expensive." }, { "end": 589.08, "start": 581.2, "text": " It's not like ImageNet. You do it once, you have it for for 20 years." }, { "end": 594.88, "start": 589.08, "text": " This is done year after year and you need real experts like translation experts." }, { "end": 600.88, "start": 594.88, "text": " This is expensive. The fewer of these actual training examples that the" }, { "end": 609.12, "start": 600.88, "text": " thing can be efficient on, the better. 
They circumvent this by using" }, { "end": 613.6, "start": 609.12, "text": " millions of synthetic examples to help the model generalize. They do this in a" }, { "end": 619.6, "start": 613.6, "text": " pre-training step. Blurt provides state-of-the-art results on the" }, { "end": 624.48, "start": 619.6, "text": " last three years of the WMT metrics shared tasks. This is this" }, { "end": 630.6, "start": 624.48, "text": " meta task where you're asked to come up with a metric for the other tasks." }, { "end": 635.72, "start": 630.6, "text": " The WebNLG competition dataset, in contrast to a vanilla bird-based" }, { "end": 639.8000000000001, "start": 635.72, "text": " approach, yields superior results even when the training data is scarce and out" }, { "end": 648.28, "start": 639.8, "text": " of distribution. Let's have a look at what they do." }, { "end": 655.1999999999999, "start": 648.28, "text": " They say what do we need to do to fine-tune BERT for quality evaluation?" }, { "end": 661.68, "start": 655.1999999999999, "text": " If you don't know what BERT is, it's basically a model that takes" }, { "end": 667.4799999999999, "start": 661.68, "text": " in a bunch of text. You have a bunch of text and then it's a model that is a" }, { "end": 673.08, "start": 667.48, "text": " transformer. I've made a video on it if you want to check that out." }, { "end": 679.04, "start": 673.08, "text": " As outputs you get a sequence of vectors, but important most of the" }, { "end": 684.52, "start": 679.04, "text": " time is only the first one, which you then use in a subsequent task. For" }, { "end": 688.36, "start": 684.52, "text": " example, if you want to do classification, you would put a" }, { "end": 693.76, "start": 688.36, "text": " classification layer on top to classify it into certain classes. If you want to do" }, { "end": 699.08, "start": 693.76, "text": " regression, you can do other things with these outputs right here," }, { "end": 708.28, "start": 699.08, "text": " but for this particular paper only this CLS output is relevant." }, { "end": 714.48, "start": 708.28, "text": " You would input this pair of gold standard and output of the machine and the" }, { "end": 721.36, "start": 714.48, "text": " output would be an output Y, which is either a number or a class label." }, { "end": 731.76, "start": 721.36, "text": " In this case it's a number. You input these two things and" }, { "end": 739.04, "start": 731.76, "text": " out comes this whole sequence. You take the CLS output vector and you" }, { "end": 743.6800000000001, "start": 739.04, "text": " put it through a linear layer, weights and bias, and that would output" }, { "end": 750.52, "start": 743.6800000000001, "text": " a number Y. The number Y you train to be as close as possible to what" }, { "end": 756.76, "start": 750.52, "text": " the human would say about this pair X, the gold standard, and X tilde, the" }, { "end": 763.12, "start": 756.76, "text": " output of the system. This is what you would do if you were simply going" }, { "end": 768.4, "start": 763.12, "text": " about it. Just take BERT, take the model, so BERT is really good at language," }, { "end": 776.28, "start": 768.4, "text": " take the model and train it on this data set. However, fine-tuning BERT" }, { "end": 781.36, "start": 776.28, "text": " requires a sizable amount of IID data. 
We don't have that in these tasks," }, { "end": 786.4399999999999, "start": 781.36, "text": " which is less than ideal for a metric that should generalize to a variety of" }, { "end": 792.72, "start": 786.4399999999999, "text": " tasks and model drift. The problem with just applying BERT here is that" }, { "end": 797.88, "start": 792.72, "text": " you don't have enough data and it won't be a robust solution." }, { "end": 802.92, "start": 797.88, "text": " It will only work for this particular data set that you train it on. The" }, { "end": 808.68, "start": 802.92, "text": " solution they say is you pre-train on synthetic data. What does that mean?" }, { "end": 817.36, "start": 808.68, "text": " They say the key aspect of our approach is a pre-training technique that we use" }, { "end": 823.8, "start": 817.36, "text": " to warm up BERT before fine-tuning on the rating data. You might know BERT" }, { "end": 829.8, "start": 823.8, "text": " training, which is where you do this masked language model pre-training." }, { "end": 834.5999999999999, "start": 829.8, "text": " If you are given a piece of text, let's say you're given this piece of text" }, { "end": 838.3199999999999, "start": 834.5999999999999, "text": " right here, what you would do is you would drop out a couple of words like" }, { "end": 842.9599999999999, "start": 838.3199999999999, "text": " this one and this one and ask BERT to reconstruct it, like a denoising" }, { "end": 849.2199999999999, "start": 842.9599999999999, "text": " autoencoder. That way BERT learns about language in this" }, { "end": 853.9399999999999, "start": 849.2199999999999, "text": " particular way. They're not saying you should replace that. What they're" }, { "end": 859.84, "start": 853.94, "text": " saying is first you should do this masked language model pre-training," }, { "end": 867.08, "start": 859.84, "text": " second you should do their synthetic pre-training and third you should do the" }, { "end": 876.32, "start": 867.08, "text": " fine-tuning. In the naive approach you would skip this step" }, { "end": 881.1600000000001, "start": 876.32, "text": " too. Their claim is that by introduction of this step too, that you could be a" }, { "end": 886.9599999999999, "start": 881.16, "text": " lot better and a lot more robust. You're already" }, { "end": 892.6, "start": 886.9599999999999, "text": " exposed to information in this step that will make you more robust to" }, { "end": 899.48, "start": 892.6, "text": " distribution shifts in this fine-tuning data later. I've" }, { "end": 903.7199999999999, "start": 899.48, "text": " advocated for this step to be called" }, { "end": 911.32, "start": 903.72, "text": " priming. Because otherwise you always have to say, okay I" }, { "end": 915.36, "start": 911.32, "text": " want to pre-train BERT but I don't mean pre-pre-training, like I don't" }, { "end": 920.64, "start": 915.36, "text": " mean this. This is already called pre-training. I want to pre-train after" }, { "end": 929.1, "start": 920.64, "text": " pre-train, so I just vote for this to be called priming. I have no idea." }, { "end": 933, "start": 929.1, "text": " If you come up with stuff like this, probably you've heard it somewhere." }, { "end": 938.92, "start": 933, "text": " I guess I might not be the inventor of this, but it is a good sounding word and" }, { "end": 946.32, "start": 938.92, "text": " it sort of fits. They say we generate a large number of synthetic" }, { "end": 950.88, "start": 946.32, "text": " reference candidate pairs. 
What they're going to do is take a" }, { "end": 956.68, "start": 950.88, "text": " bunch of text and in their case I think it's Wikipedia. For each Wikipedia" }, { "end": 963.16, "start": 956.68, "text": " article they're going to draw" }, { "end": 968.56, "start": 963.16, "text": " sentences or samples or paragraphs from Wikipedia. These are going to be" }, { "end": 976.8399999999999, "start": 968.56, "text": " your Z and then they're going to kind of muddle with them a bit. They're going to" }, { "end": 982.4799999999999, "start": 976.8399999999999, "text": " disturb them a bit, to make them a bit different, to make them go Z tilde." }, { "end": 987.24, "start": 982.48, "text": " This simulates the difference between what the machine translation" }, { "end": 991.64, "start": 987.24, "text": " outputs and the gold standard sentence. They're usually not exactly the same," }, { "end": 995.16, "start": 991.64, "text": " if you translate a sentence there are many ways you can do it. Their goal" }, { "end": 1001.44, "start": 995.16, "text": " is to produce a data set that has sentences and perturbed versions" }, { "end": 1006.6, "start": 1001.44, "text": " of the sentence, but not perturbed randomly, but perturbed in a" }, { "end": 1012.5600000000001, "start": 1006.6, "text": " language knowledgeable way. How do they do this?" }, { "end": 1020.6, "start": 1012.5600000000001, "text": " They have three different ways. First of all mask filling with BERT. What" }, { "end": 1024.4, "start": 1020.6, "text": " they're doing is they take a BERT that can do language modeling, a" }, { "end": 1028.84, "start": 1024.4, "text": " pre-trained BERT. Let's again say we have this text right here and they" }, { "end": 1035.04, "start": 1028.84, "text": " simply drop out two words or so and fill them in again with BERT. BERT might" }, { "end": 1039.24, "start": 1035.04, "text": " produce the same words or it might produce slightly different words." }, { "end": 1043.2, "start": 1039.24, "text": " Depending on how many you drop out you can choose the amount that you" }, { "end": 1053.8, "start": 1043.2, "text": " perturb these sentences. The second is they back translate. What they do" }, { "end": 1059.8799999999999, "start": 1053.8, "text": " with back translation is they use a machine translation model. It doesn't" }, { "end": 1066.72, "start": 1059.88, "text": " matter which one you take. They use any machine translation model to take a" }, { "end": 1072.92, "start": 1066.72, "text": " sentence and then they map it to another language, say from English to" }, { "end": 1081.8000000000002, "start": 1072.92, "text": " French. This is Z French and then they map it back again. The Z tilde is" }, { "end": 1087.16, "start": 1081.8000000000002, "text": " now the French to English translation. You need two translation models. First" }, { "end": 1092.1200000000001, "start": 1087.16, "text": " you translate it to French and then you translate it back again. That would" }, { "end": 1095.44, "start": 1092.1200000000001, "text": " sometimes give you the same sentence but often it will give you sort of a" }, { "end": 1101.52, "start": 1095.44, "text": " paraphrase of the sentence that you had at the beginning. That would be" }, { "end": 1107.92, "start": 1101.52, "text": " the second version that you could make pairs of sentences that are sort of" }, { "end": 1113.24, "start": 1107.92, "text": " similar. The third way is just to drop out words. They just found this to" }, { "end": 1119.88, "start": 1113.24, "text": " help. 
Now they have a giant data set of sentences and perturbed versions of" }, { "end": 1125.48, "start": 1119.88, "text": " sentences. What are they going to do with that giant data set? The answer is" }, { "end": 1132.52, "start": 1125.48, "text": " they're going to take this Z and Z tilde and you're going to put that into BERT" }, { "end": 1138.8, "start": 1132.52, "text": " into their thing that they prime now. This is the priming stage. This" }, { "end": 1141.84, "start": 1138.8, "text": " was pre-trained on mask language modeling. Now they want to prime it. What" }, { "end": 1146.32, "start": 1141.84, "text": " are they going to do? They're going to take this CLS vector. Of course this" }, { "end": 1151.48, "start": 1146.32, "text": " is not the final task and we don't have final labels for these two things." }, { "end": 1157.6399999999999, "start": 1151.48, "text": " We need to somehow come up with our own tasks and labels for them. They" }, { "end": 1166.8, "start": 1157.6399999999999, "text": " decide to go a whole bunch of tasks. They go like... I don't even" }, { "end": 1172.56, "start": 1166.8, "text": " know. They go eight or so or five or so different tasks. They construct different" }, { "end": 1179.28, "start": 1172.56, "text": " tasks to perform with these two things. This could be metrics like BLÖ or" }, { "end": 1185.12, "start": 1179.28, "text": " RÖSCH or this BERT score right here. You simply calculate the n-gram overlap" }, { "end": 1191.24, "start": 1185.12, "text": " between Z and Z' that would be one of the scores. It could be the back" }, { "end": 1196.3999999999999, "start": 1191.24, "text": " translation likelihood which is how likely does a back translation model" }, { "end": 1202.72, "start": 1196.4, "text": " assess the sentence. Here are all the things. Six different tasks." }, { "end": 1210.24, "start": 1202.72, "text": " The catch here is... What would happen for example with" }, { "end": 1217.0400000000002, "start": 1210.24, "text": " BLÖ is you would take a model and you would calculate the BLÖ score between" }, { "end": 1221.8400000000001, "start": 1217.0400000000002, "text": " those two things. But you wouldn't input that into BERT. You would ask BERT to" }, { "end": 1229.9599999999998, "start": 1221.84, "text": " predict the BLÖ score. BERT would be outputting B hat and B would be" }, { "end": 1235.72, "start": 1229.9599999999998, "text": " the actual BLÖ score. You would train BERT to predict the BLÖ score of this" }, { "end": 1242.28, "start": 1235.72, "text": " particular pair of inputs. One you take as the input and the" }, { "end": 1247.52, "start": 1242.28, "text": " other one you take as the reference. You would ask BERT to predict the" }, { "end": 1252.24, "start": 1247.52, "text": " BLÖ score of this. To predict the RÖSCH score. You would ask all of these" }, { "end": 1257.72, "start": 1252.24, "text": " signals. You ask the same model. You ask to predict all of these scores" }, { "end": 1262.76, "start": 1257.72, "text": " for these two things. You can calculate all of these scores by either" }, { "end": 1269.36, "start": 1262.76, "text": " BLÖ is like a script you run or you have some other model like a pre-trained" }, { "end": 1276.76, "start": 1269.36, "text": " translation model that you use to assess the... that you ask how good is this in" }, { "end": 1282.76, "start": 1276.76, "text": " terms of this particular task back translation and then you try to predict" }, { "end": 1287.44, "start": 1282.76, "text": " that score. 
It's important you're not training the model to perform these" }, { "end": 1293.36, "start": 1287.44, "text": " tasks. These tasks you already have another model that's specialized to" }, { "end": 1299.56, "start": 1293.36, "text": " these particular tasks and you simply ask them to score the input. You have" }, { "end": 1304.52, "start": 1299.56, "text": " an entailment model that outputs how much by how much does the second sentence" }, { "end": 1308.84, "start": 1304.52, "text": " entail the first that basically means does the second sentence follow from" }, { "end": 1314.72, "start": 1308.84, "text": " the first and of course this is not you know it's not actually proper input data" }, { "end": 1318.72, "start": 1314.72, "text": " for that task but you can still ask the model and if these are good" }, { "end": 1322.56, "start": 1318.72, "text": " translations of each other if these sentences match then the second one" }, { "end": 1328.6399999999999, "start": 1322.56, "text": " should probably follow fairly well for the first but at least you can if you" }, { "end": 1333.08, "start": 1328.6399999999999, "text": " make BERT predict that it will learn something useful about the relation" }, { "end": 1338.36, "start": 1333.08, "text": " between the two sentences. So the entire game name of the game in here is to come" }, { "end": 1344.1599999999999, "start": 1338.36, "text": " up with tasks that if BERT learns to predict the score of these tasks on" }, { "end": 1350.4399999999998, "start": 1344.1599999999999, "text": " those inputs sorry on pretending one is the input and the other one is the" }, { "end": 1356.3999999999999, "start": 1350.4399999999998, "text": " output or on the two inputs and then trying to predict the score then BERT" }, { "end": 1364, "start": 1356.4, "text": " would learn something useful. So that's the trick here is to" }, { "end": 1368.4, "start": 1364, "text": " come up with these pre-training tasks and you train them all at the same time" }, { "end": 1373.24, "start": 1368.4, "text": " and by doing it all at the same time and by doing it on many many different" }, { "end": 1377.3600000000001, "start": 1373.24, "text": " perturbations on these different tasks you hope that your model learns" }, { "end": 1384.2800000000002, "start": 1377.3600000000001, "text": " something it's kind of becoming attuned to the variations that language can have" }, { "end": 1389.52, "start": 1384.28, "text": " and what it needs to pay attention to and then you hope that if you then have" }, { "end": 1395.04, "start": 1389.52, "text": " done this then take this model and then do step three which is fine-tuning on" }, { "end": 1399.2, "start": 1395.04, "text": " the actual data you have you would guess that it becomes very good at that data" }, { "end": 1406.6, "start": 1399.2, "text": " but also it retains all of these abilities and generalizes better to" }, { "end": 1416.6799999999998, "start": 1406.6, "text": " other sort of distribution shifts. 
So that is the thing here and" }, { "end": 1424.24, "start": 1416.6799999999998, "text": " on this metric learning tasks they do outperform all other models" }, { "end": 1431.24, "start": 1424.24, "text": " right here and what I find interesting is down here where they now test for" }, { "end": 1443.68, "start": 1431.24, "text": " the distribution shift so what they're saying is okay this is all on" }, { "end": 1448.96, "start": 1443.68, "text": " data basically where you know we train on training data and evaluate on testing" }, { "end": 1452.2, "start": 1448.96, "text": " data and they're sort of the same they come from the same year from the same" }, { "end": 1458.52, "start": 1452.2, "text": " machine translation models and we don't really know how you know next" }, { "end": 1461.8799999999999, "start": 1458.52, "text": " year the machine translation models might be different thus our scores still" }, { "end": 1468.68, "start": 1461.8799999999999, "text": " hold so they try to simulate this by splitting the data and they introduce" }, { "end": 1474.24, "start": 1468.68, "text": " this skew factor so what they'll do is they'll split the data so usually as you" }, { "end": 1478.98, "start": 1474.24, "text": " can see right here the training date the ratings these are the human ratings the" }, { "end": 1487.76, "start": 1478.98, "text": " training data is sort of distributed like this would be the test data and the" }, { "end": 1491.92, "start": 1487.76, "text": " training data would almost be overlapping that if you can see like" }, { "end": 1499.16, "start": 1491.92, "text": " the dotted lines right here or so so you can see the overlap between the test and" }, { "end": 1504.2, "start": 1499.16, "text": " the trained of the human ratings is very close now they say we can we can skew" }, { "end": 1511.28, "start": 1504.2, "text": " that we can sort of filter the data such that in the training data only very bad" }, { "end": 1517.2, "start": 1511.28, "text": " sentences are and in the test data there are only very good sentences okay and" }, { "end": 1522.56, "start": 1517.2, "text": " this simulates the fact that you know we this might be the previous year's data" }, { "end": 1527.56, "start": 1522.56, "text": " that we train our metric on and then we we evaluate it on the next year's data" }, { "end": 1534.72, "start": 1527.56, "text": " where all the systems have become better and what this does is you can see right" }, { "end": 1542.48, "start": 1534.72, "text": " here the bottom axis is the test skew and the color here is the training skew" }, { "end": 1550.68, "start": 1542.48, "text": " okay so what interests what what we're interested in is to the right and down" }, { "end": 1557.72, "start": 1550.68, "text": " the colors so as these skew increases you can see right here that the the" }, { "end": 1562.52, "start": 1557.72, "text": " quality of the metric decreases okay the correlation with the human ratings" }, { "end": 1570.96, "start": 1562.52, "text": " decreases but it it still remains fairly well but especially the training skew if" }, { "end": 1575.6000000000001, "start": 1570.96, "text": " you update the train so if you make the training examples really bad so to say" }, { "end": 1580.72, "start": 1575.6000000000001, "text": " it the score just drops down and they can show pretty well here that if you" }, { "end": 1587.04, "start": 1580.72, "text": " add this pre training then the score except in this extreme case so the" }, { "end": 1591.48, "start": 1587.04, "text": " 
score for all of these it remains relatively high and especially remains" }, { "end": 1598.52, "start": 1591.48, "text": " above the BLEU score which is always sort of worse right so this is pretty" }, { "end": 1607.08, "start": 1598.52, "text": " neat and shows the power of this pre training basically that's" }, { "end": 1612, "start": 1607.08, "text": " the robustness to quality drift metric and they have a bunch of" }, { "end": 1617.32, "start": 1612, "text": " other metrics right here where they ablate and so on but I don't want to go" }, { "end": 1624.48, "start": 1617.32, "text": " too much into that I more want to make some comments on this work so" }, { "end": 1629.4, "start": 1624.48, "text": " first of all in a paper like this what I would like to see" }, { "end": 1637.28, "start": 1629.4, "text": " is the extrapolation right here to if and where this ever crosses the" }, { "end": 1642.92, "start": 1637.28, "text": " BLEU score right because I mean okay it seems like this skew of three" }, { "end": 1647.96, "start": 1642.92, "text": " is a big number but who knows if three is a big number right we can't" }, { "end": 1652.3600000000001, "start": 1647.96, "text": " assess that what we need to see is really the crossover point" }, { "end": 1658.1999999999998, "start": 1652.36, "text": " between the models to assess where it is no longer valid and so on" }, { "end": 1663.84, "start": 1658.1999999999998, "text": " the second part here is that my problem with this method of splitting the data I" }, { "end": 1669.08, "start": 1663.84, "text": " mean yes okay you split the bad from the good but it's not only that" }, { "end": 1674.32, "start": 1669.08, "text": " these things are getting better so right now everyone's using transformers" }, { "end": 1677.84, "start": 1674.32, "text": " everyone's using BERT for everything right and BERT is a specific" }, { "end": 1682.9599999999998, "start": 1677.84, "text": " architecture that is going to be good at specific things at specific grammatical" }, { "end": 1688.1599999999999, "start": 1682.9599999999998, "text": " constructs in specific languages right so the mistakes it makes are very" }, { "end": 1693.76, "start": 1688.1599999999999, "text": " systematic now if in one year or two years all of a sudden a new model" }, { "end": 1698.6, "start": 1693.76, "text": " pops up I don't know like someone discovers that graph neural networks are" }, { "end": 1702.8, "start": 1698.6, "text": " really good at machine translation these models are going to be attuned to a very" }, { "end": 1708.08, "start": 1702.8, "text": " very different set of constructs they might be better overall but they're" }, { "end": 1712.8799999999999, "start": 1708.08, "text": " going to make a different sort of mistake and so I think just assessing" }, { "end": 1719.04, "start": 1712.8799999999999, "text": " the skew via just dividing up the data into bad and good ratings I don't" }, { "end": 1724.1599999999999, "start": 1719.04, "text": " think that covers these distribution shifts that they set out to cover right" }, { "end": 1728.9199999999998, "start": 1724.1599999999999, "text": " what I would have expected is something like this because these tasks are" }, { "end": 1735.04, "start": 1728.92, "text": " repeated year after year and I would have expected them to for example train" }, { "end": 1741.8400000000001, "start": 1735.04, "text": " on
2017 and then evaluate on 2019 or something like that, or evaluate on" }, { "end": 1748.4, "start": 1741.8400000000001, "text": " 2017 2018 and 2019 and there we would have a much better assessment of" }, { "end": 1755.5600000000002, "start": 1748.4, "text": " distribution shift over the years right so it is not super convincing to me" }, { "end": 1760.8799999999999, "start": 1755.56, "text": " and what is most worrisome is if you look at their pre training tasks I mean" }, { "end": 1767.84, "start": 1760.8799999999999, "text": " okay there's BLEU and ROUGE but there is BERTScore right there is" }, { "end": 1772.96, "start": 1767.84, "text": " entailment which is also a BERT model and the back translation I mean who" }, { "end": 1778.1599999999999, "start": 1772.96, "text": " knows that's probably either going to be a transformer or an LSTM with an" }, { "end": 1782.96, "start": 1778.1599999999999, "text": " attention mechanism and the attention mechanism is the basis for" }, { "end": 1788.44, "start": 1782.96, "text": " transformers so all of these things are basically going to make the same sort of" }, { "end": 1794.68, "start": 1788.44, "text": " biased mistakes right it's not like there is Gaussian" }, { "end": 1800.16, "start": 1794.68, "text": " noise on top of these things all of these things are going to be weak in the" }, { "end": 1805.16, "start": 1800.16, "text": " same sort of assessments they're going to have" }, { "end": 1809.8, "start": 1805.16, "text": " systematic errors in them with respect to predicting the human scores and" }, { "end": 1817.24, "start": 1809.8, "text": " the systems we evaluate are often also using exactly these things right so" }, { "end": 1823.32, "start": 1817.24, "text": " the systems we evaluate are using the same type of models as here" }, { "end": 1827.8799999999999, "start": 1823.32, "text": " they're going to fall prey to the same type of mistakes and then if we switch" }, { "end": 1833.24, "start": 1827.8799999999999, "text": " over to systems that use something different right so next year we have some systems" }, { "end": 1840.6, "start": 1833.24, "text": " that use different techniques they're going to be maybe not bad" }, { "end": 1845.2, "start": 1840.6, "text": " in these particular things but bad in other things and then this thing will" }, { "end": 1849.96, "start": 1845.2, "text": " output a systematically biased assessment so it's sort of a house of cards" }, { "end": 1855.28, "start": 1849.96, "text": " it's like these images of plugging the power strip into itself" }, { "end": 1860.48, "start": 1855.28, "text": " and you have infinite power to me it seems very dangerous" }, { "end": 1867.2, "start": 1860.48, "text": " to have such an overlap of architectures and methods to evaluate" }, { "end": 1874.68, "start": 1867.2, "text": " systems as you have in the systems themselves but I hope this will be" }, { "end": 1880.68, "start": 1874.68, "text": " regularly checked with human scores and assessed as to how much these systems" }, { "end": 1886.08, "start": 1880.68, "text": " are out of sync or in sync with humans all right this was it for me for BLEURT" }, { "end": 1891.1599999999999, "start": 1886.08, "text": " check it out they have the code available the metric is available evaluate your" }, { "end": 1919.72, "start": 1891.16, "text": " stuff with it and bye bye" } ]
4GKCxJQSw-g
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Synthetic Petri Dish: A Novel Surrogate Model for Rapid Architecture Search (Paper Explained)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "nas", "nao", "uber", "openai", "architecture search", "neural architecture search", "inner loop", "inner optimization", "small", "abstract", "turing", "performance", "evolutionary algorithm", "outer loop", "mlp", "sigmoid", "ptb", "rnn", "cell", "meta-learning" ]
Neural Architecture Search is usually prohibitively expensive in both time and resources to be useful. A search strategy has to keep evaluating new models, training them to convergence in an inner loop to find out if they are any good. This paper proposes to abstract the problem and extract the essential part of the architecture to be optimized into a smaller version and evaluates that version on specifically custom learned data points to predict its performance, which is much faster and cheaper than running the full model. OUTLINE: 0:00 - Intro & High-Level Overview 1:00 - Neural Architecture Search 4:30 - Predicting performance via architecture encoding 7:50 - Synthetic Petri Dish 12:50 - Motivating MNIST example 18:15 - Entire Algorithm 23:00 - Producing the synthetic data 26:00 - Combination with architecture search 27:30 - PTB RNN-Cell Experiment 29:20 - Comments & Conclusion Paper: https://arxiv.org/abs/2005.13092 Code: https://github.com/uber-research/Synthetic-Petri-Dish Abstract: Neural Architecture Search (NAS) explores a large space of architectural motifs -- a compute-intensive process that often involves ground-truth evaluation of each motif by instantiating it within a large network, and training and evaluating the network with thousands of domain-specific data samples. Inspired by how biological motifs such as cells are sometimes extracted from their natural environment and studied in an artificial Petri dish setting, this paper proposes the Synthetic Petri Dish model for evaluating architectural motifs. In the Synthetic Petri Dish, architectural motifs are instantiated in very small networks and evaluated using very few learned synthetic data samples (to effectively approximate performance in the full problem). The relative performance of motifs in the Synthetic Petri Dish can substitute for their ground-truth performance, thus accelerating the most expensive step of NAS. Unlike other neural network-based prediction models that parse the structure of the motif to estimate its performance, the Synthetic Petri Dish predicts motif performance by training the actual motif in an artificial setting, thus deriving predictions from its true intrinsic properties. Experiments in this paper demonstrate that the Synthetic Petri Dish can therefore predict the performance of new motifs with significantly higher accuracy, especially when insufficient ground truth data is available. Our hope is that this work can inspire a new research direction in studying the performance of extracted components of models in an alternative controlled setting. Authors: Aditya Rawal, Joel Lehman, Felipe Petroski Such, Jeff Clune, Kenneth O. Stanley Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Hi there! Today we're looking at Synthetic Petri Dish, a novel surrogate model for rapid architecture search, by Aditya Rawal, Joel Lehman, Felipe Petroski Such, Jeff Clune and Kenneth O. Stanley. On a high level, this paper basically says: if you want to do neural architecture search, for example to find a better non-linearity, you should be able to extract that non-linearity, instantiate it in a very small network, and then evaluate that very small network in order to predict the performance of the large network. That way you can find a better non-linearity in much less time. The exact procedure for how you do this in the small network is the topic of this paper. As always, if you like content like this, I encourage you to subscribe if you aren't already, and to share out the video so other people can experience the joy themselves. Alright, let's dive in. They say in the abstract that neural architecture search explores a large space of architectural motifs. That basically means you want to find a neural architecture. Let's say you have a multi-layer perceptron right here, a couple of layers, and they're all connected by feed-forward weights. Each of these connections is basically a multiplication of x by your weight w, and then there is a non-linearity. The non-linearity could be a sigmoid, which is something like 1 / (1 + e^(-x)). Now there's an extension of the sigmoid where you attach a temperature or slope parameter c, so you compute 1 / (1 + e^(-cx)). For one value of c the sigmoid has a gentle slope, and for a different value you can make it much steeper, almost step-like. You know what I mean. So this c right here can potentially change the behavior of your network, and you want to find a good parameter c; this is a hyperparameter. There are many hyperparameters like this: for example, how many units you have in a particular layer; in a CNN it could be your filter size; in a transformer it could be the number of heads, and so on. It could actually be not only the slope of the non-linearity but the non-linearity itself. Or, famously, in recurrent neural networks you have these recurrent cells: you have an input signal and a carry signal, the input is multiplied element-wise here, then there is a gate with a non-linearity, that is multiplied by the carry, but there's also a forget gate and whatnot, and a minus right here. It's very complicated, and so people do architecture search over these kinds of problems to find better architectures for particular tasks. Now the problem is, of course: how do you know if a given architecture is good?
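Just to make that concrete, here is a minimal sketch of such a slope-parameterized sigmoid (my own illustration in Python, not code from the paper):

    import numpy as np

    def sigmoid_c(x, c):
        # Standard sigmoid 1 / (1 + e^(-x)) with an extra slope parameter c:
        # large c makes it step-like, c near 0 flattens it so no signal passes.
        return 1.0 / (1.0 + np.exp(-c * x))

    # The architectural "motif" being searched over is just the scalar c;
    # c = 1 recovers the ordinary sigmoid.
    x = np.linspace(-4, 4, 9)
    for c in (0.1, 1.0, 4.0):
        print(c, np.round(sigmoid_c(x, c), 2))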
What you have to do is take that cell you have dreamed up, the one you think is a good cell, and train it on the full training data set, then evaluate it on your validation data set. Then you have a number: okay, this one is an 8, good. Then you go back and say: what if I change this cell here, what if I change this to a plus instead of a minus? And you do the entire thing again, train it for who knows how long, validate it, and this one is like a 9, and you say, oh cool, that's a 9. This is a very basic architecture search, and there has been a lot of development in this space, like evolutionary search and so on, but most of the time these methods require pretty much evaluating the entire thing on the full data, so that you get a good estimate of what your final performance back here is going to be. Now people have come up with methods to counter that. They say: what if we can encode the cell structure (let's go with the RNN cell) in a sort of continuous way? We can encode text in a continuous way, so we could also encode a cell structure, because I can write the cell structure down as an equation. I can say: it's the forget gate of the carry, times the sigmoid output of x, plus (this is the plus here) the sigmoid output of x multiplied by the input, let's call that i. Something like this. This is text, I can write it down, and then I can encode it into a vector. I can for example build another RNN, ironically, to encode it, or I can represent it as a computation graph, like it is here, and use a graph neural network to encode it into a single vector. Then I have sort of an embedding space where each cell I could build is a point in that embedding space. I can evaluate a couple of them: this one here, this one here, this one here. I train these cells, do the full training and evaluation, and get their scores. Then I can learn a predictor in this latent space. I can say: here I got an 8, there a 9, there a 2, and there a 4, so it appears the good cells are in this direction. Then I can sample again, or I can even do gradient descent in this space, since it is now a continuous space and I have a model that gives me this space. So this method basically tries to take in the building plan of a cell and learn to predict the performance just by looking at it. If you're thinking of a Turing machine right now: I immediately thought of the halting problem, because it appears to be exactly that. You're trying to build a machine that takes the building plan of another machine and tries to predict its performance. In a general sense, we can already state that the difficulty of this problem is sort of equivalent to the difficulty of the original problem. So I'm not sure, but it appears to work if you throw lots of compute at it, and of course that's a problem, you need lots of compute. So either your option one is to run all of these things and iterate them in an evolutionary way, or your option two is to take the building plan and predict the performance from that.
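As a rough illustration of that second option (my own toy version with random placeholder encodings and scores; the real systems learn the encoder itself, for example with an RNN or a graph network), predicting performance from a building-plan encoding could look like:

    import numpy as np

    # Pretend each evaluated cell has already been encoded as a feature vector.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(50, 8))   # 50 evaluated cells, 8-dim encodings
    y = rng.normal(size=50)        # their measured validation scores

    # Fit a ridge-regression surrogate: predicted score = encoding . w
    lam = 1e-2
    w = np.linalg.solve(X.T @ X + lam * np.eye(8), X.T @ y)

    # Rank unseen candidates purely from their encodings, no training runs.
    candidates = rng.normal(size=(1000, 8))
    best = candidates[np.argmax(candidates @ w)]
    print(best)

The trouble, as discussed, is that such a surrogate only ever sees the plan, never the behavior, so it can at best interpolate between plans it was trained on.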
Both are not satisfactory, and both use lots of compute. Now Synthetic Petri Dish is a way to combine the two. It says: can't we take the building plan here, but actually run it on data to predict the performance? What they're saying is basically this: if I have this cell right here, usually this cell deals with vectors of, let's say, size 512. Let me draw it again up here: here you have the cell, here you have the connections in there, the carry and the input, and this is the output, or the carry. The embedding size is 512, so this is a giant cell; the vector going in has 512 dimensions. Can't I take the exact same thing but keep only the connection pattern? I would keep the entire pattern of connections right here, but I only do it for one or two dimensions. So this is 512, and this is just two: I reduce the dimensionality, but I keep the connection pattern alive. If I only do that, I have a very small network. The same goes for layers: a lot of times these RNNs have multiple layers of these things, another exactly equal box up here and another up here, and I can just reduce this to one layer. Out of the regularity of these neural networks, one can make the assumption that the performance of this small thing will be correlated with the performance of the entire thing, and that's one of the things this petri dish paper does. We take out what we are trying to search over, namely the connection pattern, keep that as it is up here, but reduce everything else: the dimensionality, the number of layers, and so on. (They don't actually reduce the number of layers here, but you can reduce the number of units and so on.) In essence, this works whenever you can keep the structure you're searching over but reduce the rest. That's one precondition, so it doesn't work for everything. The second part is that you don't want to use the original training data and the original validation data, first because it's a lot of training data, and second because it won't give you that good a prediction of what you're trying to estimate. This is the second part of the petri dish idea: you abstract the training data down to a very small data set, and the validation data as well, such that if you train on this small data and evaluate on this small data, the performance you get is very predictive of the performance you would have gotten had you trained the big model on the big data set. In fact, in this petri dish paper, these little data sets have nothing to do with the original train and validation data, and I think that's one of the cool things here: this training data and this validation data are themselves optimized by the procedure. They are special data points, trained parameters, constructed such that if you train on the small training data and evaluate on the small eval data, you will be able to predict this performance back here with high accuracy. And this, I think, is where previous approaches might have failed.
The idea of scaling down your network in order to do architecture search has probably occurred to many people before; that's not really a genius idea. But presumably they all found that you can't really do it, that it doesn't give accurate enough numbers. In this case, the addition of these synthetic data sets, which are much smaller but such that training and evaluating on them still predicts the full score of the full model with high accuracy, is what I think makes this idea work. Alright, so I guess we're already through the idea and the problem setting without actually reading the paper. They give this example right here at the beginning: you have a two-layer, 100-wide MNIST network, so two layers, I think an MLP, with a non-linearity that is this sigmoid right here. You can see it has this temperature or slope parameter, and you want to do neural architecture search to find the best slope parameter. Usually you would just do a grid search, but this is an example; in general this can be much higher dimensional, and then you don't want to do grid search anymore. So what do we do? If you look at the 100-wide MNIST network, we can draw it right here: this is a 100-dimensional MNIST network, and each connection here first has a weight and then the sigmoid non-linearity, and the sigmoid non-linearity is parameterized by the parameter c. You have many of them, one here and so on, and each network gets its own c; each of these networks represents one blue dot here. So if you let c vary (this sigmoid slope value right here, that's your parameter c), train the big network on the entire data set to convergence for each value, and then evaluate on the validation data set, you get the blue curve. If you start over here and reduce the slope, you gain in performance, but if you reduce it too much, you drop drastically, until at zero the signal doesn't propagate anymore and no learning occurs at all. So that's the original performance. Now, what if I only give you training data in this range right here? I only show you this particular range; I can't actually zoom in that much.
not on the large network but on a small network in fact their network is just one unit sorry one unit and then another unit so it's just a two hidden layer but just with one unit instead of 100 and of course you can't feed in M nest right here right but we said they don't feed in the data they actually feed in their synthetic data that they learn so you give them the points here and they learn the synthetic data to to evaluate to evaluate the others and then once you ask them well if if my C is right here what's the performance going to be it's going to instantiate that in its small network it is going to use the training data that it has learned from this region right here in order to train this and then it's going to evaluate this on the synthetic validation data that is also learned on the training data and it is going to come up with a performance metric it says okay this is how good it's going to be and since it is an approximation in its building plan to the entire network it will react similarly so it will get that there is this performance dip right here okay so it you can see how this sort of makes sense you are actually running an approximation to the actual program instead of just looking at the plan of the program and trying to predict it which you know halting problem says hello okay so that is the motivating example of their MNIST thing and here is the entire algorithm all right so you take MNIST training and validation data and you instantiate a bunch of really big networks this is ground truth okay you you need this you need this to learn from you instantiate a bunch of really big networks now if I draw the graph from before right we had this was the performance of the actual networks you want you this comes from here from this region right here this is the training data okay so you instantiate a bunch of these networks each one you instantiate in one of them right each one gives rise to a different non-linearity and you do the full training ground truth training and evaluation on the full training set and the full validation set and you get validation losses right for each of these and these are the points right here now you that's the training data for your neural for your neural architecture search so for your petri dish method what the petri dish does is it says it extracts the motive and the motive is the thing that you optimize over so as I said you want to keep that thing in its essence but you want to reduce everything else so it reduces it instead of from a two layer on hundred wide MLP it reduces that to a two layer single neuron wide MLP okay and it now this over here is the training data for the procedure that we're going to do now so what it would take is it would take it would take one of these values it would instantiate and we have that here would instantiate the neural network in the small form of that and now we know that if I train the full data and evaluate if I train on the full training data and evaluate on the full validation data I should get this accuracy all right so I will create and we're going to look at in a second I will create training and validation data such that if I train on this training data and then validate on this validation data I get the same validation loss as if I had trained the big network with the same you know the same C parameter on the full training data and evaluate in the full validation data okay so in this step I'm optimizing the data here the training and validation data all right and now in the second step once I have 
And now, in the second step, once I have this training and validation data, I can basically reproduce this graph right here, and then I can go and actually ask my model: okay, now please tell me what happens over here. So what am I going to do? I'm going to take that c, instantiate it, use the training data I learned to train it, use the validation data I learned to evaluate it, and it's going to give me a number, and that number is hopefully going to be close to this one. That is how we can extrapolate using this method. Now, there are a number of assumptions right here, and you can imagine this doesn't work in every situation. It works if you abstract the correct things. I said you need to reduce everything else, and notably, you see, they reduce the 100-wide network to a single-neuron-wide MLP and sort of guess that this doesn't change the fundamental thing. But you can also see that they leave it at two layers, and I can almost guarantee you that they tried reducing this to a one-layer neural network and it did not work. So you have to be very careful about which quantities you abstract and which you don't. You might think, oh, I can always reduce the number of dimensions or channels; that's also not always the case. I think that's kind of the crux of the method: you have to engineer this compression of the architecture such that its properties are still kept. But how do you actually produce training and validation data that match these losses? There are a number of ways, but what comes to mind is meta-learning, because that is essentially what they're doing. They initialize the training and validation data at random points, so these are just random at the beginning, and then they optimize the data itself using gradient descent: synthetic training data, randomly initialized, optimized by gradient descent. They have this inner training loop, which is many steps of inner training, and then they have the outer loss, which is the difference between the validation loss after the inner training loop and the true validation loss, and they do gradient descent on this outer loss. Now this outer loss is a result of the inner loss, the inner loss is a result of the inner training procedure, and the inner training procedure is n steps of feeding in the training data; every step you feed in the training data. So your computation graph is going to look like this: here's your training data, x_train, and here are your initial parameters, randomly initialized. In the first step you use the training data to produce theta 1, in the second step you use your training data again to produce theta 2, then you use it again to produce theta 3, and so on. Each time you feed in the training data in order to evolve your parameters to give you a better prediction. So, since somewhere back here there's a loss, the gradient will have to flow back through all of these paths and through all of these connections to the training data. You essentially backpropagate through an optimization procedure, and you have this a bunch of times. I've looked at the code, and the code is really crazy, it looks like proper research code, but it appears that that's actually what's happening: they backprop through the optimization procedure to find this synthetic training and validation data.
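To make that inner/outer structure concrete, here is a minimal sketch of backpropagating through an unrolled inner training loop (a toy version under my own assumptions, with tiny shapes and made-up placeholder ground-truth values, not the authors' actual code):

    import torch

    def motif_net(x, w1, w2, c):
        # Tiny "petri dish" instantiation of the motif: two layers, one unit
        # wide, with the sigmoid slope c as the motif under evaluation.
        h = torch.sigmoid(c * (x @ w1))
        return torch.sigmoid(c * (h @ w2))

    def petri_dish_loss(c, x_tr, y_tr, x_va, y_va, steps=20, lr=0.5):
        # Fresh random motif weights for every evaluation.
        w1 = (0.5 * torch.randn(1, 1)).requires_grad_()
        w2 = (0.5 * torch.randn(1, 1)).requires_grad_()
        for _ in range(steps):
            loss = ((motif_net(x_tr, w1, w2, c) - y_tr) ** 2).mean()
            g1, g2 = torch.autograd.grad(loss, (w1, w2), create_graph=True)
            w1, w2 = w1 - lr * g1, w2 - lr * g2  # differentiable SGD step
        return ((motif_net(x_va, w1, w2, c) - y_va) ** 2).mean()

    # The synthetic data points themselves are the trainable parameters.
    x_tr = torch.randn(10, 1, requires_grad=True)
    y_tr = torch.rand(10, 1, requires_grad=True)
    x_va = torch.randn(10, 1, requires_grad=True)
    y_va = torch.rand(10, 1, requires_grad=True)
    opt = torch.optim.Adam([x_tr, y_tr, x_va, y_va], lr=1e-2)

    # (c, ground-truth validation loss) pairs from the expensive full runs;
    # these numbers are placeholders, not results from the paper.
    ground_truth = [(0.5, 0.30), (1.0, 0.10), (2.0, 0.05)]
    for _ in range(200):
        opt.zero_grad()
        outer = sum((petri_dish_loss(torch.tensor(c), x_tr, y_tr, x_va, y_va)
                     - target) ** 2 for c, target in ground_truth)
        outer.backward()  # gradient flows through the whole unrolled inner loop
        opt.step()

The point of the sketch is that the outer gradient reaches the synthetic data only by flowing back through every inner SGD step, which is exactly why you can't unroll for very many steps.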
Now, that's crazy, but it also limits how far you can go with this, because usually you can't backprop through more than a couple of steps of optimization. The fact that the inner model is small helps, but this also introduces brittleness: if you backprop through an optimization procedure like this, things tend to be very brittle, and so I think that's another place where you have to pay careful attention. Alright, that's basically it. The last thing they say is that they can combine this with architecture search: not only can you predict which architectures are good, you can use that prediction to inform your neural architecture search. Instead of the architecture search having to evaluate all of the candidates it produces, it only has to evaluate the very small subset of candidates that the Synthetic Petri Dish deems most worthy of being evaluated. Instead of evaluating all of the things here, it would limit itself to whatever the Synthetic Petri Dish says are the highest performing ones, because if the Synthetic Petri Dish is any good, then it will give accurate predictions of how they're performing. And that can go on for multiple rounds: the architecture search comes up with new things that it thinks are better, through something like an evolutionary mutation algorithm; the petri dish evaluates them in the synthetic way and then suggests, say, the 10 best candidates to evaluate on the full data set, and that way you don't have to evaluate all of the thousand candidates. Alright, cool. They do this for MNIST, and they also do it for finding an RNN cell for the Penn Treebank. This is a language modeling task and a benchmark for neural architecture search, where you're trying to find a good RNN cell to get the perplexity really low. And here you can see: if they give the same amount of data to all the methods, then the benchmark neural architecture search is worse than the Synthetic-Petri-Dish-informed architecture search. One has to say, on the full data I believe NAO gets to about here, but of course, if you give all of them the same data, the petri dish beats this method, and I think this method still uses way more compute, because it always has to evaluate all the candidates. And that is exactly one of those approaches where I learn an architecture to predict another architecture's performance just by looking at it; so it works, but it doesn't work as well as actually running the architecture in an abstracted fashion. This also shows you the importance of selecting your experimental evaluation in a smart way. They argue at length why it makes sense to evaluate everything on reduced data, such that their method can be better and they don't have to compare to the full thing. It's easier for them to work on reduced data, and they argue it's what people usually do in practice, and that's the task they focus on. So, you know, good paper writing right here.
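Schematically, that petri-dish-informed search might look like the loop below (again my own sketch: mutate and full_ground_truth_eval are hypothetical stand-ins, and it reuses petri_dish_loss and the synthetic data from the sketch above):

    import random

    def mutate(c):
        # Toy mutation: jitter the slope (a stand-in for real motif edits).
        return max(1e-3, c + random.gauss(0.0, 0.2))

    def full_ground_truth_eval(c):
        # Stand-in for an expensive full training run on the real data;
        # pretends the best slope is 1.5 so the sketch runs end to end.
        return abs(c - 1.5)

    population = [1.0]  # start from some initial motif, e.g. slope c = 1
    for generation in range(10):
        # Cheap: propose many candidates and score them all in the petri dish.
        candidates = [mutate(c) for c in population for _ in range(100)]
        candidates.sort(key=lambda c: float(
            petri_dish_loss(torch.tensor(c), x_tr, y_tr, x_va, y_va)))
        # Expensive: ground-truth-train only the top few survivors.
        scored = sorted((full_ground_truth_eval(c), c) for c in candidates[:10])
        population = [c for _, c in scored[:5]]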
Yeah, that's basically it for the paper. There are a lot of things to be said here. I think this works in very, very limited settings, and it seems to me that it's sort of brittle with respect to how you abstract. There is also always the question of how large this synthetic training data is; in their case, they abstract it down to something like 20 or 30 data points. It seems to me that, since you're optimizing this training data with gradient descent, what you would mainly find are adversarial examples for this particular architecture. I'm going to guess that the inner optimization is very noisy, because if you really let your optimizer run, it will abuse every single thing it can to match that validation loss, and that will usually lead to an adversarial example, since you're optimizing the data itself. So I think this suffers from that. We had the same thing with planning in learned world models in reinforcement learning: if you have a really, really good planner, it will just abuse the mistakes that you make in approximating the true world. The same here: you're going to make mistakes approximating this architecture, and the better your optimizer is at producing this synthetic data, probably the worse the result is actually going to be. The losses will match, because that's what you train for, but they will match for the wrong reason, because now you're just finding adversarial examples for your particular training data. Another concern I have here is with respect to the double descent phenomenon. If you know double descent: here you have your number of parameters, and here you have your validation loss, let's say. You know that if I add parameters, I can make my validation loss go down; this is assuming I have a model with p parameters and I always train it on the training data to convergence. If I add parameters, I can generalize better, until a point where I add too many parameters, I start overfitting, and my validation loss goes up again. But the double descent phenomenon, and I think I've made a video on this, shows that after a certain threshold, the interpolation threshold, the validation loss actually goes down again, even further. Now, this is a very strange phenomenon by itself, but I'm sort of concerned that if you do the abstraction this paper proposes, say your full model is here, with a large number of parameters, past this interpolation threshold, and you now seriously reduce the number of parameters because you want to go into this petri dish, then you will maybe cross the interpolation threshold and actually end up on the other side of the curve right here. Of course, at the same time you reduce the amount of data, which would push you over here again, but it is different data, so I'm not sure how all of this is going to play out. It appears to work in these settings right here, but I think this is applicable only in some situations, and it would be very cool if we developed this further such that we understand when it applies and when we can use it, because I feel this can be a very cool thing if we understand it better and can apply it throughout. Alright, that's the end. If you like this paper, leave a comment; if you didn't like it, leave a comment. And bye bye, see you next time.
[ { "end": 4.96, "start": 0, "text": " Hi there! Today we're looking at Synthetic Petri dish, a novel surrogate model for" }, { "end": 11.040000000000001, "start": 4.96, "text": " rapid architecture search by Adi Tarawal, Joel Lehman, Felipe Petroski-Sucs, Jeff" }, { "end": 18, "start": 11.040000000000001, "text": " Klun and Kenneth O. Stanley. This paper on a high level, it basically says if you" }, { "end": 22.44, "start": 18, "text": " want to do neural architecture search, if you for example search for a better" }, { "end": 28.96, "start": 22.44, "text": " non-linearity, you should be able to extract that non-linearity instantiated" }, { "end": 34, "start": 28.96, "text": " in a very small network and then evaluate that very small network in order" }, { "end": 38.32, "start": 34, "text": " to predict the performance of a large network and therefore you can find a" }, { "end": 44.88, "start": 38.32, "text": " better non-linearity in much less time. Now the exact procedure how you do this" }, { "end": 50.32, "start": 44.88, "text": " in the small network is the topic of this paper. As always if you like content" }, { "end": 55.28, "start": 50.32, "text": " like this I encourage you to subscribe if you are not already and to share out" }, { "end": 61.68, "start": 55.28, "text": " the video so other people can experience the joy themselves. Alright, let's dive" }, { "end": 67.92, "start": 61.68, "text": " in. So they say in the abstract, neural architecture search explores a large" }, { "end": 75.56, "start": 67.92, "text": " space of architectural motives. So it basically means you want to find" }, { "end": 81.44, "start": 75.56, "text": " a neural architecture, let's say you have a multi-layer perceptron right here, a" }, { "end": 87, "start": 81.44, "text": " couple of layers, okay, and they're all connected by you know feet forward" }, { "end": 92.32, "start": 87, "text": " weights whatnot and each of these weights basically is a multiplication. So" }, { "end": 97.8, "start": 92.32, "text": " each one of these is a multiplication of X by your weight W and then there is a" }, { "end": 104.68, "start": 97.8, "text": " non-linearity. So the non-linearity could be a sigmoid. So the sigmoid would be" }, { "end": 110.44, "start": 104.68, "text": " something like 1 over 1 plus e to the negative X. Now there's a bit of an" }, { "end": 115.08, "start": 110.44, "text": " extension in a sigmoid where you can do a sigmoid that has like a temperature" }, { "end": 121.36, "start": 115.08, "text": " parameter attached or a slope parameter where you go CX. So in one case you can" }, { "end": 127.75999999999999, "start": 121.36, "text": " set C such that the sigmoid has a shape like this and then if you put C to a" }, { "end": 133.24, "start": 127.75999999999999, "text": " different value you can make this slope, you can make it like a shape, well this" }, { "end": 138.96, "start": 133.24, "text": " is terrible, like this. You know what I mean. Okay so this this C right here can" }, { "end": 144.12, "start": 138.96, "text": " potentially change the behavior of your network and you want to find a good" }, { "end": 148, "start": 144.12, "text": " parameter C and this is a hyper parameter. 
Now there are many hyper" }, { "end": 153.24, "start": 148, "text": " parameters like this for example how many units you have in a particular layer" }, { "end": 158.32, "start": 153.24, "text": " in a CNN it could be your filter size in a transformer could be the number of" }, { "end": 164.16, "start": 158.32, "text": " heads and so on. It could actually be not only the slope of the non-linearity but" }, { "end": 169.07999999999998, "start": 164.16, "text": " the actual non-linearity itself or famously in recurrent neural networks" }, { "end": 173.51999999999998, "start": 169.07999999999998, "text": " you have these recurrent cells and they're like okay we have an input" }, { "end": 179.2, "start": 173.51999999999998, "text": " signal and a carry signal and then the input here is like dot multiplied" }, { "end": 182.84, "start": 179.2, "text": " here and then there is like a gate with a non-linearity and then it's kind of" }, { "end": 187.64, "start": 182.84, "text": " like multiplied by the carry but then there's also a like a forget gate and" }, { "end": 192.92, "start": 187.64, "text": " whatnot there's a minus right here. It's very complicated and so people do" }, { "end": 197.64, "start": 192.92, "text": " architecture search over these kind of problems to find better architectures" }, { "end": 204.07999999999998, "start": 197.64, "text": " for particular problems. Now the problem is of course that how do you know if a" }, { "end": 209.76, "start": 204.07999999999998, "text": " if a given architecture is good? What you have to do is you'll have to go take" }, { "end": 214.33999999999997, "start": 209.76, "text": " that cell that you have dreamed up you think well I think that's a good cell" }, { "end": 219.44, "start": 214.33999999999997, "text": " and you have to train it on the full training data set this is a data set a" }, { "end": 224.72, "start": 219.44, "text": " database right this is a full training data set then you need to evaluate it on" }, { "end": 229.4, "start": 224.72, "text": " your validation data set and then you have like a number you have like okay" }, { "end": 235.2, "start": 229.4, "text": " this is 8 good and then you go back and you say okay what if I change this cell" }, { "end": 240.2, "start": 235.2, "text": " here what if I change it to a plus instead of a minus and you do the entire" }, { "end": 245.3, "start": 240.2, "text": " thing again train it for I don't know how much validate it and then this is" }, { "end": 250.84, "start": 245.3, "text": " like a 9 and you can say oh cool that's a 9 so this is a very basic architecture" }, { "end": 256.56, "start": 250.84, "text": " search and there has been a lot of development in this space so like" }, { "end": 260.68, "start": 256.56, "text": " evolutionary search and so on but they most of the time they require pretty" }, { "end": 266.16, "start": 260.68, "text": " much evaluating the entire thing on the full data so you get a good you get a" }, { "end": 271.12, "start": 266.16, "text": " good estimate of what your final performance back here is going to be. 
Now" }, { "end": 275.92, "start": 271.12, "text": " people have come up with methods to counter that and they say well if we can" }, { "end": 282.08, "start": 275.92, "text": " sort of encode the cell structure let's go with the let's go with the RNN cell" }, { "end": 288.32, "start": 282.08, "text": " if we could encode the cell structure in in a sort of a continuous way so you" }, { "end": 293, "start": 288.32, "text": " know we can encode text in a continuous way let we could also encode a cell" }, { "end": 298.12, "start": 293, "text": " structure because the cell structure I can write it down as an equation I can" }, { "end": 304.96, "start": 298.12, "text": " say like okay it's the forget gate of the carry times the sigmoid output of x" }, { "end": 314.36, "start": 304.96, "text": " plus the so this is the plus here and plus the sigmoid output of x multiplied" }, { "end": 320.4, "start": 314.36, "text": " by the input let's call that I something like this right this is text I can like" }, { "end": 326.2, "start": 320.4, "text": " write it down and then I can encode that into a vector much I can for example" }, { "end": 333.84, "start": 326.2, "text": " build another RNN ironically or something to to encode that or I can" }, { "end": 337.68, "start": 333.84, "text": " represent it as a computation graph like it is here and use a graph neural" }, { "end": 341.9, "start": 337.68, "text": " network to encode that into a single vector and then I have sort of an" }, { "end": 347.44, "start": 341.9, "text": " embedding space where each cell that I could build is a point in that embedding" }, { "end": 352.91999999999996, "start": 347.44, "text": " space and then I can evaluate a couple of them I can for example say okay this" }, { "end": 357.08000000000004, "start": 352.92, "text": " one here this one here this one here this one here I'm going to train them" }, { "end": 361.92, "start": 357.08000000000004, "text": " these cells I'm going to do the full training eval and so on get their scores" }, { "end": 367.76, "start": 361.92, "text": " and then I can learn basically in this latent space I can learn a predictor I" }, { "end": 375.92, "start": 367.76, "text": " can say okay here I get I got an 8 I got a 9 I got a 2 and I got a 4 so it" }, { "end": 380.84000000000003, "start": 375.92, "text": " appears to be that in this direction that the good cells are in this direction" }, { "end": 385.32, "start": 380.84, "text": " and then I can do it again I can sample or I can do gradient descent in this" }, { "end": 391.28, "start": 385.32, "text": " space since this is now a continuous space and the gradient descent on the" }, { "end": 396.03999999999996, "start": 391.28, "text": " model that gives me this space so right so this this method basically tries to" }, { "end": 402.55999999999995, "start": 396.03999999999996, "text": " take in the building plan of a cell and learn to predict the performance just by" }, { "end": 409.55999999999995, "start": 402.55999999999995, "text": " looking at it if if you're thinking of the a Turing machine right now then you" }, { "end": 414.28000000000003, "start": 409.56, "text": " like I I immediately thought of like this this halting problem because it" }, { "end": 417.44, "start": 414.28000000000003, "text": " appears to be exactly what it is so you're trying to build a machine that" }, { "end": 421.52, "start": 417.44, "text": " takes the building plan of another machine and tries to predict its" }, { "end": 429.2, "start": 421.52, "text": " performance now in 
a general sense we can already state that this problem is" }, { "end": 436.2, "start": 429.2, "text": " sort of the difficulty of this problem is equivalent to the difficulty of the" }, { "end": 441.24, "start": 436.2, "text": " original problem so I'm not sure but it appears to you know it appears to work" }, { "end": 444.68, "start": 441.24, "text": " if you throw lots of compute at it but of course that's a problem you need lots" }, { "end": 451.36, "start": 444.68, "text": " of compute right so either your your option one is to run all of these things" }, { "end": 457.4, "start": 451.36, "text": " and kind of iterate them in a neural sorry in an evolutionary way or your" }, { "end": 464.24, "start": 457.4, "text": " second option is to take the building plan and predict the performance from" }, { "end": 469.32, "start": 464.24, "text": " that both are not satisfactory and both use lots of compute now neuro petri dish" }, { "end": 475.6, "start": 469.32, "text": " is a or synthetic petri dish is a way to combine the two together it says can't" }, { "end": 482.32, "start": 475.6, "text": " we take the building plan here but actually run on the data on data to" }, { "end": 489.96000000000004, "start": 482.32, "text": " predict the performance so what they're saying is basically if I have this cell" }, { "end": 494.32, "start": 489.96, "text": " right here and usually this cell you know it deals with vectors of let's say" }, { "end": 502.64, "start": 494.32, "text": " size 512 and so on it will say it since this is let me draw it again up here so" }, { "end": 507.2, "start": 502.64, "text": " here you have the cell and here you have somehow the connections in there when you" }, { "end": 512.72, "start": 507.2, "text": " carry and the input and the input here okay and this is the output or the carry" }, { "end": 521.96, "start": 512.72, "text": " I have 512 embedding like size of this so this is a giant cell there's 512 the" }, { "end": 526.6800000000001, "start": 521.96, "text": " vector has 512 dimensions going in basically can't I take the exact same" }, { "end": 532.08, "start": 526.6800000000001, "text": " thing but and keep the connection pattern so I would keep the entire" }, { "end": 539.76, "start": 532.08, "text": " pattern of connection right here but I only do it for one or two so this is 512" }, { "end": 547.16, "start": 539.76, "text": " and this is just two right just the I just reduce the dimensionality but I" }, { "end": 553.08, "start": 547.16, "text": " sort of keep the connection pattern alive if I only do that I have like a" }, { "end": 559.6, "start": 553.08, "text": " very small network right now and the same goes for if this is so a lot of" }, { "end": 563.08, "start": 559.6, "text": " times these RNNs they have multiple layers of these things so they have" }, { "end": 569.68, "start": 563.08, "text": " another exactly equal box up here and then another up here I can just reduce" }, { "end": 574.3599999999999, "start": 569.68, "text": " this to one layer and out of the regularity of these neural network" }, { "end": 580.7199999999999, "start": 574.3599999999999, "text": " things it is known that or one can make the assumption that the performance on" }, { "end": 585.8, "start": 580.7199999999999, "text": " this thing will sort of kind of be correlated to the performance of the" }, { "end": 591.8399999999999, "start": 585.8, "text": " entire thing and that's one of the things that this petri dish paper does" }, { "end": 598.56, "start": 591.8399999999999, "text": " so 
we try to take out what we are trying to search over namely the connection" }, { "end": 605.7199999999999, "start": 598.56, "text": " pattern we keep that as it is up here but we reduce everything else we reduce" }, { "end": 609.4799999999999, "start": 605.7199999999999, "text": " the dimensionality we reduce the number of layers and so on now they don't" }, { "end": 612.8399999999999, "start": 609.4799999999999, "text": " actually reduce the number of layers here but you can reduce the number of" }, { "end": 618.8, "start": 612.8399999999999, "text": " units and so on okay so this in essence this works whenever you can do that" }, { "end": 625, "start": 618.8, "text": " whenever you can keep the structure you're searching over but can reduce" }, { "end": 631.16, "start": 625, "text": " the rest so that's one precondition doesn't work for everything the second" }, { "end": 636.16, "start": 631.16, "text": " part here is that you don't want to use this particular training data and then" }, { "end": 640.28, "start": 636.16, "text": " this particular validation data because first of all it's a lot of training data" }, { "end": 646.64, "start": 640.28, "text": " and second of all it won't give you that good of a prediction instead of what" }, { "end": 651.28, "start": 646.64, "text": " you're trying to do and this is the second part of the idea of petri dish is" }, { "end": 659.48, "start": 651.28, "text": " you're trying to abstract the training data to get you a very small data set" }, { "end": 664.92, "start": 659.48, "text": " search that and the validation data as well as well search that if you train" }, { "end": 671.92, "start": 664.92, "text": " on this data and evaluate on this data the performance that you get will be" }, { "end": 677.8399999999999, "start": 671.92, "text": " very predictive of the performance had you trained the big model on the big" }, { "end": 683.6800000000001, "start": 677.84, "text": " data set okay in fact in this petri dish paper these little data sets they have" }, { "end": 689.76, "start": 683.6800000000001, "text": " nothing to do with the with the original train and validation data and that's I" }, { "end": 695.2, "start": 689.76, "text": " think that's one of the cool things here these things this training data and this" }, { "end": 700.44, "start": 695.2, "text": " validation data they are optimized as well by the procedure they're optimized" }, { "end": 706.64, "start": 700.44, "text": " special data points that are trained these are trained parameters such that" }, { "end": 710.96, "start": 706.64, "text": " if you train on the training on the small training data and evaluate on the" }, { "end": 717.04, "start": 710.96, "text": " small eval data the you will be able to predict this performance back here with" }, { "end": 722.88, "start": 717.04, "text": " high accuracy okay and this I think is where previous approaches have or might" }, { "end": 727.28, "start": 722.88, "text": " have failed because it's you know the the idea of scaling down your network in" }, { "end": 731.56, "start": 727.28, "text": " order to do the architecture search is probably has it has appeared to many" }, { "end": 737.5999999999999, "start": 731.56, "text": " people before that's not you know that's not really genius idea but probably they" }, { "end": 741.16, "start": 737.5999999999999, "text": " have all found that now we can't really do it it doesn't really give us accurate" }, { "end": 746.5999999999999, "start": 741.16, "text": " enough numbers but in this case the 
addition of adding these synthetic data" }, { "end": 753.0999999999999, "start": 746.5999999999999, "text": " sets that are much smaller but can still if you train and evaluate on them can" }, { "end": 760, "start": 753.0999999999999, "text": " still predict with high accuracy the full score of the full model that I think" }, { "end": 765.76, "start": 760, "text": " makes this idea work alright so I guess we're already through the idea and" }, { "end": 772.88, "start": 765.76, "text": " problem setting and everything without actually reading the paper so they give" }, { "end": 780.6, "start": 772.88, "text": " this example right here at the beginning that if you have a a two layer 100 wide" }, { "end": 785.08, "start": 780.6, "text": " so 100 dimensional MNIST networks it's two layers it's I think it's an MLP" }, { "end": 794.4000000000001, "start": 785.08, "text": " it's two layer MLP with a non-linearity that is this the sigmoid right here this" }, { "end": 800.6, "start": 794.4000000000001, "text": " okay now you can see it has this temperature parameter here it has this" }, { "end": 806.2, "start": 800.6, "text": " slope parameter and you you want to do neural architecture search to find the" }, { "end": 810.36, "start": 806.2, "text": " best slope parameter now usually you would just do a grid search but this is" }, { "end": 815.76, "start": 810.36, "text": " an example because this can be of course much higher dimensional things and" }, { "end": 823.84, "start": 815.76, "text": " then you don't want to do grid search anymore okay so what do we do if you look" }, { "end": 830.8000000000001, "start": 823.84, "text": " at how the 100 wide MNIST network so we can draw it right here so this is a 100" }, { "end": 838, "start": 830.8000000000001, "text": " dimensional MNIST network so this is 100 and each cell each connection here first" }, { "end": 841.64, "start": 838, "text": " has a weight and then has the sigmoid non-linearity and the sigmoid non-" }, { "end": 848.32, "start": 841.64, "text": " linearity is parameterized by the parameter C okay and you have you have" }, { "end": 853.72, "start": 848.32, "text": " many of them right you have one here and so on and each one has a different C and" }, { "end": 860.2, "start": 853.72, "text": " each of these networks represents one blue dot here so if you let C vary so" }, { "end": 864.68, "start": 860.2, "text": " this sigmoid slope value right here that's your parameter C if you let this" }, { "end": 868.7199999999999, "start": 864.68, "text": " variant train the big network on the entire data set to convergence and then" }, { "end": 874.12, "start": 868.7199999999999, "text": " you eval on the validation data set you get the slope like the blue curve so if" }, { "end": 878.64, "start": 874.12, "text": " you see the blue curve the blue curve is if you start over here if you reduce" }, { "end": 883, "start": 878.64, "text": " this slope you'll gain in performance but if you reduce it too much you drop" }, { "end": 889.28, "start": 883, "text": " drastically okay until it's if it's if it's zero it's basically you know the" }, { "end": 895.28, "start": 889.28, "text": " the X is not the signal doesn't propagate anymore and you you have no" }, { "end": 901.1999999999999, "start": 895.28, "text": " learning occurring okay so that's the original performance now what if I only" }, { "end": 906.04, "start": 901.1999999999999, "text": " give you training data in this range right here I only showed you this" }, { "end": 911.5, "start": 906.04, 
"text": " particular range I can't actually zoom in that much but if I give you" }, { "end": 916.48, "start": 911.5, "text": " this and I ask you to build one of these please take the architecture and predict" }, { "end": 920.04, "start": 916.48, "text": " the performance that we saw at the beginning like one of these girdle" }, { "end": 928.84, "start": 920.04, "text": " machines or touring touring machines you would basically say well that looks" }, { "end": 934.52, "start": 928.84, "text": " to me like a line so I'm gonna predict the red thing here and even if you can" }, { "end": 938.6, "start": 934.52, "text": " you know evaluate a bunch of these it just looks like a line and you're going" }, { "end": 944.9200000000001, "start": 938.6, "text": " to predict there's probably a slope like this right this happens almost" }, { "end": 948.92, "start": 944.92, "text": " independently of which model you choose to predict right here the data of" }, { "end": 954.16, "start": 948.92, "text": " training is simply doesn't give away that the fact that there is a there is" }, { "end": 958.52, "start": 954.16, "text": " this break down here which happens in the real world so if you just give this" }, { "end": 965.4799999999999, "start": 958.52, "text": " as training data there's no way so so the the the criticism about these" }, { "end": 970.7199999999999, "start": 965.4799999999999, "text": " models is valid that they will only work where you give them training data they" }, { "end": 974.7199999999999, "start": 970.7199999999999, "text": " can at best interpolate their training data but they can't really extrapolate" }, { "end": 980.8000000000001, "start": 974.72, "text": " now here since the synthetic petri dish method which is the green thing here" }, { "end": 986.84, "start": 980.8000000000001, "text": " uses the actual not the actual non-linearity that this thing" }, { "end": 992.6800000000001, "start": 986.84, "text": " characterizes so it it instantiates the sigmoid with the parameter C that you" }, { "end": 996.5600000000001, "start": 992.6800000000001, "text": " give it just not on the large network but on a small network in fact their" }, { "end": 1003.9200000000001, "start": 996.5600000000001, "text": " network is just one unit sorry one unit and then another unit so it's just a two" }, { "end": 1009.0799999999999, "start": 1003.92, "text": " hidden layer but just with one unit instead of 100 and of course you can't" }, { "end": 1013.28, "start": 1009.0799999999999, "text": " feed in M nest right here right but we said they don't feed in the data they" }, { "end": 1019, "start": 1013.28, "text": " actually feed in their synthetic data that they learn so you give them the" }, { "end": 1025.44, "start": 1019, "text": " points here and they learn the synthetic data to to evaluate to evaluate the" }, { "end": 1031.3999999999999, "start": 1025.44, "text": " others and then once you ask them well if if my C is right here what's the" }, { "end": 1036.8400000000001, "start": 1031.4, "text": " performance going to be it's going to instantiate that in its small network it" }, { "end": 1041.44, "start": 1036.8400000000001, "text": " is going to use the training data that it has learned from this region right" }, { "end": 1045.48, "start": 1041.44, "text": " here in order to train this and then it's going to evaluate this on the" }, { "end": 1050.92, "start": 1045.48, "text": " synthetic validation data that is also learned on the training data and it is" }, { "end": 1056.24, "start": 1050.92, "text": 
" going to come up with a performance metric it says okay this is how good it's" }, { "end": 1061.96, "start": 1056.24, "text": " going to be and since it is an approximation in its building plan to the" }, { "end": 1069.68, "start": 1061.96, "text": " entire network it will react similarly so it will get that there is this" }, { "end": 1074.24, "start": 1069.68, "text": " performance dip right here okay so it you can see how this sort of makes sense" }, { "end": 1078.08, "start": 1074.24, "text": " you are actually running an approximation to the actual program" }, { "end": 1082.44, "start": 1078.08, "text": " instead of just looking at the plan of the program and trying to predict it" }, { "end": 1090.88, "start": 1082.44, "text": " which you know halting problem says hello okay so that is the motivating" }, { "end": 1100.2, "start": 1090.88, "text": " example of their MNIST thing and here is the entire algorithm all right so you" }, { "end": 1106.76, "start": 1100.2, "text": " take MNIST training and validation data and you instantiate a bunch of really" }, { "end": 1111.04, "start": 1106.76, "text": " big networks this is ground truth okay you you need this you need this to learn" }, { "end": 1116.36, "start": 1111.04, "text": " from you instantiate a bunch of really big networks now if I draw the graph" }, { "end": 1125.36, "start": 1116.36, "text": " from before right we had this was the performance of the actual networks you" }, { "end": 1130.56, "start": 1125.36, "text": " want you this comes from here from this region right here this is the training" }, { "end": 1135.8, "start": 1130.56, "text": " data okay so you instantiate a bunch of these networks each one you instantiate" }, { "end": 1140.76, "start": 1135.8, "text": " in one of them right each one gives rise to a different non-linearity and you do" }, { "end": 1144.96, "start": 1140.76, "text": " the full training ground truth training and evaluation on the full training set" }, { "end": 1149.52, "start": 1144.96, "text": " and the full validation set and you get validation losses right for each of" }, { "end": 1154.56, "start": 1149.52, "text": " these and these are the points right here now you that's the training data" }, { "end": 1159.28, "start": 1154.56, "text": " for your neural for your neural architecture search so for your petri" }, { "end": 1166.26, "start": 1159.28, "text": " dish method what the petri dish does is it says it extracts the motive and the" }, { "end": 1171.16, "start": 1166.26, "text": " motive is the thing that you optimize over so as I said you want to keep that" }, { "end": 1177.52, "start": 1171.16, "text": " thing in its essence but you want to reduce everything else so it reduces it" }, { "end": 1183.12, "start": 1177.52, "text": " instead of from a two layer on hundred wide MLP it reduces that to a two" }, { "end": 1193, "start": 1183.12, "text": " layer single neuron wide MLP okay and it now this over here is the training data" }, { "end": 1197.4, "start": 1193, "text": " for the procedure that we're going to do now so what it would take is it would" }, { "end": 1202.32, "start": 1197.4, "text": " take it would take one of these values it would instantiate and we have that" }, { "end": 1207.72, "start": 1202.32, "text": " here would instantiate the neural network in the small form of that and" }, { "end": 1214.4, "start": 1207.72, "text": " now we know that if I train the full data and evaluate if I train on the full" }, { "end": 1219.68, "start": 1214.4, "text": " training data and 
evaluate on the full validation data I should get this" }, { "end": 1227.44, "start": 1219.68, "text": " accuracy all right so I will create and we're going to look at in a second I" }, { "end": 1233.4, "start": 1227.44, "text": " will create training and validation data such that if I train on this training" }, { "end": 1240.3200000000002, "start": 1233.4, "text": " data and then validate on this validation data I get the same validation" }, { "end": 1245.4, "start": 1240.3200000000002, "text": " loss as if I had trained the big network with the same you know the same C" }, { "end": 1249.04, "start": 1245.4, "text": " parameter on the full training data and evaluate in the full validation data" }, { "end": 1254.12, "start": 1249.04, "text": " okay so in this step I'm optimizing the data here the training and validation" }, { "end": 1260.44, "start": 1254.12, "text": " data all right and now in the second step once I have this training and" }, { "end": 1265.2, "start": 1260.44, "text": " validation data such like that I can basically reproduce this this graph" }, { "end": 1273.52, "start": 1265.2, "text": " right here then I can go and actually ask my model okay now please tell me" }, { "end": 1278.08, "start": 1273.52, "text": " what happens over here so what am I gonna do I'm gonna take that I'm gonna" }, { "end": 1283.24, "start": 1278.08, "text": " instantiate it I'm going to use my training data that I learned to train it" }, { "end": 1287.3999999999999, "start": 1283.24, "text": " when to use my validation data that I learned to evaluate it and it's gonna" }, { "end": 1292.48, "start": 1287.3999999999999, "text": " give me a number and that number is going to be like close to hopefully" }, { "end": 1298.84, "start": 1292.48, "text": " close to do this so this is how we can extrapolate using that method okay now" }, { "end": 1303.48, "start": 1298.84, "text": " there are a number of assumptions right here and you can imagine this doesn't" }, { "end": 1309.76, "start": 1303.48, "text": " work in any situation this works if if you you know if you basically you have to" }, { "end": 1317.28, "start": 1309.76, "text": " get lucky in that you have to abstract the correct things right I said you need" }, { "end": 1322.88, "start": 1317.28, "text": " to reduce everything else so they reduce notably you see they reduce the 100 the" }, { "end": 1328.64, "start": 1322.88, "text": " 100 layer width to a single neuron wide MLP and they sort of guess that that" }, { "end": 1334.16, "start": 1328.64, "text": " doesn't change the fundamental thing but you can also see they leave the two" }, { "end": 1340.4, "start": 1334.16, "text": " layer right they leave the two layer neural network and I'm can almost" }, { "end": 1344.8400000000001, "start": 1340.4, "text": " guarantee you that they tried this reducing this to a one layer neural" }, { "end": 1351.24, "start": 1344.8400000000001, "text": " network and it did not work and so you have to be sort of very careful of what" }, { "end": 1356.48, "start": 1351.24, "text": " quantities you abstract and what quantities you don't because okay now" }, { "end": 1360.16, "start": 1356.48, "text": " you might always think oh I can reduce the you know number of dimensions or" }, { "end": 1365.72, "start": 1360.16, "text": " channels that's also not always the case so I think that's kind of the crux of" }, { "end": 1370.4, "start": 1365.72, "text": " the method you have to actually engineer this down compressing of the" }, { "end": 1376.72, "start": 1370.4, 
"text": " architecture such that it's its properties are still kept and yeah but" }, { "end": 1382.48, "start": 1376.72, "text": " yeah in other things how do you how do you actually produce training and" }, { "end": 1388, "start": 1382.48, "text": " validation data to match these and there are a number of ways but what comes to" }, { "end": 1394.1200000000001, "start": 1388, "text": " mind is is meta learning right so because what you're doing they initialize" }, { "end": 1397.56, "start": 1394.1200000000001, "text": " the training and validation data at random points so these are just random" }, { "end": 1403.84, "start": 1397.56, "text": " at the beginning and then they optimize the data itself using gradient descent" }, { "end": 1413.48, "start": 1403.84, "text": " okay now see synthetic training data and they are randomly initialized okay and" }, { "end": 1418.3999999999999, "start": 1413.48, "text": " they use gradient descent they have it somewhere yes so they have this inner" }, { "end": 1423.9599999999998, "start": 1418.3999999999999, "text": " training loop okay which is many steps of inner training and then they have the" }, { "end": 1430.36, "start": 1423.9599999999998, "text": " outer loss which is the it's the validation loss after the inner training" }, { "end": 1434.52, "start": 1430.36, "text": " loop and the difference for that to the true validation loss and then they do" }, { "end": 1439.4799999999998, "start": 1434.52, "text": " gradient descent on this outer loss now this outer loss is a result of the inner" }, { "end": 1444.28, "start": 1439.4799999999998, "text": " loss and the inner loss is a result of the inner training procedure and the" }, { "end": 1449.7199999999998, "start": 1444.28, "text": " inner training procedure is n steps of feeding in the training data every step" }, { "end": 1453.32, "start": 1449.7199999999998, "text": " you feed in the training data so your computational graph is going to look" }, { "end": 1458.1999999999998, "start": 1453.32, "text": " like so here's your training data as train and here are your initial" }, { "end": 1462.52, "start": 1458.2, "text": " parameters you at random lies initialize them randomly in the first" }, { "end": 1468.16, "start": 1462.52, "text": " step you use the training data to produce theta one then in the second" }, { "end": 1473.0800000000002, "start": 1468.16, "text": " step you use your training perhaps your training data again to produce theta 2" }, { "end": 1478.24, "start": 1473.0800000000002, "text": " and then you use it again to use data for me and so on each time you feed the" }, { "end": 1481.52, "start": 1478.24, "text": " training data in order to evolve your parameters to give you a better" }, { "end": 1486.96, "start": 1481.52, "text": " prediction right so the gradient since somewhere back here there's a loss the" }, { "end": 1492.76, "start": 1486.96, "text": " gradient here will have to flow back through all of these paths and through" }, { "end": 1496.96, "start": 1492.76, "text": " all of these connections to the training data this is kind of you back propagate" }, { "end": 1501.4, "start": 1496.96, "text": " through an optimization procedure and we have this a bunch of times here and I've" }, { "end": 1507.6000000000001, "start": 1501.4, "text": " looked at the code and the code is like really crazy and it looks like proper" }, { "end": 1511.44, "start": 1507.6000000000001, "text": " research code but it appears to be that that's actually what's happening they" }, { "end": 1517.2, 
"start": 1511.44, "text": " backprop through the optimization procedure to find this synthetic training" }, { "end": 1524.04, "start": 1517.2, "text": " and validation data now that's I mean that's crazy but it also kind of limits" }, { "end": 1527.4, "start": 1524.04, "text": " how far you can go with this because usually you can't backprop for more" }, { "end": 1532.56, "start": 1527.4, "text": " than a couple of steps doing this now that the model the fact that the model" }, { "end": 1537.64, "start": 1532.56, "text": " inner model is small helps but also this introduces very very much like these" }, { "end": 1542.16, "start": 1537.64, "text": " things are very brittle if you backprop through an optimization procedure like" }, { "end": 1548.24, "start": 1542.16, "text": " this these things tend to be very brittle and so I think there's another" }, { "end": 1556.4, "start": 1548.24, "text": " thing there where you have to pay careful attention alright that's it's" }, { "end": 1559.64, "start": 1556.4, "text": " basically it the last thing they say is that they can combine this with" }, { "end": 1567, "start": 1559.64, "text": " architecture search in that so not only can you predict good architectures what" }, { "end": 1572.56, "start": 1567, "text": " you can do is you can actually predict the which architectures are good and" }, { "end": 1578.6, "start": 1572.56, "text": " then you can use that prediction to get new to basically input this into your" }, { "end": 1582.84, "start": 1578.6, "text": " neural architecture search to inform it so instead of the neural architecture" }, { "end": 1587.48, "start": 1582.84, "text": " search having to evaluate all of the candidates that it produces it only has" }, { "end": 1592.32, "start": 1587.48, "text": " to now evaluate the very small subset of candidates that the synthetic petri dish" }, { "end": 1598, "start": 1592.32, "text": " training deems most worthy of being evaluated in this case here instead of" }, { "end": 1604.4399999999998, "start": 1598, "text": " evaluating all of the things here it would limit itself to whatever the" }, { "end": 1609.8799999999999, "start": 1604.4399999999998, "text": " synthetic petri dish says are the highest performing ones because if the" }, { "end": 1614.4399999999998, "start": 1609.8799999999999, "text": " synthetic petri dish is any good then it will you know give accurate predictions" }, { "end": 1618.6799999999998, "start": 1614.4399999999998, "text": " of how they're performing and then that can go in in multiple rounds so the" }, { "end": 1624.48, "start": 1618.68, "text": " architecture search can find new come up with new things that it thinks are" }, { "end": 1628.3600000000001, "start": 1624.48, "text": " better through like an evolutionary mutation algorithm the petri dish can" }, { "end": 1634.1000000000001, "start": 1628.3600000000001, "text": " evaluate them in the synthetic way and then suggest the like 10 candidates to" }, { "end": 1640.5600000000002, "start": 1634.1000000000001, "text": " evaluate on the full test set and you that way you don't have to evaluate all" }, { "end": 1647.8, "start": 1640.5600000000002, "text": " the like thousand candidates alright alright cool they do this for this M" }, { "end": 1657.44, "start": 1647.8, "text": " nest and they also do it for finding a RNN cell for the pen tree bank this is" }, { "end": 1663.68, "start": 1657.44, "text": " a language modeling task and the this is a benchmark for neural architecture" }, { "end": 1668.9199999999998, "start": 
1663.68, "text": " search where you're trying to find a good RNN cell to get the perplexity" }, { "end": 1675, "start": 1668.9199999999998, "text": " really low and here you can see if they give the same amount of data to all the" }, { "end": 1681.32, "start": 1675, "text": " methods then the benchmark neural architecture search is worse than the" }, { "end": 1686.64, "start": 1681.32, "text": " synthetic petri dish informed architecture search now one has to say" }, { "end": 1693.4, "start": 1686.64, "text": " on the full date I believe the NAO gets to about here but of course if you give" }, { "end": 1700.56, "start": 1693.4, "text": " all of them the same data the neural the petri dish beats this method and I think" }, { "end": 1704.68, "start": 1700.56, "text": " still this method here uses way more compute because it always has to evaluate" }, { "end": 1711.1200000000001, "start": 1704.68, "text": " all the candidates and that's exactly one of these where I learn an" }, { "end": 1715.72, "start": 1711.1200000000001, "text": " architecture to predict the other architecture by just looking at it so it" }, { "end": 1720.88, "start": 1715.72, "text": " works but it doesn't work as well as actually running the architecture in an" }, { "end": 1725, "start": 1720.88, "text": " abstract fashion this also shows you the importance of selecting your" }, { "end": 1730.8, "start": 1725, "text": " experimental evaluation in a smart way like they argue they argue for very long" }, { "end": 1736, "start": 1730.8, "text": " why it makes sense to evaluate everything on reduced data such that" }, { "end": 1742, "start": 1736, "text": " their method here can be better and they don't have to compare to the full thing" }, { "end": 1746.6, "start": 1742, "text": " it's easier for them to work on reduced data and they argue you know it's it's" }, { "end": 1752.3999999999999, "start": 1746.6, "text": " it's what people usually do in practice and that's the task they focus on so" }, { "end": 1760.68, "start": 1752.4, "text": " you know good good good good paper writing right here yeah that's basically" }, { "end": 1768.3600000000001, "start": 1760.68, "text": " it to the paper there's a lot of things to be said here I think this works in" }, { "end": 1774.3200000000002, "start": 1768.3600000000001, "text": " very very limited settings it seems to me that it's sort of brittle with" }, { "end": 1781.0400000000002, "start": 1774.3200000000002, "text": " respect to how you abstract and also it it's always the case like how many how" }, { "end": 1784.96, "start": 1781.04, "text": " how large is this synthetic training data in their case there they like" }, { "end": 1790.52, "start": 1784.96, "text": " abstract this to 20 or 30 data points or something like this so it seems to me" }, { "end": 1795.2, "start": 1790.52, "text": " since you're optimizing this training data with gradient descent what you" }, { "end": 1800.8, "start": 1795.2, "text": " would mainly find are adversarial sort of adversarial examples to this" }, { "end": 1806.32, "start": 1800.8, "text": " architecture here so I'm going to guess that the inner optimization is very" }, { "end": 1813.04, "start": 1806.32, "text": " noisy and that's because if you really let your optimizer run then it will" }, { "end": 1817.76, "start": 1813.04, "text": " abuse every single thing it can to match that validation loss and that will" }, { "end": 1822.24, "start": 1817.76, "text": " usually lead to an adversarial example since you're optimizing the data itself" }, 
{ "end": 1829.4399999999998, "start": 1822.24, "text": " okay so I think this suffers from that and this is we had this in the in the" }, { "end": 1834.36, "start": 1829.4399999999998, "text": " planning you know planning in in learned world models in reinforcement learning" }, { "end": 1839.28, "start": 1834.36, "text": " where if you have a really really good planner it will just abuse the mistakes" }, { "end": 1843.6799999999998, "start": 1839.28, "text": " that you make in approximating the true world and the same here you're going to" }, { "end": 1848.1599999999999, "start": 1843.6799999999998, "text": " make mistakes approximating this architecture here and the better your" }, { "end": 1853.6, "start": 1848.1599999999999, "text": " your optimizer is for producing this synthetic data the probably the worse" }, { "end": 1859.3999999999999, "start": 1853.6, "text": " the worse the result is going to match the worst that these losses are going to" }, { "end": 1864.1599999999999, "start": 1859.3999999999999, "text": " actually match now okay these losses will match because they're that's what" }, { "end": 1869.3200000000002, "start": 1864.16, "text": " you train for but the worst these two curves will match each other because now" }, { "end": 1873.68, "start": 1869.3200000000002, "text": " you're just finding adversarial examples for your particular training data" }, { "end": 1879.0400000000002, "start": 1873.68, "text": " another concern I have here is with respect to the double descent phenomenon" }, { "end": 1883.6000000000001, "start": 1879.0400000000002, "text": " so if you know the double descent phenomenon if here you have your number" }, { "end": 1889.52, "start": 1883.6000000000001, "text": " of parameters and here you have your validation loss let's say and you know" }, { "end": 1895.24, "start": 1889.52, "text": " that if I add parameters I can make my validation loss go down so this is" }, { "end": 1899.04, "start": 1895.24, "text": " assuming I have a model with p parameters and I always train it on the" }, { "end": 1904.56, "start": 1899.04, "text": " train data to like to convergence if I add parameters I can generalize better" }, { "end": 1909.28, "start": 1904.56, "text": " until a point where I add too many parameters and I start overfitting and" }, { "end": 1914.2, "start": 1909.28, "text": " my validation loss goes up again but the double descent phenomenon and I think" }, { "end": 1920.76, "start": 1914.2, "text": " I've done a video on this shows that after a certain threshold you get the" }, { "end": 1924.76, "start": 1920.76, "text": " interpolation threshold the validation loss goes actually down again it goes" }, { "end": 1931.24, "start": 1924.76, "text": " down even further here now I'm so this is a very strange phenomenon by itself" }, { "end": 1935.0800000000002, "start": 1931.24, "text": " but I'm sort of concerned that if you do this abstraction that this paper" }, { "end": 1940.52, "start": 1935.0800000000002, "text": " proposes so you read your let's say your full model is here with a large number" }, { "end": 1945.36, "start": 1940.52, "text": " of parameters so it is past this interpolation threshold if you now" }, { "end": 1949.04, "start": 1945.36, "text": " seriously reduce the number of parameters because you want to go into" }, { "end": 1955.84, "start": 1949.04, "text": " this petri dish you will get maybe you will cross this interpolation threshold" }, { "end": 1959.92, "start": 1955.84, "text": " and actually be on this side of the curve right 
here now of course at the" }, { "end": 1966.48, "start": 1959.92, "text": " same time you reduce the amount of data which would push you over here again but" }, { "end": 1970.8, "start": 1966.48, "text": " it is different data so I'm not sure how all of this is going to play out it" }, { "end": 1978.88, "start": 1970.8, "text": " appears to work in these settings right here but I I think this is it's it's" }, { "end": 1984.88, "start": 1978.88, "text": " sort of it's sort of applicable in some situations and it's it'd be very cool if" }, { "end": 1989.3600000000001, "start": 1984.88, "text": " we develop this further such that we understand when it applies and when we" }, { "end": 1995.68, "start": 1989.3600000000001, "text": " can use it because I feel this can be a very cool thing if we understand it" }, { "end": 2001.3600000000001, "start": 1995.68, "text": " better and if we can apply it throughout alright that's the end if you like this" }, { "end": 2008.1200000000001, "start": 2001.3600000000001, "text": " paper leave a comment if you didn't like it leave a comment and bye bye see you" }, { "end": 2027.6399999999999, "start": 2008.12, "text": " next time" } ]
CA8JPbJ75tY
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
CornerNet: Detecting Objects as Paired Keypoints (Paper Explained)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "corner", "top left", "bottom right", "corners", "cv", "computer vision", "vision", "object detection", "detr", "bounding box", "center", "anchor", "pooling", "local", "cnn", "convolutions", "convolutional neural network", "hourglass", "skip connection", "heatmap", "embedding", "push", "pull", "loss", "overlap", "filters", "channels" ]
Many object detectors focus on locating the center of the object they want to find. However, this leaves them with the secondary problem of determining the specifications of the bounding box, leading to undesirable solutions like anchor boxes. This paper directly detects the top left and the bottom right corners of objects independently, along with descriptors that allows to match the two later and form a complete bounding box. For this, a new pooling method, called corner pooling, is introduced. OUTLINE: 0:00 - Intro & High-Level Overview 1:40 - Object Detection 2:40 - Pipeline I - Hourglass 4:00 - Heatmap & Embedding Outputs 8:40 - Heatmap Loss 10:55 - Embedding Loss 14:35 - Corner Pooling 20:40 - Experiments Paper: https://arxiv.org/abs/1808.01244 Code: https://github.com/princeton-vl/CornerNet Abstract: We propose CornerNet, a new approach to object detection where we detect an object bounding box as a pair of keypoints, the top-left corner and the bottom-right corner, using a single convolution neural network. By detecting objects as paired keypoints, we eliminate the need for designing a set of anchor boxes commonly used in prior single-stage detectors. In addition to our novel formulation, we introduce corner pooling, a new type of pooling layer that helps the network better localize corners. Experiments show that CornerNet achieves a 42.2% AP on MS COCO, outperforming all existing one-stage detectors. Authors: Hei Law, Jia Deng Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Hello there! Today we're looking at CornerNet: detecting objects as paired key points, by Hei Law and Jia Deng. So on a high level, this paper detects objects in images. Let's say this is an image and here's a chair. You know, you have your chair. And the way you detect the chair in this paper is going to be: you detect the bottom right and the top left corners of the bounding box of the object. So rather than detecting the middle and then specifying height and width, like we saw in the Facebook DETR paper, you detect the two corners. And this paper goes through what they have to do to get this to work, including a new pooling method called corner pooling. So that's the gist of the paper. As always, if you like content like this, consider subscribing and sharing it out to other people. That would be very helpful. So a commenter actually recommended this paper to me after I made a video on Facebook's DETR object detection pipeline. I said something like: okay, since that paper always detects the middle of the object plus the height and width, couldn't you make something that detects the corner here and the corner here, and then that would define a bounding box just as well? And in the comments (thank you very much for that) someone pointed me to this paper. It's a bit older, as you can see, but I still think it's pretty cool. So we've already seen the problem; the problem isn't hard to state: it's detecting bounding boxes in images. In these datasets, the difficult parts are that you sometimes have multiple objects. Like here: two humans, they can be overlapping, they can be of different sizes, there could be a third, smaller human back here, there can be other objects, you don't know how many there are, and so on. So it is a fairly complicated problem. But as I already said, the way that CornerNet does this is by predicting the locations of the top left and bottom right corners, thereby defining a bounding box. And it does this independently: there's basically one network head that does the top left and one that does the bottom right, and they are then combined, and at the end they're sort of refined, I think. So the architecture is pretty simple. First, you put the image through a ConvNet, which is like a feature extractor. This is the basic part; it was even the basic part of Facebook's DETR pipeline: first you have some sort of ConvNet. In this case they use this hourglass architecture, described down here somewhere. It basically compresses the image into a smaller resolution, so it would take that image and compress it down to a very small resolution but many, many channels, so it's sort of forced to learn a global semantic representation, and then it upsamples the image again, and downsamples it again, and upsamples it again. At each of these steps there are many convolutional layers, and because that alone would lose you too much local, spatial information, there are skip connections built in between pairs of layers, where information can travel without computation, basically (a toy version of one such stage is sketched below). So this is a fairly standard architecture. But then, after this hourglass CNN, you get to these prediction modules. Now let me switch back to the top drawing. Ultimately, what you want as an output of these prediction modules is two things. First of all, you want these heat maps, and these heat maps will simply tell you where the corners are. Okay.
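Here is that toy hourglass stage, for illustration only: a single downsample/upsample step with one skip connection carrying local detail around it. The real hourglass backbone is much deeper and repeats this pattern many times; all names and layer choices below are assumptions for the sketch, not the paper's configuration.

import torch.nn as nn

class TinyHourglassStage(nn.Module):
    # One down/up stage plus a skip connection that preserves local, high-resolution detail.
    def __init__(self, ch):
        super().__init__()
        self.down = nn.Sequential(nn.Conv2d(ch, ch, 3, stride=2, padding=1), nn.ReLU())
        self.mid = nn.Sequential(nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU())
        self.up = nn.Upsample(scale_factor=2, mode="nearest")
        self.skip = nn.Sequential(nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU())

    def forward(self, x):  # x: (N, ch, H, W) with even H and W
        return self.up(self.mid(self.down(x))) + self.skip(x)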
Now the heat maps: their dimensions are the height of the image, h, and the width of the image, w, and this here would be the number of classes, C. So you have one channel for each of the classes that you predict, and the heat map will basically be very high at the location and channel where there is a corner of that class. You have one heat map for the top left corners and one heat map for the bottom right corners. And then you also want to predict these embeddings. Simply because, as I said, there can be multiple instances of the same class in the same image. So in this particular case, even if you predict absolutely correctly, you predict two top left corners and two bottom right corners. Here this isn't particularly hard, because there's only one configuration that can plausibly fit, but there could be situations where there are multiple. And that's why you need to somehow match these corners: you have to know which ones of those belong to the same object. And they do this with a second output in their heads, called the embeddings. These embeddings are simply vectors, and the only thing they're asked to do is to have a large inner product whenever they belong to the same object, and a small inner product when they belong to two different objects. So this orange thing here would have a large inner product with this green bottom right corner embedding. You train these embeddings; they don't need to mean anything. You simply train them to predict the same thing for the same object and different things for different objects. After that, when you match the corners, you can simply go over them and ask: which one of these two right here has the larger inner product? Or you can do some Hungarian matching and maximize the total inner product, or something like this (a naive version of this matching is sketched below). This was quite surprising to me, that it works, but it's based on a line of research that has already established that this can work. Because ultimately these two pipelines do not really communicate, right? So I'm going to guess that what they learn is sort of a descriptor of the actual object that's there. Because if both describe the object that's there with their embeddings, their embeddings are going to have a large inner product, and if they describe different objects, then their embeddings are not going to match. So even though you train this proxy objective, I still think that these embeddings pick up something about the object, something about its visual characteristics. It would be very interesting to see whether someone could actually parse out what they do, because it's almost impossible otherwise for these things to be learnable. Alright, so that's the goal right here: you want to get these heat maps and these embeddings. And the way you do it is fairly easy architecturally: you have these two prediction modules, one for top left and one for bottom right, and each of them has three outputs: the heat maps, the embeddings, and the offsets. The offsets are simply a way to deal with the fact that you downsample, and by downsampling you have to round certain pixels to certain locations; the offsets compensate for this. But I don't want to focus on these right now.
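To make the matching step concrete, here is the deliberately naive decoding sketch mentioned above. It assumes per-class corner heat maps of shape (C, H, W) and the one-dimensional embeddings the paper ends up using, compared by absolute distance (which is what the pull/push losses later optimize). Real CornerNet-style decoders also apply the offsets and non-maximum suppression; every name and threshold here is illustrative, not from the official code.

import torch

def decode_boxes(heat_tl, heat_br, e_tl, e_br, k=100, max_dist=0.5):
    # heat_*: (C, H, W) corner heat maps; e_*: (H, W) one-dimensional embeddings.
    C, H, W = heat_tl.shape
    s_tl, i_tl = heat_tl.flatten().topk(k)  # top-k top-left corner candidates
    s_br, i_br = heat_br.flatten().topk(k)  # top-k bottom-right corner candidates
    boxes = []
    for a, sa in zip(i_tl.tolist(), s_tl.tolist()):
        ca, ya, xa = a // (H * W), (a % (H * W)) // W, a % W
        for b, sb in zip(i_br.tolist(), s_br.tolist()):
            cb, yb, xb = b // (H * W), (b % (H * W)) // W, b % W
            # keep pairs of the same class, geometrically valid, with close embeddings
            if ca == cb and xb >= xa and yb >= ya and \
                    abs(float(e_tl[ya, xa]) - float(e_br[yb, xb])) < max_dist:
                boxes.append((xa, ya, xb, yb, ca, (sa + sb) / 2))
    return boxes

This brute-force O(k^2) pairing is only meant to show the logic; it is where the embedding distance does its work of separating two overlapping instances of the same class.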
So you simply have these two outputs right here. Now we'll look at corner pooling in a second, but how do you train this? You can now say: okay, if I have a picture like this, there are exactly two locations in the class human where the top left corner is correct, and that's right here and right here. So, two locations. So I make my target matrix with a one here and a one here, and zeros everywhere else, and I train my network to give me this particular thing as an output for this heat map in the channel human. This might work, but it is more profitable, let's say, if you allow for some slack. What they say is: if I'm anywhere within this orange circle right here with my prediction, my resulting bounding box is still going to overlap fairly well with the ground truth bounding box. And the accuracy measures for these things, I think, are based on how much you overlap with the ground truth bounding boxes. So what they do, basically, is put a one in the spot where the actual corner is, and then put like a 0.9 around it, and so on, flattening out. So this is sort of a Gaussian right here, in multiple dimensions, if that drawing makes any sense. And they say: the closer you are, basically, the less you are penalized. So you train it to predict in this general location. Of course, the exact size of this Gaussian has to depend on the actual size of the box itself, and they describe exactly how they calculate these Gaussians. But for the understanding, it's just important that they do give some slack here in how they compute the loss with respect to the heat map. Now, the loss with respect to the embeddings is pretty simple, pretty straightforward. Remember these embeddings: you have two embeddings per object, the top corner embedding, that's e_tk, and the bottom right corner embedding, e_bk. And what you want is for them to be close together when they describe the same object. So these are the push and pull losses. In the pull loss, you want to minimize the distances of these two things to this thing right here, e_k, which is simply the mean, so it's e_tk plus e_bk divided by two. That is: if your top left corner is here and your bottom right corner is here, and this one has this embedding and that one has that embedding, you take the mean of the two embeddings. So the location is not actually important; it's about the embedding vectors, not about where the corners are. The two embedding vectors must be close together, and you model that not directly by making them close to each other, but by making both close to their mean. That probably saves you some backpropagation trouble: if you have two moving parts in a loss function and you optimize both toward each other, they might tend to overshoot, or something like this. Okay, so this brings those two closer together. And in the push loss, you simply want to make the mean between the two (remember, this is the mean embedding of this object) far away from the mean embedding of any other object in the picture.
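Written out as code, the pull and push terms just described could look like the following sketch. It assumes you have already gathered, for each of the K ground-truth objects, its two (here one-dimensional) corner embeddings, and it uses the margin of one mentioned below; this is a sketch of the formulas, not the reference implementation.

import torch

def pull_push_losses(e_top, e_bot, delta=1.0):
    # e_top, e_bot: shape (K,), embeddings read out at the K objects' two corner locations.
    e_mean = (e_top + e_bot) / 2
    # pull: both corner embeddings toward their shared mean (same object)
    pull = ((e_top - e_mean) ** 2 + (e_bot - e_mean) ** 2).mean()
    if len(e_mean) < 2:
        return pull, e_mean.new_zeros(())  # no push term with fewer than two objects
    # push: means of DIFFERENT objects should be at least `delta` apart (margin loss)
    dist = (e_mean[:, None] - e_mean[None, :]).abs()
    off_diag = ~torch.eye(len(e_mean), dtype=torch.bool)
    push = torch.clamp(delta - dist[off_diag], min=0.0).mean()
    return pull, push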
Okay, so this here is a margin loss, which means that you cap it at some point. If the embeddings of two different objects are close together, you can see that this quantity here will be small, and therefore you get a loss of about delta; the delta here is one in this case. But as they get further apart, you're more and more happy, and you reduce your loss until you don't give any extra bonus for them being super far apart: you simply don't want them to be closer together than one. All right. In their case, the dimension of these vectors is actually one, which basically means they just output a single number, which I find astonishing, that that works. Yes, they use embeddings of one dimension, so they just use numbers. Astonishing that it works, but okay. So that's how you train the embedding outputs: embeddings close together for the two corners of the same object, and embeddings far apart for different objects. Alright, so we can now predict where the corners are, and we can match them. Now, one central part of this is the corner pooling. Why is the corner pooling necessary? What's the problem with this sort of approach? The problem, and they have an example right here, is that when you want to predict a corner of an object, what a CNN is good at is local neighborhood information. So let's predict the location of the moon here. If I have to predict the location of the moon and I'm a CNN and I have this receptive field, I'm like: oh yes, it's in here. And then I have this receptive field, and I'm like: yes, it's in here. And then I zoom in, not on the moon itself, but on the corner where I need to predict, and at some point I'm like: wait, wait, where is it? Because in this particular receptive field, at this resolution, I have no clue whether the moon is close. So at the location where the actual bounding box corner is, I have no local information about the object, because usually objects are not squares; they're sort of round, like the moon, or like the plane here. The corners carry no local information about where the plane is. And corner pooling is a method to propagate that information along the axes. What corner pooling allows this location here in the CNN to do is not only look at its own position, but actually extend its field of view over to the right and down to the bottom, since it's asked to predict a top left corner. So you max pool everything from here to this corner detector. The corner detector will then be able to notice whenever, in this band right here, there is the top of an object, like the top of the moon; it can say: ah, that's probably the right height for a corner. And it combines this with the information from this column here, where it also says: oh, there is the side of the moon, so that's probably the correct left-right position. So there's probably a corner right here. Whereas a location right here would get almost the same signal from the right, plus this signal right here; it would also detect the top of the moon, but it would not get the same signal from down here.
And therefore it says: even though to the right I see the top of an object, I don't see the left side of an object below me, so I'm not going to predict a corner right here. Alright, so this corner pooling goes for the top left, and of course equivalently for the bottom right, which always max pools upward and to the left of itself. And that's exactly what you see here. So in this corner pooling, you can propagate the signal to the left and upward, and then you add the two pieces of information, and that gives you your output feature. You can calculate this fairly efficiently, much like a cumulative sum: you do a cumulative maximum across the different axes and then simply add the two arrays (see the sketch below). And that's it. You simply put the corner pooling before you predict these different outputs right here, the heat maps and the embeddings, which means that the hourglass network itself is not affected by this; just the predictors of heat maps and embeddings then get the information from the hourglass network aggregated along these directions. I think that's a pretty neat method of solving this. And here they show how you can calculate it. So the corner pooling is right here, and they do add a skip connection, because sometimes, if you just aggregate this information, you might actually get confused. The trouble of course comes when there are multiple different objects that have, you know, the same top. And if there's also a person right here, the detector gets another signal that there is the left side of a person, and it might predict a corner, maybe here, where there is none. So sometimes it is important to still have local information, and that's exactly what this skip connection is supposed to address. I guess the situation up here would be resolved by the different embeddings, but still. So you add that, put another bunch of convolutional layers on top, and then you get your predictions. And that's it. You mix all the losses: there is a detection loss from the heat maps, there are the pull and push losses for the embeddings, and there's the offset loss that you train to compensate for the downsampling errors. And they ablate the various things here. Basically, they show that they're better than other one-stage predictors. Apparently there are one-stage predictors, where you have a single pass through a neural network, and two-stage predictors, where you have multiple, or two, passes through different neural networks, and they compete in the one-stage category, so to say. They show that they get significant improvements due to this corner pooling, which is pretty cool to see, because it makes sense; it matches how you would like to think about it, and to see that it helps is pretty neat. They also investigate how large they have to make these Gaussians, and so on. And these are some qualitative examples. You can see that without the corner pooling, the top here and the left and the right are detected correctly, but the network probably thinks that there is an extension of the object right here, and therefore doesn't do a good job.
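As promised above, here is a hedged sketch of that cumulative-maximum implementation of corner pooling, assuming PyTorch feature maps of shape (N, C, H, W). torch.cummax scans front to back, so we flip, scan, and flip back to let every location see everything to its right or below it; bottom-right pooling is simply the mirror image.

import torch

def top_left_corner_pool(x):
    # x: (N, C, H, W); max over this location and everything to its RIGHT
    from_right = torch.flip(torch.cummax(torch.flip(x, [3]), dim=3).values, [3])
    # max over this location and everything BELOW it
    from_below = torch.flip(torch.cummax(torch.flip(x, [2]), dim=2).values, [2])
    return from_right + from_below  # the two scan directions are simply added

def bottom_right_corner_pool(x):
    from_left = torch.cummax(x, dim=3).values   # max over everything to the LEFT
    from_above = torch.cummax(x, dim=2).values  # max over everything ABOVE
    return from_left + from_above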
Because this position right here has no access to detail; it has to use long-range access and can't really look in detail at the features here or here. So when it scans up and down the side where the bottom break is, it can only look at very coarse features, because it has to transmit that information in a higher layer of the CNN, and the higher layer has a larger receptive field, which means a lower resolution. So it can't really look in a very detailed fashion at this border right here, and it misses it. Same right here, as you can see. So there are a number of failure cases that they can now solve using this method, compared to not using the corner pooling. They also show some cases where their method fails; for example, here it matches the top left and bottom right corners of two different objects, because their embeddings were close enough. And yeah, that's what I'm saying: I'm wondering what these embeddings actually learn, because they are generated independently. So, not entirely sure. It's also not exactly what I had in mind when I formulated this idea in the last video, but I'm actually not sure what I had in mind myself, to be honest. In my mind it seemed like you should be able to train a network such that, if there is an object right here, the network predicts, for any given location, how many pixels to its bottom right (or maybe normalized by the area) are part of a particular object. Then you could predict that for each pixel and use the differences between the points as scores for bounding boxes. I don't know if you see what I mean: you'd basically have one network predict everything to the bottom right, and then you'd use the differences. And transformers would be very good at that, because they can have this attention between pairs of points, and so on. I'm not entirely sure, but this might just be crap. Yeah, here are some more examples. This appears to work really nicely, but of course in the qualitative examples it always works nicely; they also demonstrate it quantitatively. All right, I found this paper all in all pretty cool, pretty neat. It's a simple idea, it's executed well, and I don't have the feeling that there are too many tricks in here; they show that the improvement really seems to be due to their corner pooling method, and that's pretty neat. So if you like this paper, make sure to check it out, and I'll see you next time. Bye bye.
[ { "end": 6.08, "start": 0, "text": " Hello there! Today we're looking at CornerNet detecting objects as paired key points by" }, { "end": 15.280000000000001, "start": 6.08, "text": " Hei Law and Jia Deng. So on a high level this paper detects objects in images. Let's say this is an" }, { "end": 22.8, "start": 15.280000000000001, "text": " image and here's a chair. You know you have your chair. And the way you detect the chair for this" }, { "end": 29.64, "start": 22.8, "text": " paper is going to be you detect the bottom right and the top left corners of the bounding box of" }, { "end": 35.72, "start": 29.64, "text": " the image. So rather than detecting the middle and then specifying height and width, like we saw in" }, { "end": 41.4, "start": 35.72, "text": " the Facebook DETR paper, you detect the two corners. And this paper goes through what they" }, { "end": 48.519999999999996, "start": 41.4, "text": " have to do to get this to work, including a new pooling method called corner pooling. So that's" }, { "end": 55.120000000000005, "start": 48.519999999999996, "text": " the gist of the paper. As always, if you like content like this, consider subscribing and" }, { "end": 63.36, "start": 55.12, "text": " sharing it out to other people. That would be very helpful. So a commenter actually recommended" }, { "end": 70.67999999999999, "start": 63.36, "text": " this paper to me after I made a video on Facebook's DETR object detection pipeline. I said something" }, { "end": 78, "start": 70.67999999999999, "text": " like, okay, since that paper always would detect the middle of the object and the height and width," }, { "end": 84.03999999999999, "start": 78, "text": " couldn't you make something that also that detects the corners here and the corner here and then that" }, { "end": 89.92, "start": 84.03999999999999, "text": " would define a bounding box just as well. And in the comments, and thank you very much for that," }, { "end": 98.64, "start": 89.92, "text": " I, someone made that pointed me to this paper, it's a bit older, as you can see, but it's, I still" }, { "end": 105.96000000000001, "start": 98.64, "text": " think it's, it's pretty cool. So we've already seen the the problem, like the problem isn't hard. And" }, { "end": 113.88000000000001, "start": 105.96000000000001, "text": " it's detecting bounding boxes in images. And in these data set, the problems, the difficult parts" }, { "end": 121.36, "start": 113.88, "text": " are that you sometimes have multiple objects, like here, if two humans, they can be overlapping," }, { "end": 127, "start": 121.36, "text": " they can be of different sizes, that could be like a third human, like small back here, there can be" }, { "end": 132.51999999999998, "start": 127, "text": " other objects, you don't know how many there are, and so on. So it is it is a fairly complicated" }, { "end": 140.16, "start": 132.51999999999998, "text": " problem. But as I already said, the way that CornerNet here does this is by predicting the" }, { "end": 146.32, "start": 140.16, "text": " locations of the top left and bottom right corner, thereby defining a bounding box. And it does this" }, { "end": 154.51999999999998, "start": 146.32, "text": " independently. So there's one network basically, that does the top left and one that does the bottom" }, { "end": 163.8, "start": 154.51999999999998, "text": " right, and they are then combined. And at the end, they're sort of refined, I think. So the" }, { "end": 169.64, "start": 163.8, "text": " architecture is pretty simple. 
First, you put the image through a con net, which is like a feature" }, { "end": 178.88, "start": 169.64, "text": " extractor. So this is the basic part. It was even the basic part of Facebook's DETR pipeline. First," }, { "end": 185.11999999999998, "start": 178.88, "text": " you have some sort of con net. Now they in this case use in this hourglass architecture that" }, { "end": 195.88, "start": 185.11999999999998, "text": " described down, down here somewhere. And this basically compresses the image into a smaller" }, { "end": 200.68, "start": 195.88, "text": " resolution. So I would take that image and compress it down to very small resolution," }, { "end": 206.88, "start": 200.68, "text": " but many, many channels. So it's sort of forced to learn a global semantic representation," }, { "end": 211.35999999999999, "start": 206.88, "text": " and then it up samples the image again, and down samples it again, and it up samples it again" }, { "end": 218.44, "start": 211.35999999999999, "text": " through. So at each of these steps, there are many convolutional layers right here. And because that" }, { "end": 223.44, "start": 218.44, "text": " would lose you too much space, like local information, there are skip connections built" }, { "end": 230.44, "start": 223.44, "text": " in between pairs of layers where information can travel without computation, basically. So this is" }, { "end": 237.8, "start": 230.44, "text": " a fairly standard architecture right here. But then after this hourglass CNN, you get to these" }, { "end": 245.64, "start": 237.8, "text": " prediction modules. Now let me switch back to the top drawing. Ultimately, what you want as an output" }, { "end": 252.76, "start": 245.64, "text": " of these prediction modules is two things. So first of all, you want these heat maps, sorry about" }, { "end": 259.64, "start": 252.76, "text": " that. And these heat maps will simply tell you where are the corners. Okay. Now the heat maps," }, { "end": 270.71999999999997, "start": 259.64, "text": " their dimensions are the height of the image. Sorry, the height here, h, come on, and the width" }, { "end": 279.15999999999997, "start": 270.71999999999997, "text": " of the image. And this here would be the number of classes C. Okay, so you have one channel for each" }, { "end": 287, "start": 279.16, "text": " of the classes that you predict. And the heat map will basically be very high at the location and" }, { "end": 293.64000000000004, "start": 287, "text": " channel where there is a corner of that. So you see have one heat map for the top left corners," }, { "end": 299.96000000000004, "start": 293.64000000000004, "text": " and one heat map for the bottom right corners. And then also what you want to predict are these" }, { "end": 308.08000000000004, "start": 299.96000000000004, "text": " embeddings. Now, simply because you have, you know, I said there can be multiple instances of the same" }, { "end": 315.88, "start": 308.08, "text": " class in the in the same image. So now you have in this case, particular case, you're gonna, even if" }, { "end": 323.12, "start": 315.88, "text": " you predict absolutely correctly, you predict two top left corners and two bottom right corners. Now" }, { "end": 329.08, "start": 323.12, "text": " this isn't particularly hard because there's only one configuration that can possibly be but there" }, { "end": 334.56, "start": 329.08, "text": " could be situations where there are multiple. 
And that's why you need to somehow match these" }, { "end": 340.92, "start": 334.56, "text": " corners, you have to match, you have to know which ones of those are the same objects. And they do" }, { "end": 348.56, "start": 340.92, "text": " this by a second output in their heads called this embeddings. These embeddings, they're simply" }, { "end": 356.96, "start": 348.56, "text": " vectors. And the only thing that they're asked to do is they're asked to have a large inner product," }, { "end": 364.91999999999996, "start": 356.96, "text": " whenever they belong to the same object, and they are asked to have a small inner product, sorry," }, { "end": 373.79999999999995, "start": 364.91999999999996, "text": " when they're when they belong to the different two different objects. So this orange thing here would" }, { "end": 378.76, "start": 373.79999999999995, "text": " have a large inner product with this green bottom right corner embedding. Okay, so you train these" }, { "end": 384.32, "start": 378.76, "text": " embeddings, they don't need to mean anything. You simply train them to predict the same thing for" }, { "end": 392.15999999999997, "start": 384.32, "text": " the same objects and different things for different objects. So after that, when you match the corners," }, { "end": 399.64, "start": 392.15999999999997, "text": " you can simply go over you can say, ah, this which one of these two right here has the larger inner" }, { "end": 404.84, "start": 399.64, "text": " product, or you can do like some Hungarian matching and maximize the total inner product" }, { "end": 411.92, "start": 404.84, "text": " or something like this. This was quite surprising to me that it works, but it's based on a line of" }, { "end": 419.08000000000004, "start": 411.92, "text": " research that is already has already established that this can work. Because ultimately, these" }, { "end": 425.56, "start": 419.08000000000004, "text": " things, these two pipelines do not really communicate, right. So I'm going to guess what" }, { "end": 434.16, "start": 425.56, "text": " they learn is sort of a sort of a descriptor of the actual object that's there. Because if both" }, { "end": 440.32, "start": 434.16, "text": " describe the objects that that's there, with their embeddings, their embeddings are going to have a" }, { "end": 444.96, "start": 440.32, "text": " large inner product. And if they describe different objects, then their embeddings are not going to" }, { "end": 450.92, "start": 444.96, "text": " match, right. So even though you train that this objective, I still think that these embeddings" }, { "end": 457.08, "start": 450.92, "text": " would pick up something about the object, something about the visual characteristics of the objects" }, { "end": 463.88, "start": 457.08, "text": " will be very interesting to see whether someone could actually parse out what they what they do," }, { "end": 475.04, "start": 463.88, "text": " because it's almost impossible otherwise for these things to be learnable. Alright, so that's the" }, { "end": 479.71999999999997, "start": 475.04, "text": " that's the goal right here, you want to get these heat maps in these embeddings. And the way you do" }, { "end": 485.6, "start": 479.71999999999997, "text": " it is fairly easy architecturally, you have these two prediction modules, one for top left and one" }, { "end": 491.56, "start": 485.6, "text": " for bottom right. And each of them have three outputs, the heat maps, the embeddings. 
And here" }, { "end": 498.12, "start": 491.56, "text": " the offsets are simply a way for you to deal with the fact that you downsample and by downsampling," }, { "end": 505.36, "start": 498.12, "text": " you have to round certain pixels to certain locations. And then the offsets, they they" }, { "end": 512.5, "start": 505.36, "text": " compensate for this. But I don't want to focus on these right now. So you simply have these two" }, { "end": 520.84, "start": 512.5, "text": " outputs right here. Now we'll look at corner pooling in a second. But how do you train this? So you" }, { "end": 528.44, "start": 520.84, "text": " can now say, okay, if I have a picture like this, there there is exactly two locations in the class" }, { "end": 537.0400000000001, "start": 528.44, "text": " human, where the the top left corner is correct. And that's right here. And that's right here. Okay," }, { "end": 544.52, "start": 537.0400000000001, "text": " so two locations. So I fill I make my matrix, my target matrix with a one here and the one here," }, { "end": 552.76, "start": 544.52, "text": " and zeros everywhere else. Alright 0000000000. And I train my network to give me this particular" }, { "end": 561.64, "start": 552.76, "text": " thing as an output for these heat for this heat map in the channel human. This this might work," }, { "end": 570.96, "start": 561.64, "text": " but it is more profitable, let's say, if you allow for some slack. So what they say is, you know," }, { "end": 577.72, "start": 570.96, "text": " since if I'm anywhere within this orange circle right here with my prediction, my resulting bounding" }, { "end": 583.6, "start": 577.72, "text": " box is still going to overlap fairly well with the ground truth bounding box. And the accuracy" }, { "end": 589.2800000000001, "start": 583.6, "text": " measures for these things, I think, are based on how much you overlap with the ground truth bounding" }, { "end": 600.64, "start": 589.28, "text": " boxes. So what they do basically is they, they give, they put a one in the spot where the actual" }, { "end": 609.4, "start": 600.64, "text": " corner is, and then they put like a 0.9 around it 0.9 0.9, and so on, and they kind of flatten out." }, { "end": 616.1999999999999, "start": 609.4, "text": " So this is sort of a Gaussian right here in multiple dimensions. If that drawing makes any" }, { "end": 625.08, "start": 616.2, "text": " sense. And they say, well, you the closer you are, basically, the more reward you get. So you train" }, { "end": 633, "start": 625.08, "text": " it to predict in this general location. Now, of course, the size, exact size of this Gaussian has" }, { "end": 641.72, "start": 633, "text": " to be dependent on the actual size of the box itself. And they have, they, they regard that and" }, { "end": 647.64, "start": 641.72, "text": " say exactly how they calculate these Gaussians. But for the understanding, it's just important" }, { "end": 654.0400000000001, "start": 647.64, "text": " that they do give some slack here in how they compute the loss with respect to the heat map." }, { "end": 665.12, "start": 654.0400000000001, "text": " Now the loss with respect to the to the embeddings is pretty simple, pretty straightforward. So" }, { "end": 672.48, "start": 665.12, "text": " remember these embeddings, you have two embeddings per, you have the top left embedding, that's the" }, { "end": 680.04, "start": 672.48, "text": " ETK, the top embed, the top corner embedding, and you have the bottom right embedding. 
And what you" }, { "end": 688.12, "start": 680.04, "text": " want is for them to be close together when they describe the same object, right? So this is this" }, { "end": 694.98, "start": 688.12, "text": " push and pull losses. So in the pull loss, what you want to do is you want to minimize the distances" }, { "end": 702.64, "start": 694.98, "text": " of these two things to this thing right here. And this thing is simply, so EK is simply the the mean," }, { "end": 711.5600000000001, "start": 702.64, "text": " so it's ETK plus EBK divided by two, that's simply if your top left corner is here and your bottom" }, { "end": 716.48, "start": 711.5600000000001, "text": " right corner is here, and they have embeddings, this one has this embedding, and this one has" }, { "end": 724.44, "start": 716.48, "text": " that embedding, then the mean of the two embeddings, which I guess is whatever this right here. Yeah," }, { "end": 730.12, "start": 724.44, "text": " that's about the mean. So the location is not important, actually. So it's about the embedding" }, { "end": 735.6800000000001, "start": 730.12, "text": " vectors. It's not about where the corners are. The two embedding vectors must be close together," }, { "end": 741.72, "start": 735.6800000000001, "text": " and you model that not directly by making them close to each other, but by making both close to" }, { "end": 750.08, "start": 741.72, "text": " their mean. And that probably saves you some back propagation troubles where you, if you have two" }, { "end": 755.48, "start": 750.08, "text": " moving parts in a loss function, and you optimize both, then you tend to, so you have two things," }, { "end": 762.08, "start": 755.48, "text": " you want to bring them closer together, they might tend to overshoot or something like this. Okay," }, { "end": 768.8000000000001, "start": 762.08, "text": " so this brings those two closer together. And in the push loss, what you want to do is you want to" }, { "end": 778.88, "start": 768.8000000000001, "text": " simply make the mean between the two, remember this is this is the mean, this the mean embedding" }, { "end": 787.76, "start": 778.88, "text": " of this object far away from the mean embedding of any other object in the picture. Okay, so this" }, { "end": 796.04, "start": 787.76, "text": " here is a margin loss, which means that you cap it at some point. So if they're close together," }, { "end": 804.72, "start": 796.04, "text": " if the embeddings of two different objects are close together, you can see here this" }, { "end": 812.48, "start": 804.72, "text": " quantity will be small, and therefore it will lead to this delta, you give a loss of one," }, { "end": 818.96, "start": 812.48, "text": " the delta here is one in this case. But as they get further apart, you're more and more happy," }, { "end": 827.96, "start": 818.96, "text": " and you reduce your loss until you don't give, you don't give any any bonus for them being super far" }, { "end": 835.88, "start": 827.96, "text": " apart, you don't simply don't want them to be closer together than one. All right. In their case," }, { "end": 841.6800000000001, "start": 835.88, "text": " I think they have a the dimension of these vectors is actually one, which basically means they just" }, { "end": 848.9200000000001, "start": 841.6800000000001, "text": " output the single number, which I find astonishing that that works. Yes, they use embeddings of one" }, { "end": 861.8, "start": 848.92, "text": " dimension. So they just use numbers. 
Astonishing that it works, but okay. So that's how you train" }, { "end": 868.7199999999999, "start": 861.8, "text": " the the embedding output embeddings close together of the same objects of the two corners," }, { "end": 875.8, "start": 868.7199999999999, "text": " and embeddings far apart for different objects. Alright, so we can now predict where the corners" }, { "end": 882.8399999999999, "start": 875.8, "text": " are, and we can match them. Now, one center part of this is the corner pooling. And why is the" }, { "end": 889.7199999999999, "start": 882.8399999999999, "text": " corner pooling necessary? So what's the problem with this sort of approach? The problem, and they" }, { "end": 898.88, "start": 889.7199999999999, "text": " have an example right here, the problem when you want to predict a corner of an object is that in" }, { "end": 906.24, "start": 898.88, "text": " a CNN, what CNN is good at is like local neighborhood information, right? So if you have to" }, { "end": 910.64, "start": 906.24, "text": " predict, let's go for the moon, actually, here, let's predict the location of the moon, if I have" }, { "end": 914.96, "start": 910.64, "text": " to predict the location of the moon, and I'm a CNN, and I have this receptive field, I'm like," }, { "end": 920.04, "start": 914.96, "text": " Oh, yes, it's like in here. And then I have this receptive field. And I'm like, yes, it's in here." }, { "end": 925.2, "start": 920.04, "text": " And then I zoom in on the corner, not on the moon itself, but on the corner where I need to predict," }, { "end": 933.2, "start": 925.2, "text": " right? At some point, I like I'm sort of, I'm like, wait, wait, where is it? Because in this" }, { "end": 941.6, "start": 933.2, "text": " particular receptive field of this resolution, I have I have no clue if the moon is close, right?" }, { "end": 948.72, "start": 941.6, "text": " So at the location where the actual bounding box is, I have no local information of the object," }, { "end": 957.36, "start": 948.72, "text": " because usually objects are not squares, they're sort of round like the moon, or like here, the" }, { "end": 963.4, "start": 957.36, "text": " plane, these corners, they have no local information about where the plane is. And corner pooling is a" }, { "end": 971.36, "start": 963.4, "text": " method to propagate that information along the axis. So what corner in corner pooling, what you" }, { "end": 980, "start": 971.36, "text": " would allow the location here in the CNN to do is to not only look at its itself, so its own location," }, { "end": 989.5600000000001, "start": 980, "text": " but actually to extend its field of view over to the right, and down to the bottom, it's asked to" }, { "end": 998.84, "start": 989.5600000000001, "text": " predict a top left corner. So what you do is you max pool everything from here to this corner" }, { "end": 1006.6800000000001, "start": 998.84, "text": " detector. So the corner detector will basically be able to detect whenever in either this band" }, { "end": 1014.8000000000001, "start": 1006.6800000000001, "text": " right here. So whenever in this band right here, there is the top, like the top of an object, like" }, { "end": 1021.8000000000001, "start": 1014.8000000000001, "text": " the top of the moon here, this corner detector can say, ah, that's probably the right height right" }, { "end": 1030.52, "start": 1021.8, "text": " here for a corner. 
And it combines this with the information of this side here, where it also says," }, { "end": 1036.96, "start": 1030.52, "text": " oh, there is the side of the moon, that's probably the correct, you know, up down. So there's probably" }, { "end": 1045.48, "start": 1036.96, "text": " a corner right here. Okay, whereas a location right here would get the same signal from the right," }, { "end": 1051.88, "start": 1045.48, "text": " or like almost the same signal, plus this signal right here. But in essence, it would also detect" }, { "end": 1056.92, "start": 1051.88, "text": " the top of the moon, but it would not get the same signal from down here. And therefore it says," }, { "end": 1064.2, "start": 1056.92, "text": " ah, even though to the right, I see some the top of an object, I don't see the left of an object to" }, { "end": 1070.52, "start": 1064.2, "text": " my bottom. So I'm not going to predict a corner right here. Alright, so this corner pooling goes" }, { "end": 1076.04, "start": 1070.52, "text": " for the top left, and of course, equivalently goes for the bottom right, that can always max pools to" }, { "end": 1083.24, "start": 1076.04, "text": " up and to the left of itself. And that's exactly what you see here. So in this corner pooling," }, { "end": 1091.28, "start": 1083.24, "text": " what you can do is you can propagate signal to the left and to up, and then you add the two" }, { "end": 1096.68, "start": 1091.28, "text": " informations, and that will give you your output feature. And you can calculate this actually" }, { "end": 1103.0800000000002, "start": 1096.68, "text": " fairly efficiently by doing like, like you do a cumulative sum, you do like a cumulative maximum" }, { "end": 1110.96, "start": 1103.0800000000002, "text": " across the different axes, and then you simply add two arrays. And that's it. So you simply put" }, { "end": 1118.4, "start": 1110.96, "text": " the corner pooling before you predict the these different outputs right here, the heat maps and" }, { "end": 1125.04, "start": 1118.4, "text": " the embeddings, which means that this hourglass network is not affected by this. Just the" }, { "end": 1131.96, "start": 1125.04, "text": " predictors of heat maps and embeddings, they then get the information from this hourglass network" }, { "end": 1139.6, "start": 1131.96, "text": " into these into these directions. Okay, I think that's a pretty, pretty neat method of solving" }, { "end": 1146.24, "start": 1139.6, "text": " this. And here they show how you can calculate this. And then the corner pooling is right here," }, { "end": 1152.84, "start": 1146.24, "text": " they do add a skip connection here. Because sometimes, if you just aggregate this information," }, { "end": 1161.32, "start": 1152.84, "text": " you might, you might actually get confused because so the trouble of course comes when there are" }, { "end": 1171.24, "start": 1161.32, "text": " multiple, like different objects that have, you know, the same top. And then there's also a person" }, { "end": 1179.3999999999999, "start": 1171.24, "text": " right here that so it gets like a signal, it gets another signal that there is the left side of a" }, { "end": 1187.8400000000001, "start": 1179.4, "text": " person right here. Or maybe, you know, not like this. So it will it will predict like a corner," }, { "end": 1195.6000000000001, "start": 1187.8400000000001, "text": " maybe here, where there is none. 
So it's, it's, sometimes it is important to have local information" }, { "end": 1201.0800000000002, "start": 1195.6000000000001, "text": " still. And that's exactly what this skip connection is supposed to address. I guess the situation up" }, { "end": 1208.5600000000002, "start": 1201.0800000000002, "text": " here would be resolved by the different embeddings. But still, so you have that you add and you put" }, { "end": 1214.08, "start": 1208.56, "text": " another bunch of convolutional layers on top of that, and then you'll get your predictions. And" }, { "end": 1221.28, "start": 1214.08, "text": " that's it. You mix all the losses. So there is a detection loss from the embeddings, there's the" }, { "end": 1228.32, "start": 1221.28, "text": " sorry, from the heat maps, there is the pull and push losses for the embeddings. And there's this" }, { "end": 1237.32, "start": 1228.32, "text": " offset loss that you train to compensate for the down the down sampling errors. And that's it. And" }, { "end": 1244.52, "start": 1237.32, "text": " they ablate the various things here, basically, they show that they're better than other one shot" }, { "end": 1251.36, "start": 1244.52, "text": " or one stage predictors. So apparently, there's one stage predictors where you have a single pass" }, { "end": 1256.2, "start": 1251.36, "text": " through a neural network. And there's two stage predictors where you have multiple or two passes" }, { "end": 1262.4399999999998, "start": 1256.2, "text": " through different neural networks. And they compete in the in the one, one stage neural network" }, { "end": 1270.1200000000001, "start": 1262.44, "text": " category, if so to say. And they show that they get significant improvements with and due to this" }, { "end": 1277, "start": 1270.1200000000001, "text": " corner pooling, which is pretty cool to see, because it makes sense. It sort of makes sense," }, { "end": 1288.04, "start": 1277, "text": " how you would like to think about it like this. And to see that it helps is pretty neat. Yeah," }, { "end": 1295.3999999999999, "start": 1288.04, "text": " they also investigate how large they have to make these these Gaussians and so on. And these are" }, { "end": 1303.1599999999999, "start": 1295.3999999999999, "text": " some qualitative examples, you can see that without the corner pooling, what you'll get is that so the" }, { "end": 1309.08, "start": 1303.1599999999999, "text": " top here and the left and the right are correct are detected correctly. But you can see that" }, { "end": 1315.48, "start": 1309.08, "text": " probably the network thinks that there is an extension of the object right here, and therefore," }, { "end": 1327.4, "start": 1315.48, "text": " doesn't do doesn't do a good job. Because this this position right here, it has no access to sort" }, { "end": 1334.92, "start": 1327.4, "text": " of it has to use like a long range access, it can't it can't really look in detail at the features" }, { "end": 1340.76, "start": 1334.92, "text": " here or here. So when it scans up and down the side, where the bottom corner where the bottom" }, { "end": 1346.52, "start": 1340.76, "text": " break is, it can it can only look at very coarse features, because it has to basically transmit" }, { "end": 1351.8, "start": 1346.52, "text": " information in the CNN of a higher layer and the higher layer has a higher receptive field," }, { "end": 1358.2, "start": 1351.8, "text": " which means it has a lower resolution. 
So it can't really go and look very in very detailed fashion" }, { "end": 1367.24, "start": 1358.2, "text": " at this border right here. So it misses it. Okay. Same right here, as you can see, so there are a" }, { "end": 1374.6, "start": 1367.24, "text": " number of failure cases that they can now solve using this method compared to if they didn't use" }, { "end": 1382.76, "start": 1374.6, "text": " the corner pooling. They show some also some times where their method fails, for example, here," }, { "end": 1391.88, "start": 1382.76, "text": " it matches the top, top left and bottom right corners of two different objects. Because they" }, { "end": 1399.48, "start": 1391.88, "text": " their embeddings were close enough. And yeah, that's what I'm saying. I'm I'm wondering what" }, { "end": 1407.3200000000002, "start": 1399.48, "text": " these embeddings actually learn, because they are generated independently. So not entirely sure." }, { "end": 1415.0800000000002, "start": 1408.5200000000002, "text": " It's also not exactly what I had in mind when I formulated this idea in the last video. But" }, { "end": 1421.6399999999999, "start": 1415.08, "text": " I'm actually not sure what I had in mind myself, to be honest. But in my mind, it seemed to be like" }, { "end": 1428.12, "start": 1421.6399999999999, "text": " you should be able to train a network. If there is an object right here, you could train a network" }, { "end": 1436.36, "start": 1428.12, "text": " to predict for any given location, let's say how many pixels to its bottom right, or maybe you want" }, { "end": 1442.6, "start": 1436.36, "text": " to normalize by the area that's there, are part of a particular object. And then you could use," }, { "end": 1448.52, "start": 1442.6, "text": " you could predict each pixel and use like the differences between the differences between the" }, { "end": 1457.8799999999999, "start": 1448.52, "text": " points as as scores for bounding boxes. I don't know if you see what I mean. You could basically" }, { "end": 1465.3999999999999, "start": 1457.8799999999999, "text": " tell the you have you'd have one network predict everything to the to the bottom right, and then" }, { "end": 1471.32, "start": 1465.3999999999999, "text": " you'd use the differences. And the transformers would be very good at that because they can" }, { "end": 1477.32, "start": 1471.32, "text": " sort of have this attention between each pairs of points and so on. I'm not entirely sure," }, { "end": 1484.9199999999998, "start": 1477.32, "text": " but this might just be crap. Yeah, here's some more examples. This appears to work really nicely." }, { "end": 1491.1599999999999, "start": 1484.9199999999998, "text": " But of course, in the qualitative, qualitative examples, it always works nicely, but they also" }, { "end": 1497.24, "start": 1491.1599999999999, "text": " demonstrated. All right, I found this paper all in all pretty cool, pretty neat. It's a simple idea." }, { "end": 1502.52, "start": 1497.24, "text": " It's executed well. I don't have the feeling that there are like too many tricks in here." }, { "end": 1509.88, "start": 1502.52, "text": " And they show really that the improvement seems to be due to their their corner pooling method." }, { "end": 1517.72, "start": 1509.88, "text": " And that's pretty neat. So if you like this paper, make sure to check it out. And I'll see you next" }, { "end": 1528.04, "start": 1517.72, "text": " time. Bye bye." } ]
nxEr4VNgYOE
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Movement Pruning: Adaptive Sparsity by Fine-Tuning (Paper Explained)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "prune", "pruning", "transfer learning", "weights", "magnitude", "gradient", "moving", "small", "importance", "huggingface", "nlp", "natural language processing", "squad", "mnli", "bert", "transformer", "attention", "cnn", "distillation", "teacher", "sparse", "sparsity", "question answering", "mobile", "edge", "tune", "fine-tune" ]
Deep neural networks are large models and pruning has become an important part of ML product pipelines, making models small while keeping their performance high. However, the classic pruning method, Magnitude Pruning, is suboptimal in models that are obtained by transfer learning. This paper proposes a solution, called Movement Pruning and shows its superior performance. OUTLINE: 0:00 - Intro & High-Level Overview 0:55 - Magnitude Pruning 4:25 - Transfer Learning 7:25 - The Problem with Magnitude Pruning in Transfer Learning 9:20 - Movement Pruning 22:20 - Experiments 24:20 - Improvements via Distillation 26:40 - Analysis of the Learned Weights Paper: https://arxiv.org/abs/2005.07683 Code: https://github.com/huggingface/transformers/tree/master/examples/movement-pruning Abstract: Magnitude pruning is a widely used strategy for reducing model size in pure supervised learning; however, it is less effective in the transfer learning regime that has become standard for state-of-the-art natural language processing applications. We propose the use of movement pruning, a simple, deterministic first-order weight pruning method that is more adaptive to pretrained model fine-tuning. We give mathematical foundations to the method and compare it to existing zeroth- and first-order pruning methods. Experiments show that when pruning large pretrained language models, movement pruning shows significant improvements in high-sparsity regimes. When combined with distillation, the approach achieves minimal accuracy loss with down to only 3% of the model parameters. Authors: Victor Sanh, Thomas Wolf, Alexander M. Rush Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Hi there, today we're looking at Movement Pruning: Adaptive Sparsity by Fine-Tuning by Victor Sanh, Thomas Wolf and Alexander M. Rush of Hugging Face and Cornell University. On a high level, this paper proposes that if you have a transfer learning objective and you want to do pruning, you should not prune by weight magnitude, you should prune by how much the weights move during the transfer learning. This yields better results in the very sparse model regimes and is specifically relevant to current NLP transfer learning setups such as BERT models. So if you like content like this, consider subscribing and sharing it with your friends, and as always, leave a comment if you have anything to say about this. Alright, let's dive in. They say magnitude pruning is a widely used strategy for reducing model size in pure supervised learning. So what is magnitude pruning? Say I have a convolutional neural network and I input my little cat right here, and I have a bunch of layers. Each of these layers is made up of units, the neurons, and the next layer is also made up of neurons. What kind of neural network this is isn't that important, but what is important is that you have these connections from neuron to neuron, and in, let's say, a fully connected network, every neuron is connected to every other neuron. In a CNN that would be slightly different, but in essence you have a lot of connections here, and these are usually called weights. Now the problem is that if I train these giant neural networks and want to ship them, for example, to mobile devices, to my customers, they won't be able to download gigabytes of models, or even hundreds of megabytes; it's just not possible. So what we want to do is prune this model, which means removing a lot of these weights without losing accuracy. So imagine I have a trained network, an image classifier for cats versus dogs, trained to a good accuracy. I want to delete weights but retain the performance, and these methods are called pruning. What people usually do is go in a stepwise fashion: they delete some weights they decide they don't need, then they retrain the pruned network, then they go again and delete a few more, and so on, until the network is the size they want, with the hope of not losing too much accuracy. So the question is how you select which weights you need and which ones you don't, and usually this is done by so-called magnitude pruning. You look at the weights, which will have some distribution: there are very negative weights over here and very large positive weights over here. And you say, okay, the weights that are very large probably contribute a lot to the signal within the network, and the weights that are quite small, given all the noise, are probably not that important. So I'm going to cut off basically right here, and everything inside this band I'm going to delete; those are the non-important weights, whereas on the outside, those are the important weights.
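To make this concrete, here is a minimal sketch of magnitude pruning in PyTorch; the function name, the keep_fraction parameter and the layer size are my own illustrative choices, not something from the paper:

```python
import torch

def magnitude_mask(weight: torch.Tensor, keep_fraction: float) -> torch.Tensor:
    # keep the top `keep_fraction` of entries by absolute value, zero the rest
    k = max(1, int(keep_fraction * weight.numel()))
    threshold = weight.abs().flatten().topk(k).values.min()
    return (weight.abs() >= threshold).float()

w = torch.randn(256, 256)       # stand-in for a trained layer's weight matrix
mask = magnitude_mask(w, 0.10)  # keep only 10% of the weights
w_pruned = w * mask             # everything below the cutoff is zeroed out
```

Note that there is no fixed threshold value here: the cutoff falls out of ordering the weights by magnitude and keeping the top fraction.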
This is called magnitude pruning because it goes by the magnitude of the weight, the absolute value. So you don't actually need a single threshold; you simply need a method to order the weights, and then you keep removing them until you're satisfied with the size. So this is magnitude pruning. Now what's the problem with magnitude pruning in these kinds of tasks? They say, however, that it is less effective in the transfer learning regime that has become standard for state-of-the-art natural language processing applications. So what do you do in these transfer learning regimes? Let's go with the image example right here, even though this is mostly used in NLP; we can do the same thing. Say we have a classifier for cats and dogs, and we had a big database of cat and dog images, so we were able to train that fairly well. We don't prune it yet; we have this full network. Now we want to adapt this to a task where we want to recognize whether or not the animal is sick. So we develop this app for veterinarians, a short screening for a particular disease a cat might have, and we already have this cats-and-dogs classifier, so it's reasonable to assume that this classifier has some good features for working with cat and dog images. Let's assume that for this other task we just have a tiny little data set, which is not enough to train a neural network of this size. So in the first step we train the big neural network on cats versus dogs, and then we do transfer learning: we transfer all the weights, and here we have a different task, sick or not sick. Over here the output is cat or dog, and here it's sick or not sick. Of course we can't transfer these particular output weights, but we hope that the features will be roughly the same, so we transfer them, and then we train these weights, including the new head right here, on the little data set. The hope is that we already have a good starting point and only need to learn the specifics of what makes these two data sets different, instead of having to learn the entire task of dealing with cat and dog images from scratch. Okay, so this is called transfer learning.
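In code, this head-swap looks roughly like the following; I'm using torchvision's ResNet-18 purely as a stand-in for the cats-versus-dogs network, and the learning rate is an arbitrary illustrative choice:

```python
import torch
from torchvision import models

model = models.resnet18(pretrained=True)             # features from the big task
model.fc = torch.nn.Linear(model.fc.in_features, 2)  # fresh head: sick / not sick

# fine-tune all weights on the small data set; they start from a good point,
# so they are expected to move only a little during this phase
optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)
```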
Now in this case we combine the two. First we transfer learn, like when we build this app for vets, and then we might say, oh, this is not only for vets, this is actually for anyone who has a cat or a dog at home. So we could build an app where anyone at home can scan their cat, and it outputs a probability of the cat having that disease. But this neural network is still the same size as the original one, so now we want to do the pruning: we want this network to become sparse, to only have a couple of connections left, such that it's a few kilobytes large but retains performance. Now they say that when you do this step, you can't just do magnitude pruning like before. Why not? Because this model is not the result of a regular training process; it is the result of a transfer learning process, where first you do the big training and then you adapt it. And why does that matter? Ultimately, what you want to do is prune the non-important weights, and there could be a weight that is very important for the cat-versus-dog task but not important for the sick-versus-not-sick task. We also know that in these transfer learning settings the weights don't tend to move that much in general; the research shows that the beginning of training is important, but once you adapt or transfer learn a trained network, the weights won't move much. So in essence, a weight might start out right here and stay around this place; maybe it goes a little bit down because it's not important, but it won't move much during transfer learning. That's just a property of transfer learning. So this paper says we can't just use magnitude pruning when we transfer learn, because magnitude pruning will assign the importance based on the original task, cat versus dog, and we will misspecify the importance of the weights. What we should do is measure the importance with respect to the new task. How do they achieve that? On a high level: if we start out with a point over here, we should observe how it moves during transfer learning. If it moves towards zero, it's probably not that important for the new task, and if it moves to become even larger, it's probably important for the new task. Okay, that's the high level. Now, how do you measure how it moves, and how exactly do you do all of this during training such that you don't make mistakes? That's the point of this paper. They say: we propose movement pruning, a simple, deterministic, first-order weight pruning method that is more adaptive to pretrained model fine-tuning. We give mathematical foundations to the method and compare it to existing zeroth- and first-order pruning methods. So what do they actually do? They note that each network layer can basically be defined as a matrix multiplication by a weight; you can express pretty much any neural network as such multiplications. You have x, the incoming signal in each layer, and you multiply it by the weight matrix W. Now if you prune the neural network, what you're saying is: in here I have the matrix M, which is a mask; the mask is either zero or one depending on whether a weight is active or not. This is not a matrix multiply, it's actually a Hadamard product, an element-wise multiplication with the mask matrix. And what decides on this mask? The mask is decided by this matrix S: for each entry in W, S decides how important it is. In the classic sense, in magnitude pruning, S is just going to be the absolute value of W_ij, and then top_v simply means that the entries with the largest magnitude become one in the mask and everything else becomes zero. That's how W determines S and S determines M, and what you ultimately use is the masked weight matrix.
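As a sketch, such a generic pruned layer computes a = (W ⊙ M) x with M = top_v(S); the helper name masked_forward and the keep fraction v below are hypothetical, but the structure mirrors this formulation:

```python
import torch

def masked_forward(x: torch.Tensor, W: torch.Tensor, S: torch.Tensor, v: float) -> torch.Tensor:
    # M = top_v(S): one for the fraction v of entries with the highest score
    k = max(1, int(v * S.numel()))
    threshold = S.flatten().topk(k).values.min()
    M = (S >= threshold).float()
    return (W * M) @ x              # Hadamard product with the mask, then matmul

x = torch.randn(64)
W = torch.randn(32, 64)
out = masked_forward(x, W, S=W.abs(), v=0.1)  # with S = |W| this is magnitude pruning
```

The only thing that changes between the pruning methods is where S comes from.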
But now what we want is to make S based on the movement, and movement is not really a well-defined concept, because it happens over many steps. So how do you capture the movement in a dynamic way? This paper says you should do it by gradient: you should observe the gradient of your loss function with respect to this importance matrix S. What does that mean? Consider this quantity right here: if S is the importance of a particular connection and the gradient with respect to it is large, that means the loss pulls this connection strongly in some direction. We're not yet talking about which direction; the gradient has a sign, it's either positive or negative. So this quantity is a direct measure of how much the new task wants this particular importance score to move, of how much the loss function pulls on it. And now you can decide, and they have a diagram for this. Consider the negative gradient of L with respect to the weight (negative because you do gradient descent) together with the value of W. If the negative gradient is positive and W is already positive, the weight is already high but the loss function wants to push it even higher, so that must be a very important weight. The same goes if the negative gradient is negative and the weight is already negative: the weight has a negative sign, and the optimization procedure says it should become even more negative, so that's probably a good weight as well. The other two cases mean the weight is pulled towards zero. Now it's entirely possible that such a weight crosses zero and becomes very large on the other side, but that violates our basic assumption that transfer learning doesn't move the weights much; what you care about is this local neighborhood. So you can make the fair assumption that a weight is not that important when the negative gradient goes against the sign of the weight. This is discrete so far, but you can assign an actual number based on how large the gradient is and how large the weight already is, and thereby make a score: the importance score, as you can see, is the weight multiplied by the gradient of the loss with respect to the weight. And they can show mathematically that if you do this over multiple steps, optimizing while you prune (they do a sort of soft pruning, so mistakes can be corrected later on; they have hard and soft variants, but in any case mistakes can be corrected), then the importance scores end up being an accumulation of this quantity over the entire training. And that's pretty cool, because it means you eventually have a consistent estimator of these importance scores across the whole training procedure.
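Here is a rough sketch of that accumulation; the function name, the scores dictionary and the alpha scaling are my own illustrative choices, but the accumulated quantity, minus gradient times weight, is the movement score just described:

```python
import torch

def accumulate_movement_scores(model: torch.nn.Module, scores: dict, alpha: float = 1.0):
    # call once per optimization step, right after loss.backward()
    with torch.no_grad():
        for name, p in model.named_parameters():
            if p.grad is not None and name in scores:
                # -grad * weight is positive exactly when gradient descent is
                # pushing the weight further away from zero
                scores[name] -= alpha * p.grad * p

model = torch.nn.Linear(64, 32)
scores = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
# ... in the fine-tuning loop: loss.backward(); accumulate_movement_scores(model, scores)
```

At the end of fine-tuning you would keep the top-v weights by these accumulated scores instead of by magnitude.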
Okay. Because the main fear with something like this is of course that it's very brittle and depends heavily on the training dynamics; who knows if something bad happens in step one, and so on. But the math behind this gives evidence that it can act as a self-correcting mechanism and is actually not too dependent on the particular training dynamics. So, to the experimental setup; they have some quirks here, but first let's go through the different methods they compare. There's magnitude pruning, which is zeroth order, meaning you just look at the weight magnitude, that's it. The masking is top_v, which means you just pick the top fraction, the objective is just the loss, and the scores are just the absolute values; we've seen this. Movement pruning, on the other hand, is first order, which means you look at the movement, in our case the gradient; those are the importance scores, and you use this straight-through estimator, which is basically a way of saying that even though you're masking some weights in the forward step, you shouldn't mask them in the backward step, because you still want gradient signal to get through. So if you have layers and a weight right here (at least that's how I understand it, I have not read that paper), if you mask this weight, you still want the gradient to flow backwards, because you still need the importance scores for the weights below that connect to it. I think that's what is meant, though I'm not entirely sure. And the objective function is again the actual loss function. This is contrasted with a baseline called L0 regularization, which is quite similar and also first order, but it has a regularizer, it uses the Gumbel softmax to determine the scores, it has a different score function, and it uses this continuous hard-concrete masking function. And they have a variant of movement pruning that tends to perform a little bit better, soft movement pruning, where instead of optimizing just the loss function they optimize the loss function plus something. As you can see, it has a thresholding masking function, where the threshold is dynamic, determined by the importance scores, and there is a regularizer that makes the importance scores sparse. So instead of saying we just want the top v percent of weights, they put weight on this lambda, which causes S to be sparse, and if they're not happy with how many weights remain, they can simply increase or decrease lambda to reach their desired sparsity. Of course there is a direct trade-off with the loss function: the more weight you put on lambda, the less weight you effectively put on the loss itself. So here the trade-off is very explicit, whereas in basic movement pruning it's just given by masking away the bottom (1 - v) fraction of the weights. But the score function is the same.
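A minimal sketch of such a masked layer with the straight-through trick might look like this; the class name, the initialization and the fixed keep_fraction are assumptions on my part (the actual implementation additionally schedules the sparsity over training):

```python
import torch
import torch.nn.functional as F

class MovementPrunedLinear(torch.nn.Module):
    """Linear layer gated by learned importance scores S via M = top_v(S)."""

    def __init__(self, in_features: int, out_features: int, keep_fraction: float = 0.1):
        super().__init__()
        self.weight = torch.nn.Parameter(0.02 * torch.randn(out_features, in_features))
        self.scores = torch.nn.Parameter(torch.zeros(out_features, in_features))
        self.keep_fraction = keep_fraction

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        k = max(1, int(self.keep_fraction * self.scores.numel()))
        threshold = self.scores.flatten().topk(k).values.min()
        hard = (self.scores >= threshold).float()
        # straight-through estimator: the forward pass uses the hard 0/1 mask,
        # the backward pass treats the mask as the scores themselves, so the
        # loss gradient reaches self.scores despite the discrete top_v step
        mask = (hard - self.scores).detach() + self.scores
        return F.linear(x, self.weight * mask)
```

With this trick, the gradient that lands on each score entry is the loss gradient of the masked weight times the weight itself, which is the movement quantity from before.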
Now, there are quite a number of tricks here, like this sparsity scheduling function and so on. As always in NLP, and with any big models, there are a bunch of engineering tricks that make everything work better, and you can never tell exactly how much is due to those and how much is due to the actual technique. But you can assess whether it's done well, and here the rationale makes sense, which is why I tend to think it is actually a better method; and the experiments are very convincing, let's say. This is just a pictorial comparison, where you can see that magnitude pruning only looks at the weights after you fine-tune and cuts away everything in the middle; it doesn't care what the weights were before. Movement pruning, however, looks at the combination of what the weights were before and what they are now, and it cuts away everything where the weights moved towards zero, which are these quadrants right here, and it leaves in everything where they moved away from zero; or rather, that gives the ordering, by how much they moved. Okay, experiments. As you might have figured out by now, in the machine learning and especially the NLP community, the methods presented always outperform the previous methods, and in this case it's pretty special. They test this on a number of tasks: SQuAD, MNLI and QQP, which are quite hard tasks from an NLP perspective. SQuAD is question answering, MNLI is language inference, so I would guess these are on the harder side of tasks for an NLP system, which is fairly cool. And as you can see here, first focus on this MaP, which is magnitude pruning, so that's the baseline if you will, and on the purple one, SMvP, which is soft movement pruning; you can also focus on the MvP right here, but they're approximately the same. The RPP, which you can maybe see in the graph, performs fairly well even compared to the full model; it's another baseline, but we just want to compare those two. And you can see that in this regime magnitude pruning outperforms movement pruning, but in this other regime movement pruning is much better, and that's where the percentage of remaining weights is very, very low. So this is the extreme sparse case, where you only have 10% or even 3% of the weights left, and there movement pruning outperforms magnitude pruning by a lot. This happens in all of these tasks, as you can see right here. They also discover that you can go further if you then distill the model. Distillation is yet another technique you can use to boost the performance of a transfer-learned model. In distillation, you have the model that you transfer learned and you have the pruned version, and you would of course train the pruned version on the data set; but what you can additionally do is distill the model trained on the same task with all its weights, which is presumably better because it still has all the weights. You run a data point through both models; from the full model you get logits, representing a distribution, saying it's about this high for each class. Now instead of assigning only the hard labels from the supervised learning task, one and zero, you also look at what the full model said: the label says it's this class, but the model that's really good says you shouldn't be too sure about it. So you can mix the two losses, and this process of transferring the knowledge of the full model to the pruned one is called distillation, with the full fine-tuned model being the teacher.
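In code, such a mixed objective is typically something like the following sketch; the temperature T, the mixing weight alpha and the T^2 scaling are standard distillation choices, not specifics of this paper:

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    hard = F.cross_entropy(student_logits, labels)   # usual supervised loss
    soft = F.kl_div(                                 # match the teacher's soft distribution
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)                                      # T^2 keeps gradient scales comparable
    return alpha * soft + (1.0 - alpha) * hard       # mix teacher signal and hard labels
```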
Now if you do distillation, you can actually improve your performance even more, and they show that in the experiments here, again especially in the low-parameter regime. You can see, for example on SQuAD, that the distilled movement-pruned method now catches up with the magnitude-pruned method even in the not-so-sparse regime. They also analyze the weights, and as expected, the magnitude-pruned method simply cuts out anything in the middle; that's no surprise. The movement-pruned method, on the other hand, leaves a lot of those weights alive. And since the yellow curve can outperform the red one, it is almost warranted to say that magnitude pruning wasn't the best choice: it's actually better to leave some of those small weights in and to cut some of the large weights out, just based on their movement. The V shape in the middle is of course due to the fact that a weight near zero was probably not super important in the first place, and since this method removes anything that moves towards zero, any point starting around here and moving towards zero would end up being cut. So all the points that end up in this region probably moved towards zero during training and were therefore cut away; there would have been points that started even more in the middle and then moved out, but there just aren't as many, so the V shape is very natural to expect. Then they analyze where in the model the weights get cut. They experiment on BERT base, which is a transformer with 12 layers; if you don't know what BERT is, you can go look at my video on BERT. You can see that magnitude pruning cuts the weights of all layers roughly equally: it goes through the layers and takes away, say, 90 percent of each, so you see 10 percent of weights remaining everywhere. Movement pruning, and especially soft movement pruning, makes a large difference: it removes much, much more of the later layers' weights and keeps the lower-layer weights. I think if you do transfer learning from these language models, it tends to be that the lower layers pick up, if you think of a CNN, the essential features like corners and so on, and the higher layers pick up the task-specific things. If you do a big pre-training task, you might need a lot of information there, but if you then distill and transfer it down to a small task where only a single thing is important, like in SQuAD, where it's only important what the answer to the question is, then you can probably remove a lot of the superfluous high-level information from the pre-training task. I mean, that's my guess here, but they also have explanations for that. So yeah, that was this paper.
If you're still here and you enjoyed it, leave a like, tell me in the comments what you think, and I'll see you next time. Bye bye.
[ { "end": 5.96, "start": 0, "text": " Hi there, today we're looking at movement pruning, adaptive sparsity by fine tuning" }, { "end": 13.040000000000001, "start": 5.96, "text": " by Victor Sun, Thomas Wolff and Alexander M. Rush of Hugging Face and Cornell University." }, { "end": 18.76, "start": 13.040000000000001, "text": " On a high level, this paper proposes that if you have a transfer learning objective" }, { "end": 24.12, "start": 18.76, "text": " and you want to do pruning, you should not do pruning by weight magnitude, you should" }, { "end": 28.64, "start": 24.12, "text": " do pruning by how much the weights move during the transfer learning." }, { "end": 35.6, "start": 28.64, "text": " This yields better results in the very sparse model regimes and is specifically relevant" }, { "end": 41.54, "start": 35.6, "text": " to current NLP transfer learning tasks such as BERT models." }, { "end": 47.040000000000006, "start": 41.54, "text": " So if you like content like this, consider subscribing and sharing it to your friends" }, { "end": 52.120000000000005, "start": 47.040000000000006, "text": " and as always leave a comment if you have anything to say on this." }, { "end": 53.88, "start": 52.120000000000005, "text": " Alright let's dive in." }, { "end": 60.580000000000005, "start": 53.88, "text": " So they say magnitude pruning is a widely used strategy for reducing model size in pure" }, { "end": 62.400000000000006, "start": 60.580000000000005, "text": " supervised learning." }, { "end": 64.82000000000001, "start": 62.400000000000006, "text": " So what is magnitude pruning?" }, { "end": 70.32000000000001, "start": 64.82000000000001, "text": " Now if I have a neural network, let's say I have a convolutional neural network and" }, { "end": 76.36, "start": 70.32000000000001, "text": " I input my little cat right here and I have a bunch of layers right and now if we look" }, { "end": 81.92, "start": 76.36, "text": " at these layers, each of these layers is going to be made up of these units of the neurons" }, { "end": 85.48, "start": 81.92, "text": " and the next layer is also made up of these neurons." }, { "end": 90.96000000000001, "start": 85.48, "text": " Now what kind of neural network that is, it's not that important but what is important is" }, { "end": 96.08, "start": 90.96000000000001, "text": " that you have these connections from neuron to neuron and in let's say a fully connected" }, { "end": 100.28, "start": 96.08, "text": " network every neuron is connected to every other neuron." }, { "end": 105.16, "start": 100.28, "text": " In a CNN that would be slightly different but in essence you have a lot of connections" }, { "end": 108.64, "start": 105.16, "text": " here and these are usually called weights." }, { "end": 110.56, "start": 108.64, "text": " So these are the weights." }, { "end": 117.28, "start": 110.56, "text": " Now the problem is if I train like these giant neural networks and I want to ship them for" }, { "end": 124.46000000000001, "start": 117.28, "text": " example to mobile devices to my customers then they won't be able to download gigabytes" }, { "end": 129.12, "start": 124.46000000000001, "text": " of models or even like hundreds of megabytes of models, it's just not possible." }, { "end": 133.76, "start": 129.12, "text": " So what we want to do is we want to prune this model which means we want to remove parts" }, { "end": 140.28, "start": 133.76, "text": " of these weights, a lot of these weights but we don't want to lose accuracy of the network." 
}, { "end": 144.92, "start": 140.28, "text": " So imagine I have a network and that's trained, it's an image classifier, it's here it's cats" }, { "end": 148.92000000000002, "start": 144.92, "text": " or dogs and I have it trained to a good accuracy." }, { "end": 157, "start": 148.92000000000002, "text": " I want to delete these weights but I want to retain the performance and these methods" }, { "end": 158.4, "start": 157, "text": " are called pruning." }, { "end": 163.32, "start": 158.4, "text": " Now what people do is usually they sort of go in stepwise fashion, they say well first" }, { "end": 170.51999999999998, "start": 163.32, "text": " of all I don't need some of these and then they delete some and then they sort of retrain" }, { "end": 175.12, "start": 170.51999999999998, "text": " the pruned network and after that they go again and they say well I don't really need" }, { "end": 180.04, "start": 175.12, "text": " that one and they don't really need that one so they do it in this stepwise fashion until" }, { "end": 185.84, "start": 180.04, "text": " the network is of the size that they want and the hope is that you don't lose too much" }, { "end": 187.04, "start": 185.84, "text": " accuracy." }, { "end": 192.44, "start": 187.04, "text": " So the question is how do you select which weights you need and which ones you don't" }, { "end": 201.24, "start": 192.44, "text": " need and usually this is done by so called magnitude pruning which means that you look" }, { "end": 210.28, "start": 201.24, "text": " at the weights and the weights they'll have some distribution, there will be very negative" }, { "end": 215.28, "start": 210.28, "text": " so here is very negative weights and here is very large positive weights and what you'll" }, { "end": 220.6, "start": 215.28, "text": " say is that okay probably the weights that are very large they contribute a lot to the" }, { "end": 225.6, "start": 220.6, "text": " signal of the network within the network and the weights that are quite small they're you" }, { "end": 228.95999999999998, "start": 225.6, "text": " know since there's all this noise and stuff they're probably not that important so I'm" }, { "end": 235.6, "start": 228.95999999999998, "text": " going to cut off basically right here and everything that's in here I'm going to delete" }, { "end": 240.45999999999998, "start": 235.6, "text": " those are the non-important weights whereas on the outside those are the important weights." }, { "end": 244.6, "start": 240.45999999999998, "text": " This is called magnitude pruning because it goes by the magnitude of the weight the absolute" }, { "end": 247.44, "start": 244.6, "text": " value of the weight." }, { "end": 253.07999999999998, "start": 247.44, "text": " So you don't actually need so there's not one threshold here you don't need a threshold" }, { "end": 257.76, "start": 253.07999999999998, "text": " you simply need a method to order the weights right and then you keep removing them until" }, { "end": 260.76, "start": 257.76, "text": " you're satisfied with the size." }, { "end": 263.34, "start": 260.76, "text": " So this is magnitude pruning." }, { "end": 268.52, "start": 263.34, "text": " Now what's the problem with the magnitude pruning in these kinds of tasks?" }, { "end": 273.92, "start": 268.52, "text": " They say however it is less effective in the transfer learning regime that has become standard" }, { "end": 278.48, "start": 273.92, "text": " for state-of-the-art natural language processing applications." 
}, { "end": 281.08000000000004, "start": 278.48, "text": " So what do you do in these transfer learning regimes?" }, { "end": 285.88, "start": 281.08000000000004, "text": " In the transfer learning regime and actually let's go with the image example right here" }, { "end": 289.82, "start": 285.88, "text": " even though it's mostly used in NLP we can do the same thing." }, { "end": 295.04, "start": 289.82, "text": " So let's say we have a classifier here for cats and dogs our classifier and we had a" }, { "end": 300.44, "start": 295.04, "text": " big big database of cats and dogs images right so we were able to train that fairly well" }, { "end": 303.42, "start": 300.44, "text": " and we don't prune it yet we have this full network." }, { "end": 312.16, "start": 303.42, "text": " Now we want to adapt this to a task where we want to recognize whether or not the animal" }, { "end": 313.74, "start": 312.16, "text": " is sick." }, { "end": 319.28000000000003, "start": 313.74, "text": " So we developed this app for a veterinarian and it's like a short screening for a particular" }, { "end": 325.14, "start": 319.28000000000003, "text": " disease that a cat might have and we already have this cats and dogs classifier so it's" }, { "end": 331.44, "start": 325.14, "text": " reasonable to assume that this classifier has some good features to work with cats and" }, { "end": 332.66, "start": 331.44, "text": " dog images." }, { "end": 337.44, "start": 332.66, "text": " So what we can do instead of because let's assume for this other task we just have this" }, { "end": 342.24, "start": 337.44, "text": " tiny little data set which is not enough to train a neural network of this size right" }, { "end": 348.16, "start": 342.24, "text": " but so in the first step we'll train this big neural network on the cats versus dogs" }, { "end": 354, "start": 348.16, "text": " and then we what we do is we transfer learning so we transfer all the weights right here" }, { "end": 360.38, "start": 354, "text": " and here we have a different task now sick or not sick right this is cat this is dog" }, { "end": 369.1, "start": 360.38, "text": " and here is sick or not sick not sick and of course we can't transfer these particular" }, { "end": 375.76, "start": 369.1, "text": " weights but we hope that the features here will sort of be the same so we transfer them" }, { "end": 382.38, "start": 375.76, "text": " and then we train these weights including the head right here this part we train it" }, { "end": 387.34, "start": 382.38, "text": " on this little data set and we hope that we already have this good starting point we only" }, { "end": 393.38, "start": 387.34, "text": " need to you know learn the basically the specifics of what makes these two data sets different" }, { "end": 400.9, "start": 393.38, "text": " and we won't have to learn entire task of dealing with cat and dog images from the get-go" }, { "end": 408.02, "start": 400.9, "text": " okay so this is called transfer learning now in this case we combine the two so first we" }, { "end": 413.7, "start": 408.02, "text": " want to transfer learn like if we build this app for vets and then we might say oh this" }, { "end": 418.38, "start": 413.7, "text": " is not you know this is not only for vets this is actually for anyone you know who has" }, { "end": 423.62, "start": 418.38, "text": " a cat or a dog at home so what we could do is build an app where anyone at home could" }, { "end": 429.62, "start": 423.62, "text": " scan their their cat and it would output like a 
probability of the cat having that disease" }, { "end": 434.09999999999997, "start": 429.62, "text": " so we want this neural network is still the same size as this neural network so now we" }, { "end": 441.06, "start": 434.09999999999997, "text": " want to do the pruning we want this neural network to become sparse to only have a couple" }, { "end": 447.78000000000003, "start": 441.06, "text": " of connections left such that it's a few kilobytes large but retain performance now they say" }, { "end": 455.86, "start": 447.78000000000003, "text": " when you do this step you can't just do the magnitude pruning like you did right here" }, { "end": 463.22, "start": 455.86, "text": " and why not because this is not this model right here is not the result of a training" }, { "end": 469.46, "start": 463.22, "text": " step like of a regular training process but is the result of a transfer learning process" }, { "end": 476.7, "start": 469.46, "text": " where first you do the big training and then second you adapt it and why is that the case" }, { "end": 482.85999999999996, "start": 476.7, "text": " well ultimately what you want to do is you want to prove the non-important weights now" }, { "end": 488.14, "start": 482.85999999999996, "text": " there could be a weight right here this one that is very important for the cat versus" }, { "end": 496.46, "start": 488.14, "text": " dog task but that is not important for the sick versus non-sick task and we also we know" }, { "end": 501.18, "start": 496.46, "text": " that in these transfer learning settings the weights they don't tend to move that much" }, { "end": 508.26, "start": 501.18, "text": " in general the research shows that once you've trained a neural network basically the beginning" }, { "end": 513.06, "start": 508.26, "text": " is important but then once you did it like if you adapt it or transfer learn it and so" }, { "end": 520.9399999999999, "start": 513.06, "text": " on the weights they won't move that much so in essence this weight maybe starts out right" }, { "end": 528, "start": 520.94, "text": " here and it will sort of stay around this place it will maybe go a little bit down because" }, { "end": 531.98, "start": 528, "text": " it's not important but it won't move much during transfer learning that's just a property" }, { "end": 537.46, "start": 531.98, "text": " of transfer learning so this paper here says we can't just use magnitude pruning when we" }, { "end": 543.34, "start": 537.46, "text": " transfer learn because what will basically go by what will basically say is the will" }, { "end": 549.2600000000001, "start": 543.34, "text": " assign the importance based on based on the original neural network task on the cat versus" }, { "end": 555.14, "start": 549.26, "text": " dog we we will miss specify the importance of the weights what we should do is actually" }, { "end": 560.9399999999999, "start": 555.14, "text": " measure the importance with respect to this task and how do they achieve it so on a high" }, { "end": 568.52, "start": 560.9399999999999, "text": " level they're basically saying okay if we start out well this was fatal if we start" }, { "end": 577.84, "start": 568.52, "text": " out with a point over here let's make that red red i want the color red well it's blue" }, { "end": 583.58, "start": 577.84, "text": " now so if we ah if we start out with a point over here what we should do what we should" }, { "end": 590.1800000000001, "start": 583.58, "text": " do is we should observe how it moves during transfer learning if it 
moves towards zero" }, { "end": 595.94, "start": 590.1800000000001, "text": " then it's probably not that important for the new task and if it moves to the to be" }, { "end": 602.1, "start": 595.94, "text": " even larger then it's probably important for that new task okay so that's that's a high" }, { "end": 607.86, "start": 602.1, "text": " level now how do you measure how it moves and what exactly how exactly do you do all" }, { "end": 613.4200000000001, "start": 607.86, "text": " of this during training such that you don't make mistakes that's the point of this paper" }, { "end": 620.1, "start": 613.4200000000001, "text": " they say we propose movement pruning a simple deterministic first-order weight pruning method" }, { "end": 625.86, "start": 620.1, "text": " that is more adaptive to pre-trained model fine-tuning we give mathematical foundations" }, { "end": 634.38, "start": 625.86, "text": " to the method and compare it to existing zero and first-order pruning methods okay so um" }, { "end": 642.98, "start": 634.38, "text": " yeah we said so that's basically on a high level that's that now how do they actually" }, { "end": 651.98, "start": 642.98, "text": " do it what they do right here is the following they say what we can define we can define" }, { "end": 661.26, "start": 651.98, "text": " each each network layer basically as a matrix multiplication by a weight now you can express" }, { "end": 667.78, "start": 661.26, "text": " pretty much any neural network as such a multiplication with a weight so you have x the in the signal" }, { "end": 674.46, "start": 667.78, "text": " in each layer and you multiply that by the weight matrix w now if you prune the neural" }, { "end": 681.1, "start": 674.46, "text": " network you can see that right here what you're saying is i basically in here i have the matrix" }, { "end": 688.34, "start": 681.1, "text": " m which is a mask so the mask is either zero or one for if a weight is active or if a weight" }, { "end": 695.78, "start": 688.34, "text": " is not active now this is not a matrix multiply actually this is like a hadamard product um" }, { "end": 703.14, "start": 695.78, "text": " but you have this mask matrix and what decides on this mask this mask is decided as you can" }, { "end": 714.46, "start": 703.14, "text": " see right here by this s so s s is a matrix that for each entry in w it will decide how" }, { "end": 720.74, "start": 714.46, "text": " important it is now in the classic sense in the magnitude pruning you already saw that" }, { "end": 728.06, "start": 720.74, "text": " this is just going to be the absolute value of w i j okay and then the top v simply means" }, { "end": 733.6999999999999, "start": 728.06, "text": " that you take the whoever are the most important the most magnitude that those are going to" }, { "end": 739.6999999999999, "start": 733.6999999999999, "text": " be one in the mask and everything else is going to be zero in the mask okay that's how" }, { "end": 746.9, "start": 739.6999999999999, "text": " this s this the w determines the s and the s determines the m okay so what you ultimately" }, { "end": 754.78, "start": 746.9, "text": " use is the m right here um but in now what we want to do is we want to actually make" }, { "end": 759.62, "start": 754.78, "text": " the s based on the movement and the movement is not really a defined concept because it" }, { "end": 767.5, "start": 759.62, "text": " goes over steps and so on so how do you do the movement in in a kind of dynamic way and" }, { "end": 775.5, 
"start": 767.5, "text": " this paper says you should do it by gradient so um you you should observe the gradient" }, { "end": 782.3, "start": 775.5, "text": " of your loss function with respect to this s matrix to this importance matrix right what" }, { "end": 797.0999999999999, "start": 782.3, "text": " does it mean so the what does it mean it means let's consider this quantity right here if" }, { "end": 805.66, "start": 797.0999999999999, "text": " s is the importance of a particular connection and if the gradient is large that means this" }, { "end": 812.42, "start": 805.66, "text": " this connection moves a lot right it like the loss pulls it into a particular direction" }, { "end": 817.5799999999999, "start": 812.42, "text": " okay so we're not talking about yet which direction actually the gradient has a sign" }, { "end": 823.2199999999999, "start": 817.5799999999999, "text": " it's either positive or negative right so by this quantity you can decide how much does" }, { "end": 831.42, "start": 823.2199999999999, "text": " this new task want this particular importance score to move so this is a direct direct measure" }, { "end": 837.3399999999999, "start": 831.42, "text": " of how much basically the loss function pulls on that importance score how much and now" }, { "end": 847.88, "start": 837.3399999999999, "text": " you can simply decide if and they have these they have i think they have a diagram yes" }, { "end": 856.54, "start": 847.88, "text": " so but i don't like that let's go so we have right here we have what's the value of this" }, { "end": 866.38, "start": 856.54, "text": " gradient of l with respect to s and here is w so if the gradient is positive and w is" }, { "end": 876.74, "start": 866.38, "text": " already positive that means the gradient goes into the positive direction so you increase" }, { "end": 882.26, "start": 876.74, "text": " the loss function in that let's put the negative gradient here because you do gradient descent" }, { "end": 889.66, "start": 882.26, "text": " right so so if the negative gradient is positive and the weight is already positive in in this" }, { "end": 895.8199999999999, "start": 889.66, "text": " case that means you're all the weight is already high but now the loss function wants to push" }, { "end": 901.14, "start": 895.8199999999999, "text": " it even higher so that must be a very very important weight right like it's like very" }, { "end": 909.3, "start": 901.14, "text": " good the same goes if the gradient the negative gradient is negative and the weight is already" }, { "end": 914.0999999999999, "start": 909.3, "text": " negative the weight being negative already means the weight you know it has a negative" }, { "end": 920.14, "start": 914.0999999999999, "text": " sign and then the gradient wants it to go even more negative the the optimization procedure" }, { "end": 926.38, "start": 920.14, "text": " says this thing should become even more negative and also we say that's probably a good weight" }, { "end": 931.18, "start": 926.38, "text": " now the other two cases means basically the weight's already positive but the gradient" }, { "end": 936.26, "start": 931.18, "text": " wants it to go negative which means it's pulled towards zero now it's entirely possible that" }, { "end": 943.98, "start": 936.26, "text": " it's going to cross zero and go like if you're here going from over here gonna here cross" }, { "end": 949.3199999999999, "start": 943.98, "text": " zero and become like super large but that violates our basic assumptions 
that the transfer" }, { "end": 953.38, "start": 949.3199999999999, "text": " learning doesn't move the weights too much right what you're caring for is basically" }, { "end": 959.06, "start": 953.38, "text": " this local neighborhood right here okay so you can make the fair assumption that these" }, { "end": 964.7, "start": 959.06, "text": " weights are not that important in the case where the negative gradient goes against the" }, { "end": 970.34, "start": 964.7, "text": " sign of the weight so this this is of course discrete right now but we can actually assign" }, { "end": 976.4200000000001, "start": 970.34, "text": " a number by how large the gradient is and by how large the weight already is and therefore" }, { "end": 985.34, "start": 976.4200000000001, "text": " we can make a score so the important score right here as you can see is the weight multiplied" }, { "end": 991.22, "start": 985.34, "text": " by the gradient of the weight and they can actually show mathematically that if you do" }, { "end": 996.48, "start": 991.22, "text": " this over multiple steps so you optimize while you do this pruning and they do some sort" }, { "end": 1001.9, "start": 996.48, "text": " of a soft pruning so you can kind of correct your mistakes later on I mean they have hard" }, { "end": 1007.0600000000001, "start": 1001.9, "text": " and soft pruning but in any case they can correct their mistakes later on this will" }, { "end": 1012.0600000000001, "start": 1007.0600000000001, "text": " actually result in these important scores being an accumulation over the training over" }, { "end": 1020.34, "start": 1012.0600000000001, "text": " the entire training of this quantity and that's pretty cool because that means eventually" }, { "end": 1025.18, "start": 1020.34, "text": " you'll sort of have a consistent estimator of these important scores across your training" }, { "end": 1030.02, "start": 1025.18, "text": " procedure okay because the main fear with something like this of course is that it's" }, { "end": 1035.14, "start": 1030.02, "text": " very brittle and very much depends on the training dynamics and who knows if in step" }, { "end": 1041.38, "start": 1035.14, "text": " one something bad happens and so on but the the math behind this here gives sort of more" }, { "end": 1047.8600000000001, "start": 1041.38, "text": " evidence that this it can be like a self-correcting mechanism it is actually not too dependent" }, { "end": 1055.1799999999998, "start": 1047.86, "text": " on the particular training dynamics so they do this experimental setup now they have some" }, { "end": 1060.06, "start": 1055.1799999999998, "text": " they have some quirks here actually let's first go to the the the actual different methods" }, { "end": 1064.82, "start": 1060.06, "text": " they compare different methods right here where they say okay there's magnitude pruning" }, { "end": 1069.3, "start": 1064.82, "text": " it's a zero with order which basically just means you just look at the weight magnitude" }, { "end": 1076.4599999999998, "start": 1069.3, "text": " that that's it it's top v which means you just pick the top whatever and the objective" }, { "end": 1081.5, "start": 1076.46, "text": " is just the loss and the scores are just the absolute value we've seen this now movement" }, { "end": 1085.58, "start": 1081.5, "text": " pruning on the other hand is first order which means you look at the movement in our case" }, { "end": 1091.18, "start": 1085.58, "text": " the gradient as you can see here those are the importance 
scores and you use this straight" }, { "end": 1096.3, "start": 1091.18, "text": " through estimator which is basically just a way of saying that even though you're masking" }, { "end": 1100.9, "start": 1096.3, "text": " some things in the forward step you shouldn't mask them in the in the gradient backward" }, { "end": 1106.26, "start": 1100.9, "text": " step because you still want gradient signal to reach so if you have layers and you have" }, { "end": 1111.14, "start": 1106.26, "text": " a weight right here at least that's how I understand it I have not read that paper but" }, { "end": 1116.46, "start": 1111.14, "text": " if you mask this one here you still want the gradient to sort of flow backwards because" }, { "end": 1122.74, "start": 1116.46, "text": " you still need the actual importance scores for the weights that here below connect to" }, { "end": 1130.7, "start": 1122.74, "text": " this weight I think that's what is meant not entirely sure in this though so but you can" }, { "end": 1136.66, "start": 1130.7, "text": " see that the objective function is also the actual loss function now this is contrasted" }, { "end": 1144.04, "start": 1136.66, "text": " to a baseline called l0 regularization which is quite similar but is also first order but" }, { "end": 1149.72, "start": 1144.04, "text": " has this sort of regularizer right here and uses the gumball softmax in order to determine" }, { "end": 1155.02, "start": 1149.72, "text": " the scores and as you can see it also has a different score function and it has this" }, { "end": 1163.3799999999999, "start": 1155.02, "text": " continuous hard concrete importance masking function sorry masking function and they have" }, { "end": 1168.46, "start": 1163.3799999999999, "text": " a variant of movement pruning that tends to perform a little bit better which is soft" }, { "end": 1174.06, "start": 1168.46, "text": " movement pruning where instead of just going by the loss function optimizing the loss function" }, { "end": 1179.46, "start": 1174.06, "text": " they optimize the loss function plus something now they have as you can see here they have" }, { "end": 1188.9, "start": 1179.46, "text": " a thresholding masking function and the threshold is actually dynamic or it's determined by" }, { "end": 1193.52, "start": 1188.9, "text": " the importance scores and they have then a regularizer that make the importance scores" }, { "end": 1200.74, "start": 1193.52, "text": " sparse so instead of saying we just want the top 5% of weights they now just put a lot" }, { "end": 1207.78, "start": 1200.74, "text": " of mass on this lambda right here which will cause the s to be sparse and you know they" }, { "end": 1212.3, "start": 1207.78, "text": " think if they are not happy with how many weights they can simply increase or decrease" }, { "end": 1219.3, "start": 1212.3, "text": " this lambda such that they get to their desired sparsity of course you see there's the direct" }, { "end": 1226.7, "start": 1219.3, "text": " trade-off with the loss function right so the more you put weight on this lambda the" }, { "end": 1232.22, "start": 1226.7, "text": " less weight is you put basically on the loss function itself so you have the trade-off" }, { "end": 1239.06, "start": 1232.22, "text": " here is very explicit whereas in basic movement pruning it's just given by you masking away" }, { "end": 1247.18, "start": 1239.06, "text": " completely the bottom 1-v of percent of the weights but you see the score function is" }, { "end": 1259.94, "start": 1247.18, 
"text": " the same oh well the score function here the score function is the same okay now there" }, { "end": 1266.38, "start": 1259.94, "text": " are quite a number of tricks here like there's this sparsity scheduling function and so on" }, { "end": 1272.6200000000001, "start": 1266.38, "text": " so as always in NLP and with any big models there are a bunch of engineering tricks that" }, { "end": 1279.46, "start": 1272.6200000000001, "text": " make everything work better and you can never exactly tell how much is due to that and how" }, { "end": 1284.78, "start": 1279.46, "text": " much is due to the the actual technique but if you know you can sort of assess whether" }, { "end": 1290.82, "start": 1284.78, "text": " or not these it's done well and this here actually the rationale makes sense and that's" }, { "end": 1298.7, "start": 1290.82, "text": " why I tend to think that it is actually a better method and the experiments are very" }, { "end": 1305.82, "start": 1298.7, "text": " very convincing let's say okay this is just a pictorial comparison where you can see movement" }, { "end": 1311.7, "start": 1305.82, "text": " pruning sorry magnitude pruning all it does is it looks at after you fine-tune what are" }, { "end": 1316.3400000000001, "start": 1311.7, "text": " the weights and it just cuts away everything in the middle doesn't care about how the weights" }, { "end": 1322.6200000000001, "start": 1316.3400000000001, "text": " were when before however movement pruning looks at the combination of what were the" }, { "end": 1329.46, "start": 1322.6200000000001, "text": " weights before and what are the weights now and it cuts away everything where the weights" }, { "end": 1334.3, "start": 1329.46, "text": " moved towards zero which are these quadrants right here and it leaves in everything where" }, { "end": 1343.18, "start": 1334.3, "text": " it moved away from zero or that's that's the ordering let's say that how much it moved" }, { "end": 1350.7, "start": 1343.18, "text": " okay experiments now as you might already have figured out by now in the machine learning" }, { "end": 1358.78, "start": 1350.7, "text": " and especially the NLP community the methods presented always outperform the previous methods" }, { "end": 1364.26, "start": 1358.78, "text": " in this case it's pretty special so they test this on these number of tasks quad and NLMI" }, { "end": 1370.84, "start": 1364.26, "text": " and QQP and these are quite hard tasks from an NLP perspective like squad is question" }, { "end": 1377.4, "start": 1370.84, "text": " answering MNLIs like language inference so these would be on the I would guess these" }, { "end": 1384.18, "start": 1377.4, "text": " are on the foreign NLP system on the harder side of tasks that's it's fairly cool and" }, { "end": 1391.94, "start": 1384.18, "text": " as you can see here now just focus on first of all focus on this MAP which is the magnitude" }, { "end": 1401.2, "start": 1391.94, "text": " pruning so that's the baseline if you will and the purple one the SMVP which is the soft" }, { "end": 1406.6200000000001, "start": 1401.2, "text": " movement pruning okay now you can also focus on the MVP right here but they're approximately" }, { "end": 1414.1000000000001, "start": 1406.6200000000001, "text": " the same now the RPP you can maybe see in the graph is performing fairly well even compared" }, { "end": 1421.92, "start": 1414.1000000000001, "text": " to the full model it's another baseline basically but we just want to compare those to and" }, { "end": 
1430.7, "start": 1421.92, "text": " you can see that in this regime that the magnitude pruning outperforms the movement pruning but" }, { "end": 1436.44, "start": 1430.7, "text": " however in this regime the movement pruning is much better and that's the percent where" }, { "end": 1441.14, "start": 1436.44, "text": " the percent of remaining weights is very very low so this is kind of the extreme sparse" }, { "end": 1447.66, "start": 1441.14, "text": " case where you only have 10% or you only have 3% of the weights left and you can see that" }, { "end": 1455.22, "start": 1447.66, "text": " the movement pruning is outperforming the magnitude pruning by a lot okay now they do" }, { "end": 1463.42, "start": 1455.22, "text": " discover that so this this happens in all in all of these tasks as you can see right" }, { "end": 1472.6200000000001, "start": 1463.42, "text": " here and they do they do discover that if you then distill the model further so in distillation" }, { "end": 1477.62, "start": 1472.62, "text": " this is yet another technique that you can use to boost the performance of the transfer" }, { "end": 1488.86, "start": 1477.62, "text": " learned model so in distillation you would not only train the model on you so you have" }, { "end": 1496.3999999999999, "start": 1488.86, "text": " you now you have your this model that you transfer learn and you have the pruned version" }, { "end": 1500.84, "start": 1496.3999999999999, "text": " and in the pruned version what you would do is you would simply also train it on this" }, { "end": 1506.58, "start": 1500.84, "text": " data set but what you can also do is you can distill this model right here the one that" }, { "end": 1510.6599999999999, "start": 1506.58, "text": " you trained on the same task right that's that's presumably better because it still" }, { "end": 1517.34, "start": 1510.6599999999999, "text": " has all the weights okay you can run a data point through both so the same data point" }, { "end": 1522.98, "start": 1517.34, "text": " goes through this and you get an output which are logits which is like representing a distribution" }, { "end": 1529.82, "start": 1522.98, "text": " saying yeah it's about this high now instead of assigning the hard labels so here we also" }, { "end": 1534.6599999999999, "start": 1529.82, "text": " get the label right it's a supervised learning task like one and zero you also put the data" }, { "end": 1544.1, "start": 1534.6599999999999, "text": " point through this model right here obtain whatever that model would have said and let's" }, { "end": 1553.6599999999999, "start": 1544.1, "text": " say it's about this and now presumably this model is better already so you say well the" }, { "end": 1559.74, "start": 1553.6599999999999, "text": " label here says that it's this class but the model that's really good says you shouldn't" }, { "end": 1566, "start": 1559.74, "text": " be too sure about it so you can sort of mix the two losses and this this process of transferring" }, { "end": 1572.14, "start": 1566, "text": " the knowledge of this model to here is called distillation with the lower model being the" }, { "end": 1578.72, "start": 1572.14, "text": " teacher model now if you do distillation you can actually improve your performance even" }, { "end": 1587.06, "start": 1578.72, "text": " more and they show that in the experiments here especially again in the low in the low" }, { "end": 1593.1, "start": 1587.06, "text": " parameter regime but you can see for example in squad here that the distilled 
movement" }, { "end": 1600.24, "start": 1593.1, "text": " pruned method now catches up with the magnitude pruned method in the also in the high in the" }, { "end": 1609.8999999999999, "start": 1600.24, "text": " not so sparse regime okay and they analyze these weights and as you can see that expected" }, { "end": 1616.22, "start": 1609.8999999999999, "text": " the magnitude pruned method it will simply cut out anything right here that's not not" }, { "end": 1622.08, "start": 1616.22, "text": " a surprise whereas the movement pruned method it will leave a lot of these weights alive" }, { "end": 1631.78, "start": 1622.08, "text": " so as you can as you can see basically it's it's very much the case since you can outperform" }, { "end": 1638.6200000000001, "start": 1631.78, "text": " the red the yellow one can outperform the red one it is almost warranted to say that" }, { "end": 1644.02, "start": 1638.6200000000001, "text": " the this magnitude pruning wasn't the best choice it's actually a better choice to leave" }, { "end": 1648.78, "start": 1644.02, "text": " some of those weights in and actually cut some of the weights that are large out just" }, { "end": 1654.66, "start": 1648.78, "text": " based on their movement now the V here in the middle is of course due to the fact that" }, { "end": 1663.06, "start": 1654.66, "text": " if a weight is here it was probably not super important in the first place and since since" }, { "end": 1669.42, "start": 1663.06, "text": " this thing removes anything that moves towards zero any point starting let's say around here" }, { "end": 1674.46, "start": 1669.42, "text": " moving towards zero would end up here so all the points that end up in this region probably" }, { "end": 1682.6200000000001, "start": 1674.46, "text": " moved towards zero during training and therefore are cut away so there's there's not just like" }, { "end": 1687.18, "start": 1682.6200000000001, "text": " for there to be points there would have been points that started even more in the middle" }, { "end": 1691.26, "start": 1687.18, "text": " and then moved out to here right and there's just not as many so that's why you have that" }, { "end": 1701.98, "start": 1691.26, "text": " the V shape is very natural to expect right here so they they analyze this then in terms" }, { "end": 1711.26, "start": 1701.98, "text": " of the where the model cuts the weights out now they experiment on a BERT base thing that" }, { "end": 1715.9, "start": 1711.26, "text": " which is a transformer with 12 layers and if you don't know what BERT is you can go" }, { "end": 1726.14, "start": 1715.9, "text": " look at my video on BERT but you can see that the magnitude pruning will sort of cut all" }, { "end": 1732.18, "start": 1726.14, "text": " the weights on the layers equally so it will sort of go through the layers and take away" }, { "end": 1738.18, "start": 1732.18, "text": " let's say 90 percent of each here you can see 10 percent of weights remaining whereas" }, { "end": 1743.5, "start": 1738.18, "text": " the movement pruning especially the soft movement pruning will actually make a large difference" }, { "end": 1750.9, "start": 1743.5, "text": " it will remove much much more of the later layers weights and keep the lower layer weights" }, { "end": 1755.86, "start": 1750.9, "text": " which i think if you do transfer learning from these language models it tends to be" }, { "end": 1761.5, "start": 1755.86, "text": " that the lower layers maybe pick up if you if you think of a CNN the lower layers might" }, 
{ "end": 1766.46, "start": 1761.5, "text": " pick up you know on these essential image features like corners and so on and the higher" }, { "end": 1771.64, "start": 1766.46, "text": " layers will pick up on the task specific things now if you do like a big pre-training tasks" }, { "end": 1775.38, "start": 1771.64, "text": " you might have a lot of information that you need there but then if you distill it and" }, { "end": 1781.46, "start": 1775.38, "text": " transfer it down to like a small set small task where only a single thing is important" }, { "end": 1785.88, "start": 1781.46, "text": " like in squad it's only important what's the answer to the question then you can probably" }, { "end": 1791.22, "start": 1785.88, "text": " remove a lot of that superfluous information that was there like high level features from" }, { "end": 1798.7, "start": 1791.22, "text": " the pre-training task i mean that's my my guess here but they also have explained that" }, { "end": 1806.1000000000001, "start": 1798.7, "text": " so yeah that was this paper if you're still here and you enjoyed it leave a like tell" }, { "end": 1833.1399999999999, "start": 1806.1, "text": " me in the comments what you think and i'll see you next time bye bye" } ]
hQEnzdLkPj4
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Learning To Classify Images Without Labels (Paper Explained)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "ethz", "clustering", "self-supervision", "self-labeling", "entropy", "dot product", "representation learning", "cnns", "convolutional neural network", "deep cluster", "nce", "noise contrastive estimation", "unsupervised", "overcluster", "imagenet", "cifar10", "nearest neighbors" ]
How do you learn labels without labels? How do you classify images when you don't know what to classify them into? This paper investigates a new combination of representation learning, clustering, and self-labeling in order to group visually similar images together - and achieves surprisingly high accuracy on benchmark datasets. OUTLINE: 0:00 - Intro & High-level Overview 2:15 - Problem Statement 4:50 - Why naive Clustering does not work 9:25 - Representation Learning 13:40 - Nearest-neighbor-based Clustering 28:00 - Self-Labeling 32:10 - Experiments 38:20 - ImageNet Experiments 41:00 - Overclustering Paper: https://arxiv.org/abs/2005.12320 Code: https://github.com/wvangansbeke/Unsupervised-Classification Abstract: Is it possible to automatically classify images without the use of ground-truth annotations? Or when even the classes themselves, are not a priori known? These remain important, and open questions in computer vision. Several approaches have tried to tackle this problem in an end-to-end fashion. In this paper, we deviate from recent works, and advocate a two-step approach where feature learning and clustering are decoupled. First, a self-supervised task from representation learning is employed to obtain semantically meaningful features. Second, we use the obtained features as a prior in a learnable clustering approach. In doing so, we remove the ability for cluster learning to depend on low-level features, which is present in current end-to-end learning approaches. Experimental evaluation shows that we outperform state-of-the-art methods by huge margins, in particular +26.9% on CIFAR10, +21.5% on CIFAR100-20 and +11.7% on STL10 in terms of classification accuracy. Furthermore, results on ImageNet show that our approach is the first to scale well up to 200 randomly selected classes, obtaining 69.3% top-1 and 85.5% top-5 accuracy, and marking a difference of less than 7.5% with fully-supervised methods. Finally, we applied our approach to all 1000 classes on ImageNet, and found the results to be very encouraging. The code will be made publicly available. Authors: Wouter Van Gansbeke, Simon Vandenhende, Stamatios Georgoulis, Marc Proesmans, Luc Van Gool Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Hi there! Check out these clusters of images right here. And just have a look at how all of them are pretty much showing the same object. So here's balloons, here's birds, here's sharks or other fish. These are images from the ImageNet dataset. And you can see that these clusters are pretty much the object classes themselves. There's all the frogs right here, all the people that have caught fish. So the astonishing thing about this is that these clusters have been obtained without any labels of the ImageNet dataset. Of course, the dataset has labels, but this method doesn't use the labels. It learns to classify images without labels. So today we're looking at this paper, Learning to Classify Images Without Labels, by Wouter Van Gansbeke, Simon Vandenhende, Stamatios Georgoulis, Marc Proesmans and Luc Van Gool. And on a high-level overview, they have a three-step procedure. Basically, first, they use self-supervised learning in order to get good representations. Second, they do a clustering. So they do a sort of k-nearest neighbor clustering, but they do clustering on top of those things. But they do it in a kind of special way. And then third, they do a refinement through self-labeling. So if you know what all of these are, you basically understand the paper already. But there are a few tricky steps in there. And it's pretty cool that at the end it works out like you just saw. So before we dive in, as always, if you're here and not subscribed, then please do. And if you like the video, share it out. And leave a comment if you feel like commenting. Cool. So as we already stated the problem, they ask, is it possible to automatically classify images without the use of ground truth annotations? Or even when the classes themselves are not known a priori? Now, you might think that this is outrageous. How can you classify when you don't even know what the classes are and so on? So the way you have to imagine it going forward, and they don't explicitly explain it, but it's assumed that if you have a dataset, and you learn to classify it, what basically that means is you cluster it, right? You put some of the data points in the same clusters. And then, of course, the dataset, I'm going to draw the same dataset right here, the same dataset would have an actual classification thing. So this would be class 0, this here may be class 1, and this here might be class 2. Now, you can't possibly know how the classes are called or something, which one is the first, which one is the second. So at test time, basically, if you have a method like this that doesn't use labels, what you're going to do is you're basically going to find, you're going to be as generous as possible in the assignment of these and say, oh, look, if I assign this here to cluster 0 and this here to cluster 2 and this here to cluster 1, and I just carry over the labels, what would my accuracy be under that labeling? So you're as generous as possible with the assignments of the labels. So that's how it's going to work, right? That's what you have to keep in mind. We're basically developing an algorithm that gives us this kind of clustering of the data. And then if that clustering partitions the data in the same way as the actual labeling would, the actual labeling with the test labels, then we think it's a good algorithm. OK, so they claim they have a... OK, in this paper, we deviate from recent works and advocate a two-step approach. And it's actually a three-step approach, but where feature learning and clustering are decoupled.
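One aside before the why of that decoupling: the "as generous as possible" carrying-over of labels described a few sentences back is usually computed as a maximum-weight matching between cluster ids and ground-truth classes (Hungarian matching). Here is a small sketch of that evaluation, assuming SciPy is available; the function name is just for illustration.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def clustering_accuracy(y_true, y_pred):
    # Best accuracy over all one-to-one mappings of cluster ids to labels.
    n = max(y_true.max(), y_pred.max()) + 1
    confusion = np.zeros((n, n), dtype=np.int64)
    for t, p in zip(y_true, y_pred):
        confusion[p, t] += 1          # cluster p received a sample of class t
    rows, cols = linear_sum_assignment(confusion, maximize=True)
    return confusion[rows, cols].sum() / len(y_true)

# Cluster ids that are a pure permutation of the true labels score 1.0.
y_true = np.array([0, 0, 1, 1, 2, 2])
y_pred = np.array([2, 2, 0, 0, 1, 1])
print(clustering_accuracy(y_true, y_pred))  # 1.0
```

With the evaluation protocol fixed, back to the decoupling.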
OK, why is that? So they argue what you could do, what people have done is... And I'm going to... Well, this is just a wall of text. So what you could do is you could just basically cluster the data. Like who says you can't use clustering algorithms? And then the question is, what do you cluster them by? Like you need a distance. So if I have points in 2D, it sort of makes sense to use the Euclidean distance here. But if I have images of cats and dogs and whatnot, then the Euclidean distance between the pixels is really not a good thing. But also, so you might think we could actually... We could use a deep neural network and then basically send the image, that's the image right here, send the image through the deep neural network and then either take this last state right here. So it goes through and through and through. And we could get take either of the hidden states or we could just take, you know, the last state, that is the sort of hidden representation right here and do the clustering with that. But then of course, the question is, what do you, which neural network do you take? How do you train that neural network? And there have been a few approaches such as a deep cluster, which try to formulate basically an objective for that neural network where you first, you send all the images through, right? You send a bunch of images through to get you in embedding space, to get you points. And then in embedding space, you think, well, the features that are in the embedding space, they are somehow latent and they... If basically the entire thing is, if this neural network was used to classify images, you would have a classification head on top. And a classification head, this is like a five class classification head, is nothing else than a linear classifier boundary that you put on top of this hidden representation. So if you were to use this neural network for classification, it must be possible to draw a linear boundary between the classes. And therefore, the either things like the inner product distance or the Euclidean distance must make sense in that space. They don't make sense in the picture space, but they must make sense in the hidden representation space, because what you're going to do with them is exactly linear classification. The last classification head of a neural network is just a linear classifier. So the assumption is that, and the conclusion is, well, in this space, you should be able to cluster by Euclidean distance. So what deep cluster does, like is first get the representations. You start off with a random neural network, then cluster these representations, then basically label, self label the images in a way. Now, way over simplifying that technique right here. But you have these alternative steps of clustering and then kind of finding better representation and then clustering these representations. And what it basically says is that the CNN itself is such a is like a prior, because it's the translation of it and works very good for very well for natural images. So the CNN itself will lead to good representations if we do it in this way. And they have some good results there. But this paper argues that if you do that, then the the algorithm tends to focus a lot on very low level features. So if the pixel on the bottom right here is blue, right, then you can. And the neural network, by chance, puts two of those images where the blue pixel on the bottom right, it puts them close together. 
Then in the next step, it will, because they're close together, will cluster them together. And then it will basically feed back the new representation should put the two in the same class, right? It will feed back that it should focus even more on that blue pixel. So it's very, very dependent on initializations and it can jump super easily onto these low level features that have nothing to do with with the high level task you're ultimately trying to solve, which is to classify these images later. So what this paper does is it says we can eliminate this. We can eliminate this the fact that these methods will produce will produce neural networks that focus on low level features. And how do we do that? We do that by representation learning. So representation learning, you might know this as self supervised learning. And this is the task they solve in the first step of their objective. So let's go through this. This right here is an image. Now, the T is a transformation of that image. And in self supervised learning, there are several methods that you can transform an image. So, for example, you can random crop an image. You can just cut out like a piece right here and scale that up to be as large as the original image. Or you can use, for example, data augmentation, which means you take the image and you basically so if there is, I don't know, the cat right here, you kind of convolve it with something. So there's like a very squiggly cat. OK, I'm terrible. You can you can rotate it, for example. So it's like this. OK, so these are all these are all sets, including the crop sets of this transformation T. So you transform it in some way and you want after you've transformed it, you send your original image. That should be read. You send your original image and the transformed image through a neural network, each one by themselves. OK, and then after this, you say the hidden representation here should be close to each other. OK, this is this is basically the self supervised training task. It's been shown to work very, very well as a pre training method for classification neural networks. You have an image and its augmented version and you minimize the inner product or the Euclidean distance between the two versions in the hidden space. And the rationale is exactly the same. The rationale is that this hidden space, of course, should be linearly classifiable. And so the distance between those should be close. And the rationale between having these tasks is that, well, if I flip the image, right, if I flip the image to the right, it cannot focus on the pixel on the bottom right anymore, because that's not going to be the pixel on the bottom right here. And I'm not always going to flip it into the same direction. And sometimes I'm going to crop it so it also can't focus on the pixel on the bottom right, because in the crop, that pixel is like out here. It's not even in the crop. But basically what you're looking to do with the self supervised methods is you are looking to destroy this low level information. That's that's all you're looking to build a pipeline of a neural network here that destroys deliberately low level information. And you do that by coming up with tasks like this self supervision tasks that just that deliberately exclude this information from being used. I think that's what's going on generally in the self supervised learning thing. OK, so this here, as you can see, is the neural network that you train. 
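As a concrete instance of such a transformation family T, here is one plausible torchvision pipeline; these are standard augmentation choices, not necessarily the exact ones the paper uses.

```python
from torchvision import transforms

# Transformations that deliberately destroy low-level pixel information:
# a pixel's position, color and orientation all change between views.
augment = transforms.Compose([
    transforms.RandomResizedCrop(32),            # crop a piece, scale it back up
    transforms.RandomHorizontalFlip(),           # the "flip" example from above
    transforms.ColorJitter(0.4, 0.4, 0.4, 0.1),  # perturb colors
    transforms.RandomGrayscale(p=0.2),
    transforms.ToTensor(),
])
# For a PIL image img, augment(img) samples one transformed view of it.
```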
You send both images, the original and the augmented version, through the same neural network, and then you minimize some distance, which is usually like the inner product or the Euclidean distance in this embedding space. OK, and what you train, you can see right here, you train the parameters of this neural network. So the transformations are fixed or sampled and the distance is fixed. You train the neural network such that your embeddings minimize this task. Now, this is nothing new. This has been used for a couple of years now to get better representations. Self supervised learning is a thing. But they basically say we can use this as an initialization step for this clustering procedure, because if we don't do that, we focus on these low level features. OK, and notice you don't need any labels for this procedure. That's why it's called self supervised. OK, so the second part is the clustering. Now they cluster, but they don't just cluster these representations. That doesn't perform very well in their experiments. What they instead do is they minimize this entire objective right here and we'll go through it step by step. So they train a new neural network. OK, this thing right here, this is a new neural network. So first, you already have the neural network from step one, which was called, what was it even called? The one that gives you the embedding with the theta. OK, it's called phi theta. It's the same architecture. I think they initialize one with the other. So in step one, you get phi theta, and phi theta, applied to X, gives you a representation of X. OK, let's call it hidden X. So that's the self supervised learning. But in step two, you train an entirely new neural network, this phi eta here, and you initialize it with this one. But now you train it to do the following again. You want to minimize, sorry, you want to maximize the inner product right here. See, that's the inner product. You want to maximize the inner product between two things. Now, that's the same thing as before. We want to minimize the distance between two things, and with the dot product as the distance, in that case, you maximize the dot product between two things. And the two things are two images that go through the same neural network as before. Right. This and this. Now, what's different here is that here we input one image of the data set. That's the same as before. OK, so we input one image. But where before, in the self supervised learning, we input an augmented version of that, now we input something else. We input this K right here. Now, what's K? K comes from this neighbor set of X. OK, this is the set of neighbors of X. And these neighbors are determined with respect to this neural network right here. So what you do after step one is you take your neural network with the good embeddings. And here is your data set X. Your data set X is this list basically of all the images in your data set. And what you're going to do is you're going to take all of them, using that neural network that you just trained, and embed them into a latent space right here. OK. This is the latent space where you have done the self supervised training. And now for each image right here, so if this is X I, you're going to find its K nearest neighbors. And they use, I think, five as a benchmark. So you're going to find its nearest neighbors, its five nearest neighbors. And you do this for each image.
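A minimal sketch of this neighbor mining step, assuming the pretext embeddings have already been computed. Whether the distance is cosine or Euclidean is an implementation detail; cosine is shown, and mine_neighbors is a made-up helper name.

```python
import torch
import torch.nn.functional as F

def mine_neighbors(features, k=5):
    # Indices of the k nearest neighbors of every sample, by cosine
    # similarity in the pretext embedding space (a sample never picks itself).
    f = F.normalize(features, dim=1)
    sim = f @ f.t()                       # (N, N) pairwise similarities
    sim.fill_diagonal_(-1.0)              # exclude self-matches
    return sim.topk(k, dim=1).indices     # (N, k) neighbor indices

features = torch.randn(1000, 128)          # stand-in pretext embeddings
neighbors = mine_neighbors(features, k=5)  # computed once, then kept fixed
```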
So this image has these five nearest neighbors. So in step two, what you're trying to do is you're going to try to pull together each image and its nearest neighbors in that in this this not in this space directly, but you determine which ones are the nearest neighbor from this neural network and you keep it constant. That's how you determine what the nearest neighbors are in the first task. And that is your NX set for X, I. And in the second step, you're trying to make the representations of any image and its nearest neighbors closer to each other. OK, so with with this thing right here, you maximize the inner product between X in after this neural network and a nearest neighbor of X that was was a nearest neighbor after the first task. Now, the way they cluster here is not just again by putting it into an embedding space like we saw before. But this thing right here, this neural network, as you can see here, is is a C dimensional vector in zero one. Now, C is the number of classes that you can either know that. So you don't know which classes which you don't have labels, but you could know how many classes there are. Or you could just guess how many classes there are. And as long as you as you overguess, you can still like build super clusters later. So this they simply say it's in zero one, but they also say it performs a soft assignment. So we're also going to assume that this is normalized. So for each for each data point X here, you're going to you're going to have an image. You're going to put it through this new neural network. Okay, this new neural network new, and it's going to tell you it's going to give you basically a histogram. Let's say class one, two or three, we guess there are three class and it's going to give you an assignment of the three. And you also take a nearest neighbor. Here is your data set. You also take a nearest neighbor of that. So you look for this set N of X and you take a nearest neighbor. Maybe that's that's a maybe that's a dog. I can't I really can't draw dog. Yeah, that's the best I can do. I'm sorry. And you also put that through the same network. And you're saying since they were nearest neighbor in task one, they must share some sort of interesting high level features because that's what the first task was for. Therefore, I want to make them closer together in in the in the light of these of this neural network right here. So this is also going to give you an assignment like maybe like this. Okay. And now you object you you train this network right here to basically match these two distributions. Okay. So this is this is now a classifier into C classes, but we guess C and we don't have labels. We simply our label is going to be my neighbors from the first task must have the same labels. That's our label. Now they say they also have this term right here, which is the entropy over assignments. Okay. As you can see, so they minimize the following. They minimize this quantity, which has a negative in front of it. So that means they maximize this log inner product. And they also maximize the entropy because sorry. So they minimize this thing. But the entropy is a negative quantity. Right. So they maximize the entropy because here's a plus. And now they minimize the entropy. Let's see what they say by minimizing the following objective. Now entropy is the sum of the negative sum of P log P. And this if this is P. Yes, this is the probability that an image is going to be assigned to cluster C over the entire data set. So they're going to. 
Yes, so it's negative. This quantity negative. Minus P log P. And this is the entropy. So they're going to minimize the entropy. Let's see what they say. We include an entropy term. The second term in equation two. Which spreads the predictions uniformly across clusters C. OK. So what we want is a uniform assignment over cluster, which means we should maximize the entropy. Oh, yes. OK. They minimize this thing. And this here is the negative entropy. Right. So they want basically what they want over the whole data set that not all of the images are going to be in the same cluster. This is cluster one. And then this is cluster two. And then this is cluster three. So that term counteracts that basically the more evenly spread the entire data set distribution is the the higher the entropy, the lower the negative entropy. And that's the goal right here. I'm sorry. This this was I was confused by the too many negative signs. And then you minimize the entire thing. All right. Now, they say they say a different thing right here. They say here this bracket denotes the dot product operator. As we saw, it's the dot product between these two distributions right here. The first term in equation two imposes this neural network to make consistent predictions for a sample XI and its neighboring samples, the neighbors of XI. And here is an interesting thing. Note that the dot product will be maximal when the predictions are one hot. That means confident and assigned to the same cluster consistent. So they basically say the objective encourages confidence because it encourages predictions to be one hot and it encourages consistency because it you know the because the distributions need to be the same. They should be in the same cluster. Right now, I agree with the consistency. Like if you make the inner product high, then of the of two of these histograms, of course, they look the same. Right. Because these are ultimately vectors. These are three dimensional vectors. Let's call them two dimensional vectors. Right. So here is class one. Here's class two. If you make the inner product small or high, they will agree on their predictions. But I disagree that this encourages anything to be one hot. Like in my mind, if you have two vectors, they're both zero one times zero one. The inner product is going to be one. And if you have two assignments that are point five and point five, then it is also going to result in an in an inner product of it. Zero point five. Right. It's also going to to be no. So what's the inner product here? The inner product is point five times point five plus point five times point five, which is point five. Am I dumb? An embarrassingly long time later. Oh, it's because the L1 norm. OK, OK, we got it. We got it. I am I am OK. I am too dumb. Yes, of course, I was thinking of these vectors being normalized in L2 space where their inner products would always be one. But of course, if you have assignments between classes and it's a probability distribution, a histogram, then all of the prob possible assignments lie on this on this thing right here. Now, the inner product with yourself, of course, is the length of the vector and the length of a vector that points to one class or the other class is longer than a vector that points in between. So, OK, I see. That's where they get this. That's where they get this must be one hot from. So, OK, I'll give that to them. 
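That arithmetic is easy to check numerically; a two-class toy example, nothing paper specific about it.

```python
import numpy as np

# Two L1-normalized assignment histograms (rows of a softmax output).
one_hot = np.array([1.0, 0.0])
uncertain = np.array([0.5, 0.5])

print(one_hot @ one_hot)      # 1.0 -> confident and consistent
print(uncertain @ uncertain)  # 0.5 -> consistent but not confident
# On the probability simplex, the dot product of two matching histograms
# shrinks as their mass spreads out, so maximizing it rewards agreement
# and one-hot (confident) predictions at the same time.
```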
It is actually encouraging one hot predictions as long as these things are normalized in L1 space, which they probably are because they're histograms. Right. Yes, that was that was dumbness of me. I was trying to make a counter example. I'm like, wait a minute, this counter example is a counter example to my counter example. OK, so, yeah, that's that. So, as you can see, they are, of course, correct here and they now make the first experiments. So they say basically after the first step of the self supervised training, they can already retrieve sort of nearest neighbors and the nearest neighbors. The nearest neighbors of these images right here are the ones that you see on the right. And after the self supervised one, these nearest neighbors are already pretty good at sharing the high level features actually crazy, crazy good. Right. This flute here is in different sizes. As you can see, the fishes aren't aren't all exactly the same. The birds. So you can see it really focuses on sort of higher level features, but I guess it's really dependent on this higher level task. And they were they also investigate this quantitatively, but I just want to focus on how good is this after only the self supervised thing. And now they do this clustering and they can already sort of could already evaluate it right here because now they have a clustering. Right. After this step, they've basically pulled together the neighbors and they have this neural network that is now assigning classes. So they could already evaluate this and they are going to do that. But that's not good enough yet. Then they do a third step, which is fine tuning through self labeling. Now self labeling is pretty much exactly what it's what it says. It's you label your own data with your own classifier. Now that might be a bit outrageous. But it's basically saying, wait a minute, if I label my own data and learn a classifier on these labels, isn't isn't it just going to come out the same? And the answer is no. Right. If you have a data set because your classifier doesn't give you just first of all, if your classifier is something like this. Right. Just happens to be and you label and you learn a new classifier. It is going to be more like this. Right. Because it sort of maximizes a lot of classifiers maximize these distances between the classes. So even if it's like that and then the second step they do is they say, OK, there are some points where we are actually more confident about such as this one. We're more confident about that one. Also this one. And then this one here is pretty close. Like we're not super neither this one, but we're very confident about these two. So we're only going to use the ones where we are in fact confident about to learn to learn the new classifier. Or basically we you can also weigh them and so on. But they go by confidence right here, as you can see in this final algorithm. So this is the entire algorithm. And I got kicked away. Our algorithm. There we go. All right. So semantic clustering by adopting nearest neighbors, their scan algorithm. So in the first step, you do this pretext task. This is the self supervision, the representation learning. For your entire data set. No, sorry. This is this year. Optimize, optimize the neural network with task T. That's just self supervised representation learning. OK, then the second thing we're going to determine the nearest neighbor set for each X. Now they also in that step, they also augment the data. They do heavy data augmentation and so on. 
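Putting the last few paragraphs together, the clustering objective can be sketched as below. This is a reading of the transcript, not the authors' code: scan_loss is a made-up name and the entropy weight is a guessed hyperparameter. The consistency term is the negative log dot product of the two soft assignments, and the entropy of the average assignment enters with a minus sign, so minimizing the total maximizes that entropy.

```python
import torch
import torch.nn.functional as F

def scan_loss(logits, neighbor_logits, entropy_weight=5.0):
    # Make an image and its mined neighbor agree (high dot product of the
    # soft assignments) while spreading the batch over all clusters.
    p = F.softmax(logits, dim=1)              # (B, C) soft assignments
    q = F.softmax(neighbor_logits, dim=1)     # (B, C) for the neighbors
    consistency = -torch.log((p * q).sum(dim=1) + 1e-8).mean()
    mean_p = p.mean(dim=0)                    # average cluster usage
    entropy = -(mean_p * torch.log(mean_p + 1e-8)).sum()
    return consistency - entropy_weight * entropy

logits = torch.randn(16, 10, requires_grad=True)  # hypothetical batch, C = 10
neighbor_logits = torch.randn(16, 10)
scan_loss(logits, neighbor_logits).backward()
```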
Also in the third step, the self-labeling, they do data augmentation. There are a lot of tricks in here, but ultimately the base algorithm goes like this. You find the neighboring set for each X. Then, while your clustering loss decreases, you update this clustering neural network with the loss we saw: the loss where you pull the nearest neighbors closer to each other while still keeping the entropy of the average assignment high. And after you've done this, you go to the self-labeling: while the length of Y increases (what's Y? Y is all the data points whose confidence is above a certain threshold), you filter the data set down to those confident points, that's your data set Y, and you fine-tune the same neural network with the cross-entropy loss on your own labels. So these are not true labels: it's the cross-entropy between the network's predictions and its own confident assignments. You basically do the same task, but you filter by confidence, and they use a threshold of 0.7 or something like this. Now let's go into the experiments; they look as follows. They do some ablations to find out where in their method the gains come from, and we'll just quickly go through them. If they just do the self-supervision at the beginning and then run k-means clustering on top of that, that gives them 35.9% accuracy on CIFAR-10. So not very good: you can't just cluster on top of these representations and be done. If they use what they call the sample-and-batch entropy loss, which basically means you do not care about the nearest neighbors (you do this entire thing, but you only make an image's prediction close to itself and its augmentations, so you use no nearest-neighbor information), that doesn't work either. I wouldn't pay too much attention to whether those numbers are 10, 20 or 30; it just doesn't work. Now, if you use the SCAN loss, all of a sudden you get into a regime where there is actual signal: this is now significantly above random guessing. And if you use strong data augmentation, you get 10 percent more. As I said, a lot of this has these tricks in it, of which data augmentations you pick and so on; never forget that these papers, besides their idea, put in all the tricks they can. And if you then do the self-labeling step, you get another 10 percent more. And that is fairly respectable: 83.5% without ever seeing labels is fairly good. But of course there are only 10 classes right here, so keep that in mind; they will do ImageNet later. They also investigate which kind of self-supervision task at the beginning is important, comparing things like RotNet, feature decoupling and noise contrastive estimation, of which noise contrastive estimation is the best. Noise contrastive estimation, I think, is just where, as we said, you input an image together with versions of it that have been augmented in various ways, and you train the network to classify them together; these methods have been very successful in the last few years. Since the confidence threshold of the self-labeling step will matter in a moment, here is a rough sketch of that filtering step.
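This is a minimal sketch of the confidence-based filtering, assuming a PyTorch-style model that outputs soft assignments. The 0.7 threshold is the value mentioned above; the function name and the single-pass filtering are my assumptions, and in the actual method this loop repeats while the confident set keeps growing.

```python
import torch

def self_label_filter(model, images, threshold=0.7):
    # One pass of the self-labeling step: keep only the samples the
    # clustering network is confident about and return them together with
    # their pseudo-labels; the network is then fine-tuned on this subset
    # with a standard cross-entropy loss (plus data augmentation).
    with torch.no_grad():
        probs = model(images)               # (N, C) soft cluster assignments
    confidence, pseudo_labels = probs.max(dim=1)
    keep = confidence > threshold
    return images[keep], pseudo_labels[keep]
```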
They have various investigations into their algorithm, and I want to point out this one here: the accuracy versus confidence after the complete clustering step, which is what the third step, the self-labeling, builds on. You can see that as the confidence of the network goes up, the actual accuracy goes up as well. So after the clustering, the network really is more confident about exactly the points that it can classify more accurately; there is a correlation between where the network is confident and the actual label of the point, which is remarkable, because it has never seen a label. But also see how small the range is: with the standard augmentation it goes only from here to here. So where you set that threshold is fairly important and might be quite brittle, because you need to set the threshold such that some points fall below it and some above it, and you don't want to pull in the wrong points: if you pull in points from here, you only have the correct label for 75 percent or so of them, and if you then self-label and train on those, you're going to learn the wrong signal. So this step seems fairly brittle, honestly, but I don't know for sure, of course. They go on and investigate various other things, such as how many clusters you need, and how many nearest neighbors you need, this number K here. You can see that with zero neighbors you do a lot worse than with, let's say, five nearest neighbors; the jump from zero to five is fairly big on all the data sets, but after that it doesn't really matter much. So it seems like five nearest neighbors are enough for most things. And here they show that when they remove the false positives, their algorithm actually converges to the correct clustering, the correct accuracy, which is not surprising: if you remove the wrongly assigned samples, the remaining samples are going to be right. I think that's just showing that it doesn't go into some kind of crazy downward spiral, but still, it's kind of funny. OK, so they also measure how much they improve, and they improve by quite a lot over previous methods, and that includes things like k-means, GANs, and DeepCluster, which we spoke about. Their method already gets fairly close to good accuracy: 88.6%, which is fairly remarkable on CIFAR-10 without ever seeing the labels. But let's go on, because now they move to ImageNet. ImageNet, of course, has way more classes: 1,000 compared to CIFAR-10's 10. So while clustering 10 classes that are fairly far apart from each other might work with various techniques, ImageNet with 1,000 classes is way more difficult. They do subsample it to 50, 100 and 200 classes, and they get OK accuracy: 81% for 50 classes, where a supervised baseline would get 86%, and 69% for 200 classes, where a supervised baseline would get 76%. So it's there, and that's quite remarkable for these lower numbers of classes. They also find that if they look for the samples that lie closest to the middle of their cluster, they get these prototypes right here. You can see all of these images: if you know ImageNet, many images really only show a part of the object and so on, whereas here, with the prototypical examples, you really get a centered, clear shot of the object with clearly visible features.
So this repeats the point that the clustering really does pick up on that sort of semantic information. Of course, the class names shown here come from the test label set; the network can't figure those out. Then they go for the full 1,000 classes, and there it doesn't really work anymore, because there might just be too many confusions. But they do show the confusion matrix of their method, and it is pretty much block-diagonal along these super-clusters right here: the network confuses the dogs with each other fairly often, and the insects with each other, but not really across those groups. Which is still quite remarkable, though I mean, you get the same thing for a lot of these methods, so I don't know how different this would look for other methods. But it's certainly interesting to look at. Now, they go into one last thing, and that is: what if we don't know how many clusters there are, if we don't know anything? So far, they say, we have assumed to have knowledge about the number of ground-truth classes, and the model predictions were evaluated using the Hungarian matching algorithm; we already saw this matching in DETR by Facebook, if you remember. However, what happens if the number of clusters does not match the number of ground-truth classes anymore? So they say table three reports the results when they overestimate the number of ground-truth classes by a factor of two: for CIFAR-10 they now build 20 clusters instead of 10. And we'll look at table three real quick. Where's table three? This is table three. When they over-cluster, you get the row here on the bottom, and you can see there is a drop in accuracy. Now, what they don't actually say is how they do the over-cluster matching. If I now have, say, six clusters but I need to assign them to three classes, do I still use this most optimistic matching? I think they do, where you assign everything to its best-fitting class: you compute all the possible assignments and then give it the best benefit of the doubt. But imagine the situation where I over-cluster to the point that every image sits in its own cluster, and I run this evaluation, giving my clustering the most beneficial view: then I would get 100 percent accuracy. So with an over-clustering approach, I would sort of expect the score to go up, because there is more generosity of the matching algorithm involved. That is counteracted by the fact that you can no longer group together things that obviously share features and belong to the same class. So there are two forces pulling here, but I was kind of astounded that the number goes down, and the evaluation method, this matching algorithm, sort of breaks down when you have many more clusters than classes, at least in my opinion. Still, it's interesting to see that you can just overshoot, although you then need some sort of heuristic to reconcile it. To make this evaluation concrete, here is a small sketch of a matching-based clustering accuracy.
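This is a minimal sketch, assuming scipy. It shows the strict one-to-one Hungarian matching; whether the paper switches to a many-to-one "best class per cluster" rule in the over-clustering case is exactly the detail I'm unsure about.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def clustering_accuracy(pred_clusters, true_labels, n_clusters, n_classes):
    # Count how often cluster c coincides with ground-truth class y.
    counts = np.zeros((n_clusters, n_classes))
    for c, y in zip(pred_clusters, true_labels):
        counts[c, y] += 1
    # Hungarian matching: pick the one-to-one cluster-to-class assignment
    # that is most generous to the predictions, then report accuracy.
    rows, cols = linear_sum_assignment(-counts)  # maximize matched counts
    return counts[rows, cols].sum() / len(true_labels)
```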
In any case, I think this paper is pretty cool. It brings together a lot of things that were already present and introduces this step-wise approach; and by the way, there are lots of samples down here. But what you have to keep in mind is that there are a lot of hyperparameters in here: the confidence threshold, the number of classes, the architectures and so on. All of this, all of the steps, all of the chosen data augmentations, has been tuned to get these numbers as high as possible. So to interpret this as "look, we can classify without knowing the labels": yes, in this case, but the hyperparameter choices of the algorithm are all informed by the labels. So it is still very, very unclear how this method would actually work when you really don't have the labels, when you have to choose the hyperparameters in the absence of anything. The future might tell, if they continue to work on this. All right, thanks for listening, looking, watching, and bearing with me through my wrestling with basic math in this video. I wish you a good day, and bye bye.
}, { "end": 2666, "start": 2656, "text": " There are like this threshold and you know, the first of all, yeah, the number of classes, the thresholds, the architectures and so on." }, { "end": 2672, "start": 2666, "text": " And all of this has been tuned to get these numbers really high." }, { "end": 2678, "start": 2672, "text": " Right. All of these steps, all of the augmentations and so on, the chosen data augmentations." }, { "end": 2683, "start": 2678, "text": " It has been chosen to get this number as high as possible." }, { "end": 2692, "start": 2683, "text": " So, you know, to interpret this as, oh, look, we can classify without knowing the labels is, you know," }, { "end": 2700, "start": 2692, "text": " yes, in this case, but the hyperparameter choices of the algorithm are all informed by the labels." }, { "end": 2708, "start": 2700, "text": " So it is still very, very unclear of how this method will actually work when you really don't have the labels," }, { "end": 2713, "start": 2708, "text": " when you actually have to choose the hyperparameters in absence of anything." }, { "end": 2719, "start": 2713, "text": " And yeah, I think the future might tell if they continue to work on this." }, { "end": 2729, "start": 2719, "text": " All right. Thanks for listening, looking, watching and bearing with me through my wrestling with various math," }, { "end": 2733, "start": 2729, "text": " basic math in this video. I wish you a good day and bye bye." } ]
3_qGrmD6iQY
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
On the Measure of Intelligence by François Chollet - Part 1: Foundations (Paper Explained)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "chollet", "keras", "google", "francois", "intelligence", "iq", "iq test", "deep neural networks", "prior", "skill", "performance", "measurement", "measure", "test", "number", "intelligent", "smart", "learning", "generalization", "ability", "experience", "humans", "evolution", "nature", "nurture", "psychometrics", "range", "adaptability", "arc", "kaggle", "difficulty", "entropy", "core knowledge", "objectness", "navigation", "contact", "agent", "goal" ]
How does one measure the Intelligence of an AI? Is AlphaGo intelligent? How about GPT-3? In this landmark paper, Chollet proposes a solid measure of intelligence for AI that revolves around generalization, rather than skill. OUTLINE: 0:00 - Intro 1:15 - The need for a measure of intelligence 3:35 - Intelligence as generalization ability 5:45 - Nature vs nurture 11:45 - Skill-based evaluation 18:30 - Generalization based evaluation 30:25 - Inspiration from psychometrics 36:30 - Conclusion https://arxiv.org/abs/1911.01547 Abstract: To make deliberate progress towards more intelligent and more human-like artificial systems, we need to be following an appropriate feedback signal: we need to be able to define and evaluate intelligence in a way that enables comparisons between two systems, as well as comparisons with humans. Over the past hundred years, there has been an abundance of attempts to define and measure intelligence, across both the fields of psychology and AI. We summarize and critically assess these definitions and evaluation approaches, while making apparent the two historical conceptions of intelligence that have implicitly guided them. We note that in practice, the contemporary AI community still gravitates towards benchmarking intelligence by comparing the skill exhibited by AIs and humans at specific tasks such as board games and video games. We argue that solely measuring skill at any given task falls short of measuring intelligence, because skill is heavily modulated by prior knowledge and experience: unlimited priors or unlimited training data allow experimenters to "buy" arbitrary levels of skills for a system, in a way that masks the system's own generalization power. We then articulate a new formal definition of intelligence based on Algorithmic Information Theory, describing intelligence as skill-acquisition efficiency and highlighting the concepts of scope, generalization difficulty, priors, and experience. Using this definition, we propose a set of guidelines for what a general AI benchmark should look like. Finally, we present a benchmark closely following these guidelines, the Abstraction and Reasoning Corpus (ARC), built upon an explicit set of priors designed to be as close as possible to innate human priors. We argue that ARC can be used to measure a human-like form of general fluid intelligence and that it enables fair general intelligence comparisons between AI systems and humans. Authors: François Chollet Thumbnail: Photo by mohamed hassan Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Hello there! Today we're going to look at On the Measure of Intelligence by François Chollet of Google. This is a bit of a special episode, I would say, because if you look at the paper, it is first of all very long, and second of all it is basically a wall of text. Now it's very interesting text, but if I were to go through this with you, we'd basically just be scrolling and reading along. So what I've done is I've read this and taken notes, and I will attempt to just tell you what happens, at least for the first part. So I intend for this to be a multi-part series because it's so long. The first part, as you can see here, is context and history, which is a little less boring than it sounds. The second part is going to be a new perspective, where Chollet proposes his measure of intelligence, and the third part is going to be about the benchmark, the ARC benchmark, which is currently, I believe, running on Kaggle. So as it looks right now, three parts, and today we're going to dive into that first part. So here we go. He basically says that we need to define what intelligence means. We need an explicit goal to measure: if we think about AI, artificial intelligence, what does intelligence mean? We need something where we can put a number, or multiple numbers, on it and say that's intelligent, that's not intelligent. What we have right now is basically just anecdotes. We all kind of feel what seems intelligent, but we are not sure, and sometimes it's very misleading. He brings up the Turing test for example, which is that you have a computer or a human behind a wall, and on the other end sits a human, and that human needs to communicate without seeing what he or she communicates with, and then determine whether or not on the other side of the wall is a human or a computer. And if the computer could fool the human into a 50-50 guess, then the computer would be passing the Turing test and therefore be intelligent.
Now Chollet doesn't go right now into why that's not sufficient, but he's basically saying it is not sufficient, it's distracting, and second of all it's basically just outsourcing the problem of defining intelligence to a human, to this human right here, who is fallible and noisy and doesn't really know either. All you tell the human is basically: is this thing intelligent, does this thing seem human to you? It's also not clearly defined. So we need something more, and Chollet says the definitions of intelligence that exist today are implicit definitions that are loaded with biases, biases basically from a human perspective on what intelligence is, and if we want to really make progress in measuring intelligence, we need to point out these biases that are in these measures. He has a range of quotes, namely one here: intelligence measures an agent's ability to achieve goals in a wide range of environments. That was, I believe, the conclusion of an author who collected lots of different definitions and tried to distill them into one sentence, and that's it: intelligence measures an agent's ability to achieve goals in a wide range of environments. So the crucial parts here: first, the ability to achieve goals, so the agent must be doing something useful; in reinforcement learning we'd say it must be getting high rewards. And the second part is in a wide range of environments. So the notion right here, which we're going to encounter time and time again, is basically a combination of skill and adaptivity: it's not enough to have high skill, you also need to be adaptive to very different environments, to a range of environments. And this is the main issue that Chollet has with the current definitions of intelligence and the current direction of the AI field, because it mostly measures skill and not generalization or adaptivity. Now he says that in this sentence right here that you just saw, something is said implicitly, namely that these skills, this ability to achieve goals in a wide range of environments, must be acquired, must be learned. The agent should basically learn to adapt to these new tasks, these different environments, and then the agent is intelligent. It's not that intelligent when it is sort of pre-programmed to already handle these environments. So he says that's sort of implicit in that statement, and we're going to see how this is made explicit later. He then goes into the point that there are basically two different viewpoints on intelligence, this old nature versus nurture debate, which refers to things like crystallized intelligence versus skill-acquisition intelligence. So the evolutionary view would be that intelligence is sort of this set of static programs, and here we simply boil down these two views to their extremes; I don't think any major evolutionary biologist is that extreme right now, but these were historical sets of views that were held. One of them was that intelligence is basically all pre-programmed into you by evolution: you can solve this puzzle because during evolution your ancestors that could solve these puzzles survived, you can plan your path through a tree jungle because that was beneficial to you, and so evolution
put that into your brain. And what results from this view is: AI is the science of making machines capable of performing tasks that would require intelligence if done by humans. That's a quote I believe by Minsky, at least Chollet says it's by Minsky, or I misread. If you have this view, that AI is basically just this set of static programs, that means that if a human applies their set of programs to a task and achieves 200 points, then if an AI comes along and achieves 201 points, it is intelligent, because it has simply outperformed the human's static set of programs: intelligence is this static set of programs, and the AI has a better static set of programs. So Minsky basically says: if we know of a task that would require intelligence if done by a human, then something that can solve that task is intelligent. And this equates learning basically just to memorization: if you ask a proponent of this viewpoint, well, what's learning then, if everything's pre-programmed, what can we still learn, they would say, yeah, but learning is just that you memorize situations, and that particular ability is also pre-programmed into you. The other extreme viewpoint is the tabula rasa viewpoint, which basically says you come into this world and your brain is a blank slate, and all of your abilities you must acquire through learning throughout your life. So this is another extreme viewpoint, and in terms of intelligence, where that leads is the following: AI is the science and engineering of making machines do tasks they have never seen and have not been prepared for beforehand. That's a quote by McCarthy. And Friedberg: if we are ever to make a machine that will speak, understand, translate human languages, solve mathematical problems with imagination, practice a profession or direct an organization, either we must reduce these activities to a science so exact that we can tell a machine precisely how to go about doing them, or we must develop a machine that can do things without being told precisely how. So this leads to more of these notions, which you can see here, of how the machines have not been prepared for a particular situation: if we make a machine that can do a task that it has not been prepared for, we know it's basically intelligent. And again, if we make a machine that can do all of the things right here, then, Friedberg says, either we must reduce these activities to an exact science, so basically we must program the solution in there already, or we must develop a machine that can do things without being told precisely how. And as you might realize, this is much closer to the machine learning paradigm. It's basically all about how much you say "precisely", because the extreme proponent of this view would basically recognize any sort of learning, anything that you haven't seen before, as intelligent: if you can handle any new situation, you're intelligent. And Chollet is going to argue that that's also not really the case, that we have to be a bit more graded about it. But this is basically the machine learning approach: we build machines that can do things without being told precisely how, machines that have not been prepared for something beforehand, that can solve things that are not in the training data. That's one interpretation, and if you're a very strong proponent of this, you would call that intelligent. And Chollet
is going to argue that the truth of course is somewhere in the middle between these two viewpoints, and therefore defining intelligence in either of these terms is going to lack in expressivity and in usefulness. So how do we evaluate AI? Chollet goes through different levels of AI evaluation here. First of all he contrasts these two things right here: skill-based evaluation and generalization-based evaluation. In skill-based evaluation you basically go for one given task, so you evaluate a system on one given task. One example here is the Turing test, and that's done by human review. Another example is where you have a proof, so you evaluate a system by giving an optimality proof: you can analyze it and say it is always correct at this particular task. What you can also do is pure competition; this is maybe what we see in something like chess, where we let the bots play first humans and then other bots, and we determine which one's the best. And also the most familiar one: benchmarks. This would be where your, I don't know, your ImageNet test set is, right here. That's a skill-based evaluation, one given task: how well can you solve the ImageNet test set without looking at it? That's one task. So the problem, Chollet says, with this skill-based evaluation is sort of obvious: it's a single focus, you are only good at this particular thing. One of the examples of this is the fact that the winning Kaggle models are usually useless outside of that particular data set, because they're just so hyper-optimized and hyper-focused on winning that particular Kaggle competition. There's actually a whole science of how to set up a Kaggle competition such that you can then use the winning model afterwards for doing something actually useful. Note that there are no conditions on how to arrive at a solution, and Chollet notes that as well; that's basically his point, which is going to come into the measurement later, into the math, where he says you simply have to arrive at a solution. This skill-based evaluation usually doesn't care how you arrive there: the ImageNet test set score doesn't care how you got the neural network, it simply cares how many images you classify correctly. And this leads to what is called the AI effect, which I didn't know was called like this until recently, but it's fairly obvious: people come up with a task that they say requires intelligence. So people used to say, oh, checkers, the game of checkers requires intelligence, and then you build a machine to solve checkers, because you can just do a bit of a smart tree search, and you solve it, and you tell them, here's a tree search that does checkers, and they'll say, well, but that's not really intelligent, it's just a tree search; but chess, you can't possibly do the full tree search, so chess is intelligent. And then you build a smarter tree search, they build Stockfish, and they're like, yeah, but that's just this machine thing. And so the goalposts keep moving: every time they come up with a task and you solve the task, they'll just say, well, that's not really intelligence, this next task, that's intelligence. And it's easy to see that if you just do this skill-based evaluation, you will never get there, because it's always
going to be the next task, the next task, the next task. It's overly anthropocentric, it's overly based on how humans view the world. And what is not in this definition, again, is this acquisition aspect: why do we think that someone who plays chess very well, why do we think Magnus Carlsen is smart, why do we think someone like a Go master is very intelligent? That's because we know that this person is human. At least we believe so; there are doubts about some of these grandmasters, but we believe that they are humans, and therefore we know that they have only had whatever 20, 30 years to learn this, and they must eat regularly, and they can only think so fast, and it's hard to memorize things as a human. So we know all of the constraints that went into learning this, and we know there is no situation like Neo has in the Matrix, where you can just upload the solution to chess into your brain. We know what's required to achieve that level of success, and we know the only way this can be done is through general intelligence; we know that there is this correlation in humans that if you are good at chess, you are very likely to have this general problem-solving ability. That's a human-centric view, and it does not hold for machines. Machines can take forever to calculate, they can distill years and years of experience, like thousands of years, and this would also be the same case with OpenAI's Dota Five, right: Dota Five is exactly here, AlphaGo is exactly here. We only think they might be intelligent if a human does it, because we know what's required for humans to get there. Again: focus on skill acquisition. Now you might be bored a little bit, okay, it's about skill acquisition, but think about it: it's not that easy to actually define this skill-acquisition thing without falling back into the exact same trap. So he goes on to say, okay, as opposed to this skill-based evaluation, we can measure generalization. So what's generalization? Generalization is the broad ability to handle tasks that differ from previous tasks: you have a task and it's different from previous tasks, and you generalize. Now there are two ways you can view this. There is system-centric generalization, and that's basically if you take the strict definition: a machine learning system trains on the training set and then is evaluated on the test set, which it has never seen before, so it's generalizing. That's called system-centric generalization. But that's not really enough here, because we also need to take into account the developer of the system. Developer-aware generalization means that you generalize to situations that are new to the system and to the developer. A developer of an ImageNet model knows that it is going to be evaluated on the ImageNet test set, and so that is in the system-centric category, because the developer knows. Developer-aware generalization also takes that fact into account, and it says you only have developer-aware generalization when the system generalizes to something that is not known even to the developer themselves, that they haven't foreseen. So this accounts for prior knowledge of the developer. Chollet then defines different degrees of generalization, largely along these lines. Absence of generalization is when you have an algorithm that you absolutely
have built in such a way that it works for every possible situation, like a certain sorting algorithm that you have mathematically proven to work for all sequences of numbers: no generalization, everything has been foreseen. Then there is local generalization, and in machine learning we call this something like robustness. This would be your test set robustness, a small distribution shift, so the test set here comes from a known distribution. This is the notion of known unknowns: you have an idea of what can come at your system, and you basically require a dense sampling of the input space. Usually machine learning training sets are very densely sampled, which means there's a lot of data there that we can learn from. So we have lots and lots of data, and when the test point comes, it is going to be somewhere in between all of these training data points, so we can infer from the surrounding training data points what the test data point is going to be like: if there's a classification boundary right here, we can sort of nearest-neighbor it. And there are arguments that deep networks are basically large nearest-neighbor classifiers, but that's a topic for another day. This is where we are in machine learning right now: we do local generalization, we know our unknowns, we know our test set. The opposite of this is broad generalization. Broad generalization is where you don't know what you don't know, unknown unknowns: you don't know what comes at test time and you can't pre-build your expectations into the system. This is more akin to something like level-five autonomous driving, where you build this car but you don't really know what kinds of situations are coming. Now, this is a fuzzy definition, right, I mean you do sort of know what situations will come at the car, you can certainly probabilistically make a statement about them. So it's not a clear-cut definition; in the math it seems clear-cut, but when we get there, I don't think it is that clear-cut, honestly, it's still kind of an intuition thing what you categorize as local and broad and so on. Also here is the Wozniak coffee cup example, where Wozniak basically says you should be able to build a robot that goes into any kitchen and gets you a cup of coffee, and here you have unknown unknowns, because you can't possibly foresee all possible kitchen arrangements: there might be obstacles and so on, there might be different coffee makers that you've never encountered before. But I've long been saying that this is a bit of a trick right here, because what you can always do is construct a room, a kitchen, and right here is the coffee machine, one of these fancy Nespresso machines where you put in a capsule. But then you build a wall around it, and the wall has a door, and the door will only open if you solve an IQ test, or any sort of test really: whatever you put in that spot, that's the level of generalization you can achieve, basically. So you can always up the level of generalization, or you can put, I don't know, the halting problem here: you can say this door only opens if you, whatever, give me a proof of the ABC conjecture, something like this. So the coffee cup example kind of
kind of has some backdoors. In any case, you sort of know what Wozniak means: you should be able to go into a standard kitchen, but standard kitchens are still diverse enough that you can't foresee all of them. Like, if any of you has the sort of kitchen that I'm talking about, mad respect, but we will all get this robot and you'll just have to wait for the next iteration. Okay, then there's extreme generalization. Extreme generalization is where things are kind of open-ended: you don't know what's going to come, you don't even know the broad category of tasks that is going to come. Broad here still refers to a broad category of related tasks, so it is sort of a general ability, and extreme generalization just means whatever comes, you can solve it. But it is different from universal generalization, which Chollet says would be any conceivable task in the universe, and that's pointless: it's pointless because it's just too much, there's this no-free-lunch theorem. Plus, what we actually want is human-level intelligence, and human-level intelligence has this property of extreme generalization, where with extreme generalization we mean a scope, it's dependent on a scope, namely the scope of all human tasks: all tasks that humans could produce, could find useful, could find themselves in, or could pose to this system, not all tasks that the universe could pose. So here you don't even have a fixed relation between tasks; the relations between tasks are at most abstract. Maybe it's the general ability of sorting things, in whatever fashion, whatever these things are, with whatever properties, or the general ability to communicate an idea, or something like this. And this in humans is called the g factor, or it's related to it. Chollet really goes after psychometrics here and really models his framework after psychometrics for humans, and one of the achievements in psychometrics is this measure of the g factor, and that's what we humans usually call intelligence. He says: note that humans have system-centric and developer-aware generalization, though, you know, the one contains the other. Why? Because we can handle situations that previous humans haven't experienced. Now I'm not so sure. He basically says humans have developer-aware generalization because we can fare well in situations that no humans during evolution have experienced prior. But okay, let's take this abstractly, let's say our developer is the evolutionary process. You still have to ask: can humans really solve things that the evolutionary process has not built into them in some form? I guess that refers back to nature versus nurture. Like, humans cannot multiply long floating-point numbers without pen and paper, it doesn't matter how much you learn or something like this; there are some things that they just can't do but would want to do, and I guess the evolutionary path simply didn't provide for us doing that kind of stuff, we have a finite working memory and so on. So I think the discussion is still to be had whether we really do have developer-aware generalization if you consider our developer to be the evolutionary process, but we can forgive a little bit here. So this is the general diagram that also emerges from theories of intelligence from psychology, where generally you have a general intelligence
factor, which is one factor. This is quite remarkable: in humans there is one general intelligence factor; statistically, all these general intelligence tasks broadly correlate and lead to one statistical factor. It's not obvious why that should be, but it turns out to be one factor, and that distributes hierarchically into these things which are called broad abilities, broad cognitive abilities, and in Chollet's framework that would correspond to broad generalization. And then these are again hierarchically subdivided, sometimes, as you can see, into shared task-specific skills, and this in Chollet's framework would be local or no generalization. So again, he goes into psychometrics, and specifically IQ tests for humans: can they inform the measuring process? The thing to note here, according to Chollet, is that in an IQ test you want to measure these broad abilities, ultimately you want to measure g. But even if you measure different things, in psychometrics you want to measure these broad abilities, and these are abstract concepts, so what you're left with, what you can only do, is really measure tasks. (And is this wrongly numbered, or is it intentional? I don't know.) You can only measure tasks, but you somehow have to make an inference about the broad ability from measuring the tasks. So that's the difficulty in psychometrics: you want to measure the abilities, but you can only measure tasks; the abilities are abstract concepts, and the skills are the measurable things where you can put a number on them. Now, what these IQ tests do is they usually employ a broad battery of tests, so you don't give the human just one task, you give the human a lot of tasks: complete this series, which number comes next, rotate this in your head, and so on, but there are also very human-centric things like reading comprehension. So you do this broad battery of tests, and you might think, oh, okay, this is sort of like the Atari suite, where one reinforcement learning agent has to solve a whole bunch of Atari games, or SuperGLUE in NLP, where one NLP system has to learn to do all these different NLP tasks, you know, there is entailment, there is sentiment, there is Boolean question answering. But according to Chollet it's not really the case that these are equivalent, because it is a battery, but it is known to the developer. The developer knows that the NLP system has to solve the SuperGLUE thing, so the developer can first of all train the system until it reaches a good SuperGLUE score, but the system will also have built in already the assumptions of the developer that you have to solve this. So the second important thing about these batteries of tests, these IQ tests, is that they are unknown to the tested; the tested cannot, or ideally should not, practice for them. That's why people keep developing new and new IQ tests: first of all because we sort of know they all correlate, so they measure the same thing, but also because otherwise, if you just always do the same test, people could practice it, and then you would no longer measure the general ability, you would only measure that one test. By the way, that's also why a lot of these brain exercise apps and so on, none of them really ups your intelligence: you only get better at the one app if you do that, you don't get smarter in general.
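To make the "one statistical factor" point above concrete, here is a toy sketch; the simulated scores, the loadings, and the eigen-decomposition shortcut are all illustrative assumptions, since real psychometrics fits proper factor models rather than this one-liner.

```python
import numpy as np

rng = np.random.default_rng(0)
g = rng.normal(size=500)                     # latent general ability per subject
loadings = np.array([0.8, 0.7, 0.6, 0.75])   # how much each test taps g (assumed)
noise = rng.normal(size=(500, 4))
scores = g[:, None] * loadings + noise * 0.5 # four correlated test scores

corr = np.corrcoef(scores, rowvar=False)     # tests all positively correlate
eigvals, eigvecs = np.linalg.eigh(corr)      # eigenvalues in ascending order
print("variance explained by top factor:", eigvals[-1] / eigvals.sum())
print("loadings on top factor:", eigvecs[:, -1])
```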
And Chollet says there have been a number of attempts at making machines, making AI, solve human IQ tests. The reasoning is as follows: humans develop IQ tests for humans, and presumably those are not known in advance, and so on. But again, the tasks of IQ tests are broadly known. I guess IQ tests really only work on humans because they work on humans who don't really care: if someone really, really cared, they would research what kinds of tests there are, they would look at all the tests from history; there are only so many tests you can come up with, and the new ones are going to be variations on the old ones, so technically, if you really wanted, you could prepare super hard. And that's exactly what developers are going to do: they're basically going to look at all these tasks, they're going to pre-solve the problem, and then they're going to program their pre-solved solution into an AI system. So we can't just let AI systems solve human IQ tests. What we need are tests that are reliable, which means they're reproducible; that are valid, which means they really measure artificial intelligence and not just task-specific skill or something else; that are standardized across the spectrum, so everyone can do them in the same way (by the way, the current benchmarks are standardized, that's the good part about them); and they should be free from bias, which means they should not measure anything orthogonal to what they claim to measure. The example he gives is that they should not measure reaction time, which is a big component in human IQ tests: there you also measure how fast the human is at the test, and the machine obviously, if you simply put more electrons through the cable, or if you put more GPUs there, is going to run faster. So in broad terms, what we should focus on is this skill acquisition, as I said from the beginning, but it is not as easy as you might think right now, and we're going to dive into that in the next episode, which is going to be math-heavy, and that's going to be fun. So I hope you enjoyed this kind of special episode; maybe let me know if you like this style. The paper doesn't have any pictures, so you're just left with what I'm drawing. Yeah, if you enjoyed this, leave a like, leave comments, share it out, and I'll see you next time. Bye bye.
[ { "end": 5.28, "start": 0, "text": " Hello there! Today we're going to look at On the Measure of Intelligence by" }, { "end": 12.32, "start": 5.28, "text": " François Cholet of Google. This is a bit of a special episode I would say because" }, { "end": 18.52, "start": 12.32, "text": " if you look at the paper it is first of all it's very long and then second of" }, { "end": 24.88, "start": 18.52, "text": " all it is a wall of text basically. Now it's very interesting text but if I were" }, { "end": 29.72, "start": 24.88, "text": " to go through this with you we basically just be kind of scrolling and reading" }, { "end": 36.08, "start": 29.72, "text": " along. So what I've done is I've basically read this and taken notes and" }, { "end": 41, "start": 36.08, "text": " I will attempt to just tell you what happens at least for the first part. So" }, { "end": 46, "start": 41, "text": " I intend for this to be a multi-part series because it's so long. So the first" }, { "end": 51.72, "start": 46, "text": " part as you can see here is context and history which is a little less boring" }, { "end": 56.72, "start": 51.72, "text": " than it sounds. The second part is going to be a new perspective where Cholet" }, { "end": 62.04, "start": 56.72, "text": " proposes his measure of intelligence and the third part is going to be about the" }, { "end": 67.32, "start": 62.04, "text": " benchmark, the ARC benchmark that is currently I believe running on Kaggle. So" }, { "end": 73.64, "start": 67.32, "text": " as it looks right now three parts and today we're going to dive into" }, { "end": 81.16, "start": 73.64, "text": " that first part. So here we go. He basically says that we need to define" }, { "end": 88.44, "start": 81.16, "text": " what intelligence means. We need an explicit goal to measure where if we" }, { "end": 92.08, "start": 88.44, "text": " think about AI like artificial intelligence what does intelligence" }, { "end": 96.12, "start": 92.08, "text": " mean? We need something where we can basically put a number or multiple" }, { "end": 101, "start": 96.12, "text": " numbers on it and says that's intelligent, that's not intelligent. What we have" }, { "end": 106.36, "start": 101, "text": " right now is just basically anecdotes. We all kind of feel what seems" }, { "end": 113.44, "start": 106.36, "text": " intelligent but we are not like sure and sometimes it's very misleading. He brings" }, { "end": 119.92, "start": 113.44, "text": " up the Turing test for example which is that you have a computer or a human" }, { "end": 125.2, "start": 119.92, "text": " behind a wall and on the other end sits a human and the human needs to kind of" }, { "end": 130.12, "start": 125.2, "text": " communicate without seeing what he or she communicates with" }, { "end": 135.48, "start": 130.12, "text": " and then determine whether or not on the other side of the wall is a human or a" }, { "end": 141.51999999999998, "start": 135.48, "text": " computer and if the computer could fool a human into kind of a 50-50 guess then" }, { "end": 148.56, "start": 141.51999999999998, "text": " the computer would be passing the Turing test and therefore intelligent. 
Now" }, { "end": 153.28, "start": 148.56, "text": " Shirley doesn't go right now into why that's not sufficient but he's basically" }, { "end": 158.35999999999999, "start": 153.28, "text": " saying this is not sufficient it's distracting and second of all it's" }, { "end": 162.79999999999998, "start": 158.35999999999999, "text": " basically just outsourcing the problems of defining intelligence to human right" }, { "end": 169.36, "start": 162.8, "text": " to this human right here who is fallible and noisy and you know doesn't all" }, { "end": 174.96, "start": 169.36, "text": " doesn't really know all you all you tell the human is basically like is this" }, { "end": 179.84, "start": 174.96, "text": " thing intelligent does this thing seem human to you it's also not clearly" }, { "end": 186.84, "start": 179.84, "text": " defined so we need some something more and Shirley says the definitions that" }, { "end": 193.32, "start": 186.84, "text": " exist today of intelligence are basically they they have implicit they" }, { "end": 199.44, "start": 193.32, "text": " are implicit definitions that are loaded with biases and biases basically from a" }, { "end": 203.92000000000002, "start": 199.44, "text": " human perspective on what intelligence is and if we want to really make" }, { "end": 209.12, "start": 203.92000000000002, "text": " progress in terms of in of measuring intelligence we need to point out these" }, { "end": 218.20000000000002, "start": 209.12, "text": " biases that are in these measures okay they has a range of quotes namely one" }, { "end": 224.84, "start": 218.20000000000002, "text": " here intelligence measures and agents ability to achieve goals in a wide range" }, { "end": 231.12, "start": 224.84, "text": " of environments that was I believe the conclusion of a an author that distilled" }, { "end": 237.08, "start": 231.12, "text": " lots of different definitions and try to distill them into one one sentence and" }, { "end": 241.16000000000003, "start": 237.08, "text": " that's it intelligence measures and agents ability to achieve goals in a" }, { "end": 247.4, "start": 241.16000000000003, "text": " wide range of environments so the crucial parts here is to ability to" }, { "end": 252.8, "start": 247.4, "text": " achieve goals so it must the agent must be you know doing something useful doing" }, { "end": 257.12, "start": 252.8, "text": " it like in reinforcement learning we'd say it must be getting higher rewards" }, { "end": 264.68, "start": 257.12, "text": " and the second part is in a wide range of environments so the the notion right" }, { "end": 269.32, "start": 264.68, "text": " here that we're going to encounter time and time again is basically an addition" }, { "end": 275.84000000000003, "start": 269.32, "text": " of skill and adaptivity so if you have it's not enough to have high skill you" }, { "end": 281.2, "start": 275.84000000000003, "text": " also need to be kind of adaptive to very very different environments to a range of" }, { "end": 286.92, "start": 281.2, "text": " environments and that's this this is the main issue that surely has with the" }, { "end": 291.2, "start": 286.92, "text": " current sort of definitions of intelligence and the current direction of" }, { "end": 298.24, "start": 291.2, "text": " the AI field because it mostly measures skill and not generalization or" }, { "end": 305.44, "start": 298.24, "text": " adaptivity now he he says in this thing in this sentence right here that you" }, { "end": 311.76, "start": 305.44, "text": " just saw 
there is an implicit sort of something is said implicitly namely that" }, { "end": 317.88, "start": 311.76, "text": " these these these skills this ability to achieve goals in this wide range of" }, { "end": 323.52, "start": 317.88, "text": " environments it must be acquired it must be learned these these new tasks these" }, { "end": 328.64, "start": 323.52, "text": " different environments the agent should basically learn to adapt to the" }, { "end": 333.71999999999997, "start": 328.64, "text": " different environments then and then the agent is intelligent it's not that" }, { "end": 337.44, "start": 333.71999999999997, "text": " intelligent when it is sort of pre-programmed to already handle these" }, { "end": 342.4, "start": 337.44, "text": " environments so he says that's that's sort of implicit in that statement and" }, { "end": 347.79999999999995, "start": 342.4, "text": " we're going to see how this is made explicit later he goes into basically" }, { "end": 352.28, "start": 347.79999999999995, "text": " there are two two different viewpoints on intelligence this old nature versus" }, { "end": 357.91999999999996, "start": 352.28, "text": " nurture debate and that refers to two things like crystallized intelligence" }, { "end": 364.67999999999995, "start": 357.91999999999996, "text": " versus skill acquisition intelligence so the evolutionary view would be that" }, { "end": 370.12, "start": 364.67999999999995, "text": " intelligence is sort of this set of static programs and here we simply kind" }, { "end": 374.96, "start": 370.12, "text": " of boil down these two views to their extremes right so don't I don't think any" }, { "end": 382.72, "start": 374.96, "text": " major evolutionary biologist is complete like is apps is that extreme right now" }, { "end": 388.88, "start": 382.72, "text": " but these were historical set of views that were held one of them was that" }, { "end": 393.34000000000003, "start": 388.88, "text": " intelligence is basically just it's all pre-group pre-programmed into you by" }, { "end": 399.32, "start": 393.34000000000003, "text": " evolution so you can you can solve this puzzle because during evolution you know" }, { "end": 404.59999999999997, "start": 399.32, "text": " your ancestors that could solve these puzzles were were survived you can plan" }, { "end": 410.64, "start": 404.59999999999997, "text": " your path through a a tree jungle because you know that was beneficiary to" }, { "end": 418.84, "start": 410.64, "text": " you and so evolution put that into your brain and therefore what results is AI" }, { "end": 423.64, "start": 418.84, "text": " is the science of making machines capable of performing tasks that would" }, { "end": 430.28, "start": 423.64, "text": " require intelligence if done by humans that's basically what Minsky says a" }, { "end": 436.32, "start": 430.28, "text": " quote I believe by Minsky at least Cholay says it's by Minsky or I misread" }, { "end": 441.96, "start": 436.32, "text": " where if you have this this set of view that that AI is basically just this set" }, { "end": 448.15999999999997, "start": 441.96, "text": " of static programs that means that if a human applies that set of programs to a" }, { "end": 457.32000000000005, "start": 448.16, "text": " task right and the human achieves 200 points it means if the if an AI comes" }, { "end": 463.8, "start": 457.32000000000005, "text": " along and achieves 201 points then it is intelligent because it has simply the" }, { "end": 469.08000000000004, "start": 463.8, "text": " 
better set of the better it has outperformed the static set of programs" },
{ "end": 473.2, "start": 469.08, "text": " intelligence is this static set of programs and the AI has a better" },
{ "end": 481.24, "start": 473.2, "text": " static set of programs so basically Minsky says if we know of a task that" },
{ "end": 490.56, "start": 481.24, "text": " would require intelligence if done by a human then something that can" },
{ "end": 497.16, "start": 490.56, "text": " solve that task is intelligent and this equates learning basically just to" },
{ "end": 502, "start": 497.16, "text": " memorization if you ask a proponent of this viewpoint well what is" },
{ "end": 505.52, "start": 502, "text": " learning like if everything's pre-programmed can we still learn" },
{ "end": 509.76, "start": 505.52, "text": " and they would say yeah but the learning is just you memorize situations and that" },
{ "end": 517.68, "start": 509.76, "text": " particular ability is also pre-programmed into you the other extreme" },
{ "end": 525.2, "start": 517.68, "text": " viewpoint is this tabula rasa viewpoint which basically says you come into" },
{ "end": 530.16, "start": 525.2, "text": " this world and your brain is a blank slate and all of your" },
{ "end": 535.6, "start": 530.16, "text": " abilities you must acquire through learning throughout your life so" },
{ "end": 542.76, "start": 535.6, "text": " this is the other extreme viewpoint and in terms of intelligence where that leads" },
{ "end": 549.28, "start": 542.76, "text": " is the following AI is the science and engineering of making machines do tasks" },
{ "end": 555.48, "start": 549.28, "text": " they have never seen and have not been prepared for beforehand that's a" },
{ "end": 560.68, "start": 555.48, "text": " quote by McCarthy and Friedberg says if we are ever to make a machine that will" },
{ "end": 564.6, "start": 560.68, "text": " speak understand translate human languages solve mathematical problems" },
{ "end": 570.16, "start": 564.6, "text": " with imagination practice a profession or direct an organization either we must" },
{ "end": 574.2, "start": 570.16, "text": " reduce these activities to a science so exact that we can tell a machine" },
{ "end": 579.54, "start": 574.2, "text": " precisely how to go about doing them or we must develop a machine that can do" },
{ "end": 586.14, "start": 579.54, "text": " things without being told precisely how so this leads to more of these notions" },
{ "end": 593.28, "start": 586.14, "text": " right here where you can see that the machines have not been prepared for a" },
{ "end": 598.12, "start": 593.28, "text": " particular situation so if we make a machine that can do a task that it has" },
{ "end": 607, "start": 598.12, "text": " not been prepared for we know it's basically intelligent and again if we" },
{ "end": 611.8, "start": 607, "text": " make a machine that can do all of these things right here then" },
{ "end": 618, "start": 611.8, "text": " Friedberg says either we must reduce these activities to a science so exact so" },
{ "end": 623.76, "start": 618, "text": " basically we must program the solution in there already or we must develop a" },
{ "end": 629.04, "start": 623.76, "text": " machine that can do things without being told precisely how and as you" },
{ "end": 634.12, "start": 629.04, "text": " might realize this is much closer to the machine learning paradigm it's" },
{ "end": 642.72, "start": 634.12, "text": " basically all about how much you say precisely because the extreme" },
{ "end": 648.28, "start": 642.72, "text": " proponent of this thing would basically recognize any sort of learning" },
{ "end": 653.84, "start": 648.28, "text": " anything that you haven't seen before as intelligent right if you can" },
{ "end": 660.28, "start": 653.84, "text": " handle any new situation you're intelligent and Chollet is going to" },
{ "end": 664.68, "start": 660.28, "text": " argue that that's also not really the case we have to be a bit more" },
{ "end": 669.88, "start": 664.68, "text": " graded about it but this is basically the machine learning approach" },
{ "end": 676.32, "start": 669.88, "text": " we build machines that can do things without being told precisely how that" },
{ "end": 681.68, "start": 676.32, "text": " they have not been prepared for beforehand they can solve things that" },
{ "end": 686.84, "start": 681.68, "text": " are not in the training data that's one interpretation and if you're a very" },
{ "end": 693.16, "start": 686.84, "text": " strong proponent of this you would call that intelligent and Chollet is going to" },
{ "end": 696.72, "start": 693.16, "text": " argue that the truth of course is somewhere in the middle between these" },
{ "end": 700.84, "start": 696.72, "text": " two viewpoints and therefore defining intelligence in either of these terms is" },
{ "end": 711.48, "start": 700.84, "text": " going to lack in expressivity and in usefulness so how do we evaluate AI" },
{ "end": 719.04, "start": 711.48, "text": " Chollet goes through different levels of AI evaluation so first of all he" },
{ "end": 725.32, "start": 719.04, "text": " contrasts these two things right here skill-based evaluation and" },
{ "end": 731.96, "start": 725.32, "text": " generalization-based evaluation in skill-based evaluation you basically go" },
{ "end": 739.56, "start": 731.96, "text": " for one given task so you evaluate a system on one given task one example" },
{ "end": 746.56, "start": 739.56, "text": " here is the Turing test and that's done by human review another" },
{ "end": 754.36, "start": 746.56, "text": " example is where you have a proof so you evaluate a system by giving" },
{ "end": 759.4, "start": 754.36, "text": " an optimality proof you can analyze it and say it is always correct at" },
{ "end": 765.28, "start": 759.4, "text": " this particular task what you can also do is pure competition this is" },
{ "end": 771.04, "start": 765.28, "text": " maybe what we see in chess so we let the bots play first humans and" },
{ "end": 777.76, "start": 771.04, "text": " then we let them play other bots and we determine which one's the best and also" },
{ "end": 783.34, "start": 777.76, "text": " the most familiar one benchmarks this would be where your" },
{ "end": 790.8, "start": 783.34, "text": " ImageNet test set is right that's a skill-based" },
{ "end": 797, "start": 790.8, "text": " evaluation that's one given task how well can you solve the ImageNet test" },
{ "end": 802.56, "start": 797, "text": " set without looking at it that's one task now the problem Chollet says with" },
{ "end": 808.04, "start": 802.56, "text": " this skill-based evaluation is sort of obvious it's a single focus" },
{ "end": 815.88, "start": 808.04, "text": " you are only good at this particular thing and one of the" },
{ "end": 820.36, "start": 815.88, "text": " examples of this is the fact that the winning" },
{ "end": 824.56, "start": 820.36, "text": " Kaggle models are usually useless outside of that particular data set" },
{ "end": 828.6, "start": 824.56, "text": " because they're just so hyper-optimized and hyper-focused on winning that" },
{ "end": 834.56, "start": 828.6, "text": " particular Kaggle competition so it's actually pretty much a" },
{ "end": 838.84, "start": 834.56, "text": " science of its own how to set up a Kaggle competition such that you can then use" },
{ "end": 846.2, "start": 838.84, "text": " the winning model afterwards for doing something actually useful" },
{ "end": 852.6, "start": 846.2, "text": " there are no conditions on how to arrive at a solution and Chollet criticizes a" },
{ "end": 857.72, "start": 852.6, "text": " bit of that that's basically his point that's going to come into the" },
{ "end": 864.16, "start": 857.72, "text": " measurement later into the math where he says you simply have to arrive at a" },
{ "end": 869.08, "start": 864.16, "text": " solution this skill-based evaluation usually" },
{ "end": 874.52, "start": 869.08, "text": " doesn't care how you arrive there so the ImageNet test set score doesn't care" },
{ "end": 879.6, "start": 874.52, "text": " how you got the neural network it simply cares how" },
{ "end": 886.08, "start": 879.6, "text": " many images you classify correctly and this leads to what is called the AI" },
{ "end": 891.08, "start": 886.08, "text": " effect which I didn't know was called like this until recently but it's fairly" },
{ "end": 896, "start": 891.08, "text": " obvious people come up with a task that" },
{ "end": 901.2, "start": 896, "text": " requires intelligence so people used to say oh checkers the game of checkers" },
{ "end": 906.2, "start": 901.2, "text": " requires intelligence and then you build a machine to solve checkers because you" },
{ "end": 912.12, "start": 906.2, "text": " can just do a bit of a smart tree search and you solve" },
{ "end": 914.76, "start": 912.12, "text": " it and you tell them here's a tree search that does checkers and they'll" },
{ "end": 918.16, "start": 914.76, "text": " say well but that's not really intelligent it's just a" },
{ "end": 924.16, "start": 918.16, "text": " tree search but chess you can't possibly do the full" },
{ "end": 932.56, "start": 924.16, "text": " tree search so chess is intelligent and then you build a smarter tree" },
{ "end": 937.12, "start": 932.56, "text": " search they build Stockfish and they're like yeah but that's just" },
{ "end": 942.96, "start": 937.12, "text": " this machine thing and so the goalposts keep moving every time" },
{ "end": 946.96, "start": 942.96, "text": " they come up with a task and you solve the task they'll just say well that's" },
{ "end": 953.08, "start": 946.96, "text": " not really intelligence this next task that's intelligence and it's easy to see" },
{ "end": 958.04, "start": 953.08, "text": " that if you just do this skill-based evaluation you will never get there" },
{ "end": 962.68, "start": 958.04, "text": " because it's always going to be the next task the next task the next task" },
{ "end": 969.84, "start": 962.68, "text": " it's overly anthropocentric it's overly based on how humans view the world and" },
{ "end": 975.12, "start": 969.84, "text": " what is also not in here again this skill acquisition what is not in this" },
{ "end": 981.24, "start": 975.12, "text": " definition is why we think that someone who plays chess very" },
{ "end": 987.4, "start": 981.24, "text": " well is smart why we think Magnus Carlsen is smart or someone like a" },
{ "end": 994.52, "start": 987.4, "text": " Go master is very intelligent and that's because we know that this person is" },
{ "end": 999.48, "start": 994.52, "text": " human at least we believe so there are doubts about some of these grand" },
{ "end": 1005.08, "start": 999.48, "text": " masters but we believe that they are humans and therefore we know that they" },
{ "end": 1012.08, "start": 1005.08, "text": " have only had whatever 20 or 30 years to learn this and they must eat regularly" },
{ "end": 1016.84, "start": 1012.08, "text": " and they can only think so fast and it's hard to memorize things as a human" },
{ "end": 1022.16, "start": 1016.84, "text": " so we know all of the constraints that went into learning this and we" },
{ "end": 1031.16, "start": 1022.16, "text": " basically know it's not like Neo" },
{ "end": 1036.52, "start": 1031.16, "text": " in The Matrix where you can just upload the solution to chess into your brain we" },
{ "end": 1041.72, "start": 1036.52, "text": " know what's required to achieve that level of success and we know the only" },
{ "end": 1046.68, "start": 1041.72, "text": " way this can be done is through general intelligence we know that there" },
{ "end": 1052.88, "start": 1046.68, "text": " is this correlation in humans that if you are good at chess you must have this" },
{ "end": 1058.52, "start": 1052.88, "text": " or you're very likely to have this general problem-solving ability right" },
{ "end": 1063.48, "start": 1058.52, "text": " that's a human-centric view and that does not count for machines machines can" },
{ "end": 1069.4, "start": 1063.48, "text": " take forever to calculate they can distill years and years of experience" },
{ "end": 1073.52, "start": 1069.4, "text": " like thousands of years and this would be the same case with" },
{ "end": 1082.68, "start": 1073.52, "text": " OpenAI Five for Dota right that is exactly here AlphaGo is exactly here we" },
{ "end": 1086.96, "start": 1082.68, "text": " only think they might be intelligent if a human does it because we know what's" },
{ "end": 1094.68, "start": 1086.96, "text": " required for humans to get there again the focus is skill acquisition now you might be" },
{ "end": 1099.84, "start": 1094.68, "text": " bored a little bit okay it's about skill acquisition but think about it" },
{ "end": 1106.64, "start": 1099.84, "text": " it's not that easy to actually define this skill acquisition thing" },
{ "end": 1113.48, "start": 1106.64, "text": " without falling back into the exact same trap so he goes on to say okay as opposed" },
{ "end": 1118.12, "start": 1113.48, "text": " to this skill-based evaluation we can measure generalization so what is" },
{ "end": 1122.6, "start": 1118.12, "text": " generalization generalization is the broad ability to handle tasks that" },
{ "end": 1129.36, "start": 1122.6, "text": " differ from previous tasks so you have a task and it's different from" },
{ "end": 1135.24, "start": 1129.36, "text": " previous tasks you generalize now there are two ways you can view this there is" },
{ "end": 1140.12, "start": 1135.24, "text": " system-centric generalization and that's basically if you take the strict" },
{ "end": 1144.56, "start": 1140.12, "text": " definition here so this would be a machine learning system that trains on the" },
{ "end": 1150.48, "start": 1144.56, "text": " training set and then is evaluated on the test set it has never seen the test" },
{ "end": 1155.4, "start": 1150.48, "text": " set before so it's generalizing right that's called system-centric" },
{ "end": 1160.8, "start": 1155.4, "text": " generalization but that's not really enough here because we also need to take" },
{ "end": 1167.56, "start": 1160.8, "text": " into account the developer of the system so developer-aware generalization means" },
{ "end": 1171.84, "start": 1167.56, "text": " that you generalize to situations that are new to the system and to the" },
{ "end": 1177.6, "start": 1171.84, "text": " developer so a developer of an ImageNet model knows that it is going to be" },
{ "end": 1184.36, "start": 1177.6, "text": " evaluated on the ImageNet test set and that is in the category system" },
{ "end": 1190, "start": 1184.36, "text": " centric because the developer knows however this broader" },
{ "end": 1195.16, "start": 1190, "text": " developer-aware generalization also takes that fact into account and it" },
{ "end": 1201.92, "start": 1195.16, "text": " would say developer-aware generalization is only when the system generalizes to" },
{ "end": 1206.72, "start": 1201.92, "text": " something that is not known to the developer that is new even to the developer" },
{ "end": 1212.84, "start": 1206.72, "text": " themselves they haven't foreseen it so this accounts for prior" },
{ "end": 1218.96, "start": 1212.84, "text": " knowledge of the developer Chollet defines different degrees of" },
{ "end": 1224.48, "start": 1218.96, "text": " generalization largely along these lines so absence of generalization is when you" },
{ "end": 1228.92, "start": 1224.48, "text": " have an algorithm where you have absolutely built in that it works" },
{ "end": 1233.64, "start": 1228.92, "text": " for every possible situation like a certain sorting algorithm that you" },
{ "end": 1238.22, "start": 1233.64, "text": " have mathematically proven to work for all sequences of numbers no" },
{ "end": 1243.24, "start": 1238.22, "text": " generalization everything has been foreseen then there is local" },
{ "end": 1246.98, "start": 1243.24, "text": " generalization and in machine learning we call this something like" },
{ "end": 1252.16, "start": 1246.98, "text": " robustness this would be your test set robustness a small distribution" },
{ "end": 1258.96, "start": 1252.16, "text": " shift so the test set here comes from a known distribution this is the notion" },
{ "end": 1264.76, "start": 1258.96, "text": " of known unknowns you have an idea of what can come at your system and you" },
{ "end": 1269.44, "start": 1264.76, "text": " basically require a dense sampling of the input space usually" },
{ "end": 1273.84, "start": 1269.44, "text": " machine learning training sets are very densely sampled that means there's a" },
{ "end": 1278.92, "start": 1273.84, "text": " lot of data there that we can learn from so we have lots and lots" },
{ "end": 1284.56, "start": 1278.92, "text": " of data and when the test point comes it is going to be" },
{ "end": 1290.68, "start": 1284.56, "text": " somewhere within all of these training data points so we can" },
{ "end": 1295.32, "start": 1290.68, "text": " infer from the surrounding training data points what the test data point is going" },
{ "end": 1299.32, "start": 1295.32, "text": " to be like if there's a classification boundary right here we can sort of" },
{ "end": 1304.32, "start": 1299.32, "text": " nearest-neighbor it and there are arguments that deep networks are" },
{ "end": 1311, "start": 1304.32, "text": " basically large nearest-neighbor classifiers but that's a topic for another day and" },
{ "end": 1316.48, "start": 1311, "text": " this is basically where we are in machine learning right now we do local" },
{ "end": 1325.2, "start": 1316.48, "text": " generalization we know our unknowns we know our test set in opposition to" },
{ "end": 1330.72, "start": 1325.2, "text": " this is broad generalization broad generalization is where you don't know" },
{ "end": 1335.16, "start": 1330.72, "text": " what you don't know unknown unknowns you don't know what comes at test time and" },
{ "end": 1342.08, "start": 1335.16, "text": " you can't pre-build your expectations into the system this is" },
{ "end": 1350.04, "start": 1342.08, "text": " more akin to something like level-five autonomous driving where you build this" },
{ "end": 1354.72, "start": 1350.04, "text": " car but you don't really know what kind of situations are coming now this is" },
{ "end": 1359.16, "start": 1354.72, "text": " a fuzzy definition right I mean you do sort of know what situations" },
{ "end": 1366.08, "start": 1359.16, "text": " will come at the car you can certainly make a probabilistic statement" },
{ "end": 1370.6, "start": 1366.08, "text": " about it so it's not a clear-cut definition and I think" },
{ "end": 1376.28, "start": 1370.6, "text": " in the math it will seem clear-cut but when we get there I don't" },
{ "end": 1382.4, "start": 1376.28, "text": " think it is that clear-cut honestly it's still kind of an intuition thing what" },
{ "end": 1388.08, "start": 1382.4, "text": " you categorize as local and broad and so on also here the Wozniak coffee cup" },
{ "end": 1392.2, "start": 1388.08, "text": " example where Wozniak basically says you should be able to build a robot that" },
{ "end": 1399.04, "start": 1392.2, "text": " goes into any kitchen and gets you a cup of coffee and here you have" },
{ "end": 1403.96, "start": 1399.04, "text": " unknown unknowns because you can't possibly foresee all possible kitchen" },
{ "end": 1408.3, "start": 1403.96, "text": " arrangements there might be obstacles and so on the coffee might" },
{ "end": 1413.56, "start": 1408.3, "text": " there might be different coffee makers that you've never encountered before but" },
{ "end": 1419.8, "start": 1413.56, "text": " I've long been saying that this is a bit of a trick right here because what" },
{ "end": 1427.12, "start": 1419.8, "text": " you can always do is construct a room a kitchen right and right here is" },
{ "end": 1432.24, "start": 1427.12, "text": " the coffee machine so there's the how do we draw this there's the coffee machine" },
{ "end": 1438.08, "start": 1432.24, "text": " right here one of these fancy Nespresso machines you put in a capsule here and" },
{ "end": 1444.08, "start": 1438.08, "text": " here's the coffee machine okay but then you build a wall around it and the" },
{ "end": 1452.64, "start": 1444.08, "text": " wall has a door and the door will only open if you solve an IQ test" },
{ "end": 1458.96, "start": 1452.64, "text": " or any sort of test whatever you put in" },
{ "end": 1462.72, "start": 1458.96, "text": " that spot that's the level of generalization you can achieve" },
{ "end": 1469.76, "start": 1462.72, "text": " basically so you can always up the level of generalization or you can put I" },
{ "end": 1473.72, "start": 1469.76, "text": " don't know the halting problem here right you can" },
{ "end": 1480.36, "start": 1473.72, "text": " say you only get through this door if you can whatever give me a proof" },
{ "end": 1487.96, "start": 1480.36, "text": " of the ABC conjecture something like this so the coffee cup example kind" },
{ "end": 1494.88, "start": 1487.96, "text": " of has some back doors in any case you sort of know what Wozniak means you" },
{ "end": 1500.08, "start": 1494.88, "text": " should be able to go into a standard kitchen but the standard kitchens are" },
{ "end": 1506.72, "start": 1500.08, "text": " still diverse enough you can't foresee all of them and if any of you" },
{ "end": 1513.04, "start": 1506.72, "text": " has this sort of kitchen that I'm talking about mad respect but we" },
{ "end": 1516.92, "start": 1513.04, "text": " will all get this robot and you'll just have to wait for the next" },
{ "end": 1524.88, "start": 1516.92, "text": " iteration okay then there's extreme generalization which is where you have" },
{ "end": 1528.4, "start": 1524.88, "text": " kind of open-ended tasks you don't know what's going to come you don't even" },
{ "end": 1532.84, "start": 1528.4, "text": " know the broad category of tasks that is going to come right broad here still" },
{ "end": 1539.36, "start": 1532.84, "text": " refers to a broad category of related tasks so it is sort of a" },
{ "end": 1544.72, "start": 1539.36, "text": " general ability and extreme generalization just means" },
{ "end": 1550.36, "start": 1544.72, "text": " whatever comes you can solve it but it is different from" },
{ "end": 1557.08, "start": 1550.36, "text": " universal generalization which Chollet says would be any conceivable task in the universe and" },
{ "end": 1561.92, "start": 1557.08, "text": " that's pointless it's pointless because it's just too much there's the no" },
{ "end": 1569.28, "start": 1561.92, "text": " free lunch theorem right plus what we actually want is human-level" },
{ "end": 1574.12, "start": 1569.28, "text": " intelligence and human-level intelligence has this property of extreme" },
{ "end": 1579.12, "start": 1574.12, "text": " generalization with extreme generalization we mean" },
{ "end": 1584.72, "start": 1579.12, "text": " it's dependent on a scope we mean the scope of all human tasks of all tasks" },
{ "end": 1590.44, "start": 1584.72, "text": " that humans could produce or could find useful could find themselves in or could" },
{ "end": 1600.36, "start": 1590.44, "text": " pose to this system not all tasks that the universe could pose so here you" },
{ "end": 1605.08, "start": 1600.36, "text": " don't even have the relation between tasks the relations between tasks are at" },
{ "end": 1613.48, "start": 1605.08, "text": " most abstract so maybe it's like the general ability of sorting things" },
{ "end": 1619.04, "start": 1613.48, "text": " generally in whatever fashion whatever these things are with" },
{ "end": 1625.04, "start": 1619.04, "text": " whatever properties or the general ability to communicate an idea or" },
{ "end": 1633.68, "start": 1625.04, "text": " something like this and this in humans is called the g factor or it's" },
{ "end": 1639.68, "start": 1633.68, "text": " at least related to it Chollet really goes after" },
{ "end": 1646.24, "start": 1639.68, "text": " psychometrics here and really models his framework after psychometrics for" },
{ "end": 1651.28, "start": 1646.24, "text": " humans and one of the achievements in psychometrics is" },
{ "end": 1656.52, "start": 1651.28, "text": " this measure of the g factor and that's what we humans usually call intelligence" },
{ "end": 1663.16, "start": 1656.52, "text": " he says note that humans have system-centric and developer-aware" },
{ "end": 1669.48, "start": 1663.16, "text": " generalization though the one contains the" },
{ "end": 1676.72, "start": 1669.48, "text": " other why because we can handle situations that previous humans haven't" },
{ "end": 1681.84, "start": 1676.72, "text": " experienced now I'm not so sure he basically says humans have developer" },
{ "end": 1686.92, "start": 1681.84, "text": " aware generalization because we can fare well in situations that no" },
{ "end": 1694.04, "start": 1686.92, "text": " humans during evolution have experienced prior but okay let's view this" },
{ "end": 1700.12, "start": 1694.04, "text": " abstractly let's say our developer is the evolutionary process you still have to" },
{ "end": 1708.84, "start": 1700.12, "text": " ask can humans really solve things that the evolutionary process has not built" },
{ "end": 1713.88, "start": 1708.84, "text": " into them in some sort I guess that refers back to the nature-versus-nurture" },
{ "end": 1721.48, "start": 1713.88, "text": " debate humans cannot you know multiply long floating-point numbers" },
{ "end": 1728.76, "start": 1721.48, "text": " without pen and paper no matter how" },
{ "end": 1733.92, "start": 1728.76, "text": " much you learn or something like this there are some things that they just" },
{ "end": 1740.88, "start": 1733.92, "text": " can't do but would want to do and I guess the evolutionary path simply didn't" },
{ "end": 1745.48, "start": 1740.88, "text": " provide us for doing that kind of stuff we have a finite working memory and so" },
{ "end": 1752.16, "start": 1745.48, "text": " on so I think the discussion is still to be had whether we really do have" },
{ "end": 1757.72, "start": 1752.16, "text": " developer-aware generalization if you consider our developer to be the" },
{ "end": 1766.6, "start": 1757.72, "text": " evolutionary process but we can forgive a little bit here so this is the" },
{ "end": 1771, "start": 1766.6, "text": " general diagram that also emerges from theories of intelligence in" },
{ "end": 1777.8, "start": 1771, "text": " psychology where generally you have a general intelligence factor which is one" },
{ "end": 1783.34, "start": 1777.8, "text": " factor this is quite remarkable in humans there is one general intelligence" },
{ "end": 1788.68, "start": 1783.34, "text": " factor statistically all these general intelligence tasks broadly" },
{ "end": 1793.44, "start": 1788.68, "text": " correlate and lead to one statistical factor it's not obvious why" },
{ "end": 1799.96, "start": 1793.44, "text": " that should be but it turns out to be one factor and that distributes hierarchically" },
{ "end": 1805.52, "start": 1799.96, "text": " into these things which are called broad cognitive abilities and" },
{ "end": 1809.96, "start": 1805.52, "text": " in Chollet's framework that would correspond to broad generalization and" },
{ "end": 1814.32, "start": 1809.96, "text": " then these are again hierarchically subdivided and sometimes as you can see" },
{ "end": 1821.56, "start": 1814.32, "text": " shared into task-specific skills okay and this in Chollet's framework would" },
{ "end": 1831.24, "start": 1821.56, "text": " be local or no generalization so again he basically goes into psychometrics" },
{ "end": 1836.88, "start": 1831.24, "text": " and specifically IQ tests for humans can they inform the measuring process the" },
{ "end": 1843.88, "start": 1836.88, "text": " thing to note here according to Chollet is that in an IQ test you" },
{ "end": 1848.72, "start": 1843.88, "text": " want to measure these broad abilities ultimately you want" },
{ "end": 1853.52, "start": 1848.72, "text": " to measure g and even if you measure different things in psychometrics you" },
{ "end": 1857.32, "start": 1853.52, "text": " want to measure these broad abilities but these are abstract" },
{ "end": 1862.72, "start": 1857.32, "text": " concepts so what you're left with what you can only do is measure" },
{ "end": 1871.8, "start": 1862.72, "text": " tasks okay and is this wrongly numbered or is it intentional I don't" },
{ "end": 1877.72, "start": 1871.8, "text": " know you can only measure tasks but you somehow have to make an inference about" },
{ "end": 1882.88, "start": 1877.72, "text": " the broad ability from measuring the tasks so that's the difficulty in" },
{ "end": 1888.04, "start": 1882.88, "text": " psychometrics right you want to measure the abilities but you can only" },
{ "end": 1892.96, "start": 1888.04, "text": " measure tasks the abilities are abstract concepts and the skills are the" },
{ "end": 1899.12, "start": 1892.96, "text": " measurable things where you can put a number on them now what" },
{ "end": 1906.24, "start": 1899.12, "text": " these IQ tests do is they usually employ a broad battery of tests so" },
{ "end": 1911, "start": 1906.24, "text": " you don't give the human just one task you give the human a lot of" },
{ "end": 1917.8, "start": 1911, "text": " tasks like okay complete this series which number comes next rotate" },
{ "end": 1922.32, "start": 1917.8, "text": " this in your head and so on there are also very human-centric things" },
{ "end": 1928.4, "start": 1922.32, "text": " like reading comprehension and so on but you do this broad battery of tests and" },
{ "end": 1934.92, "start": 1928.4, "text": " you might think oh okay this is sort of like the Atari suite where" },
{ "end": 1939.8, "start": 1934.92, "text": " one reinforcement learning agent has to solve a whole bunch of Atari games" },
{ "end": 1946.72, "start": 1939.8, "text": " or SuperGLUE in NLP where one NLP system has to learn to do all these" },
{ "end": 1951.08, "start": 1946.72, "text": " different NLP tasks you know there is entailment there is sentiment there is" },
{ "end": 1958.36, "start": 1951.08, "text": " Boolean question answering but according to Chollet it's not" },
{ "end": 1966.32, "start": 1958.36, "text": " really the case that these are equivalent because it is a" },
{ "end": 1970.64, "start": 1966.32, "text": " battery but it is known to the developer so the developer knows that the NLP" },
{ "end": 1976.16, "start": 1970.64, "text": " system has to solve the SuperGLUE thing so the developer can first of all train" },
{ "end": 1981.48, "start": 1976.16, "text": " the system until it reaches a good SuperGLUE score but it will also have" },
{ "end": 1986.64, "start": 1981.48, "text": " built in already the assumptions of the developer that you have to solve this so" },
{ "end": 1991.04, "start": 1986.64, "text": " the second important thing about these batteries of tests in IQ tests is that" },
{ "end": 1996.52, "start": 1991.04, "text": " they are unknown to the test subject who cannot or ideally should not" },
{ "end": 2002.02, "start": 1996.52, "text": " practice for them that's why people keep developing new IQ tests because" },
{ "end": 2005.32, "start": 2002.02, "text": " we know they all correlate first of all so they measure the same thing" },
{ "end": 2012.68, "start": 2005.32, "text": " but also second because otherwise if you just always did the same test" },
{ "end": 2018.32, "start": 2012.68, "text": " people could practice it and then you would no longer measure the general" },
{ "end": 2022.6, "start": 2018.32, "text": " ability you would only measure that one test by the way that's also why a lot of" },
{ "end": 2031.08, "start": 2022.6, "text": " these brain-exercise apps and so on none of them really up" },
{ "end": 2037.84, "start": 2031.08, "text": " your intelligence you only get better at the one app if you do that you" },
{ "end": 2049.48, "start": 2037.84, "text": " don't get smarter in general and Chollet says there have" },
{ "end": 2055.2, "start": 2049.48, "text": " been a number of attempts at making AI solve human IQ tests" },
{ "end": 2060.52, "start": 2055.2, "text": " where the reasoning is as follows okay humans develop IQ tests for" },
{ "end": 2068.68, "start": 2060.52, "text": " humans and presumably those are not known beforehand and so on but again the" },
{ "end": 2073.84, "start": 2068.68, "text": " tasks of IQ tests are broadly known I guess IQ tests really only work on humans" },
{ "end": 2079.24, "start": 2073.84, "text": " because they only work on humans who don't really care if someone really" },
{ "end": 2084.44, "start": 2079.24, "text": " really cared they would research what kind of tests" },
{ "end": 2087.28, "start": 2084.44, "text": " there are they would look at all the tests from history there's only so many" },
{ "end": 2090.96, "start": 2087.28, "text": " tests you can come up with the new ones are going to be variations on the" },
{ "end": 2095.36, "start": 2090.96, "text": " old ones so technically if you really wanted you could prepare" },
{ "end": 2100.76, "start": 2095.36, "text": " super hard and that's exactly what developers are going to do they're" },
{ "end": 2103.76, "start": 2100.76, "text": " basically going to look at all these tasks they're going to pre-solve the" },
{ "end": 2107.6, "start": 2103.76, "text": " problem and then they're going to program their pre-solved" },
{ "end": 2114.68, "start": 2107.6, "text": " solution into an AI system so we can't just let AI systems solve human IQ tests" },
{ "end": 2121.04, "start": 2114.68, "text": " what we need are tests that are reliable which means they're reproducible that" },
{ "end": 2125.72, "start": 2121.04, "text": " are valid which means they really measure" },
{ "end": 2130.88, "start": 2125.72, "text": " artificial intelligence and not just task-specific skill or something" },
{ "end": 2137.68, "start": 2130.88, "text": " else that are standardized across the spectrum so" },
{ "end": 2141.92, "start": 2137.68, "text": " everyone can do them in the same way by the way the current benchmarks are" },
{ "end": 2147.56, "start": 2141.92, "text": " standardized that's the good part about them and they should be" },
{ "end": 2153.08, "start": 2147.56, "text": " free from bias which means they should not measure anything orthogonal to what" },
{ "end": 2157.96, "start": 2153.08, "text": " they claim to measure and the example he gives is they should not measure" },
{ "end": 2162.4, "start": 2157.96, "text": " reaction time which is also a big component in human IQ tests where you also" },
{ "end": 2167.28, "start": 2162.4, "text": " measure how fast the human is at the test and the machine obviously if you" },
{ "end": 2173.16, "start": 2167.28, "text": " simply put more electrons through the cable it's going to run faster or if" },
{ "end": 2181.76, "start": 2173.16, "text": " you put more GPUs there so in broad terms what we should focus on is this" },
{ "end": 2189, "start": 2181.76, "text": " new skill acquisition as I said from the beginning but it is not as easy as you" },
{ "end": 2194.52, "start": 2189, "text": " might think and we're going to dive into that in the next episode which is going" },
{ "end": 2202.2, "start": 2194.52, "text": " to be math-heavy and that's going to be fun so I hope you enjoyed this kind of" },
{ "end": 2207, "start": 2202.2, "text": " special episode maybe let me know if you like this style the paper doesn't have" },
{ "end": 2212.96, "start": 2207, "text": " any pictures so you're just left with what I'm drawing yeah if you" },
{ "end": 2217.72, "start": 2212.96, "text": " enjoyed this leave a like leave comments share it out and I'll see you next time" },
{ "end": 2224.72, "start": 2217.72, "text": " bye bye" } ]
HYEzHX6-fIA
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Dynamics-Aware Unsupervised Discovery of Skills (Paper Explained)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "rl", "deep rl", "control", "planning", "world model", "dads", "skills", "latent", "high level", "unsupervised", "tree search", "deep reinforcement learning", "mujoco", "ant", "google" ]
This RL framework can discover low-level skills all by itself without any reward. Even better, at test time it can compose its learned skills and reach a specified goal without any additional learning! Warning: Math-heavy! OUTLINE: 0:00 - Motivation 2:15 - High-Level Overview 3:20 - Model-Based vs Model-Free Reinforcement Learning 9:00 - Skills 12:10 - Mutual Information Objective 18:40 - Decomposition of the Objective 27:10 - Unsupervised Skill Discovery Algorithm 42:20 - Planning in Skill Space 48:10 - Conclusion Paper: https://arxiv.org/abs/1907.01657 Website: https://sites.google.com/view/dads-skill Code: https://github.com/google-research/dads Abstract: Conventionally, model-based reinforcement learning (MBRL) aims to learn a global model for the dynamics of the environment. A good model can potentially enable planning algorithms to generate a large variety of behaviors and solve diverse tasks. However, learning an accurate model for complex dynamical systems is difficult, and even then, the model might not generalize well outside the distribution of states on which it was trained. In this work, we combine model-based learning with model-free learning of primitives that make model-based planning easy. To that end, we aim to answer the question: how can we discover skills whose outcomes are easy to predict? We propose an unsupervised learning algorithm, Dynamics-Aware Discovery of Skills (DADS), which simultaneously discovers predictable behaviors and learns their dynamics. Our method can leverage continuous skill spaces, theoretically, allowing us to learn infinitely many behaviors even for high-dimensional state-spaces. We demonstrate that zero-shot planning in the learned latent space significantly outperforms standard MBRL and model-free goal-conditioned RL, can handle sparse-reward tasks, and substantially improves over prior hierarchical RL methods for unsupervised skill discovery. Authors: Archit Sharma, Shixiang Gu, Sergey Levine, Vikash Kumar, Karol Hausman Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Hi there! Take a look at this humanoid right here. It walks from one checkpoint to another checkpoint, and then to the next checkpoint, and so on, and that is its task: it gets reward for walking from checkpoint to checkpoint. Take a look at this ant. It also walks from checkpoint to checkpoint. Now, we've seen a lot of reinforcement learning algorithms in this environment, it's called MuJoCo, where you basically teach these little things to walk around. So what's the impressive part here? The impressive part is that at training time, this ant has never ever seen what a checkpoint is, and has never gotten any reward for walking from one checkpoint to another. Actually, it has never gotten any reward for anything from the environment. It has discovered the skill of walking by itself. And then at test time, there is no additional learning when it goes from checkpoint to checkpoint. It simply composes the skills it knows from its unsupervised discovery phase in order to go from checkpoint to checkpoint.

So this paper proposes to learn these skills in a completely unsupervised way during a training phase; you can see the skills that the humanoid has learned. Then all you have to do at test time is compose these skills to reach a given goal. And these are the things that the ant has learned. Watch out, this is trippy. You can see it has learned various ways of walking here. And if you know anything about this environment, it's actually not that easy to make the ant walk at all, so the discovery that these learned skills are various ways of walking is already pretty impressive. And the last one here, this cheetah, of course, has also learned to walk backward and forward, to jump around, and so on.

So we're going to dive into this paper. It's called Dynamics-Aware Unsupervised Discovery of Skills, by Archit Sharma and other people at Google Brain, published at ICLR 2020. On a high level, it proposes to learn unsupervised skills, and then to compose these skills with a model-based planning method at test time to reach a given goal, without any additional training on the reward that you get at test time. As always, if you like videos like this, you're very welcome to subscribe and share it with everyone you know.

Okay, let's dive in. They say: conventionally, model-based reinforcement learning aims to learn a global model for the dynamics of the environment, which is not exactly accurate, but let's first go through model-based versus model-free reinforcement learning. Model-based reinforcement learning basically means that you have a model of the environment. An example of this is, let's say, tic-tac-toe. In tic-tac-toe, I have nine actions at my disposal. If I take an action, let's say I'm the X player and I take action zero, and if I number my squares correctly, then that action will result in this state of the world. So I know exactly how the world will look when I take a given action. And what that allows me to do is to actually plan: I can plan ahead, I can say, what would happen if I took action zero? I can do this in my mind. And then, what would happen if I took action one? Okay, that's going to happen.
And I can do this with many actions: I can, in my mind, continue this and basically roll out the entire game, and then only take the action that has led to the best result at the end. So model-based reinforcement learning means you have a model of the environment: you know what's going to happen when you perform given actions. You can also combine this with machine learning, like AlphaGo or AlphaZero do: they have models of the games they're playing, they know what's going to happen, but it's very intractable to go down this entire tree and plan out everything, so they combine it with machine learning. That doesn't change that it's model-based.

In opposition to that, in model-free reinforcement learning, you are this agent, there's the environment, and you simply have to do an action. I do action zero, and the environment just gives you back a reward and the next observation, and you have basically no clue how the environment will change if you do something. All that these classic model-free agents do is basically have a neural network somewhere inside: you put the observation in, and out comes an action. You can do this in various ways, Q-learning or policy gradient or actor-critic and so on, but ultimately it's simply mapping the current observation, and maybe the last few, to the best action to take, without explicitly modeling what happens in the environment.

Now, when they say model-based reinforcement learning, what they mean is the following: if you are in the model-free situation, what you could do is say, well, since these model-based RL techniques tend to work better, I could, inside the agent, try to learn a model of the environment E', basically learn what happens in my environment when I do a certain action, and then I could use that model in order to do the planning that I know from up here. In this case, they go for exactly this: let's learn a model of the environment, not an exact model but a learned one, and then let's use that to plan. Now this usually has a bunch of things that go against it. Namely, if this learned model is bad, then the planning in the model will often accumulate and even exaggerate the errors that are in the model. So it's sometimes very hard to learn a model of the world and then use it for planning. I've recently covered a paper where Curious AI uses denoising autoencoders to regularize exactly such a planning procedure to counter this. This paper right here is a different approach to combining a learned model with planning.

Okay, that was about the first sentence. They say it aims to learn a global model for the dynamics of the environment; a good model can potentially enable planning algorithms to generate a large variety of behaviors and solve diverse tasks. Which is true: if I have a model of the environment, I can just use it to plan. I wouldn't even have to do anything fancy anymore. If I have a model of how my tic-tac-toe works, I can just plan my way to success, AlphaZero-style, or, if the state tree is small enough, I can actually just use a planner and I don't have to do anything else anymore, if I have a good model.
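To make that model-based planning idea concrete, here is a minimal sketch (my own illustration, not from the paper) of exhaustive planning with a known, exact model; the step function and its (state, action) -> (next_state, reward, done) signature are assumptions for this toy example:

    # Exhaustive planning with a known, exact model (toy sketch).
    # step(state, action) -> (next_state, reward, done) is the known model,
    # e.g. the rules of tic-tac-toe.
    def plan(state, step, actions, horizon):
        """Return (best_return, best_first_action) by rolling out all action sequences."""
        if horizon == 0:
            return 0.0, None
        best_return, best_action = float("-inf"), None
        for a in actions:
            next_state, reward, done = step(state, a)
            future = 0.0
            if not done:
                future, _ = plan(next_state, step, actions, horizon - 1)
            if reward + future > best_return:
                best_return, best_action = reward + future, a
        return best_return, best_action

This brute-force search is exactly what becomes intractable for games like chess or Go, which is why AlphaZero-style methods combine the model with learned heuristics.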
They say, however: learning an accurate model for complex dynamical systems is difficult, and even then, the model might not generalize well outside the distribution of states on which it was trained. So this is another problem: if you learn a model, it's only going to be valid in a certain range. And they say: in this work, we combine model-based learning with model-free learning of primitives that make model-based planning easy.

So what they attempt to do is, in an unsupervised fashion, learn a so-called set of skills. The set of skills could be something like: walk forward, walk backward, stay put, jump. They attempt to learn things like this in a model-free way; the agent is simply asked to come up with these skills. And then, in stage two, a planner can use these skills to compose a plan. Now, the special thing about this planner is that it doesn't operate in the space of small-scale actions, it actually operates in the space of these skills. So here would be walk forward, walk backward, and if we have a good enough model of the environment, it will tell us: if I walk forward in this situation, what will happen? So I can walk forward, and then after that I could walk backward, and what's going to happen then? If I have a good model of the environment, where the actions are now these macro-actions, these skills, then I can use planning to reach my goal.

So the question is: how do we come up with useful skills that the planner can then use? They need to be somewhat diverse, but also, and here is the crucial part and the contribution of this paper, they ask: how can we discover skills whose outcomes are easy to predict? This is how they counteract the notion that if your environment model is crap, it can't be used for planning and will just make everything worse. What they say is: the skills we learn will be learned in a way that makes them easy to predict, so it is easy to predict what will happen after I execute them. At the same time, they must be diverse. If you stay put, it's pretty easy to predict what's going to happen, namely nothing, but we're going to see in the exact objective that the skills also have to be diverse, so only one of them can be stay put and the other ones have to do something else. If you learn the skills such that they're easily predictable, your environment model will make fewer errors, and then you can use it for planning.

Okay, let's dive in. They do open-source their code, and they have more of these videos if you want to check them out; I'll link everything in the description. Let's get into the meat. They maximize the mutual information, and we're going to see between what and what. If you don't know what the mutual information is: the mutual information between X and Y is the entropy of X minus the entropy of X conditioned on Y, or, decomposed the other way around, the entropy of Y minus the entropy of Y conditioned on X. They apply this in equation two right here.
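For reference, written out in LaTeX (my rendering of the standard definition, and of the objective as I understand the paper's equation two, with everything conditioned on the current state s):

    I(X; Y) = H(X) - H(X \mid Y) = H(Y) - H(Y \mid X)

    I(s'; z \mid s) = H(z \mid s) - H(z \mid s', s) = H(s' \mid s) - H(s' \mid s, z)

The two right-hand decompositions correspond exactly to the two intuitions discussed next.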
Okay, so what they want is the following: they want to maximize the mutual information (which basically measures how much one variable tells me about the other variable) between the skill, and we're going to see what that is, and the next state.

So what is a skill? A skill is one entry in our table right here. These here are skills, and they're indexed by z. Now, z here looks discrete, but in their case z will be a continuous vector. It's easier if you imagine each one of these as z1, z2, and so on; they're going to be continuous, but for now we'll just think of a discrete set of skills to be learned. So they say: we want to maximize the mutual information between the skill that's currently in action, so at each point the agent has to choose a skill, saying, okay, now I'm going to walk forward, given that it's in a certain state, and the next state.

This means two things, which you can see by decomposing the mutual information in two different ways. One way says: if I know both the state I'm in and the next state I'm going to, then in hindsight, looking back, what can I say about the skill z that I couldn't say just from the starting state? Let's say the starting state is the person right here, and in the end state the person is a little bit more over there; let's call this walk forward, the person is looking to the right. Now, if I only show you the starting state, what can you tell me about which action is going to follow? Basically not much; it could be any action: walk forward, walk back, stay put. But if I also show you the next state, then you can pretty confidently say: ah, I know what you did, you walked forward. In this situation, we would have a high mutual information between z and s'. The other decomposition is equivalent, but it's a different way of thinking about it: when I show you these two things, how much more can you tell me about the next state than if I only show you the first one? In this formulation, I tell you the human is here, looking to the right; what can you tell me about the next state? You couldn't tell me very much; it could be anything. But if I then tell you the skill is walk forward, you could say: ah, now I get it, it's probably going to be something like this. This would also be a high mutual information.

So you see that maximizing this mutual information is a sensible objective, because consider what happens if I have a low mutual information: it would mean that I could predict the next state just as well from the current state alone. It wouldn't make a difference whether you give me the skill or not. And that is only the case if my skills are basically either all the same, or all pretty random and pretty useless. Right?
So if all my skills are basically walking backwards, then you don't have to tell me which skill you perform; I'm going to know that the next state is like this. You can see that the objective of maximizing the mutual information between the skill and the next state is going to result in skills that are, first of all, diverse and, second of all, easy to predict. To see this, we only have to imagine what would happen in a situation where the skills weren't diverse or weren't easy to predict: you get exactly the situation where the information of the skill doesn't help you in predicting the next state, because either it's obvious or it's random.

Okay, so we agree that it makes sense to maximize the mutual information, and they decompose this into two objectives via a lower bound on the mutual information. This is the standard variational approximation literature; if you're into that, read up on variational autoencoders and things like this. Basically, the two steps are: you tighten the variational lower bound, and you maximize the approximate lower bound. So you have the mutual information, and you can lower-bound it by this quantity, and you can prove that it is a lower bound. It should be fairly obvious: if this thing on the right is a lower bound on the mutual information, then maximizing the thing on the right will push up the thing on the left. Imagine I is up here, and this E on the right side is down here; it's a lower bound, it's lower than I. If I maximize E, I haven't directly done anything to I, but if I push E even higher, up to here, then, since it's a lower bound, I know my I must now be at least as high as this. So maximizing a lower bound on a quantity will ultimately increase the quantity. But the efficiency with which it does this depends on how tight the bound is: if the bound is very tight, meaning I is not much above E, then maximizing E will result in a faster maximization of I. So you can do two things: maximize the quantity that is the lower bound, or tighten the bound. And here we can see that the tightness of the bound depends on this quantity right here, which is the KL divergence between this and this. So let's go through it on a high level; if you've never done this variational-approximation sort of math, this might be a bit informative.
And this basically says, this is very high, if or very low, depending so you need to whether or something is lower high always will depend on what you exactly you have to consider. But ultimately, what you'll want is the ratio between this quantity, which is the probability of the next state given the current state and the skill you're taking divided by just the probability over the next state, given the the current state, in expectation over all the skills, current states and next states. Now what they're saying is, this here, this is basically the environment, right? This is if you are in a state and you perform a skill, what's the next state? That's the true environment. P here is the true environment, which we don't know, right? We don't know what the environment's going to do. But we would like to learn a model for the environment. And this model for the environment is now Q, Q, theta, phi, theta, phi, Greek letter. So Q phi here is going to be a neural network that will approximate the environment. And in this probabilistic framework, it is going to be a learned distribution that will approximate the distribution of P. So we approximate it by this. But now this is here it says equal equal, right? This is not equal, because this is just an approximation. So the equality must become must be basically compensated for by this term right here, you can see this here is expanded into these two, you can go through the exact definitions and see why this is an equality. But basically, you can say that the mutual information is this expectation, or it is this expectation. But now you have to correct for the fact that here you only have an approximation. And you have to correct for the fact by exactly the amount by which the approximation is different than the quantity that you're approximating. And this is this key KL divergence right here. So the KL divergence basically measures how different two distributions are. It's sort of a distance, not exactly distance, but sort of a distance between these two distributions right here, it says, here's the real world. And here is your estimate of the real world, how much do they disagree, and that quantity plus, then you can replace your world, the exact world distribution by your approximate distribution. And you still are equal to the mutual information. And now the basically the trick is you say, oh, the KL divergence is always positive, it's a quantity that it can only be a positive number. So if I leave it away, certainly, this is only this is going to be a lower bound to the quantity. Okay. All right. So two tasks right here. First of all, tighten the variational bound, which means make this quantity small, make your approximate world model as close as possible to the real world. How do we do this neural network? Okay, you input trajectories. I was in this state, I performed this skill, and I ended up in this state. Sorry, that's this and then you simply match your neural network simply matches what happens in the real world, it learns the transition function, basically. So that's, that's the tightening of the variational bound. And the second step is this right here to to maximize the approximate lower bound, right? The first step was Titan variation lower bound, that basically means make your world model more accurate. And the second is Titan that maximize the approximate lower bound. 
Now, this second part says: given that I already have a better world model, can I improve my skills such that they become easier to predict and more diverse? Can I improve my skills such that this mutual information gets as high as possible? So this is an alternating procedure, and you can see it in this very, very confusing diagram, honestly.

What do you do in this algorithm? First of all, in each episode you select a skill at random. And as I said, these skills are not predefined, so no one tells the agent that a skill means walk forward. In the discrete case, you have, say, five skill slots, and the only requirement is that they're consistent over time: skill one is always going to be the same sort of behavior, and likewise skill two; but, agent, you can basically decide what skill one is, as long as the skills are predictable and diverse. So you sample one of the skills, like skill zero, and then you do two things.

First, you learn the skill dynamics, which means learning your approximate model of the world. How do you do that? Here you're the agent. What does the agent have to do? The agent takes in the skill z and the current state of the world, and it outputs an action. This is the model-free part: the agent somehow has to come up with, say, skill zero, that's walking forward, and in this situation walking forward means I have to lift my leg, or something like this. So you take your skill, your agent performs an action based on that skill and the current state of the world, and then the environment gives you the next state. From those things you can now learn your world model: I was in state s, I performed action a, but I performed action a based on skill z, and then I ended up in state s'. With this triple I can do supervised learning of a world model. Here they do probabilistic learning, and we're going to see in a second how that works, but ultimately they approximate the world with their model. Cool, so that's the one loop.

Then, next, they use that world model to determine a reward for the agent for taking the action. This is the model-free reinforcement learning part: the reward is going to be very high if the outcome of the action was predictable, and if the skills are also diverse. So the agent has to make this quantity very high: we want the outcome of these skills to be predictable, and the skills themselves to be diverse. It is, I'm sorry, very hard to keep all of this straight. But ultimately, two steps: learn a world model from the experience that you've generated, and second, train the agent such that it maximizes the quantity we've seen before, and you do this by giving the agent a reward that is proportional to the mutual information.
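In pseudocode, the alternating procedure might look roughly like the sketch below. This is my paraphrase, not the released code; env, policy, world_model, sample_skill and intrinsic_reward are stand-ins for the actual components, and env.step is assumed to follow the usual Gym convention:

    # Sketch of the alternating unsupervised skill-discovery loop.
    def skill_discovery_loop(env, policy, world_model, sample_skill,
                             intrinsic_reward, num_episodes):
        for _ in range(num_episodes):
            z = sample_skill()                    # e.g. uniform over the skill space
            s, done, transitions = env.reset(), False, []
            while not done:
                a = policy(s, z)                  # model-free: action from state and skill
                s_next, _, done, _ = env.step(a)  # the environment reward is ignored
                transitions.append((s, z, s_next))
                s = s_next
            # Step 1: tighten the bound -- fit q_phi(s'|s,z) by maximum likelihood.
            world_model.fit(transitions)
            # Step 2: maximize the bound -- relabel with the intrinsic reward
            # and update the policy with any RL algorithm (e.g. SAC).
            rewards = [intrinsic_reward(s, z, s_next) for (s, z, s_next) in transitions]
            policy.update(transitions, rewards)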
And we've already seen that we can approximate the mutual information by this quantity here. Okay. So: learn the world model, and make the agent drive the mutual information higher. Two steps. Learning the world model is very classic. You say, okay, I need to minimize this KL divergence, so I need its gradient with respect to the parameters of my world model. I can write down the KL divergence like this, and since log(a/b) is log a minus log b, and since the world doesn't depend on the parameters of my model, this simply gives me this thing right here, which is basically the gradient of the log probability of my neural network. And this can be optimized straightforwardly: this is a neural network, optimize it with gradient descent. These are the inputs, this is the output. These are all probability distributions, but ultimately you can do it pretty straightforwardly. So this corresponds to maximizing the likelihood of the samples from P under Q. Now, the second step: maximize the approximate lower bound. They say, after fitting Q, that is, after improving our world model, we can optimize pi. Pi is the agent that actually takes the actions based on the skill: it's given a skill, and it needs to perform an action, and it needs to maximize this quantity, as we've seen, the mutual information between the skill and the next state. Note, this is a reinforcement-learning-style optimization, with a reward function given by this quantity. However, look at the quantity they need right here. This first thing is just: I feed the skill and the state into my world model and look at what comes out of it, so this I can compute. But this other thing I can't compute, because it is what happens in the world when I'm in state s, in expectation over all the skills. So this I don't know; the log of this is intractable. And so they approximate the reward function for pi as this thing right here. Now, first, let's look at what this is. The reward of taking action a, where action a is based on skill z: skill z was fed into the agent, and the agent came up with action a. Oh, you want me to walk forward in this situation? Okay, I'm going to lift my leg. That's the action. So the reward for this action, given this skill and the current state, is going to be very high if this numerator probability is very high, where s prime is the state you ended up in after taking the action. What does it mean when this quantity is very high? It means that my world model q, which approximates the world, thinks that this state is very probable given that you were in this state and given the skill z. So it basically means that the neural network can predict with very high accuracy what's going to happen if you are in this state and are given this skill to perform. This is one of the things we want. Now, what is it divided by? It's divided by this, and you can see here the z_i are other skills. This is almost the same quantity: it means, how well can the same neural network predict the next state if you were given a different skill?
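Concretely, the intrinsic reward being described is, in the paper's notation (with L skills sampled for the denominator):

```latex
r_z(s, a, s') \;=\; \log \frac{q_\phi(s' \mid s, z)}
  {\tfrac{1}{L} \sum_{i=1}^{L} q_\phi(s' \mid s, z_i)},
  \qquad z_i \sim p(z).
```

The numerator rewards predictability of the chosen skill; the denominator penalizes skills whose outcomes every other skill would have predicted just as well.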
So it means: if I'm here and I ended up there, how well can you predict that, given that I told you I walked forward? And then you ask: how well can you predict it if I told you I walked backward, if I told you I jumped, and so on. So you basically aggregate over all the other skills you could have performed, and each time you ask the neural network: how likely is it that you end up in the state I ended up in? So what does it mean if the entire sum in the denominator is high? It means that the skill doesn't really give you much information: the neural network is very accurate in predicting the next state no matter which skill you selected. The skill doesn't really matter. And this is what we don't want; we want the skills to be diverse. So the top part says it's easy to predict what will happen if you perform a given skill, and we divide by the bottom part, which makes the skills diverse: if they're not diverse, it doesn't really matter which one you perform, the quantity on the bottom will be very high, and since we divide by it, we want it to be low. Okay, and the reward is going to be the log of this fraction. This makes sense intuitively, but they also try to motivate it mathematically. And to motivate it mathematically, they of course need to approximate this quantity, the denominator. The denominator is an approximation, as you can see here: sort of a sample-based approximation to the transition from s to s prime under the distribution of z. But what you want is the transition from s to s prime not in your approximation, but in the real world. And they formulate this by saying, okay, we can decompose it as an integral over this conditional right here; they bring in the z variable. And then they say: well, this is approximately that, so we can replace this here by this, and this here by this. Since the world model is an approximation to the real world, we can sort of swap it in. And then comes the part that doesn't convince me: they say, well, this p(z given s), we can just replace it by p(z). Now, it's very tricky to see what these quantities are; ultimately it ends up being that right here, but it's tricky. So they replace p(z given s) by p(z). Okay, let's think about this for a second. The bottom quantity is simply the distribution over your skills, and depending on how you sample them, this could be a uniform distribution over your skills; that's fine. But what's the top thing? The top thing we can reformulate using Bayes' formula: it's p(s given z) times p(z) divided by p(s). So this quantity depends on multiple things. Here's that prior again. And this p(s) means: what's the general distribution of states if your agent acts in the world? This we don't know. And also this p(s given z): what's the distribution in the true world? What's the probability of a state, given that you were acting under a skill z?
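For reference, the chain of replacements being criticized here looks roughly like this (my sketch; the contentious step is swapping the posterior p(z | s) for the prior p(z)):

```latex
p(s' \mid s) \;=\; \int p(s' \mid s, z)\, p(z \mid s)\, dz
  \;\approx\; \int q_\phi(s' \mid s, z)\, p(z \mid s)\, dz
  \;\approx\; \int q_\phi(s' \mid s, z)\, p(z)\, dz
  \;\approx\; \frac{1}{L} \sum_{i=1}^{L} q_\phi(s' \mid s, z_i),
  \qquad z_i \sim p(z),
```

where Bayes' rule gives p(z | s) = p(s | z) p(z) / p(s), and both p(s | z) and p(s) are unknown without the true world model.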
And this is also something we don't know, because we don't know the world; we don't have the true world model. So you run into the same problem again and again: you're trying to approximate this. They want to make this mathematically rigorous, and in the appendix they go through various ways they could solve it, but ultimately they just say: well, this is approximately the same. So this p(z given s) basically means: if you're in a certain state, what skills brought you here? What's the distribution of skills that brought you to this state? And they say, we're just going to approximate that by the prior distribution over our skills, basically disregarding the state. And this seems overly shaky. As I said, the entire paper makes sense, but I just feel it's trying to be overly mathematical, and then it runs into a point where it can't be, and then they just say, okay, we'll replace it, and then sort of things break down. You can only be overly mathematical to some degree; it doesn't really fit. But okay. So this is how you discover the skills: you maximize these quantities alternately, you learn the world model, and you improve your skills by making them diverse and easily predictable. So how do you then plan using these skills? This is the second part of the paper, and it's just as complicated as the first part. They say: given the learned skills, which are policies over actions given the skill z, you now know how to walk forward, walk back, and so on. And now you're placed in a world, and you're given this checkpoint, and it says: walk there. And you want to do this using planning; you don't want to learn anymore, you simply want to plan. Okay, what do you do? What you want to do is something like model predictive control, but not over actions, over your learned skills. So you have this planner doing MPC, and the planner will, in its head, roll out a number of different plans. It will explore a bunch of different skill sequences z, roll them out, and say, okay, if I do this, and this, and this, what will happen, using the world model it has learned, and it will observe what the reward is going to be in each of these cases. Now, they say here they access the environment reward, but it can also be estimated. And this is another point I feel is weak, in that they now assume they have the true reward function. They don't have the world model, but they assume they can always ask for the true reward, which is probably not the case in general. Though it could be the case: the reward could be something like, you get a higher reward if you're over there, but you don't exactly know how to get over there. In any case, they roll out a bunch of trajectories in their head, they plan forward, see what's going to happen if they do this or that, and then they choose the best of these imagined rollouts and execute it in the real world. So they say: I'm going to choose the skill walk forward. So the agent is now going to be tasked with walking forward.
And it's going to do that in the real world for a certain number of steps, say 10 steps of walking forward. After 10 steps of walking forward, you go back and say: I'm in this new situation right here, what should I do? And again the planner plans: if you first walk forward and then walk back, where are you going to be, and so on. So the planner always plans to go from where you are to the checkpoint, using a composition of the skills you have learned. The planner might find: okay, if I first walk forward, then walk back a bit, and so on, I'm going to reach the goal. Now please, agent, execute the first thing: walk forward. The agent executes it, and maybe it won't do as well; maybe it ends up over here. Then it says: well, I'm here now, please plan again. So the planner plans again: okay, I can still walk back, I'll be here, but then I have to do something else, so now walk back. And so on. But this happens in a somewhat particular way. Namely, since everything is continuous, we keep normal distributions over all our future steps. So we don't say: I go here, and then I go there. We say: I approximately go here, and after that, I approximately go there. And you do this in such a way that the peak of the normal distribution is highest where you think you will get the most reward. If it turns out in your imagination that a trajectory over here gets a high reward, you shift the distribution so that its peak is there. And of course, the tighter the peak, the more sure you are. So as you look out into the world, you want the closest steps to be very peaky, and further out they can be broader. That's how you plan ahead: you take a step toward where the peak is highest, then you imagine forward again, refine the distributions over the future, and take the next step toward the new highest peak, and so on. This is simply planning in a continuous domain. It's pretty analogous to how you would plan in AlphaGo, or in tic-tac-toe if you had a planner, but since everything's continuous, it's just so much harder. So they keep updating these distributions, as you can see here, toward the skills that gave a high reward in imagination, compared to the rewards of the other plans. Okay, this was a long, long way until we got here. But to recap: first, they learn these low-level skills in an unsupervised fashion, such that they're diverse and easily predictable by their own world model. And then, in the second step, they use that to do planning. So they first learn these skills, and then the planner composes them to make the agent do something. And the agent never has to learn how to go from checkpoint to checkpoint, because the planner can just compose these low-level skills. So they have these experiments right here, and we won't go through the experiments in detail because this video is already very, very long.
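To make the planning step concrete, here is a minimal sketch of that kind of distribution-refinement planning over skills. This is a cross-entropy-method style loop of my own, not the paper's exact procedure; `skill_dynamics`, `reward_fn`, and the constants are hypothetical placeholders.

```python
import numpy as np

HORIZON = 4        # skill steps to plan ahead (assumed)
N_CANDIDATES = 64  # imagined rollouts per refinement round
N_ROUNDS = 3       # refinement iterations
SKILL_DIM = 2      # continuous skill vector dimension (assumed)

def plan_next_skill(s0, skill_dynamics, reward_fn):
    # One normal distribution per future step; refined toward high reward.
    mu = np.zeros((HORIZON, SKILL_DIM))
    sigma = np.ones((HORIZON, SKILL_DIM))

    for _ in range(N_ROUNDS):
        # Sample candidate skill sequences from the current distributions.
        z = mu + sigma * np.random.randn(N_CANDIDATES, HORIZON, SKILL_DIM)

        # Roll each plan out in the learned model and score it.
        returns = np.zeros(N_CANDIDATES)
        for k in range(N_CANDIDATES):
            s = s0
            for t in range(HORIZON):
                s = skill_dynamics.predict(s, z[k, t])  # imagined next state
                returns[k] += reward_fn(s)

        # Shift the distributions toward the highest-reward plans.
        elite = z[np.argsort(returns)[-8:]]             # top 8 plans
        mu = elite.mean(axis=0)
        sigma = elite.std(axis=0) + 1e-3                # keep some spread

    return mu[0]  # execute the first skill of the refined plan

# The agent then runs this skill for a few real steps and replans.
```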
But they basically show that their learned skills do end up being very diverse, do end up being predictable, have high variance, and so on. They have to give the method certain priors to make it actually work in a real setting. But you can see the results in these videos and in the graphs, and I invite you to check out the paper if you're still here. Thanks for being here. This was one of the more complicated and mathy papers we've looked at, but I still think it's fun, and I still think the outcome is pretty impressive: how you can use math to derive these very intuitive objectives to learn. It's also pretty cool. Alright, that was it for me. Bye bye.
[ { "end": 8.44, "start": 0, "text": " Hi there! Take a look at this humanoid right here. It walks from one checkpoint to another" }, { "end": 13.76, "start": 8.44, "text": " checkpoint and then to the next checkpoint and so on. And that is its task. It gets reward" }, { "end": 20, "start": 13.76, "text": " from walking from checkpoint to checkpoint. Take a look at this ant. This is called the" }, { "end": 26.42, "start": 20, "text": " ant. It also walks from checkpoint to checkpoint. Now we've seen a lot of reinforcement learning" }, { "end": 33.160000000000004, "start": 26.42, "text": " algorithms in this environment. It's called Mukojo, where you basically teach these little" }, { "end": 39.480000000000004, "start": 33.160000000000004, "text": " things to walk around. So what's the impressive part here? The impressive part is that at" }, { "end": 47.2, "start": 39.480000000000004, "text": " training time, this ant has never ever seen what a checkpoint is and has never gotten" }, { "end": 52.32000000000001, "start": 47.2, "text": " any reward from walking from checkpoint to another checkpoint. Actually, it has never" }, { "end": 58.52, "start": 52.32, "text": " gotten any reward for anything that is given from the environment. It has discovered the" }, { "end": 65.96000000000001, "start": 58.52, "text": " skill of walking by itself. And then at test time, there is no additional learning when" }, { "end": 71.32, "start": 65.96000000000001, "text": " it goes from checkpoint to checkpoint. It simply composes the skills that it knows from" }, { "end": 80.2, "start": 71.32, "text": " its unsupervised discovery phase in order to go from checkpoint to checkpoint. So here" }, { "end": 86.44, "start": 80.2, "text": " you can see this paper basically proposes to learn these skills in a completely unsupervised" }, { "end": 92.28, "start": 86.44, "text": " way at the beginning, sort of, so in the training phase, it learns these skills, you can see" }, { "end": 97.64, "start": 92.28, "text": " these skills that the humanoid has learned. And then all you have to do at test time is" }, { "end": 103.46000000000001, "start": 97.64, "text": " to compose these skills to reach a given goal. And these are the things that the ant has" }, { "end": 109.2, "start": 103.46000000000001, "text": " learned. Watch out, this is trippy. You can see it has learned various walks, various" }, { "end": 114.28, "start": 109.2, "text": " ways of walking here. And if you know anything about this environment, it's actually not" }, { "end": 122.82000000000001, "start": 114.28, "text": " that easy to make the ant walk by itself. So the discovery here that these skills that" }, { "end": 128.68, "start": 122.82000000000001, "text": " are discovered are various ways of walking is actually already pretty impressive. And" }, { "end": 134, "start": 128.68, "text": " the last thing here, this cheetah, of course, also has to has learned to walk back, forward," }, { "end": 139.48, "start": 134, "text": " going to jump around, and so on. So we're going to dive into this paper. It's called" }, { "end": 145.76, "start": 139.48, "text": " Dynamics Aware, Unsupervised Discovery of Skills by Archie Sharma and other people of" }, { "end": 152.52, "start": 145.76, "text": " Google Brain. So this was published at iClear 2020. 
And on a high level, I already said" }, { "end": 159.96, "start": 152.52, "text": " it's basically proposing to learn unsupervised skills, and then to compose these skills in" }, { "end": 167.52, "start": 159.96, "text": " a model based planning method at test time to reach a given goal without additional training," }, { "end": 174.64000000000001, "start": 167.52, "text": " without additional training on the reward that you give at test time. As always, if" }, { "end": 180.8, "start": 174.64000000000001, "text": " you like videos like this, you're very welcome to subscribe and share it to everyone you" }, { "end": 189.88, "start": 180.8, "text": " know. Yeah, okay, that's live in. So they say, conventionally, model based reinforcement" }, { "end": 196.32, "start": 189.88, "text": " learning aims to learn a global model for the dynamics of the environment, which is" }, { "end": 202.12, "start": 196.32, "text": " not exactly true, right. So we have we dive into model based and model free reinforcement" }, { "end": 208.96, "start": 202.12, "text": " learning. Model based reinforcement learning basically means that you have a model of the" }, { "end": 217.4, "start": 208.96, "text": " environment. A example for this is, let's say, tic tac toe. So in tic tac toe, I know," }, { "end": 222.96, "start": 217.4, "text": " like I have nine actions at my disposal. And if I take action, let's say I take action" }, { "end": 230.84, "start": 222.96, "text": " zero, which is to make a let's say I'm the X player, so I take action zero. And if I" }, { "end": 236.28, "start": 230.84, "text": " you know, number my things correctly, then that will result in this state of the world." }, { "end": 242.04000000000002, "start": 236.28, "text": " Okay, so I know exactly how the world will look when I take a given action. And what" }, { "end": 246.88, "start": 242.04000000000002, "text": " that allows me to do is that allows me to actually plan. So I can now plan ahead, I" }, { "end": 252.79999999999998, "start": 246.88, "text": " can say what would happen if I took action zero, so I can do this in my mind. And then" }, { "end": 258.08, "start": 252.79999999999998, "text": " what would happen if I took action one, I can be like, okay, that's going to happen." }, { "end": 262.96, "start": 258.08, "text": " And I can do this with many things. And I can, in my mind, continue this and basically" }, { "end": 271.2, "start": 262.96, "text": " roll out the entire games or and then only do the given action that has led to the best" }, { "end": 275.36, "start": 271.2, "text": " result at the end, right. So this is model model based reinforcement learning means you" }, { "end": 280.7, "start": 275.36, "text": " have a model of the environment, you know, what's going to happen when you perform given" }, { "end": 287.76, "start": 280.7, "text": " actions. And you can also combine this with machine learning, like, you know, alpha, alpha" }, { "end": 292.96000000000004, "start": 287.76, "text": " go alpha zero, or so they have models of the games they're playing, they know what's going" }, { "end": 299.68, "start": 292.96000000000004, "text": " to happen. But it's very intractable to basically go down this entire tree and plan out everything." }, { "end": 306.84000000000003, "start": 299.68, "text": " So they combine it with machine learning. It doesn't change that it's model based. 
Now" }, { "end": 312.8, "start": 306.84000000000003, "text": " in, in opposition to that in model three, in in reinforcement learning, what you do," }, { "end": 317.84000000000003, "start": 312.8, "text": " you are this agent, there's the environment, and you simply have to do an action, I do" }, { "end": 323.52, "start": 317.84000000000003, "text": " action zero, and the environment just gives you back a reward and the next observation." }, { "end": 331.44, "start": 323.52, "text": " And it you have basically no clue how the environment will change. If you do something," }, { "end": 337.59999999999997, "start": 331.44, "text": " all these and all these, these agents do or the classic model free agents do is basically" }, { "end": 345.79999999999995, "start": 337.59999999999997, "text": " they're trying to have a neural network somewhere in them. And you put the observation in here" }, { "end": 350.47999999999996, "start": 345.79999999999995, "text": " and outcomes in action. And you can do this in various ways into Q learning or policy" }, { "end": 357.28000000000003, "start": 350.48, "text": " gradient or actor critic and so on. But ultimately, it's simply mapping the up the current observation" }, { "end": 363.84000000000003, "start": 357.28000000000003, "text": " and maybe the last few to the best action to take without explicitly modeling what happens" }, { "end": 369.68, "start": 363.84000000000003, "text": " in the environment. Now when they say model based reinforcement learning, what they mean" }, { "end": 377.08000000000004, "start": 369.68, "text": " is technically what you can do if you're in if you are in the model free, if you're in" }, { "end": 383.8, "start": 377.08, "text": " this situation, what you could do is you could say, well, since these model based RL techniques" }, { "end": 390.24, "start": 383.8, "text": " tend to work better, I could hear inside the agent, I could try to learn a model of the" }, { "end": 397.28, "start": 390.24, "text": " environment, E prime, and I could try to basically learn what happens in my environment when" }, { "end": 403.47999999999996, "start": 397.28, "text": " I do a certain action. And then I could use that model right here, in order to do this" }, { "end": 412.04, "start": 403.48, "text": " planning that I know from up here. Okay, so in this case, they go for exactly this they" }, { "end": 417.16, "start": 412.04, "text": " go for, let's learn a model of the environment, this is not an exact model, it's a learned" }, { "end": 425, "start": 417.16, "text": " model. And then let's use that to plan. Now this usually has a bunch of, you know, very" }, { "end": 432.04, "start": 425, "text": " a bunch of things that go against it, namely, if this model right here is bad, then the" }, { "end": 439.36, "start": 432.04, "text": " planning in the model will often accumulate and even exaggerate the errors that are in" }, { "end": 444.28000000000003, "start": 439.36, "text": " this model. So it's sometimes very hard to learn a model of the world and then use that" }, { "end": 453.44, "start": 444.28000000000003, "text": " for planning. And I've recently done a paper where, where curious AI takes the noise in" }, { "end": 459.52000000000004, "start": 453.44, "text": " auto encoders to regularize exactly such a planning procedure to counter this. And this" }, { "end": 467.64, "start": 459.52, "text": " paper right here is a different approach of combining this learned model. This learned" }, { "end": 477, "start": 467.64, "text": " model. 
So, okay. That was about the first sentence. They say it aims to learn a global" }, { "end": 482.28, "start": 477, "text": " model for the dynamics of the environment. A good model can potentially enable planning" }, { "end": 487.52, "start": 482.28, "text": " algorithms to generate a large variety of behaviors and solve diverse tasks, which is" }, { "end": 492.96, "start": 487.52, "text": " true, right? If I have a model of the environment, then I could just use it to plan. I wouldn't" }, { "end": 498.79999999999995, "start": 492.96, "text": " even have to do anything fancy anymore, right? If I have a model of how my tic tac toe works," }, { "end": 504.96, "start": 498.79999999999995, "text": " I can just plan, plan my way to success. And I can do this all for zero style. Or if the" }, { "end": 510.2, "start": 504.96, "text": " if this state tree is small enough, I can actually just use a planner. And I don't even" }, { "end": 516.12, "start": 510.2, "text": " have to, I don't have to do anything anymore. If I have a good model. They say however," }, { "end": 520.88, "start": 516.12, "text": " learning an accurate model for complex dynamical systems is difficult. And even then the model" }, { "end": 525.92, "start": 520.88, "text": " might not generalize well outside the distribution of states on which it was trained. So this" }, { "end": 531.2, "start": 525.92, "text": " is another problem. If you learn a model, it's only going to be valid in a certain range." }, { "end": 540.96, "start": 531.2, "text": " Okay. And say in this work, we combine model based learning with model free learning of" }, { "end": 547.96, "start": 540.96, "text": " the model free learning is of primitives that make model based planning easy. So what they" }, { "end": 556, "start": 547.96, "text": " attempt to do is they attempt in an unsupervised fashion to learn so called set of skills." }, { "end": 570.88, "start": 556, "text": " And the set of skills could be something like walk forward, walk backward, stay put, jump." }, { "end": 578.68, "start": 570.88, "text": " So they attempt to learn things like this in a model free way, that the model is simply" }, { "end": 583.48, "start": 578.68, "text": " asked to come up with these things or the agent is simply asked to come up with these" }, { "end": 592.04, "start": 583.48, "text": " things. And then in in stage two, a planner can use these skills and decompose a plan." }, { "end": 597.48, "start": 592.04, "text": " Now this plan here, the special thing about this plan, this planner, it doesn't operate" }, { "end": 603.64, "start": 597.48, "text": " in the space of actions of like small scale actions, it actually operates in the space" }, { "end": 609.88, "start": 603.64, "text": " of these skills right here. So here would be walk forward, walk back. So and with if" }, { "end": 615.24, "start": 609.88, "text": " we have a good enough model of the environment, it will tell us if I walk forward in this" }, { "end": 620.44, "start": 615.24, "text": " situation, what will happen? Okay, so I can walk forward. And then after that, I could" }, { "end": 627.2, "start": 620.44, "text": " walk backward, what's going to happen? And if I have a good model of the environment" }, { "end": 633.6400000000001, "start": 627.2, "text": " over the and the actions now are these macro actions of these skills, then I can use planning" }, { "end": 641.24, "start": 633.6400000000001, "text": " to reach my goal. 
Okay, so the question is, how do we come up with useful skills that" }, { "end": 646.6800000000001, "start": 641.24, "text": " the planner can then use? So they need to be somewhat diverse, right? But also, and" }, { "end": 655.6, "start": 646.6800000000001, "text": " here is the the crucial part and the the sort of contribution of this paper, they say, how" }, { "end": 662.64, "start": 655.6, "text": " can we discover skills whose outcomes are easy to predict? And this is how they counteract" }, { "end": 668.72, "start": 662.64, "text": " this notion here, that if your environment model is crap, then you're you're, it can't" }, { "end": 673.16, "start": 668.72, "text": " basically can't be used for planning, you'll just make it worse. So what they say is that" }, { "end": 680.32, "start": 673.16, "text": " these things right here, these skills that we learn, we will learn them in a way that" }, { "end": 686.6800000000001, "start": 680.32, "text": " make them easy to predict. So it makes it easy to predict what will happen after I do" }, { "end": 691.5600000000001, "start": 686.6800000000001, "text": " them. So they must be at the same time diverse. So only you know, if you stay put, it's pretty" }, { "end": 696.44, "start": 691.5600000000001, "text": " easy to predict what's going to happen like nothing. Okay. But we're going to see in the" }, { "end": 703.36, "start": 696.44, "text": " exact objective that they have to be sort of diverse. But also, so only one of them" }, { "end": 708.48, "start": 703.36, "text": " can be stay put, the other ones have to do something else. But also, they should be easily" }, { "end": 715.4, "start": 708.48, "text": " predictable by the environment model. And if you learn the skills such that they're" }, { "end": 720.52, "start": 715.4, "text": " easily predictable, your environment model will make less errors, and then you can use" }, { "end": 729.82, "start": 720.52, "text": " it for planning. Okay, let's dive in. They do actually open source code, and they have" }, { "end": 735, "start": 729.82, "text": " more of these videos, if you want to check it out, I'll link everything in the description." }, { "end": 748, "start": 735, "text": " Okay, let's actually dive into the meat right here. They say they do maximize the mutual" }, { "end": 755.64, "start": 748, "text": " information. And we're going to see between between what and what they want to maximize" }, { "end": 758.84, "start": 755.64, "text": " the mutual information. If you don't know what the mutual information is, the mutual" }, { "end": 764.82, "start": 758.84, "text": " information is a quantity. The mutual information between x and y is a quantity, that's the" }, { "end": 769.44, "start": 764.82, "text": " entropy of x minus the entropy of x conditioned on y, or you can also decompose it in the" }, { "end": 779.12, "start": 769.44, "text": " other way around entropy of y minus entropy of y conditioned on x. And we're going to" }, { "end": 788.12, "start": 779.12, "text": " they apply this, where is it equation two right here. Okay, so what they want is the" }, { "end": 793.32, "start": 788.12, "text": " following, they want to maximize the mutual information. And the mutual information basically" }, { "end": 802.5600000000001, "start": 793.32, "text": " means how much does one variable tell me about the other variable, okay? 
The mutual information" }, { "end": 812.7600000000001, "start": 802.5600000000001, "text": " between the skill, and we're going to see what that is, and the next state. So what" }, { "end": 822.8800000000001, "start": 812.7600000000001, "text": " is a skill? A skill is one entry in our table right here. So these here are skills, skills." }, { "end": 829.36, "start": 822.88, "text": " And they're in their index by z. Now z here, it seems like they're discrete, right? But" }, { "end": 834.88, "start": 829.36, "text": " in this case, they would be z would be a continuous vector. But it's easier if you imagine that" }, { "end": 842.8, "start": 834.88, "text": " so each one of this is like z one, z two, and so on. They're going to be continuous," }, { "end": 850.88, "start": 842.8, "text": " but in our case, we'll just think of a discrete set of skills to be learned. Okay, so they" }, { "end": 857.48, "start": 850.88, "text": " say we want to maximize the mutual information between the skill that's currently in action." }, { "end": 862.72, "start": 857.48, "text": " So in every, every time the agent has to like choose a skill and saying like, okay, now" }, { "end": 870.32, "start": 862.72, "text": " I'm going to walk forward. And it's in a given state. And what you want to say is you have" }, { "end": 876.12, "start": 870.32, "text": " to maximize the mutual information between the skill and the next state, which means" }, { "end": 882.48, "start": 876.12, "text": " that it means two things, which you can see right here, you can decompose it in two different" }, { "end": 893.4, "start": 882.48, "text": " ways. One way is the following is to say, if I know which state I'm in, what's the entropy" }, { "end": 904.76, "start": 893.4, "text": " over my? Okay, that's the wrong bad way for me leading it. It's if I knew, if I know these" }, { "end": 912, "start": 904.76, "text": " two things, so if I know the state I'm in, and the next state that I'm going to write," }, { "end": 919.28, "start": 912, "text": " I can in hindsight, I look back, if I know both things, what can I say about this skill" }, { "end": 927.4399999999999, "start": 919.28, "text": " z right here, that I couldn't say just from the starting state. So the starting state," }, { "end": 940.32, "start": 927.44, "text": " let's say is the person right here, and the end state is the person a little bit more" }, { "end": 944.7600000000001, "start": 940.32, "text": " over there. So that's, that's called this forward, it's the person is looking to the" }, { "end": 951.5200000000001, "start": 944.7600000000001, "text": " right. Okay. Now, if I only show you if I only show you the state on the left, the starting" }, { "end": 957.4000000000001, "start": 951.5200000000001, "text": " state, what can you tell me which action is going to follow right here? Basically, you" }, { "end": 962.36, "start": 957.4, "text": " can tell me much it could be any action like walk forward, walk back, stay put. But if" }, { "end": 969.56, "start": 962.36, "text": " I also show you the next state, then you can pretty confidently say, ah, I know what you" }, { "end": 975.52, "start": 969.56, "text": " did, you did walk forward. 
Okay, so this, this, in this situation, we would have a high" }, { "end": 982.4, "start": 975.52, "text": " mutual information between z and s prime in the formulation here, if you decompose it" }, { "end": 986.72, "start": 982.4, "text": " in the other way, this is equivalent, but it's a different way of thinking about it," }, { "end": 993.36, "start": 986.72, "text": " it means when I show you these two things, how much more can you tell me about the next" }, { "end": 1000, "start": 993.36, "text": " state than if I only show you this. So in this formulation, what we would do is, I would" }, { "end": 1005.6800000000001, "start": 1000, "text": " say I tell you the human is here looking to the right, what can you tell me about the" }, { "end": 1014.1600000000001, "start": 1005.6800000000001, "text": " next state? And you like what I, I couldn't tell you very much, right? It can be anything." }, { "end": 1020, "start": 1014.16, "text": " But if I then tell you the action is walk forward, then you could say, ah, now I now" }, { "end": 1025.52, "start": 1020, "text": " I get it, it's probably going to be something like this. Okay. This also would be a high" }, { "end": 1032.44, "start": 1025.52, "text": " mutual information. So you see that the the task of maximizing this mutual information" }, { "end": 1038.36, "start": 1032.44, "text": " is good, because what happens if I don't, if I have a low mutual information, if I have" }, { "end": 1046.36, "start": 1038.36, "text": " a low mutual information, it would mean that I could predict the next state just as well" }, { "end": 1052.7199999999998, "start": 1046.36, "text": " from the current state. It doesn't make a difference whether you give me the skill or" }, { "end": 1059, "start": 1052.7199999999998, "text": " not, it would not make a difference. And that is only the case if my skills are basically" }, { "end": 1064.8, "start": 1059, "text": " either all the same or all pretty, pretty random and pretty useless. Right? So if if" }, { "end": 1070.08, "start": 1064.8, "text": " all my skills are basically walking backwards, then I don't, you don't have to tell me which" }, { "end": 1075.72, "start": 1070.08, "text": " skill you do, I'm going to know that the next state is like this. So you can see that the" }, { "end": 1084.06, "start": 1075.72, "text": " objective of maximizing the mutual information between the skill and the next state is going" }, { "end": 1090.8, "start": 1084.06, "text": " to result in a situation where these skills are going to be first of all, diverse. And" }, { "end": 1100.32, "start": 1090.8, "text": " second of all, easy to easy to predict. Okay. And to see this, yeah, we only have to imagine" }, { "end": 1106.32, "start": 1100.32, "text": " what would happen, what would happen in a in a situation where the skills weren't diverse" }, { "end": 1111.9199999999998, "start": 1106.32, "text": " or weren't easy to predict. And you'll get exactly the situation where the information" }, { "end": 1116.28, "start": 1111.9199999999998, "text": " of the skill doesn't help you in predicting the next state, because yeah, either it's" }, { "end": 1123.52, "start": 1116.28, "text": " obvious or it's random. Okay, so we agree that it makes sense to maximize the mutual" }, { "end": 1131.32, "start": 1123.52, "text": " information. And they decompose this into two objectives. 
So they say the mutual information" }, { "end": 1139.72, "start": 1131.32, "text": " will stay is basically what you'll have to do is you'll have to it decomposes into two" }, { "end": 1148.92, "start": 1139.72, "text": " terms where you can into two terms in a lower bound in the mutual information. And this" }, { "end": 1154.26, "start": 1148.92, "text": " is kind of the this is sort of the standard variational approximation literature. If you're" }, { "end": 1160.78, "start": 1154.26, "text": " into that read up on variational auto encoders and things like this. Basically, the two steps" }, { "end": 1168.2, "start": 1160.78, "text": " here are you tighten the variational lower bound and you maximize the approximate lower" }, { "end": 1178.48, "start": 1168.2, "text": " bound. Okay, so you have the mutual information, and you can lower bound it. Okay, you can" }, { "end": 1184.64, "start": 1178.48, "text": " you can lower bound it by this quantity. Now the if since this is a lower bound, you can" }, { "end": 1191.68, "start": 1184.64, "text": " prove that this is a lower bound, it means that the higher you make this term, then the" }, { "end": 1197.52, "start": 1191.68, "text": " the more basically, okay, if it's a lower, I don't know how to formulate, but it should" }, { "end": 1202.6, "start": 1197.52, "text": " be fairly obvious if this if this thing on the right is a lower bound to the mutual information," }, { "end": 1208.32, "start": 1202.6, "text": " then maximizing the thing on the right will maximize the thing on the left. And it will" }, { "end": 1216.8, "start": 1208.32, "text": " do so very so imagine I is up here. And this E on the right side is down here, it's a lower" }, { "end": 1223.8, "start": 1216.8, "text": " bound, right? It's lower than I. So if I maximize E, well, I haven't really done anything to" }, { "end": 1229.3999999999999, "start": 1223.8, "text": " I but if I maximize it even more up to here, and since it's a lower bound, I know now my" }, { "end": 1235, "start": 1229.3999999999999, "text": " I must be at least higher than this. Okay, so maximizing a lower bound to a quantity" }, { "end": 1241.44, "start": 1235, "text": " will ultimately increase the quantity. But you can also the efficiency by which it does" }, { "end": 1248.9199999999998, "start": 1241.44, "text": " this depends on how tight the bound is. So if the bound is very tight, like this, you" }, { "end": 1257.6000000000001, "start": 1248.92, "text": " see I is not much above E. If the bound is very tight, then maximizing E will result" }, { "end": 1264.6000000000001, "start": 1257.6000000000001, "text": " in a faster maximization of I. Okay, so you can do two things, you can maximize the quantity" }, { "end": 1270.24, "start": 1264.6000000000001, "text": " that is the lower bound, or you can tighten the bound. And here we can see that the difference" }, { "end": 1275.3200000000002, "start": 1270.24, "text": " that the tightness of the bound depends on this quantity right here, which is the KL" }, { "end": 1285.96, "start": 1275.32, "text": " difference between this and this. So yeah, let's watch this in the context of we can" }, { "end": 1293.04, "start": 1285.96, "text": " actually go go through it on the high level. If you've never done this variational approximation" }, { "end": 1300.74, "start": 1293.04, "text": " sort of math, then this might be a bit informative. 
Okay, so the thing right here is just pops" }, { "end": 1307.28, "start": 1300.74, "text": " out of the definition of the mutual information. It's the it's basically the differences of" }, { "end": 1313.28, "start": 1307.28, "text": " the entropy's, which the entropy's are log quantities, right? So if you have a log A" }, { "end": 1319.56, "start": 1313.28, "text": " minus log B, that you can also write this as the log of the fraction of A over B, that's" }, { "end": 1327.36, "start": 1319.56, "text": " just a property of the log. And so it's expectations over logs, these entropy's. So you can write" }, { "end": 1337.28, "start": 1327.36, "text": " it as this thing right here. Okay. And this basically says, this is very high, if or very" }, { "end": 1345.32, "start": 1337.28, "text": " low, depending so you need to whether or something is lower high always will depend on what you" }, { "end": 1353.8999999999999, "start": 1345.32, "text": " exactly you have to consider. But ultimately, what you'll want is the ratio between this" }, { "end": 1360.52, "start": 1353.9, "text": " quantity, which is the probability of the next state given the current state and the" }, { "end": 1368.0400000000002, "start": 1360.52, "text": " skill you're taking divided by just the probability over the next state, given the the current" }, { "end": 1375.6200000000001, "start": 1368.0400000000002, "text": " state, in expectation over all the skills, current states and next states. Now what they're" }, { "end": 1383, "start": 1375.6200000000001, "text": " saying is, this here, this is basically the environment, right? This is if you are in" }, { "end": 1388.56, "start": 1383, "text": " a state and you perform a skill, what's the next state? That's the true environment. P" }, { "end": 1393.12, "start": 1388.56, "text": " here is the true environment, which we don't know, right? We don't know what the environment's" }, { "end": 1399.32, "start": 1393.12, "text": " going to do. But we would like to learn a model for the environment. And this model" }, { "end": 1412.96, "start": 1399.32, "text": " for the environment is now Q, Q, theta, phi, theta, phi, Greek letter. So Q phi here is" }, { "end": 1420.04, "start": 1412.96, "text": " going to be a neural network that will approximate the environment. And in this probabilistic" }, { "end": 1425.88, "start": 1420.04, "text": " framework, it is going to be a learned distribution that will approximate the distribution of" }, { "end": 1435.1200000000001, "start": 1425.88, "text": " P. So we approximate it by this. But now this is here it says equal equal, right? This is" }, { "end": 1442.9199999999998, "start": 1435.12, "text": " not equal, because this is just an approximation. So the equality must become must be basically" }, { "end": 1451.84, "start": 1442.9199999999998, "text": " compensated for by this term right here, you can see this here is expanded into these two," }, { "end": 1456.06, "start": 1451.84, "text": " you can go through the exact definitions and see why this is an equality. But basically," }, { "end": 1463.28, "start": 1456.06, "text": " you can say that the mutual information is this expectation, or it is this expectation." }, { "end": 1467.24, "start": 1463.28, "text": " But now you have to correct for the fact that here you only have an approximation. 
And you" }, { "end": 1474.2, "start": 1467.24, "text": " have to correct for the fact by exactly the amount by which the approximation is different" }, { "end": 1480.68, "start": 1474.2, "text": " than the quantity that you're approximating. And this is this key KL divergence right here." }, { "end": 1488.8799999999999, "start": 1480.68, "text": " So the KL divergence basically measures how different two distributions are. It's sort" }, { "end": 1493.68, "start": 1488.88, "text": " of a distance, not exactly distance, but sort of a distance between these two distributions" }, { "end": 1497.5800000000002, "start": 1493.68, "text": " right here, it says, here's the real world. And here is your estimate of the real world," }, { "end": 1506.24, "start": 1497.5800000000002, "text": " how much do they disagree, and that quantity plus, then you can replace your world, the" }, { "end": 1512.72, "start": 1506.24, "text": " exact world distribution by your approximate distribution. And you still are equal to the" }, { "end": 1518.76, "start": 1512.72, "text": " mutual information. And now the basically the trick is you say, oh, the KL divergence" }, { "end": 1525.44, "start": 1518.76, "text": " is always positive, it's a quantity that it can only be a positive number. So if I leave" }, { "end": 1532, "start": 1525.44, "text": " it away, certainly, this is only this is going to be a lower bound to the quantity. Okay." }, { "end": 1537.2, "start": 1532, "text": " All right. So two tasks right here. First of all, tighten the variational bound, which" }, { "end": 1542.76, "start": 1537.2, "text": " means make this quantity small, make your approximate world model as close as possible" }, { "end": 1550.46, "start": 1542.76, "text": " to the real world. How do we do this neural network? Okay, you input trajectories. I was" }, { "end": 1556.64, "start": 1550.46, "text": " in this state, I performed this skill, and I ended up in this state. Sorry, that's this" }, { "end": 1561.56, "start": 1556.64, "text": " and then you simply match your neural network simply matches what happens in the real world," }, { "end": 1568.48, "start": 1561.56, "text": " it learns the transition function, basically. So that's, that's the tightening of the variational" }, { "end": 1579.08, "start": 1568.48, "text": " bound. And the second step is this right here to" }, { "end": 1583.6799999999998, "start": 1579.08, "text": " to maximize the approximate lower bound, right? The first step was Titan variation lower bound," }, { "end": 1588.6399999999999, "start": 1583.6799999999998, "text": " that basically means make your world model more accurate. And the second is Titan that" }, { "end": 1595.24, "start": 1588.64, "text": " maximize the approximate lower bound. Now this is going to part, this is going to be" }, { "end": 1601.76, "start": 1595.24, "text": " the part that says, now given that I already have a better world model right here, can" }, { "end": 1611, "start": 1601.76, "text": " I improve my can I sort of improve my skills such that they become easier to predict and" }, { "end": 1617.96, "start": 1611, "text": " more diverse? Can I improve my skills such that this mutual information right here gets" }, { "end": 1626.44, "start": 1617.96, "text": " to be high as high as possible? Okay. So this is sort of an alternating thing. And you can" }, { "end": 1633.52, "start": 1626.44, "text": " see this in this very, very, very, very confusing diagram, honestly. 
So what are you going to" }, { "end": 1638.96, "start": 1633.52, "text": " do in this algorithm? First of all, in each episode, you're going to select a skill at" }, { "end": 1644.28, "start": 1638.96, "text": " random. And as I said, these skills, they're not predefined. So no one tells the agent" }, { "end": 1649.44, "start": 1644.28, "text": " to walk forward, it simply says, okay, you have like, in a discrete case, you would have" }, { "end": 1655.44, "start": 1649.44, "text": " like, you have five skill slots, right? And the only thing I require is that they're sort" }, { "end": 1659.48, "start": 1655.44, "text": " of, you know, consistent over time. So skill one is always going to be sort of the same" }, { "end": 1665.56, "start": 1659.48, "text": " thing and skill two, but agent, you can basically decide what skill one is, right? But make" }, { "end": 1672.28, "start": 1665.56, "text": " the skill such that it's predictable and that the different skills are diverse. Okay, so" }, { "end": 1677.72, "start": 1672.28, "text": " you're going to sample one of the skills, like skill zero or whatnot. And then you're" }, { "end": 1687.76, "start": 1677.72, "text": " going to do two things. First of all, you're going to learn these skill dynamics, which" }, { "end": 1694.24, "start": 1687.76, "text": " is you're going to learn your approximate model of the world. Okay. And how do you do" }, { "end": 1702.68, "start": 1694.24, "text": " that? Basically, here, you're the agent and the agent will. So what does the agent have" }, { "end": 1709.96, "start": 1702.68, "text": " to do? The agent will take in the skill Z and it will take in the current state of the" }, { "end": 1715.64, "start": 1709.96, "text": " world and it will output an action. Now, this is the model free part, right? So the agent" }, { "end": 1722.8, "start": 1715.64, "text": " that somehow has to come up with saying, ah, skill zero, that's walking forward. And in" }, { "end": 1729.8799999999999, "start": 1722.8, "text": " this situation, walking forward means I have to lift my leg or something like this. So" }, { "end": 1734.52, "start": 1729.8799999999999, "text": " you're going to take your skill, you're going to with your agent perform an action based" }, { "end": 1738.58, "start": 1734.52, "text": " on that skill and the current state of the world. Then the environment is going to give" }, { "end": 1745.48, "start": 1738.58, "text": " you the next state right here. And from those things, you can now learn your world model." }, { "end": 1752.76, "start": 1745.48, "text": " You know, I was in state S, I performed action a but I performed action a based on skills" }, { "end": 1761.92, "start": 1752.76, "text": " Z. And then I ended up in state S prime. And I can learn a model of the world, right? This" }, { "end": 1766.8, "start": 1761.92, "text": " is a triple, I can do supervised learning of a world model. Now here, they do probabilistic" }, { "end": 1772.12, "start": 1766.8, "text": " learning, but and we're going to see in a second how that works. But ultimately, they" }, { "end": 1780.08, "start": 1772.12, "text": " approximate the world with their model. Cool. So that's the this outer loop. And then what" }, { "end": 1786.36, "start": 1780.08, "text": " are they going to do next, they're going to use that world model to determine a reward" }, { "end": 1793.1399999999999, "start": 1786.36, "text": " for the agent, and the reward for the agent for taking the action. 
So the reward is going" }, { "end": 1799.24, "start": 1793.1399999999999, "text": " to be, oh, agent, you took action a. Now, what's your reward for doing this? This is" }, { "end": 1809.04, "start": 1799.24, "text": " the model free reinforcement learning part, your reward is going to be very high, if if" }, { "end": 1817, "start": 1809.04, "text": " this was very predictable. And if it is also diverse, right, so now, the agent has to sort" }, { "end": 1826.36, "start": 1817, "text": " of max sort of the agent has to go and make this quantity very high this, we want the" }, { "end": 1833.44, "start": 1826.36, "text": " outcome of these actions to be predictable, and dive and the actions themselves to be" }, { "end": 1840.4, "start": 1833.44, "text": " diverse. It is, I'm sorry, it's very hard to keep all of this very straight. Okay. But" }, { "end": 1847.4, "start": 1840.4, "text": " ultimately, two steps, learn world model from the experience that you've generated. And" }, { "end": 1853.92, "start": 1847.4, "text": " second thing, learn the agent such that it maximizes this this quantity that we've seen" }, { "end": 1861.24, "start": 1853.92, "text": " before. And you do this via giving the agent a reward that is proportional to the mutual" }, { "end": 1871.4, "start": 1861.24, "text": " information. And we've already seen that we can approximate the mutual information by" }, { "end": 1881.8, "start": 1871.4, "text": " by this quantity here. Okay. So learn world model, and make the agent go higher mutual" }, { "end": 1889.08, "start": 1881.8, "text": " information, two steps. Okay, learn world model is very, very classic, you can say," }, { "end": 1894.6399999999999, "start": 1889.08, "text": " okay, I need to improve, I need to minimize this KL divergence. So I need the gradient" }, { "end": 1902, "start": 1894.6399999999999, "text": " with respect to the parameters of my world model, I can write down the KL divergence" }, { "end": 1910.48, "start": 1902, "text": " like this. And then since I can do this reverse, so log a over b is log a minus log b. And" }, { "end": 1916.48, "start": 1910.48, "text": " since the world doesn't depend on the parameters of my model, this will simply give me this" }, { "end": 1922.66, "start": 1916.48, "text": " thing right here, which is the gradient of the log probability basically, of my neural" }, { "end": 1928.4, "start": 1922.66, "text": " network. And this can be just optimized straightforward, this is a neural network, optimize it with" }, { "end": 1935.84, "start": 1928.4, "text": " gradient descent. These are the inputs, this is the output. Now, okay, you, this is all" }, { "end": 1940.24, "start": 1935.84, "text": " probability distributions, but ultimately, you can you can do it pretty straightforward." }, { "end": 1948.92, "start": 1940.24, "text": " Okay, so corresponds to maximizing the likelihood of the samples from P under Q. Now, the second" }, { "end": 1958.64, "start": 1948.92, "text": " step maximize the approximate lower bound. Okay. So after they say after fitting Q, after" }, { "end": 1964.56, "start": 1958.64, "text": " improving our world model, we can optimize pi pi is the agent that actually takes the" }, { "end": 1970.9199999999998, "start": 1964.56, "text": " actions based on the skill. So it's given a skill, and it needs to perform an action." 
}, { "end": 1977.8, "start": 1970.9199999999998, "text": " And it needs to maximize this quantity, as we've seen, needs to maximize the mutual information" }, { "end": 1984.96, "start": 1977.8, "text": " between if I know the action and if I don't, or the mutual information between the skill" }, { "end": 1993.1599999999999, "start": 1984.96, "text": " and the next state. I say note, this is a reinforcement learning style optimization," }, { "end": 2000.1200000000001, "start": 1993.16, "text": " with a reward function of this quantity. However, so you look at the quantity that they need" }, { "end": 2006.88, "start": 2000.1200000000001, "text": " right here, the quantity is going to be this thing. And this thing is just, I feed the" }, { "end": 2012.78, "start": 2006.88, "text": " skill and the state into my world model. And I look what what comes out of the world model." }, { "end": 2019.64, "start": 2012.78, "text": " So this I can compute, right? But this thing right here, I can't compute because this is" }, { "end": 2028.3000000000002, "start": 2019.64, "text": " this is what happens in the world when I'm in state s, and I just run my agent over in" }, { "end": 2036.66, "start": 2028.3000000000002, "text": " expectation over all the skills. So this I don't know, they have a log of this is intractable." }, { "end": 2043.5600000000002, "start": 2036.66, "text": " And then so we approximate the reward function for pi as this thing right here. Now, first," }, { "end": 2053.88, "start": 2043.56, "text": " let's look at what this thing is. So the reward of taking action a, and action a is based" }, { "end": 2059.7599999999998, "start": 2053.88, "text": " on skill z, right? So skill z was fed into the agent, the agent comes up with action" }, { "end": 2065, "start": 2059.7599999999998, "text": " a, oh, you want me to walk forward in this situation, okay, I'm gonna lift my leg. That's" }, { "end": 2070.84, "start": 2065, "text": " the action. Okay. So the reward for this action, given this skill, and given the current state" }, { "end": 2078.4, "start": 2070.84, "text": " is going to be what it's going to be very high, if this here is very high. So it's going" }, { "end": 2087.76, "start": 2078.4, "text": " to be very high if the probability so this s prime is the state you ended up in, right?" }, { "end": 2094.1200000000003, "start": 2087.76, "text": " So after taking the action, you ended up in s prime. So if what does it mean when this" }, { "end": 2103, "start": 2094.12, "text": " quantity is very high? It means that my world model q, that is approximating the world," }, { "end": 2110.04, "start": 2103, "text": " thinks that this state is very probable if you were in this state and are given the skill" }, { "end": 2117.2799999999997, "start": 2110.04, "text": " z. So this basically means that the neural network can predict with very high accuracy," }, { "end": 2124.44, "start": 2117.28, "text": " what's going to happen if you are in this state and are given this skill to perform," }, { "end": 2131.6400000000003, "start": 2124.44, "text": " right? This is one of the things that we want. Now, what is it divided by? It's divided by" }, { "end": 2141, "start": 2131.6400000000003, "text": " this. And you can see here, the z i are other skills. So it is, what does this mean? This" }, { "end": 2148.2, "start": 2141, "text": " is almost the same quantity. 
It means how well can the same neural network predict the" }, { "end": 2158.6, "start": 2148.2, "text": " next state if you were given a different skill. So it means if I'm here, and I ended up here," }, { "end": 2165, "start": 2158.6, "text": " how well can you predict it if I tell you that I walked forward? And here you ask, well," }, { "end": 2170.04, "start": 2165, "text": " how well can you predict it if I told you you walked backward, if I told you you jumped," }, { "end": 2179.32, "start": 2170.04, "text": " if I told you, and so on. So you basically aggregate over all the other skills you could" }, { "end": 2183.92, "start": 2179.32, "text": " perform. And each time you ask the neural network, well, how likely is it that you end" }, { "end": 2190.7599999999998, "start": 2183.92, "text": " up in the state that I ended up with? So what does it mean if this quantity is high, or" }, { "end": 2198.36, "start": 2190.7599999999998, "text": " sorry, if the entire sum here is high? That means that the skill doesn't really give you" }, { "end": 2202.8, "start": 2198.36, "text": " much information, the neural network is very good, no matter which skill you selected," }, { "end": 2207, "start": 2202.8, "text": " right? It's very accurate in predicting the next state doesn't really matter. The skill" }, { "end": 2214.88, "start": 2207, "text": " doesn't really matter. And this is what we don't want, right? We want that the skills" }, { "end": 2220.08, "start": 2214.88, "text": " are very diverse, right? So the top part is, they're easy, it's easy to predict what will" }, { "end": 2227.6800000000003, "start": 2220.08, "text": " happen if you perform a given skill. And we divide this by the bottom part. And this makes" }, { "end": 2233.24, "start": 2227.68, "text": " it such that these skills are very diverse, because if they're not diverse, then it doesn't" }, { "end": 2238.8399999999997, "start": 2233.24, "text": " really matter which one you perform. And then this quantity on the bottom will be very high," }, { "end": 2246.2799999999997, "start": 2238.8399999999997, "text": " but we divide by it. So we want, we want it to be low. Okay, now the reward is going to" }, { "end": 2253.24, "start": 2246.2799999999997, "text": " be the log of this fraction here. And this makes sense, right intuitively, but they're" }, { "end": 2258.64, "start": 2253.24, "text": " going to try to motivate this mathematically. And for motivating this mathematically, of" }, { "end": 2264.3999999999996, "start": 2258.64, "text": " course, they need to approximate this quantity right here. This quantity is the denominator." }, { "end": 2272.08, "start": 2264.3999999999996, "text": " So they this denominator is a proc is an approximation to this. It's an approximation. As you can" }, { "end": 2280.3199999999997, "start": 2272.08, "text": " see here, this is sort of sort of a sample based approximation to the transition from" }, { "end": 2289.84, "start": 2280.32, "text": " s to s prime under the distribution of z. But what you want is just is the transition" }, { "end": 2298.8, "start": 2289.84, "text": " from s to s prime, not in your approximation, but in the real world. And they formulate" }, { "end": 2308.7200000000003, "start": 2298.8, "text": " this, they say, okay, we can decompose it as such as a as an integral over this conditional" }, { "end": 2318.3199999999997, "start": 2308.72, "text": " right here. So they bring in the z variable. 
And then they say, well, this, this is approximately" }, { "end": 2329.2, "start": 2318.3199999999997, "text": " approximately we can replace this here by this. And we can replace this here by this." }, { "end": 2336.3199999999997, "start": 2329.2, "text": " They say, well, since the this is a an approximation, the two, this is the world model is an approximation" }, { "end": 2343.32, "start": 2336.32, "text": " to the real world, we can sort of replace that. And then this is the this is the part" }, { "end": 2352.6000000000004, "start": 2343.32, "text": " that doesn't convince me they say, well, this P z of s, we can just replace it by P z. Now" }, { "end": 2358.84, "start": 2352.6000000000004, "text": " this is it's very tricky to see what these quantities are. Ultimately, it ends up being" }, { "end": 2369.48, "start": 2358.84, "text": " that right here. But it's it's it's so tricky. So they say we replace P z given s by P of" }, { "end": 2379.1600000000003, "start": 2369.48, "text": " z. And, okay, let's think about this for a second. What does the top the bottom quantity" }, { "end": 2383.6800000000003, "start": 2379.1600000000003, "text": " is simply the distribution over your skills. And depending on how you sample them, this" }, { "end": 2389.72, "start": 2383.68, "text": " could be like a uniform distribution over your skills, like that's fine. But what's the top thing," }, { "end": 2399.52, "start": 2389.72, "text": " the top thing, basically, we can use base formula to reformulate it. It's P of s given z times" }, { "end": 2414.24, "start": 2399.52, "text": " P of right times P of z divided by P of s. So this quantity depends on multiple things." }, { "end": 2422.6, "start": 2414.24, "text": " Here's that prior again. And this means what's the general distribution of states? What's" }, { "end": 2430.08, "start": 2422.6, "text": " the general distribution of states? If your agent acts in the world, right, this? And this, we" }, { "end": 2438.36, "start": 2430.08, "text": " don't know, we don't we don't know. And also this right here, what's the distribution in the true" }, { "end": 2447.72, "start": 2438.36, "text": " world? What's the probability of a state given a given given that you were acting under a skill" }, { "end": 2454.3599999999997, "start": 2447.72, "text": " z? And this is also something we don't know, because we don't know the world, we we don't have" }, { "end": 2459.24, "start": 2454.3599999999997, "text": " the world model. So you run into the same problem again and again, that you're trying to approximate" }, { "end": 2464.72, "start": 2459.24, "text": " this. And they want to make this so mathematically rigorous, but ultimately, and they go in the" }, { "end": 2469.24, "start": 2464.72, "text": " appendix, they go through various ways that they could solve this. But ultimately, they just say," }, { "end": 2478.08, "start": 2469.24, "text": " well, this is approximately the same. 
So this right here, it basically means what skills," }, { "end": 2485.7999999999997, "start": 2478.08, "text": " if you're if you're in a certain state, what skills brought you here, basically, what what" }, { "end": 2490.4799999999996, "start": 2485.7999999999997, "text": " skills brought you here, what's the distribution of skills that brought you to this state, and" }, { "end": 2495.2799999999997, "start": 2490.4799999999996, "text": " they say, well, we're just going to approximate that by the prior distribution over our skills," }, { "end": 2503.6000000000004, "start": 2495.28, "text": " basically disregard the state here. And this seems overly shaky. Like, as I said, the entire paper" }, { "end": 2512.48, "start": 2503.6000000000004, "text": " makes sense. But I just feel it's trying to be overly mathematical, and then run into a point" }, { "end": 2518.48, "start": 2512.48, "text": " where you can't be and then they're just like, okay, we'll, we'll just replace it. And then sort" }, { "end": 2526.32, "start": 2518.48, "text": " of things break down, like, you can only be overly mathematical to some degree, it doesn't really fit." }, { "end": 2533.52, "start": 2527.84, "text": " But okay, so this is how you discover the skills, you maximize these quantities, alternately," }, { "end": 2540.56, "start": 2533.52, "text": " you learn the world model, and you improve your your skills by making them diverse and easily" }, { "end": 2546.16, "start": 2540.56, "text": " predictable. So how do you then plan using these skills? This is the second part of the paper," }, { "end": 2553.04, "start": 2546.16, "text": " and this is just as complicated as the first part. So they say given the learned skills," }, { "end": 2560.08, "start": 2553.04, "text": " so the learned skills are policies over action given the DZ, right? So now you know how to like" }, { "end": 2566.64, "start": 2560.08, "text": " walk forward and walk back and so on. And now you're, you're placed in a world, and you're given" }, { "end": 2573.7599999999998, "start": 2566.64, "text": " this checkpoint, it says, well, walk there. And you want this to do this using planning, you don't" }, { "end": 2581.6800000000003, "start": 2573.76, "text": " want to learn anymore, you simply want to plan. Okay, what do you do? And as I said, this is even" }, { "end": 2588.6400000000003, "start": 2581.6800000000003, "text": " more so what you want to do is you want to do something like model predictive control, but not" }, { "end": 2598.6400000000003, "start": 2588.6400000000003, "text": " over actions, but over your learned skills. So you have this planner in the NPC, and the planner will" }, { "end": 2607.3599999999997, "start": 2598.64, "text": " in its head roll out a number of different, a number of different plans, it will kind of" }, { "end": 2614.48, "start": 2607.3599999999997, "text": " explore a bunch of different, different plans Z, it will roll them out, I'll say, okay, if I do this," }, { "end": 2619.7599999999998, "start": 2614.48, "text": " and this, and this, and this, what will happen using its world model that it has learned," }, { "end": 2627.44, "start": 2620.7999999999997, "text": " it will observe what's going to be the reward in each of these cases. Now, they say here, access" }, { "end": 2635.28, "start": 2627.44, "text": " the environment reward, but can also be estimated. 
And this is another sort of, I feel weak point of" }, { "end": 2643.44, "start": 2635.28, "text": " this in that they now assume they have the true reward function. But they don't have a world model," }, { "end": 2649.2000000000003, "start": 2643.44, "text": " right? They don't have the world model, but they assume that they can sort of always ask for the" }, { "end": 2657.9199999999996, "start": 2649.2, "text": " true reward, which is probably not the case if you had a true world, but it could be the case," }, { "end": 2661.2, "start": 2657.9199999999996, "text": " the reward could be something like, well, if you're over there, you get higher reward," }, { "end": 2670.48, "start": 2661.7599999999998, "text": " but you don't exactly know how to get over there. In any case, so they roll out a bunch of trajectories" }, { "end": 2675.9199999999996, "start": 2670.48, "text": " in their head, they kind of plan forward, see what's going to happen if they do this or that," }, { "end": 2685.2000000000003, "start": 2675.92, "text": " or this or that. And then they choose the best one of these forward thoughts, and they execute it in" }, { "end": 2691.6800000000003, "start": 2685.2000000000003, "text": " the real world, right? So they say, well, I'm going to use choose the skill walk forward. So the agent" }, { "end": 2696.96, "start": 2691.6800000000003, "text": " is now going to be tasked with walking forward. And it's going to do that in the real world for" }, { "end": 2702.16, "start": 2696.96, "text": " a certain amount of steps, like 10 steps of walking forward, after 10 steps of walking forward," }, { "end": 2707.52, "start": 2702.16, "text": " you go back and say, I'm in this new situation right here, what should I do? And again, the planner" }, { "end": 2712.48, "start": 2707.52, "text": " is going to be like, if you first walk forward, and then walk back, where are you going to be," }, { "end": 2719.8399999999997, "start": 2712.48, "text": " and so on. So the planner will always plan, basically to go from where you are to the" }, { "end": 2727.12, "start": 2719.8399999999997, "text": " checkpoint using a composition of the skills that you have learned. So the planner may be fine. Okay," }, { "end": 2733.8399999999997, "start": 2727.12, "text": " if I first walk forward, walk back a bit, and so on, I'm going to get to the goal, I'm going to" }, { "end": 2740.48, "start": 2733.8399999999997, "text": " reach the goal. Now please agent execute this first thing, walk forward, the agent executes it," }, { "end": 2746.08, "start": 2740.48, "text": " and maybe it won't, you know, it won't do as well, it will maybe end up here. And then it says, well," }, { "end": 2751.3599999999997, "start": 2746.08, "text": " I'm here now, please plan again. So it plans again, it says, okay, I can still kind of walk back," }, { "end": 2758.1600000000003, "start": 2751.36, "text": " I'll be here, here, but then I have to do something else. So now walk back. And okay, so this is what's" }, { "end": 2768.4, "start": 2758.1600000000003, "text": " going to happen. But it is going to happen in a weird way. Namely, what we keep are normal," }, { "end": 2775.92, "start": 2768.4, "text": " since everything is continuous, we'll keep normal distributions of all our future steps. So we don't" }, { "end": 2783.84, "start": 2775.92, "text": " say, okay, I go here, and then I go here. What you'll say is I approximately go here. And after that," }, { "end": 2790.64, "start": 2783.84, "text": " I'll approximately go here. 
And you will do it in such a way that the peak of this normal distribution" }, { "end": 2796.32, "start": 2790.64, "text": " is going to be the highest, where you think you will get the most reward. If you follow this" }, { "end": 2800.88, "start": 2796.32, "text": " trajectory, like if you follow this trajectory, you get a very high reward. And if I follow a" }, { "end": 2807.12, "start": 2800.88, "text": " trajectory that maybe goes here, I won't get a high reward. If it actually turns out in your" }, { "end": 2812.2400000000002, "start": 2807.12, "text": " imagination that you do get a high reward for this trajectory, you'll change this distribution," }, { "end": 2818.7200000000003, "start": 2812.2400000000002, "text": " such that the peak is here. And of course, the tighter the peak is, the more sure you are. So" }, { "end": 2825.44, "start": 2818.7200000000003, "text": " you sort of are looking, if you look out into the world, you want the closest steps to be very peaky." }, { "end": 2832.96, "start": 2825.44, "text": " And then as you look out, they can be more sort of broad. And that's how you plan ahead, you keep" }, { "end": 2840.16, "start": 2833.84, "text": " doing a step. So if you go from here to finally you choose, I want to go here, where the tip is the" }, { "end": 2847.36, "start": 2840.16, "text": " highest, go here, then you imagine forward again, you refine these distributions over the future." }, { "end": 2855.12, "start": 2848, "text": " And then you take the next step that gets you to the where the highest peak is right here, basically." }, { "end": 2864.08, "start": 2855.12, "text": " And so on. This is simply planning in a continuous domain, it is pretty analogous to how you would" }, { "end": 2870.4, "start": 2864.08, "text": " plan in like, alpha go, if you or tic tac toe, if you had a planner. But since everything's" }, { "end": 2877.8399999999997, "start": 2870.4, "text": " continuous, it makes it just so much harder. So they yeah, they always update these distributions," }, { "end": 2884.16, "start": 2877.8399999999997, "text": " as you can see here, to the skill that gave you a high reward in your imagination," }, { "end": 2894.56, "start": 2884.16, "text": " compared to the rewards of the other plans that you had. Okay, well, this was a long, long way" }, { "end": 2902, "start": 2894.56, "text": " until we got here. But if you recap, so first, they in an unsupervised fashion, learn these low" }, { "end": 2908.16, "start": 2902, "text": " level skills, such that they're easily predictable by their own world model, and diverse. And then in" }, { "end": 2917.52, "start": 2908.16, "text": " the second step, they can use that to, to do basically planning. So they first learn these" }, { "end": 2925.6, "start": 2917.52, "text": " skills, and then the planner composes them to make the agent do something. And again, the agent will" }, { "end": 2930.7999999999997, "start": 2925.6, "text": " never have to learn how to do this go from checkpoint to checkpoint, because the planner" }, { "end": 2939.04, "start": 2930.8, "text": " can just compose these low level skills. So they have these experiments right here. And we won't" }, { "end": 2944.5600000000004, "start": 2939.04, "text": " go through the experiment because this video is already very, very long. 
But they basically show" }, { "end": 2952.7200000000003, "start": 2944.5600000000004, "text": " that they they're learned things, actually, their learned skills do end up being very diverse," }, { "end": 2960.5600000000004, "start": 2952.7200000000003, "text": " do end up predictable, have a high variance, and so on, they have to give certain priors to it" }, { "end": 2968.64, "start": 2960.56, "text": " to make it actually work in a real setting. But the results you can actually see in these videos" }, { "end": 2974.08, "start": 2968.64, "text": " and in the graphs, I'm about you to check out the paper if you're still here. Thanks for being here," }, { "end": 2980.56, "start": 2974.08, "text": " I hope this this work was one of the most more complicated and mathy papers we looked at. But I" }, { "end": 2987.44, "start": 2980.56, "text": " think I still think it's fun. And I still think the outcome is pretty impressive right here, how you" }, { "end": 2996.56, "start": 2987.44, "text": " can use math to derive basically these intuitive, very intuitive objectives to learn. It's also" }, { "end": 3017.84, "start": 2996.56, "text": " pretty cool. Alright, that was it for me. And bye bye." } ]
q7QP_lfqnQM
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Synthesizer: Rethinking Self-Attention in Transformer Models (Paper Explained)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "nlp", "natural language processing", "machine translation", "google", "attention mechanism", "attention", "transformer", "seq2seq", "bert", "memory", "lsh", "locality sensitive hashing", "reversible", "revertible", "flow", "long sequence" ]
Do we really need dot-product attention? The attention mechanism is a central part of modern Transformers, mainly due to the dot-product attention mechanism. This paper changes the mechanism to remove the quadratic interaction terms and comes up with a new model, the Synthesizer. As it turns out, you can do pretty well like that! OUTLINE: 0:00 - Intro & High Level Overview 1:00 - Abstract 2:30 - Attention Mechanism as Information Routing 5:45 - Dot Product Attention 8:05 - Dense Synthetic Attention 15:00 - Random Synthetic Attention 17:15 - Comparison to Feed-Forward Layers 22:00 - Factorization & Mixtures 23:10 - Number of Parameters 25:35 - Machine Translation & Language Modeling Experiments 36:15 - Summarization & Dialogue Generation Experiments 37:15 - GLUE & SuperGLUE Experiments 42:00 - Weight Sizes & Number of Head Ablations 47:05 - Conclusion Paper: https://arxiv.org/abs/2005.00743 My Video on Transformers (Attention Is All You Need): https://youtu.be/iDulhoQ2pro My Video on BERT: https://youtu.be/-9evrZnBorM Abstract: The dot product self-attention is known to be central and indispensable to state-of-the-art Transformer models. But is it really required? This paper investigates the true importance and contribution of the dot product-based self-attention mechanism on the performance of Transformer models. Via extensive experiments, we find that (1) random alignment matrices surprisingly perform quite competitively and (2) learning attention weights from token-token (query-key) interactions is not that important after all. To this end, we propose \textsc{Synthesizer}, a model that learns synthetic attention weights without token-token interactions. Our experimental results show that \textsc{Synthesizer} is competitive against vanilla Transformer models across a range of tasks, including MT (EnDe, EnFr), language modeling (LM1B), abstractive summarization (CNN/Dailymail), dialogue generation (PersonaChat) and Multi-task language understanding (GLUE, SuperGLUE). Authors: Yi Tay, Dara Bahri, Donald Metzler, Da-Cheng Juan, Zhe Zhao, Che Zheng Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Hi there! Today we're looking at "Synthesizer: Rethinking Self-Attention in Transformer Models" by Yi Tay, Dara Bahri, Donald Metzler, Da-Cheng Juan, Zhe Zhao and Che Zheng. These people are of Google Research, and on a high level they're trying to replace the self-attention mechanism, which is currently a dot-product mechanism in a transformer, by a sort of learned attention mechanism, thereby eliminating this expensive dot product. They test the model and conclude that it sometimes works a bit, so the results are sort of inconclusive. But that's the paper on a high level, and it's fairly cool to go through. As always, if you like content like this, consider subscribing and sharing it out. Alright, so they say the dot-product self-attention is known to be central and indispensable to state-of-the-art transformer models. If you don't know what a transformer is, it's best to watch the video I made on the "Attention Is All You Need" paper; that explains what a transformer is and what an attention mechanism is in detail. But they are right: of course the attention mechanism, that is, the one via the dot product of queries and keys, is pretty much what makes transformers transformers. And they ask here: is it really required? Which is a bold question in light of that. They investigate whether or not you really need this, and they say that via extensive experiments they find that, first, random alignment matrices surprisingly perform quite competitively, and second, learning attention weights from token-token (that means query-key) interactions, which is this dot-product interaction, is not that important after all. Okay. They propose this new model called Synthesizer, a model that learns synthetic attention weights without token-token interactions. Their experimental results show that Synthesizer is competitive against vanilla transformer models across a range of tasks. Okay, so let's dive in. What is different here? They're basically saying: look, each transformer layer boils down to something like this, where you have an input sequence X right here and you want to get an output sequence Y. In order to do that, you need some sort of this thing, which is the attention matrix, multiplied by this thing, which are called the values. We'll explore that a bit deeper over here. In these transformers it's always kind of helpful to visualize the input sequence as sort of nodes. So this would be one layer: we have a length-five sequence and we want to transform it into the next length-five sequence. Maybe it even helps to label them, like A, B, C, D, E. Of course, as you go up the layers, a position doesn't necessarily always correspond to the same input token, but labeling the positions is still pretty helpful, I find, especially for things like BERT. So you want to transform the sequence that's incoming here into another sequence, and the basic mechanism in a transformer, as you go up the layers, is the routing of information. You want to route information around the sequence, basically such that at the end the whole sequence knows about every word in the sentence: every word knows about every other word that is in the sentence, knows about the associations between the other words, and so on. You start out with individual words, and at the end what you want is that every word has a pretty good idea of what's going on with every other word.
And that's why you continuously, as you go up the layers, route this information around. Now the question is: how do you route this information? How do you know which word goes to which other word? Maybe the sentence starts with a name, let's say Sarah, and then it goes on, and at some point it says "she". So "she" is the pronoun, and we can label these: Sarah, she. If we think about how to route information, it would be beneficial if the information that there is a word "Sarah" in the sentence were routed to the word "she", because "she" is a pronoun; it knows "I'm a pronoun, and if there is a person in the sentence, that would be valuable information for me, to know what's going on and to understand myself better." Basically, every word wants to understand itself, and it kind of calls out for information from the other words. In a transformer this is done via what this paper calls dot-product attention, and it goes as follows. Every word, every token, emits two vectors, called a key and a query. So every word is going to emit two vectors; I'm going to draw one at the bottom here and one at the top. You can imagine the key as the word advertising what it is to other words, so these are the keys down here, and the one at the top (sorry, I called it a value at first; it's actually called a query) you can imagine as the word asking, describing what it wants to know from others. In that case you'll see that this vector here and this vector here are now routed by dot product: the ones whose angles align in the dot product get routed to each other. So this would be routed here, this one would be routed here, and the others would be routed a bit here and a bit there. It gets fuzzy, but you get the concept. In order to do that, you have to compute the dot product of every single query with every single key, and that gives you the quadratic cost that these transformers have, and that's expensive. So they have a little picture here: this is what a vanilla transformer does. Every input emits two things, a query and a key, and then there's the dot-product attention to decide this attention matrix. This attention matrix is then used to aggregate the values. So actually every token emits three things: also this value here, which is not that important; it just describes the information that you want to pass on to the next layer. Then everything goes through this routing right here, which routes the information to the correct output places, and you get your output. To make this concrete, a minimal sketch of this dot-product routing follows below.
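(A minimal single-head, unbatched sketch of that dot-product routing, to have something to compare against later; the class and variable names are my own, not the paper's code.)

import math
import torch
import torch.nn as nn

class DotProductSelfAttention(nn.Module):
    def __init__(self, d_model):
        super().__init__()
        self.to_q = nn.Linear(d_model, d_model)  # every token emits a query...
        self.to_k = nn.Linear(d_model, d_model)  # ...and a key...
        self.to_v = nn.Linear(d_model, d_model)  # ...plus a value to pass on

    def forward(self, x):                        # x: (L, d_model)
        q, k, v = self.to_q(x), self.to_k(x), self.to_v(x)
        b = q @ k.t() / math.sqrt(x.shape[-1])   # (L, L): every query dotted with every key
        return torch.softmax(b, dim=-1) @ v      # route the values along the attention matrix

(For example, y = DotProductSelfAttention(64)(torch.randn(5, 64)) routes a length-five sequence; the (L, L) matrix b is exactly the quadratic token-token interaction the paper wants to eliminate.)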
Now, what they propose is something different. They propose this dense synthesizer, where instead of the dot-product attention, every single input directly emits basically a row of this attention matrix, without having to go through the dot product. That helps a bit if you imagine it in our little framework, so let's draw this again and see what this synthesizer does (by the way, they call it the dense synthesizer because they have another variant as well). Here is our sequence: the lower layer, which we want to transform into the upper layer; this is the Sarah node and this is the "she" node. So how do we route information now? In the dense synthesizer framework, every token just gets to output where it wants information from. So every single token here gets to output where it wants information to come from, and by "where" I mean the following: in the original transformer, the "where" was basically defined by these inner products; now the "where" is just defined by the position. A token just says "I want information from positions 2 and 3", or this node here could say "I want information from positions 5 and 3". This is dependent on which token there is; each token looks at itself, and in the case of "she", you can imagine this token says: "Well, I'm a pronoun, therefore I may be referring to a person, and I know that in the English language a person is often at the beginning of the sentence, therefore I certainly want information from position 0." It doesn't see that there is this word "Sarah" there; it can only see the positions 0, 1, 2, 3, 4. So each token here will output an L-dimensional vector, where L is the length of the sequence, and that L-dimensional vector already defines the distribution of how you want the information: lots of that, then not much of that, maybe a bit of that, and then not much of that. Each word up here is going to emit this L-dimensional vector; each token decides for itself where it wants information to come from, based purely on what the token itself is. Of course, in the higher layers this information, meaning what the token is and what else is there, gets aggregated, and that's how computation happens. But on a fundamental level, each node looks at itself and decides where it wants information from, given only what it is and not what the others are. This results in you not having to do this dot-product attention, but of course you lose the information of what's down here; you simply go by the positions of the nodes. They formalize this like this. They basically say: okay, each transformer mechanism needs some sort of a softmax over this matrix B right here (this is the routing matrix), and then G(X) is just the values of X; G is often just a linear function. And they say: well, this B in the classic transformer is computed via the dot-product attention; can't we just have a function right here that outputs B given an X? You see here, x_i refers to one row: X is an L-by-d matrix, and they say the sequence length is L. So if you imagine this is the sequence length, every x_i is a vector of dimension d, sort of like a word embedding. Now you want to take each x_i individually, run it through the function F, and get out an L-dimensional vector; this is of dimension L. If you do this with all of the x's, you get an L-by-L matrix, which is now your routing matrix. So this thing tells you how much information this particular piece of the input sequence wants from this particular piece of the output sequence, or vice versa. And see, this is a problem here: they don't really specify how this matrix B is composed from the b_i. Is b_i a column or a row of this B matrix? I don't know, and therefore it could actually be the other way around: not the tokens deciding where they want information from, but the tokens deciding where they want to send information to.
But just from the notation, I can sort of guess that it's the way I described right here. I hope you see the difference, though: before, we had this dot product here, whereas now each of these columns is an independent evaluation of this function that only considers x_i and doesn't consider any of the x_j that it wants information from. It simply goes by position. And for this F they use basically this two-layer network, one hidden layer: two weight matrices and a ReLU non-linearity. So they replace the dot product by simply learning what the attention pattern is going to be, per individual token. A sketch of this dense variant, as I read it, follows below.
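(Here is the dense synthesizer in the same minimal style, under my reading that each token predicts a row of the routing matrix; again the names are my own, not the authors' code.)

import torch
import torch.nn as nn

class DenseSynthesizer(nn.Module):
    def __init__(self, d_model, seq_len):
        super().__init__()
        # F(x_i): maps one token embedding (d_model) to its routing row (seq_len)
        # via the two-layer ReLU network described above.
        self.f = nn.Sequential(
            nn.Linear(d_model, d_model), nn.ReLU(),
            nn.Linear(d_model, seq_len))
        self.to_v = nn.Linear(d_model, d_model)    # G(x), the values

    def forward(self, x):                          # x: (seq_len, d_model)
        b = self.f(x)                              # (L, L), no query-key dot product anywhere
        return torch.softmax(b, dim=-1) @ self.to_v(x)

(Usage would be y = DenseSynthesizer(64, 5)(torch.randn(5, 64)). Note the module is now tied to one sequence length, since every routing row has exactly seq_len entries.)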
Now they do it one step further. They say: okay, we've already lost the dependency on what the other tokens in the sequence are; can't we also just lose the dependency on what the token itself is? So what they propose in their second variant, this random synthesizer, is the following: why don't we just learn how the information is going to be routed, irrespective of which tokens come in? We just learn one routing pattern for all possible input sequences. They actually have two sub-variants here: a first one where the matrix really is just random (they leave it random and don't even train it), and a second one where they train this thing. But these patterns now have nothing to do with the tokens; they're fixed and they're global. So this would directly be: you learn this L-by-L matrix. Now, if this strikes you in a bit of an odd way, because you kind of lose the dependency on your data in this routing pattern and it's not really routing anymore, and if you think you've seen this before somewhere, and you think "hey, that looks like a feed-forward layer from my very first MLP", then you would be absolutely correct. And I'm not sure why they don't point this out; I have a really hard time believing that they tricked themselves into such a thing right here. So I can actually show it. The question is: is this dense synthesizer still different from, or the same as, a feed-forward layer? And is the random one different, or the same? If you do the math and look at what a feed-forward layer is: in a feed-forward layer, y_i, my i-th entry in the output, is going to be a sum over all the inputs x_j, multiplied by a weight w_ij. Whenever I can represent something in this fashion, a sum with a fixed learned weight that has nothing to do with x, that is not dependent on x, multiplied by x, then it is basically a feed-forward layer, or a fancy feed-forward layer. So let's look at the dense synthesizer. What does the dense synthesizer do? The dense synthesizer says y_i is equal to a sum (okay, we're starting off well) over g(x_j), where this g is usually just a weight matrix with which we compute the values; we said these were the values of x, so g is just a matrix, let's call it W_V. And then we have some softmax, and the softmax is over this dense pattern right here that we described, and this pattern is f(x_i) (x_i, and here is a j; yes, that's correct). Now f is two layers, but we can basically treat it as a weight matrix, because ultimately, whether we learn a neural network or a single layer doesn't really matter to the discussion here; so let's call it W_B x_i. So you see, right here we do have a weighted sum over the x_j's, but the weight that the weighted sum is using depends on x_i, and therefore you can't represent this as just a feed-forward layer. (Of course, with the full dot-product attention, in there would additionally be a dot product between x_j and x_i, so x_j transposed times x_i or something like this.) What about the random synthesizer? The random synthesizer has y_i as a weighted sum over W_V x_j, those are the values, with a softmax over this matrix R, and R is simply this L-by-L matrix. You can immediately see that this part is static: it doesn't depend on any x, and it is learned as one joint function. So if I just call the resulting weight w_ij, I'm back to my formulation; I basically have my feed-forward layer. The random synthesizer is just a fancy way of writing a feed-forward layer. Of course, with the softmax in there, you maybe get some different inductive biases in learning it, but ultimately it is a straightforward feed-forward layer. At least that's what it looks like to me; I am very open to being convinced otherwise. Okay, so they have this drawing right here: on the left you see the vanilla transformer; in the middle the dense synthesizer, where you learn how to produce this matrix and then route the values through it; and on the right the random one, where you simply output the matrix in a learned or actually completely random fashion and then route your values through that to the output. Now, they also factorize it. This is more of a point where, if you have or produce such a matrix, you can factorize it into lower-dimensional matrices. That is, first of all, to save space, and it is also a regularizer, because what you're essentially saying is that you're applying an inductive prior: "I think these matrices have some low-rank structure to them", and factorizing them is exactly a prior on that. So you can factorize the dense and the random synthesizer into smaller matrices, and that will save you parameters. And you can also mix two: you can, for example, mix the random and the dense synthesizer. Now pay attention, it's not like an interpolation: if you mix random and dense, you will have to learn the parameters of both the random and the dense synthesizer, so that's going to be strictly more powerful than either one alone. They list everything here. The standard dot-product attention: we have this formula right here, and you can actually formulate it in their framework. You condition on the entire X for any x_i (ah, see, here I wrote this as x_j; I was dumb, it should be the entire X), there is interaction between the tokens, and it costs you 2d² parameters. Now, parameters are different from computation; if you don't do the dot product, you also save a bunch of computation, but here they look at the number of parameters. In this random synthesizer, you simply output this matrix R; it's global, there is no interaction, and it costs you L² memory and L² parameters. A sketch of the random variant, including the factorization, follows below.
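(The random variant in the same style, with the low-rank factorization as I understand it, R = R1 R2^T; the names and the plain normal initialization are my own simplifications, not the paper's code.)

import torch
import torch.nn as nn

class RandomSynthesizer(nn.Module):
    def __init__(self, d_model, seq_len, rank=None, trainable=True):
        super().__init__()
        if rank is None:
            # full L x L routing table
            self.r1 = nn.Parameter(torch.randn(seq_len, seq_len), requires_grad=trainable)
            self.r2 = None
        else:
            # factorized: R = r1 @ r2^T, costing 2*L*rank instead of L^2 parameters
            self.r1 = nn.Parameter(torch.randn(seq_len, rank), requires_grad=trainable)
            self.r2 = nn.Parameter(torch.randn(seq_len, rank), requires_grad=trainable)
        self.to_v = nn.Linear(d_model, d_model)

    def forward(self, x):                                # x: (seq_len, d_model)
        r = self.r1 if self.r2 is None else self.r1 @ self.r2.t()
        return torch.softmax(r, dim=-1) @ self.to_v(x)   # the routing ignores x entirely

(Note that the routing never looks at the input: softmax(r) is a constant matrix once training is done, so the output is y_i = sum_j softmax(r)_ij * W_V x_j, which is exactly the fixed-weight feed-forward form derived above.)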
Now, often in these models, L and d are actually pretty similar. L might be something like 512 tokens of sequence length, and the dimension right here might also be something like 512. So per se this is not really a saving in parameters; only when you go to the factorized models right here can you bring in this k, where k is the lower dimension of the factorization, and if k is much, much smaller than L, then you save a bunch of parameters. The dense synthesizer formula is like this; this is how you produce the attention matrix. You condition on x_i but not on x_j: for each y_i you condition on x_i, and you do not care about the x_j's. It is local, meaning it depends on x_i, so the routing actually depends on the information that goes through, but there is no interaction, and you're going to pay d² + dL parameters, which is also pretty much 2d², or this lower number here if you choose a good k. Alright, now: experiments. So they apply this, and we are absolutely stoked to see how it's going to turn out. They start with machine translation. Now, okay, before we go into the results: do you think machine translation is a good or a bad task for this model? I think it is a good task for this model; it is a very favorable task. Why is machine translation a favorable task? Mostly because of how information is routed in machine translation. Say I have a sequence of English, let's take "the dog barks", and a sequence of German, "Der Hund bellt". Now, okay, first of all, I know they are only talking about self-attention, so this example here actually makes little sense for the practical application, but I just want to demonstrate why machine translation specifically is favorable. How would you route information here, if you had to route information between the two things? You would do it pretty deterministically, right? What is very, very often the case in machine translation is that you're mostly going to align the positions in the same way, independent of the input. Specifically, you would most of the time align the beginning with the beginning, the end with the end, and so on, because for most languages, especially similar languages like English and German, the order of the sentence and the number of words you need to express something are going to be roughly the same.
So even if you did not know what the sequences were, or you only knew one of them, and you had to guess where information should come from: well, I know that in English you also start with, what's this called, an article (yes, this is an article; I'm showing my linguistic skills here), so you would also start with that. You would say "I want most information from position zero", and obviously you don't care what is actually there. Again, I know they only do this in self-attention, so it actually makes no sense that I have two different languages here, but machine translation is probably a task that lends itself very much to globally learned, or only partially input-dependent, attention patterns, just because of the nature of the task. So let's keep that in mind and go to the results right here. First of all, they list the original transformer paper, and they have to, because they run the same experiment. This is the kind of transformer we're talking about right here, and it is notable that this paper only proposes to replace the self-attention, that means the attention that would be within one of these two columns, and not the attention that goes across from the left to the right. But still, you can see that the attention that goes from the left to the right ends up as self-attention information in the next layer, so I think my argument still counts in the machine-translation case. Alright, so they have this same experiment right here: English-German translation, and their base model gets 27.3 BLEU, and that's what they evaluate against. They list this 27.3, but they also say "when we train it, we get a bit of a higher number, 27.67", and especially on English-French they get a higher number than that. But let's stick to English-German for now. They also do language modeling, which the original paper didn't do, and record the perplexity. Okay, so the first thing they point out: if we train the synthesizer with a fixed random matrix, that means we just put a random routing in place and never, ever change it. What's there to learn? Well, if you want to learn something in the transformer, there are still many things to learn: there are the feed-forward layers, there is the value encoder, and so on. So it is reasonable to assume that the transformer could sort of learn to just handle the routing pattern that is in place; the rest of the model can sort of absorb that shock. And interestingly, you get to 23.9, so almost 24 BLEU points. They point out that it's fairly close, but if you look here, this 24 is actually pretty far away; it's the worst baseline right here in the original paper (ByteNet had about this 24 BLEU). I mean, I guess it's cool to point out that it works as such, but in these tasks many things actually work. If you distilled this down to some sort of bag-of-words model or so, I'm pretty sure you could get pretty good results as well; you could probably get to 24 BLEU. I actually have no clue about this field, but I just want to point out that just because the number is in the same ballpark doesn't mean that it is very astonishing.
Maybe you just have so many parameters that the rest of the model can sort of absorb this shock of not being able to learn the routing, and can just handle whatever pattern you put there; it can just kind of work with it. Many people have observed this: if you put random junk in the lower layers of a CNN, like random filters that you never train, the rest of the network can still adapt. So that's basically the effect right here. I don't think it's a testament to "we don't need the dot-product attention"; it's more that this just happens in deep learning. However, they then say: if we now learn this one matrix, so we learn the routing, but globally, we get to 27.27 BLEU. And this already seems fairly close, right? You mainly need to compare with this number right here, because that's from the same training run and so on. But still, it is quite a bit away, 0.4 BLEU points away, and that is a significant difference, I think. Then they go further. With the dense synthesizer (you can see right here the model size is lower than this one), they get 27.43. Now they get even closer, and they are actually on par if they mix random and dense right here, and you can see that it's also almost the same number of parameters. And when they mix random and vanilla, so what they now have is the dot-product attention plus a purely global, feed-forward-like bias of what to route where, then they can out-compete this original model. But now they also have more parameters, so you would expect this model to be strictly better than either of the two alone, and it is. And it is actually astounding that the synthesizer that mixes the vanilla with the dense, even though it has even more parameters, does worse. With these sorts of results, especially when you then go fiddle with, like, 0.1 differences between this and that (I know, I said 0.4 BLEU is a lot, so 0.1 must be something, and it surely is), it's always the question of how many hyperparameter tunings you put into something like this. Generally, you should always sort of look at it like this: if you were the researcher and had to put the best possible numbers here, what would you do? And then you correct for that in your mind, for how well it might actually work if you were to go ahead and train it on your own data. But nevertheless, it gives some cool insights. What I'm a bit confused by is this: if you look at the original paper, they have a table down there where they compare a bunch of instantiations of their model, including the perplexity on a language-modeling task, and the perplexity there seems to correlate extremely well with the BLEU score. Whereas here, if you look at the perplexities, they do correlate, but somehow I have the feeling they don't correlate as much, which sort of speaks to the fact, as you're going to see in the rest of the paper, that these models sometimes tend to do well, but then other times not, and it's not really clear when. Look at this, for example: they now apply their models to summarization and dialogue generation. These are two tasks where you need to output text, and you can see that the results are all over the place.
In the metric ROUGE-2 (ROUGE is sort of an n-gram overlap metric between gold standards and what you produce), the original transformer is best; but in ROUGE-1 this mixed synthesizer here is the best, and in ROUGE-L this other one is the best. And in dialogue generation, all of them are actually not as good as this one right here, where it's just the dense synthesizer, which is strictly less powerful than the ones on the bottom. So, as you can see, I think what you should take away from this is that it is interesting that it sometimes works, but there seems to be a fair bit of shakiness to these results. Okay. Now they go on and test this on GLUE and SuperGLUE. These are benchmarks, and GLUE and SuperGLUE consist of different tasks. Now we are out of the text-generation game; we are in the game of, for example: you have two sentences, and you need to decide which one entails the other, or whether they are contradictory, or things like this. So it's more of a, let's say, classification task, and people apply different models, since it's no longer a text-generation task. So they switch models: instead of the vanilla transformer from the "Attention Is All You Need" paper, they now go to T5, the text-to-text transformer. They simply take that architecture and change the attention in there to their attention. And you can see right here that the results are quite different than before. In every single case, either the T5 base model with the dot-product attention is the best model, or the synthesizer "plus V": plus V means plus vanilla, so it also has the dot-product attention in addition to this learned thing right here. (The R is the learned one now, I think; the learned random. I would be surprised if it were the untrained random, but it could also be.) In any case, it's strictly better; it's a strictly more powerful model, and the only way you can actually perform worse is when there are too many parameters and so on and the components take capacity from each other; there are effects where more parameters can hurt you. But never is there any model that doesn't have the dot-product attention on top. And these authors argue that this can be largely attributed to the fact that the encoder self-attention in the T5 setting also functions as a cross-sentence attention. What do they mean here? The T5, as I understand it, is used here just like an encoder, like BERT. BERT is simply an encoder-only transformer: you put in your sequence, out again comes a sequence, and you have a special token that you use for classification and so on. This is less for when you have to generate text and more for when you want to classify text, or find something in a text, or things like this. And what you do, if you have two sentences and need to decide something about them: you put the first sentence here, then you put a separator token (this is usually called a separator token), and then you put the second sentence here. You just concatenate them and let them go into the transformer. And they argue that if you do self-attention on this entire sequence, then you get attention patterns like this, and this is sort of cross-attention between sequences; it's not really self-attention, and that's why their method doesn't work, because it basically deals with self-attention. But I'm not really buying that argument: if this is one sequence, then this is self-attention. A toy illustration of such a packed input is below.
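(For concreteness, here is what that packing looks like; the tokens are made up, and a real model would use learned vocabulary ids rather than strings.)

# Two sentences packed into one sequence for an encoder-style model:
sentence_a = ["the", "dog", "barks"]
sentence_b = ["it", "is", "loud"]
packed = ["[CLS]"] + sentence_a + ["[SEP]"] + sentence_b + ["[SEP]"]
print(packed)  # ['[CLS]', 'the', 'dog', 'barks', '[SEP]', 'it', 'is', 'loud', '[SEP]']
# Self-attention over `packed` lets position 5 ("it") attend to position 2 ("dog"):
# there is no architectural wall between the two sentences.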
And if you were going to argue that, in their original formulation, a token can, out of the blue, just by looking at itself, know which position it wants the information from, then certainly here this token could also learn that it wants information from over here, or from the first word here. I don't really see the difference. Maybe you need to somehow standardize where this separator token is, so that it's always in the same place and the second sentence always starts at the same place; but if you have that, then I really don't see any difference in the argument you can make here for why this shouldn't work as well as the others. What I think is happening is that this task simply involves more difficult reasoning, more routing of information, like dynamic routing that is actually dependent on what's in the task, rather than something like machine translation, which most of the time has some global routing bias, like some pattern that works pretty well across the board. Alright, so the last part here is where they kind of introspect the model. In the first thing, they say: okay, we look at the distribution of weights. These are the weights in the decoder at the beginning of training, and you can already see that the standard transformer weights, the dense synthesizer weights and the random synthesizer weights are all different from one another. This is probably mostly due to how you initialize: these deep-learning frameworks, if you have a matrix, will look at this dimension and that dimension and calculate how to initialize it such that the total norm of a random vector that goes through stays roughly the same. The vector goes in here and out there, and if it changes dimension, and you just randomly initialized every matrix with the same numbers, like with the same normal distribution, then in this case the vector would gain in norm. To account for that, you initialize the matrices such that the vector norms approximately stay the same, and this is, I guess, why there are different initializations here. A quick numerical illustration of that effect follows below.
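(A quick numerical check of that initialization argument; the Glorot-style scale factor below is the standard one, but note it is a compromise between the forward and the backward pass, so it only roughly preserves the norm when the two dimensions differ.)

import torch

d_in, d_out = 512, 2048
x = torch.randn(d_in)
w_naive = torch.randn(d_out, d_in)                              # same std for every matrix
w_scaled = torch.randn(d_out, d_in) * (2.0 / (d_in + d_out)) ** 0.5
print(x.norm().item())               # about 22.6, i.e. sqrt(512)
print((w_naive @ x).norm().item())   # about 1024: the norm blows up
print((w_scaled @ x).norm().item())  # about 28.6: roughly preserved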
And you can see this at the end of training too; in different layers right here, it's pretty much always the same pattern. But they just remark on it; they just say what the graphs show. This is what I find weird: they don't interpret it. I would expect something like "oh, this pattern is exactly what we would expect from our model, because such-and-such". If they claim that this attention can actually be learned, I just don't see why they do this stuff. They simply point out "oh yeah, this is higher here and this is higher here", but I don't even see that as too interesting, given that this is how you initialize it. If you shift everything to the left, and there's a wall, so it piles up here, then this is exactly what turns out. I don't see what this is supposed to mean, especially since they don't make any claim about what it is supposed to mean. And the same here: the effect of the number of heads. They say "we investigate the effect of the number of heads on the random synthesizer models", and they vary the number of heads. Now, somewhere in the text they say, I remember, that since they don't dynamically route, it is very important for their models, very crucial, to have many attention heads, such that you don't have one routing pattern but many routing patterns that you learn globally. So they say it's very important for their model to have many attention heads, and I guess that's what they're trying to demonstrate here. But again, they simply say what's happening; they don't interpret it, and they don't compare it to anything. They just put the number there. Is this good? Is this bad? Can you compare it to something? Also, in the original paper they do the same thing here: as you can see, the number h is the heads, and they do ablate this, but at the same time they adjust the dimensions of the key and value vectors such that in total they have the same number of parameters. So they can really investigate: is one big attention head better or worse than many small attention heads? Is there a trade-off? And they find that there is a bit of a trade-off, like there is a sweet spot: you don't want too many heads, because then they get too small, something like this. But here, first of all, we don't know: have they simply changed the number of heads but left every other parameter the same, or have they also adjusted the dimensions? Because if they haven't adjusted the dimensions, then this increase would be absolutely expected, since you now have more parameters. And if they have, then can we compare this to something? Because this here is the T5-small; this is not the original transformer. Is this big? Is this small? And what does it say about the claim you made, that the number of heads is so important for your model? Can you validate that using this? It's just a bit like this entire page: they measure some things and then state them here, and you're somehow supposed to guess what they mean by stating them. Okay, but that was enough ranting for me. They give some supplementary material right here, but in essence, what I like about the paper is sort of the thinking that goes into it: thinking outside the box, asking the fundamental questions about these models. Do we really need this? What do they do? I don't think it's super well investigated, really, from a scientific point of view, like the formulation of hypotheses; it simply trains these things and then makes some claims, but the claims interact with the number of parameters here and so on, so they're sort of noisy all around. And of course, the fact that this thing here turns out to be a fully-connected layer in disguise is also pretty funny. I get it, it's not exactly the same thing, but still. Alright, so that was my take on this paper. If you have a different one, let me know in the comments; for sure I read all of them, at least I try, and I've always succeeded so far. Alright, I'll see you next time. Bye bye.
[ { "end": 5.72, "start": 0, "text": " Hi there! Today we're looking at synthesizer rethinking self-attention in" }, { "end": 12.120000000000001, "start": 5.72, "text": " transformer models by Yi Tai, Dara Barry, Donald Metzler, Da Cheng Chuan," }, { "end": 17.400000000000002, "start": 12.120000000000001, "text": " Chih-Zhao and Chih-Ching. These people are of Google research and on a high level" }, { "end": 22.28, "start": 17.400000000000002, "text": " they're trying to replace the self-attention mechanism which is" }, { "end": 28.68, "start": 22.28, "text": " currently a dot product mechanism in a transformer by a sort of a learned" }, { "end": 33.6, "start": 28.68, "text": " attention mechanism, therefore eliminating this expensive dot product." }, { "end": 41.32, "start": 33.6, "text": " They test the model and conclude that it sometimes works a bit. So the results" }, { "end": 46.04, "start": 41.32, "text": " are sort of inconclusive. But that's the paper on a high level and it's" }, { "end": 50.480000000000004, "start": 46.04, "text": " fairly cool to go through. As always, if you like content like this, consider" }, { "end": 53.08, "start": 50.480000000000004, "text": " subscribing and sharing it out." }, { "end": 59.12, "start": 53.08, "text": " Alright, so they say the dot product self-attention is known to be central and" }, { "end": 63.08, "start": 59.12, "text": " indispensable to state-of-the-art transformer models. If you don't know" }, { "end": 67.64, "start": 63.08, "text": " what a transformer is, it's best I made a video on the attention is all you need" }, { "end": 71.6, "start": 67.64, "text": " paper and that explains what a transformer is and what an attention" }, { "end": 77.72, "start": 71.6, "text": " mechanism is in detail. But they are right. Of course the attention mechanism" }, { "end": 84.24, "start": 77.72, "text": " that is via the dot product of queries and keys is pretty much what makes" }, { "end": 90.68, "start": 84.24, "text": " transformers transformers. And they here ask is it really required? Which is a" }, { "end": 97.12, "start": 90.68, "text": " bold question in light of that, right? They say they investigate whether or" }, { "end": 102.56, "start": 97.12, "text": " not you really need this and they say via extensive experiments we find" }, { "end": 108.16, "start": 102.56, "text": " that first random alignment matrices surprisingly perform quite" }, { "end": 114.36, "start": 108.16, "text": " competitively and two, learning attention weights from token-token, that means" }, { "end": 119.68, "start": 114.36, "text": " query key interactions, which is this dot product interaction, is not that" }, { "end": 125.48, "start": 119.68, "text": " important after all. Okay, they propose this new model called synthesizer, a" }, { "end": 130.16, "start": 125.48, "text": " model that learns synthetic attention weights without token-token interactions." }, { "end": 134.6, "start": 130.16, "text": " Our experimental results show that synthesizer is competitive against even" }, { "end": 137.28, "start": 134.6, "text": " NILA transformer models across a range of tasks." }, { "end": 146.4, "start": 137.28, "text": " Okay, so let's dive in. So what is different here? 
They're basically" }, { "end": 155.56, "start": 146.4, "text": " saying look in each transformer layer boils down to something like" }, { "end": 163.36, "start": 155.56, "text": " this, where you have an input sequence X right here and you want to get an output" }, { "end": 169.84, "start": 163.36, "text": " sequence Y. And in order to do that you need some sort of this thing, which is" }, { "end": 175.44, "start": 169.84, "text": " the attention matrix, multiplied by this thing, which are called the values. And" }, { "end": 181.52, "start": 175.44, "text": " we'll explore that a bit deeper over here. So in these transformers it's" }, { "end": 187.16, "start": 181.52, "text": " always kind of helpful to visualize yourself the input sequence as sort of" }, { "end": 195.64000000000001, "start": 187.16, "text": " nodes. And so this would be one layer, we have a five length sequence and we" }, { "end": 199.76000000000002, "start": 195.64000000000001, "text": " want to transform it into the next length five sequence. And maybe it even" }, { "end": 207.60000000000002, "start": 199.76000000000002, "text": " helps to label maybe like A, B, C, D, E. You can just imagine kind of these, of course as" }, { "end": 211.6, "start": 207.6, "text": " you go up the layers it doesn't necessarily always correspond to the same" }, { "end": 217.51999999999998, "start": 211.6, "text": " input token. But the position labeling them is still pretty helpful, I find," }, { "end": 221.6, "start": 217.51999999999998, "text": " especially for things like BERT or something like this. So you want to" }, { "end": 228.95999999999998, "start": 221.6, "text": " transform the sequence that's incoming here into another sequence. And the" }, { "end": 234.6, "start": 228.95999999999998, "text": " basic mechanism in a transformer, if you go up the layers, is the routing" }, { "end": 240.04, "start": 234.6, "text": " of information. So you want to route information around the sequence and" }, { "end": 246.35999999999999, "start": 240.04, "text": " basically such that at the end the whole sequence knows about every word in the" }, { "end": 250.48, "start": 246.35999999999999, "text": " sentence, knows about every other word that there is in the sentence, knows" }, { "end": 254.76, "start": 250.48, "text": " about the associations between the other words and so on, such that you gain sort" }, { "end": 259.8, "start": 254.76, "text": " of, you start out with individual words and that the end, what you want, is sort" }, { "end": 264.6, "start": 259.8, "text": " of that every word has a pretty good idea of what's going on with every other" }, { "end": 269.64, "start": 264.6, "text": " word. And that's why you continuously, as you go up the layer, route around" }, { "end": 275.36, "start": 269.64, "text": " this information. Now the question is how do you route this information? How do you" }, { "end": 281.6, "start": 275.36, "text": " know which word goes to which other word? And here maybe the sentence starts with" }, { "end": 290.12, "start": 281.6, "text": " the word, let's call Sarah, okay? Sarah and then it goes on and at some point it" }, { "end": 302.12, "start": 290.12, "text": " says she. So she is the pronoun and we can also label these Sarah, she. 
So if we" }, { "end": 306.98, "start": 302.12, "text": " want to, if we think how do we route information, it would be" }, { "end": 313.20000000000005, "start": 306.98, "text": " beneficial for us if the information that there is a word Sarah here in the" }, { "end": 318.32, "start": 313.20000000000005, "text": " sentence would be routed to the word she, because the word she, it's a pronoun, it" }, { "end": 324.08000000000004, "start": 318.32, "text": " knows I'm a pronoun and if there is like a person in the sentence that would be" }, { "end": 327.68, "start": 324.08000000000004, "text": " valuable information for me, like to know what's kind of going on and to" }, { "end": 332.72, "start": 327.68, "text": " understand myself better. Basically every word wants to understand itself and it" }, { "end": 337.44000000000005, "start": 332.72, "text": " kind of calls out for information from the other words. In a transformer this is" }, { "end": 341.28000000000003, "start": 337.44000000000005, "text": " done via what this paper calls this dot product attention and that's the" }, { "end": 349.04, "start": 341.28000000000003, "text": " follows. Every word, every token emits what is called a key and a value and the" }, { "end": 353.06, "start": 349.04, "text": " key and the value are just two vectors. So every word is going to emit two" }, { "end": 359.12, "start": 353.06, "text": " vectors. I'm going to draw one at the bottom here and I'm going to draw one at" }, { "end": 368.64, "start": 359.12, "text": " the top. Like that. So you can imagine the key as sort of the word advertising" }, { "end": 375.12, "start": 368.64, "text": " what it is to other words and so these are the keys down here and you, sorry, the" }, { "end": 380.72, "start": 375.12, "text": " top, I think I called it value, that's wrong, it's called a query. You can" }, { "end": 386.64, "start": 380.72, "text": " imagine the query as a word asking, describing what it wants from others to" }, { "end": 392.96, "start": 386.64, "text": " know. So in that case you'll see that the vector here and the vector here," }, { "end": 398.88, "start": 392.96, "text": " these are now routed by dot product. So the ones that align in the dot product," }, { "end": 403.2, "start": 398.88, "text": " in the angle, they will be routed to each other. So this would be routed here and" }, { "end": 410.36, "start": 403.2, "text": " maybe, you know, okay I drew this, this one would be routed here and the others" }, { "end": 418.2, "start": 410.36, "text": " would be kind of routed some a bit here and a bit here maybe. Okay it gets" }, { "end": 422.72, "start": 418.2, "text": " fuzzy but you get the concept. But in order to do that you basically need to" }, { "end": 430, "start": 422.72, "text": " pull to put the dot product from every single key with every value, sorry, query." }, { "end": 435, "start": 430, "text": " And that gives you basically this quadratic dot product that these" }, { "end": 442.44, "start": 435, "text": " transformers have and that's expensive. Okay so they have a little picture here." }, { "end": 449.4, "start": 442.44, "text": " This is what a vanilla transformer does. Every input here emits two things, a" }, { "end": 454.8, "start": 449.4, "text": " query and a key and then there's the dot product attention to decide what's this" }, { "end": 461.88, "start": 454.8, "text": " attention matrix. Okay now this attention matrix is then used to aggregate these" }, { "end": 467.6, "start": 461.88, "text": " values. 
So actually every token emits three things, also this value here, which" }, { "end": 474.84, "start": 467.6, "text": " is basically, it's not that important but this just describes the information that" }, { "end": 479.28, "start": 474.84, "text": " you want to pass on to the next layer and then it goes through this routing" }, { "end": 484, "start": 479.28, "text": " right here that routes the information to the correct output places and you get" }, { "end": 489.96, "start": 484, "text": " your output. Now what they propose is something different. They propose this" }, { "end": 494.64, "start": 489.96, "text": " dense synthesizer right here where instead of the dot product attention" }, { "end": 503.84, "start": 494.64, "text": " every single input here emits a basically a row of this matrix" }, { "end": 509.52, "start": 503.84, "text": " directly without having to go through the dot product. That helps a bit if you" }, { "end": 516.28, "start": 509.52, "text": " imagine it in our little framework here. So let's draw this again and let's see" }, { "end": 521.24, "start": 516.28, "text": " what this synthesizer, by the way they call this the dense synthesizer because" }, { "end": 526.04, "start": 521.24, "text": " they have another variant as well. Okay here is our sequence, the lower" }, { "end": 530.0799999999999, "start": 526.04, "text": " layer we want to transform it in the upper layer. This is the Sarah node" }, { "end": 540.4399999999999, "start": 530.0799999999999, "text": " and this is the she node. So how do we route information now?" }, { "end": 549.4000000000001, "start": 540.44, "text": " Okay I missed that. In the dense synthesizer framework every token just" }, { "end": 556.72, "start": 549.4000000000001, "text": " gets to output, basically it already gets to output where it wants information" }, { "end": 567.8000000000001, "start": 556.72, "text": " from. So every single token here gets to output where it wants information to" }, { "end": 573.4399999999999, "start": 567.8, "text": " come from and by where because in the original transformer the where was" }, { "end": 578.68, "start": 573.4399999999999, "text": " basically defined by these inner product. Now the where is just defined by the" }, { "end": 585.76, "start": 578.68, "text": " position. So it just says I want information from position 2 and 3 or" }, { "end": 591.12, "start": 585.76, "text": " this node here could say I want information from position 5 and 3." }, { "end": 597.4399999999999, "start": 591.12, "text": " And this is dependent on which token there is. So each token looks at" }, { "end": 603.36, "start": 597.44, "text": " itself and in the case here of she you can imagine this token says well I'm a" }, { "end": 610.08, "start": 603.36, "text": " pronoun therefore I may be referring to a person and I know that in the English" }, { "end": 614.6400000000001, "start": 610.08, "text": " language a person is often at the beginning of the sentence and therefore I" }, { "end": 621.36, "start": 614.6400000000001, "text": " certainly want information from token from position 0. It doesn't see that" }, { "end": 628.8000000000001, "start": 621.36, "text": " there is this word Sarah here. It simply can see only the positions 0, 1, 2, 3, 4." }, { "end": 635.4, "start": 628.8000000000001, "text": " So it will output, it will basically output, each token here will output an" }, { "end": 641.6, "start": 635.4, "text": " L dimensional vector and L here is the length of the sequence. 
An L dimensional" }, { "end": 646.76, "start": 641.6, "text": " vector that already defines the distribution of how you want the" }, { "end": 650.52, "start": 646.76, "text": " information. So I want lots of that and then not much of that and maybe it wants" }, { "end": 655.4399999999999, "start": 650.52, "text": " a bit of that and then not much of that. So each word up here is going to" }, { "end": 662.56, "start": 655.4399999999999, "text": " emit this L dimensional vector. So each word, each token decides for itself" }, { "end": 668.6999999999999, "start": 662.56, "text": " where it wants information to come from based purely on what the token" }, { "end": 674, "start": 668.6999999999999, "text": " itself is. And of course in the higher layers this information, like the" }, { "end": 677.56, "start": 674, "text": " information of what the token is and what else is there gets aggregated and" }, { "end": 682.4799999999999, "start": 677.56, "text": " that's how computation happens. But in a fundamental level each node looks at" }, { "end": 687.92, "start": 682.4799999999999, "text": " itself and decides where do I want information from just given what I am" }, { "end": 694.76, "start": 687.92, "text": " and not what others are. So this results in you not having to do this" }, { "end": 698.8, "start": 694.76, "text": " dot product attention. But of course you lose the information of what's down here." }, { "end": 709.7199999999999, "start": 698.8, "text": " You simply go on the positions of the nodes and they formalize this" }, { "end": 714.56, "start": 709.7199999999999, "text": " like this. So they basically say okay each transformer mechanism needs" }, { "end": 721.64, "start": 714.56, "text": " some sort of a softmax over this matrix B right here. This is" }, { "end": 727.8, "start": 721.64, "text": " this routing matrix and then G of X is just the values of X. So G is often just" }, { "end": 733.4, "start": 727.8, "text": " a linear function. And they say well this B here in the classic transformer is" }, { "end": 738.7199999999999, "start": 733.4, "text": " computed via this dot product attention. Can't we just simply have a function" }, { "end": 747.3599999999999, "start": 738.7199999999999, "text": " right here that just outputs the B given an X. So you see here XI" }, { "end": 754.88, "start": 747.3599999999999, "text": " refers to one row. So X here is an L by D matrix and they say the sequence length" }, { "end": 762.12, "start": 754.88, "text": " is L. So if you imagine this is the sequence length, every XI" }, { "end": 769.6, "start": 762.12, "text": " is a vector here of dimension D. Sort of like a word embedding." }, { "end": 776.4399999999999, "start": 769.6, "text": " Now what you want to do is you want to take each individually, run it through" }, { "end": 786.1600000000001, "start": 776.44, "text": " the function F and then get out an L dimensional vector." }, { "end": 792.5600000000001, "start": 786.1600000000001, "text": " This is of dimension L. And if you do this with enough of the with all of the X's" }, { "end": 799.6400000000001, "start": 792.5600000000001, "text": " you'll get an L by L matrix which basically is now your routing matrix." }, { "end": 805.6400000000001, "start": 799.6400000000001, "text": " So this thing tells you that this particular piece in the input" }, { "end": 810.8, "start": 805.64, "text": " sequence wants how much information from this particular piece in the output" }, { "end": 815.6, "start": 810.8, "text": " sequence or vice versa. 
See this is the problem here. They don't really" }, { "end": 823.3199999999999, "start": 815.6, "text": " specify this B, how this B matrix will be composed from the B I." }, { "end": 830, "start": 823.3199999999999, "text": " Is B I a column or a row of this B matrix? I don't know. And therefore it" }, { "end": 835.84, "start": 830, "text": " could actually be the other way around. That's the information, it's not the" }, { "end": 842.44, "start": 835.84, "text": " sort of the tokens deciding where they want information from but it could be" }, { "end": 847.88, "start": 842.44, "text": " the tokens deciding where they want to send information to. But just from the" }, { "end": 853.52, "start": 847.88, "text": " notation I can sort of guess that it's the way that I described right here. But" }, { "end": 858, "start": 853.52, "text": " I hope you see the difference here. Before we had this dot product" }, { "end": 863.52, "start": 858, "text": " here each of these columns basically is an independent evaluation of this" }, { "end": 869.52, "start": 863.52, "text": " function that only considers X I and doesn't consider any of the X J that it" }, { "end": 875.4, "start": 869.52, "text": " wants information from. It simply goes by position. And they use for this, they use" }, { "end": 882.96, "start": 875.4, "text": " this basically this two layer, one hidden layer neural network right here. Two" }, { "end": 889.84, "start": 882.96, "text": " weight matrices and a non-linearity, a ReLU non-linearity. So they replace the" }, { "end": 895.1600000000001, "start": 889.84, "text": " dot product by simply learning what the attention pattern is going to be per" }, { "end": 901.84, "start": 895.1600000000001, "text": " individual token. Now they do it one step further. They say okay so we've" }, { "end": 907.9200000000001, "start": 901.84, "text": " already lost the dependency basically on what the input sequence" }, { "end": 912.84, "start": 907.9200000000001, "text": " tokens here are. Can't we also just kind of lose the dependency of what the" }, { "end": 918.08, "start": 912.84, "text": " output token sequences here are? So what they propose in their second variant," }, { "end": 928.9200000000001, "start": 918.08, "text": " this random synthesizer, is the following. Why don't we just learn how" }, { "end": 934.76, "start": 928.9200000000001, "text": " the information is going to be routed, irrespective of which tokens come in?" }, { "end": 939.0400000000001, "start": 934.76, "text": " We're just gonna learn this and it's going to be one routing" }, { "end": 945.5999999999999, "start": 939.04, "text": " pattern for all of the possible input output sequences. It's just going" }, { "end": 949.7199999999999, "start": 945.5999999999999, "text": " to be this routing pattern. So they have actually two variants. First one where it" }, { "end": 953.92, "start": 949.7199999999999, "text": " is really just random, like they just leave it random and they don't even" }, { "end": 958.7199999999999, "start": 953.92, "text": " train it. And the second one where they train this thing. But these things now" }, { "end": 965.7199999999999, "start": 958.7199999999999, "text": " have nothing to do with the tokens. They're just fixed and they are just" }, { "end": 972.12, "start": 965.72, "text": " global. So this would directly be, you learn this L by L" }, { "end": 978.36, "start": 972.12, "text": " matrix. 
So if this strikes you a bit in an odd way, because you kind of" }, { "end": 983.84, "start": 978.36, "text": " lose the dependency on your data in this routing pattern and it's not really" }, { "end": 988.96, "start": 983.84, "text": " routing anymore, and if you think that you've seen this before somewhere" }, { "end": 997.36, "start": 988.96, "text": " and you think, hey that looks like a feet forward layer from your" }, { "end": 1003.48, "start": 997.36, "text": " very first MLP, then you would be absolutely correct. And I'm not sure why" }, { "end": 1008.8000000000001, "start": 1003.48, "text": " they don't point this out. I have a really hard time believing that" }, { "end": 1014.9200000000001, "start": 1008.8000000000001, "text": " they themselves tricked them into such a thing right here. So I can" }, { "end": 1022.88, "start": 1014.92, "text": " actually show it. So the question is how is this here, this dense" }, { "end": 1028.36, "start": 1022.88, "text": " synthesizer, is this still different or the same as a feet" }, { "end": 1034.1599999999999, "start": 1028.36, "text": " forward layer? And is this different or the same as a feet forward layer? So if" }, { "end": 1040.32, "start": 1034.1599999999999, "text": " you do the math and you look at what a feet forward layer is, in a" }, { "end": 1047.24, "start": 1040.32, "text": " feet forward layer, yi, my i-th entry in the output, is going to be a sum" }, { "end": 1057.6599999999999, "start": 1047.24, "text": " over all the inputs xj multiplied by a weight ij. So whenever I can" }, { "end": 1062.52, "start": 1057.6599999999999, "text": " represent something in this fashion, where I have a sum and then I have like" }, { "end": 1068.6, "start": 1062.52, "text": " a fixed weight that I learn and that has nothing to do with x, that is not" }, { "end": 1074.56, "start": 1068.6, "text": " dependent on x, multiplied by x, then it is a feet forward layer basically, or" }, { "end": 1081.28, "start": 1074.56, "text": " like a fancy feet forward layer. So let's look at the dense synthesizer. What does" }, { "end": 1086.6, "start": 1081.28, "text": " the dense synthesizer do? The dense synthesizer says yi is equal to a sum." }, { "end": 1096.7199999999998, "start": 1086.6, "text": " Okay we're starting off, we're starting off well. And so it says g of xj," }, { "end": 1103.24, "start": 1096.72, "text": " but this g usually is just a weight matrix where we compute the values." }, { "end": 1109.48, "start": 1103.24, "text": " We said this was the values of x. So g is usually just a matrix, let's call it" }, { "end": 1120.28, "start": 1109.48, "text": " vw. And then here we have like some softmax thing. We have some softmax" }, { "end": 1125.6000000000001, "start": 1120.28, "text": " and the softmax is going to be over this dense pattern right here that we" }, { "end": 1143.9199999999998, "start": 1125.6, "text": " described here. And this pattern is going to be f of xj. So no, xi, f of" }, { "end": 1156.6000000000001, "start": 1143.92, "text": " xi. Is that correct? Xi and here is a j. Maybe. Yes, that's correct. And f, okay," }, { "end": 1163.64, "start": 1156.6000000000001, "text": " it's like two, it's two layers, but we can basically say it's like a weight" }, { "end": 1169.24, "start": 1163.64, "text": " matrix. Because ultimately if we learn a neural network or a single layer, it" }, { "end": 1177.68, "start": 1169.24, "text": " doesn't really matter to the discussion here. So let's call this wb xi. 
So you" }, { "end": 1186.68, "start": 1177.68, "text": " see right here we do have a weighted sum over the xjs, but the weight that the" }, { "end": 1194.1200000000001, "start": 1186.68, "text": " weighted sum is using is dependent on xi right here. And therefore you can't" }, { "end": 1201.1599999999999, "start": 1194.12, "text": " represent this as just a feet forward layer right here. Of course if you have" }, { "end": 1207.1999999999998, "start": 1201.1599999999999, "text": " the full dot product attention, then in here would actually be a dot product" }, { "end": 1216.6799999999998, "start": 1207.1999999999998, "text": " between xj and xi, right? So xj transposed xi or something like this. So" }, { "end": 1225.48, "start": 1216.68, "text": " what about this random synthesizer? So the random synthesizer has yi, has a" }, { "end": 1237.8400000000001, "start": 1225.48, "text": " weighted sum over this wv xi, that's the values, softmax over this matrix R. And R" }, { "end": 1243.68, "start": 1237.8400000000001, "text": " is simply this L by L matrix right here. Now you can immediately see that this" }, { "end": 1249.8400000000001, "start": 1243.68, "text": " part right here is static, it doesn't depend on any x and it is learned as a" }, { "end": 1260.2, "start": 1249.8400000000001, "text": " joint function, right? So this, if I just call this w, wij, then I'm back to my" }, { "end": 1272.0800000000002, "start": 1260.2, "text": " formulation, right? I'm sorry, ij. Then I basically have my feet forward layer. So" }, { "end": 1277.08, "start": 1272.08, "text": " the random synthesizer is just a fancy way of writing a feet forward layer. Now" }, { "end": 1279.6399999999999, "start": 1277.08, "text": " of course if you're going to have the softmax you maybe have some different" }, { "end": 1285.96, "start": 1279.6399999999999, "text": " inductive biases in learning it, but ultimately it is a straightforward" }, { "end": 1292.96, "start": 1285.96, "text": " feet forward layer. At least that's what it looks like to me. I am very open to be" }, { "end": 1298.72, "start": 1292.96, "text": " convinced otherwise. Okay so they have this drawing right here on the left you" }, { "end": 1303.72, "start": 1298.72, "text": " see the vanilla transformer, the dense synthesizer in the middle where you kind" }, { "end": 1308.1200000000001, "start": 1303.72, "text": " of learn how to produce this matrix and then route the value through it. And on" }, { "end": 1312.76, "start": 1308.1200000000001, "text": " the right where you simply output this in a learned or actually completely" }, { "end": 1319.2, "start": 1312.76, "text": " random fashion and then route your values through that to the output. Okay" }, { "end": 1325.24, "start": 1319.2, "text": " now the question of course is, okay they also do factorize it, but this is not" }, { "end": 1331.72, "start": 1325.24, "text": " really the... this is more of a point where now you can actually, if you have such a" }, { "end": 1336.84, "start": 1331.72, "text": " matrix or you produce such a matrix, you can then factorize it into sort of" }, { "end": 1340.72, "start": 1336.84, "text": " lower dimensional matrices. 
And that is first of all to save space and it is" }, { "end": 1344.76, "start": 1340.72, "text": " also a regularizer because what you're essentially saying is you're applying an" }, { "end": 1350.16, "start": 1344.76, "text": " inductive prior to say I think these matrices have like some low level" }, { "end": 1356.5600000000002, "start": 1350.16, "text": " structure to them and if you factorize them that's a prior on that" }, { "end": 1362.4, "start": 1356.5600000000002, "text": " exactly. So you can factorize the dense and the random synthesizer into smaller" }, { "end": 1368.0400000000002, "start": 1362.4, "text": " matrices and that will save you parameters. And you can actually also" }, { "end": 1375.0600000000002, "start": 1368.0400000000002, "text": " mix two. So you can for example mix the random and the dense synthesizer. Now you" }, { "end": 1379.0400000000002, "start": 1375.0600000000002, "text": " have to pay attention it's not like an interpolation. If you mix random and" }, { "end": 1383.56, "start": 1379.04, "text": " dense you will have to learn the parameters of the random end of the" }, { "end": 1389.04, "start": 1383.56, "text": " dense synthesizer. So that's going to be like strictly more powerful than either" }, { "end": 1395.72, "start": 1389.04, "text": " one alone. They list everything here where they say the standard dot product" }, { "end": 1400.36, "start": 1395.72, "text": " attention. What we have is we have this formula right here you can actually" }, { "end": 1408.08, "start": 1400.36, "text": " formulate it in their framework. You condition on all the Xj for any Xi and" }, { "end": 1420.4399999999998, "start": 1408.08, "text": " ah see here I wrote this as Xj I was dumb. It should be the entire X. And there" }, { "end": 1424.76, "start": 1420.4399999999998, "text": " is interaction between the tokens and it's going to cost you 2d squared" }, { "end": 1430.48, "start": 1424.76, "text": " parameters. Now parameters are different from computation which if you don't do" }, { "end": 1433.9199999999998, "start": 1430.48, "text": " the dot product you also save a bunch of computation but here they look at a" }, { "end": 1443.4, "start": 1433.92, "text": " number of parameters. So in this random synthesizer you simply output this" }, { "end": 1450.52, "start": 1443.4, "text": " matrix R. It's global. There's no interaction and you are you are it's" }, { "end": 1460.1200000000001, "start": 1450.52, "text": " cost you L squared memory in L squared parameters. Now often in these models L" }, { "end": 1465.56, "start": 1460.12, "text": " and D are actually pretty similar. So L might be something like 512 tokens the" }, { "end": 1470.4799999999998, "start": 1465.56, "text": " length and the dimension right here might also be something like 512. So per" }, { "end": 1477, "start": 1470.4799999999998, "text": " se this is not really a saving in parameters only when you go to the to" }, { "end": 1482.1599999999999, "start": 1477, "text": " the factorized models right here. Can you bring in this K and K is this lower" }, { "end": 1487.6, "start": 1482.1599999999999, "text": " dimension of factorization and if K is much much smaller than L then you save a" }, { "end": 1493.9599999999998, "start": 1487.6, "text": " bunch of parameters. The dense synthesizer formula is like this. This" }, { "end": 1499.3999999999999, "start": 1493.9599999999998, "text": " is how you produce the attention matrix. You condition on XI but not Xj right?" 
}, { "end": 1509.4399999999998, "start": 1499.3999999999999, "text": " For each YI you condition on XI and you do not care about the Xjs. It is" }, { "end": 1516.7199999999998, "start": 1509.4399999999998, "text": " local that means it depends on XI so the routing actually depends on" }, { "end": 1520.44, "start": 1516.72, "text": " the information that goes through but there is no interaction and you're going" }, { "end": 1528.6000000000001, "start": 1520.44, "text": " into D squared plus DL which is also pretty much 2D squared. Or you" }, { "end": 1535.6000000000001, "start": 1528.6000000000001, "text": " go to this lower number here if you choose a good K. Alright now" }, { "end": 1542.3600000000001, "start": 1535.6000000000001, "text": " experiments. So they apply this and we are absolutely stoked how this is going" }, { "end": 1549.76, "start": 1542.36, "text": " to turn out. So they go on machine translation. Now okay before we go into" }, { "end": 1556.6, "start": 1549.76, "text": " the results do you think machine translation is a good or a bad task for" }, { "end": 1563.56, "start": 1556.6, "text": " this model? Okay I think it is a good task for this model. It is a very favorable" }, { "end": 1569.6399999999999, "start": 1563.56, "text": " task for this model. Why? Why is machine translation a favorable task? Well mostly" }, { "end": 1574.3200000000002, "start": 1569.64, "text": " in machine translation if you think about how information is routed. So I" }, { "end": 1584.6000000000001, "start": 1574.3200000000002, "text": " have a sequence of German. Let's call it or English let's call it" }, { "end": 1600.28, "start": 1584.6, "text": " the dog barks. And I have a sequence of German. Der Hund belt. Come on. Hund belt." }, { "end": 1605.1999999999998, "start": 1600.28, "text": " Now okay first of all I know they are only talking about self-attention so" }, { "end": 1608.76, "start": 1605.1999999999998, "text": " this example here actually makes little sense in the actual practical" }, { "end": 1613.36, "start": 1608.76, "text": " applications but I just want to demonstrate why machine translation" }, { "end": 1619.9599999999998, "start": 1613.36, "text": " specifically has... So how would you route information here if you have to route" }, { "end": 1623.32, "start": 1619.9599999999998, "text": " information between the two things? What you would do is pretty" }, { "end": 1629.32, "start": 1623.32, "text": " deterministically do this right? So in machine translation what is very very" }, { "end": 1636, "start": 1629.32, "text": " very often the case is that mostly you're going to align the positions in" }, { "end": 1640.76, "start": 1636, "text": " the same way independent of the input. Specifically here you would always most" }, { "end": 1644.4, "start": 1640.76, "text": " of the time align the beginning with the beginning the end with the end and so on" }, { "end": 1649.08, "start": 1644.4, "text": " because for most languages especially similar languages like English and" }, { "end": 1654.64, "start": 1649.08, "text": " German the order of sentences and number of words per thing you need to express" }, { "end": 1661.32, "start": 1654.64, "text": " is going to be roughly the same. 
So if you did not know about even about what" }, { "end": 1668.6, "start": 1661.32, "text": " the sequences were or you only knew one of them like you only knew there and you" }, { "end": 1672.6399999999999, "start": 1668.6, "text": " have to guess where should information come from well I know in English you" }, { "end": 1680.6799999999998, "start": 1672.6399999999999, "text": " also start with like this what's this called an article yeah it yes this is an" }, { "end": 1685.9199999999998, "start": 1680.6799999999998, "text": " article showing my linguistic skills here you you would also you would also" }, { "end": 1690.9599999999998, "start": 1685.9199999999998, "text": " start with that right you would say I want most information from position zero" }, { "end": 1697.1999999999998, "start": 1690.9599999999998, "text": " obviously I don't care what there what what is there so I and again I know they" }, { "end": 1700.0800000000002, "start": 1697.2, "text": " only do it in self-attention so it actually makes no sense that I have two" }, { "end": 1705.72, "start": 1700.0800000000002, "text": " different languages here but machine translation is probably a task that lends" }, { "end": 1712.64, "start": 1705.72, "text": " itself very much to sort of global globally learned or only partially" }, { "end": 1718.24, "start": 1712.64, "text": " partial observably learned attention patterns because just because of the" }, { "end": 1726, "start": 1718.24, "text": " nature of the task right so let's keep that in mind and go to the go to the" }, { "end": 1731, "start": 1726, "text": " results right here now they first of all what they do is they list the original" }, { "end": 1735.52, "start": 1731, "text": " transformer paper and actually have it here because they have to they have this" }, { "end": 1742.72, "start": 1735.52, "text": " same experiment now this is the kind of transformer we're talking about right" }, { "end": 1747.36, "start": 1742.72, "text": " here and it is notable that this paper only proposes to replace the self" }, { "end": 1752.72, "start": 1747.36, "text": " attention that means the attention that would be within one of these two columns" }, { "end": 1758.32, "start": 1752.72, "text": " and not the attention that goes across from the left to the right right but" }, { "end": 1762.64, "start": 1758.32, "text": " still you can see that the attention that goes from the left to the right then" }, { "end": 1769.84, "start": 1762.64, "text": " in the next layer is going to end up as self-attention information right so my I" }, { "end": 1774.76, "start": 1769.84, "text": " think my argument still counts in the in this case in the machine translation" }, { "end": 1784.56, "start": 1774.76, "text": " case alright so they have this same experiment right here here yes they have" }, { "end": 1791.4, "start": 1784.56, "text": " English German translation and they their base model gets twenty seven point" }, { "end": 1794.8, "start": 1791.4, "text": " three and that's what they evaluate right here they list this twenty seven" }, { "end": 1801.2, "start": 1794.8, "text": " point three but they also say when we train it we get a bit of a higher" }, { "end": 1806.24, "start": 1801.2, "text": " number twenty seven point six seven and especially on English French they get a" }, { "end": 1813.32, "start": 1806.24, "text": " higher number than that but let's stick to English German for now now they also" }, { "end": 1819.96, "start": 1813.32, "text": " do language modeling which the original paper 
didn't do and record the perplexity" }, { "end": 1826.96, "start": 1819.96, "text": " right here okay so the first thing they point out is if we train the synthesizer" }, { "end": 1834.44, "start": 1826.96, "text": " with a fixed random matrix that means we just put a random routing and we do not" }, { "end": 1840.28, "start": 1834.44, "text": " we do not ever change it what's there to learn so if you want to learn something" }, { "end": 1842.8, "start": 1840.28, "text": " in the transformer there's still many things to learn there's the feet" }, { "end": 1851.16, "start": 1842.8, "text": " forward layers right there is the the value encoder and so on so it is it is" }, { "end": 1854.8400000000001, "start": 1851.16, "text": " reasonable to assume that the transformer could sort of learn to just" }, { "end": 1859.76, "start": 1854.84, "text": " handle the attend the routing pattern that is in place that did the rest of" }, { "end": 1865.48, "start": 1859.76, "text": " the model can sort of absorb that shock and interestingly you get on to twenty" }, { "end": 1871.72, "start": 1865.48, "text": " three point nine so almost twenty four blue points and I mean it seems they" }, { "end": 1877.56, "start": 1871.72, "text": " point out that it's fairly close if you look here this twenty four is actually" }, { "end": 1882.76, "start": 1877.56, "text": " pretty far away it's the worst the worst baseline right here in the original" }, { "end": 1889.16, "start": 1882.76, "text": " paper despite net had this twenty four blue I mean it's I guess it's cool to" }, { "end": 1895.64, "start": 1889.16, "text": " point out that it works as such but you know in these tasks actually many things" }, { "end": 1901.44, "start": 1895.64, "text": " work right with with if you distill this down to some sort of a bag of words" }, { "end": 1906.04, "start": 1901.44, "text": " model and so on I'm pretty sure you can get pretty pretty good results as well" }, { "end": 1912.28, "start": 1906.04, "text": " and you can get you know fairly you can go to twenty four blue actually have no" }, { "end": 1916.48, "start": 1912.28, "text": " clue of this field but I just want to point out just because the number is in" }, { "end": 1923.28, "start": 1916.48, "text": " the same ballpark doesn't mean that it is very astonishing it's maybe just you" }, { "end": 1929.68, "start": 1923.28, "text": " have so many parameters that the rest of the model can sort of absorb this shock" }, { "end": 1934.2, "start": 1929.68, "text": " of not of not being able to learn this and can just handle whatever pattern you" }, { "end": 1936.48, "start": 1934.2, "text": " put there it can just kind of work with it" }, { "end": 1942.44, "start": 1936.48, "text": " that's many people have observed if you like put just random junk in the lower" }, { "end": 1947.04, "start": 1942.44, "text": " layers of a CNN like random filters never train them you can still the rest" }, { "end": 1952.04, "start": 1947.04, "text": " of the network can adapt so that's basically this effect right here I don't" }, { "end": 1956.76, "start": 1952.04, "text": " think it's a testament to we don't need the dot product attention it's more like" }, { "end": 1965.04, "start": 1956.76, "text": " this this just happens in deep learning then however they say if we now learn" }, { "end": 1971.52, "start": 1965.04, "text": " this one matrix so we learn this routing but globally we get into twenty seven" }, { "end": 1977.76, "start": 1971.52, "text": " point two seven blue and this already 
seems fairly close right and you mainly" }, { "end": 1982.68, "start": 1977.76, "text": " need to compare with this number right here because that's actually the same" }, { "end": 1989.8799999999999, "start": 1982.68, "text": " training run and so on so but still it is quite it is quite a bit away it's" }, { "end": 1995, "start": 1989.8799999999999, "text": " point four blue points away and that is a sort of significant difference I think" }, { "end": 2005.8, "start": 1995, "text": " then they go further if they go to the dense synthesizer you can see right here" }, { "end": 2014.72, "start": 2005.8, "text": " the model size is lower than this one and they get twenty seven point four three" }, { "end": 2022.52, "start": 2014.72, "text": " now they get even closer right and they are actually on par if they mix random" }, { "end": 2031.04, "start": 2022.52, "text": " and dense right here and you can see that it's also almost the same amount of" }, { "end": 2038.8799999999999, "start": 2031.04, "text": " parameters and when they mix these random and vanilla so what they now have" }, { "end": 2045.04, "start": 2038.8799999999999, "text": " is the dot product attention plus a purely global feed-forward sort of like" }, { "end": 2051.92, "start": 2045.04, "text": " a bias of what to route where then they can out compete this original model but" }, { "end": 2058.16, "start": 2051.92, "text": " also now they have more parameters right so this model you would expect it to be" }, { "end": 2062.48, "start": 2058.16, "text": " you know strictly better than either of the two alone and it is and it is" }, { "end": 2068.96, "start": 2062.48, "text": " actually astounding that the synthesizer that mixes the vanilla with the dense" }, { "end": 2074.36, "start": 2068.96, "text": " even though it has even more parameters it does worse so with these sort of" }, { "end": 2078.8, "start": 2074.36, "text": " results especially then you go fiddle with like point one you know between" }, { "end": 2084.5600000000004, "start": 2078.8, "text": " this and that I know I said point four blow is a lot so point one must be" }, { "end": 2089.92, "start": 2084.5600000000004, "text": " something and it surely is but also it's always the question of how many hyper" }, { "end": 2096.32, "start": 2089.92, "text": " parameter tunings you put into something like this and generally you you should" }, { "end": 2101.7200000000003, "start": 2096.32, "text": " always sort of look at this if you were a researcher had to put the best possible" }, { "end": 2106.52, "start": 2101.7200000000003, "text": " numbers here what would you do and then you correct for that in your mind for" }, { "end": 2112.68, "start": 2106.52, "text": " how much it might actually work if you if you are to if you are to you know go" }, { "end": 2119.6, "start": 2112.68, "text": " ahead and and train that on your data but nevertheless it gives some cool" }, { "end": 2127.12, "start": 2119.6, "text": " insights right what I'm a bit confused by is that if you sort of look at the" }, { "end": 2131.6, "start": 2127.12, "text": " original paper and you look at the perplexities and they have a table down" }, { "end": 2135.88, "start": 2131.6, "text": " here where they compare a bunch of their instantiations of their model and you" }, { "end": 2142.56, "start": 2135.88, "text": " compare the comparison the perplexity on a language modeling task and the" }, { "end": 2147.04, "start": 2142.56, "text": " perplexity here seems to correlate extremely well with the blue score" 
}, { "end": 2152.08, "start": 2147.04, "text": " right where as the perplexity here if you look at the perplexities over here" }, { "end": 2159.2400000000002, "start": 2152.08, "text": " they do correlate but I somehow I have the feeling they don't really correlate" }, { "end": 2164.8, "start": 2159.2400000000002, "text": " as much here which sort of speaks to the fact that you're going to see in the" }, { "end": 2171.2000000000003, "start": 2164.8, "text": " rest of the paper that these models they tend to sometimes be able to do well but" }, { "end": 2177.6400000000003, "start": 2171.2000000000003, "text": " then other times not and it's not really clear super clear when so look at this" }, { "end": 2184.1600000000003, "start": 2177.6400000000003, "text": " for example they now apply their models to summarization and dialogue generation" }, { "end": 2189.44, "start": 2184.1600000000003, "text": " right these are two tasks where you need to output text and you can see that the" }, { "end": 2194.1600000000003, "start": 2189.44, "text": " results are all over the place so in this metric Rouge to and Rouge is sort" }, { "end": 2199.44, "start": 2194.16, "text": " of an engram overlap metric between gold standards and what you produce in this" }, { "end": 2206.04, "start": 2199.44, "text": " metric the original transformer is best but in Rouge one this synthesizer mixed" }, { "end": 2211.72, "start": 2206.04, "text": " here is the best and in Rouge L this one is the best and in dialogue generation" }, { "end": 2216.56, "start": 2211.72, "text": " all of them are actually not as good as this one right here where it's just the" }, { "end": 2221.52, "start": 2216.56, "text": " dense which is strictly less powerful than the ones on the bottom but so as" }, { "end": 2226.8, "start": 2221.52, "text": " you as you can see yeah I think what you should take away from this is that it it" }, { "end": 2233.04, "start": 2226.8, "text": " is interesting that it sometimes works but it seems to be a fair bit of" }, { "end": 2240.92, "start": 2233.04, "text": " shakiness to these to these results okay now they go on and they test this on" }, { "end": 2246.16, "start": 2240.92, "text": " super glue and this is a benchmark so glue and super glue they consist of" }, { "end": 2252.08, "start": 2246.16, "text": " these different tasks right here and now we are out of the text generation game" }, { "end": 2256.44, "start": 2252.08, "text": " we are in the game of for example you have two sentences and you need to" }, { "end": 2262, "start": 2256.44, "text": " decide which which one is like which one entails the other or are they" }, { "end": 2266.2799999999997, "start": 2262, "text": " contradictory or things like this so it's more of a late say a classification" }, { "end": 2271.44, "start": 2266.2799999999997, "text": " task and people apply different models so it's no longer a text generation task" }, { "end": 2276.2400000000002, "start": 2271.44, "text": " so they switch model instead of the vanilla transformer from the attention" }, { "end": 2282.48, "start": 2276.2400000000002, "text": " is all you need paper they now go on to the t5 the text to text transformer and" }, { "end": 2287.08, "start": 2282.48, "text": " they change they simply take the architecture and they change the" }, { "end": 2294.44, "start": 2287.08, "text": " attention in there with their attention and you can see right here that the" }, { "end": 2301.12, "start": 2294.44, "text": " results are quite different than before so in every 
single case either the t5" }, { "end": 2307.3599999999997, "start": 2301.12, "text": " model the base model with the dot product attention is the best model or" }, { "end": 2315.08, "start": 2307.3599999999997, "text": " the synthesizer but including V so plus V means plus vanilla means it also has" }, { "end": 2322.56, "start": 2315.08, "text": " the dot product attention plus this learned thing right here okay so our is" }, { "end": 2326.44, "start": 2322.56, "text": " now the learned I think the learned right I would be surprised if it was the" }, { "end": 2332.52, "start": 2326.44, "text": " random random but it could also be but in any case it's strictly better right" }, { "end": 2337.88, "start": 2332.52, "text": " it's strictly more powerful model and the only way you can actually perform" }, { "end": 2341.92, "start": 2337.88, "text": " worse is when you know it's too many parameters and so on and they kind of" }, { "end": 2346.04, "start": 2341.92, "text": " take stuff from each other and there is effects where more parameters can hurt" }, { "end": 2352.6, "start": 2346.04, "text": " you but never is any model that doesn't have the dot product attention on on top" }, { "end": 2360.8399999999997, "start": 2352.6, "text": " and these authors here argue that that's this can be largely attributed to the" }, { "end": 2365.7599999999998, "start": 2360.8399999999997, "text": " fact that the encoder self-attention in the t5 setting also functions as a" }, { "end": 2372.68, "start": 2365.7599999999998, "text": " cross-sentence attention so what do they mean here if in the t5 is just as I" }, { "end": 2379.04, "start": 2372.68, "text": " understand it this is just like an encoder like Bert so imagine imagine" }, { "end": 2384.04, "start": 2379.04, "text": " maybe this is Bert right what Bert is simply an encoder only transformer that" }, { "end": 2390, "start": 2384.04, "text": " means you here you put in your sequence and out again comes a sequence and you" }, { "end": 2393.88, "start": 2390, "text": " have like a special token that you use for classification and so on so this is" }, { "end": 2401.04, "start": 2393.88, "text": " less when you have to generate text but more when you want to classify text or" }, { "end": 2406.2799999999997, "start": 2401.04, "text": " things like this find something in a text and what you would do if you have" }, { "end": 2409.28, "start": 2406.28, "text": " two sentences you need to decide something about them you put the first" }, { "end": 2415.4, "start": 2409.28, "text": " sentence here and then you say you put like a separator token here this usually" }, { "end": 2419.32, "start": 2415.4, "text": " called like a separator token and then you put the second sentence here is you" }, { "end": 2423.26, "start": 2419.32, "text": " just concatenate them and you let them go into the transformer and they argue" }, { "end": 2428.2400000000002, "start": 2423.26, "text": " that if you do self-attention on this entire sequence then you get attention" }, { "end": 2433.2000000000003, "start": 2428.2400000000002, "text": " patterns like this and this is sort of like cross-attention between sequences" }, { "end": 2437.96, "start": 2433.2, "text": " right it's not really self-attention and that's why their method doesn't work" }, { "end": 2443.24, "start": 2437.96, "text": " because it basically deals with self-attention but I'm not really buying" }, { "end": 2447.7999999999997, "start": 2443.24, "text": " that argument I mean if this is a sequence it is if this is one 
sequence" }, { "end": 2452.72, "start": 2447.7999999999997, "text": " this is self-attention and if you were going to argue that out of the blue a" }, { "end": 2461.3399999999997, "start": 2452.72, "text": " token in your case like in your original formulation can simply you know just by" }, { "end": 2465.52, "start": 2461.34, "text": " looking at itself know where where which position it wants the information from" }, { "end": 2469.92, "start": 2465.52, "text": " and certainly here this token could also learn that it wants information from" }, { "end": 2474.08, "start": 2469.92, "text": " over here or from the first word here I don't I don't really see the difference" }, { "end": 2480.6400000000003, "start": 2474.08, "text": " maybe maybe you need to somehow standardize where this separator token" }, { "end": 2485.2400000000002, "start": 2480.6400000000003, "text": " is so that it's always in the same place and that the second sentence always" }, { "end": 2489.56, "start": 2485.2400000000002, "text": " starts at the same place but if you have that then I really don't see any" }, { "end": 2495.48, "start": 2489.56, "text": " difference in the argument you can make here that this shouldn't work as much as" }, { "end": 2501.24, "start": 2495.48, "text": " the others what I think is happening is that this task is simply involves more" }, { "end": 2506.36, "start": 2501.24, "text": " difficult reasoning involves more routing of information like dynamic" }, { "end": 2510.2799999999997, "start": 2506.36, "text": " routing that's actually dependent on what's in the tasks rather than" }, { "end": 2515.88, "start": 2510.2799999999997, "text": " something like machine translation which most of the time has some global" }, { "end": 2523.7200000000003, "start": 2515.88, "text": " routing bias like like some some pattern that works pretty well across all alright" }, { "end": 2529.6, "start": 2523.7200000000003, "text": " so the last part here is where they kind of introspect the model and in the in" }, { "end": 2534.92, "start": 2529.6, "text": " the first thing they say okay we look at the distribution of weights so these are" }, { "end": 2540.7200000000003, "start": 2534.92, "text": " the weights the weights in the decoder at the beginning of training and you can" }, { "end": 2546.9599999999996, "start": 2540.72, "text": " already see that the standard transformer weights are and the" }, { "end": 2551.9199999999996, "start": 2546.9599999999996, "text": " synthesizer weights are different from the sorry the dense synthesizer weights" }, { "end": 2556.24, "start": 2551.9199999999996, "text": " are different from the random synthesizer weights and this probably is" }, { "end": 2559.52, "start": 2556.24, "text": " mostly due to the fact of how you initialize like these deep learning" }, { "end": 2564.04, "start": 2559.52, "text": " frameworks if you have a matrix they will look at what's this dimension" }, { "end": 2568.3999999999996, "start": 2564.04, "text": " what's this dimension and calculate how they have to initialize it such that" }, { "end": 2573.36, "start": 2568.4, "text": " sort of the the total norm of a random vector that goes through stays the same" }, { "end": 2578.52, "start": 2573.36, "text": " sorry the vector would go through like it would go in here and out there so you" }, { "end": 2584.2000000000003, "start": 2578.52, "text": " see if it changes dimension then if you just randomly initialize with all the" }, { "end": 2590.6800000000003, "start": 2584.2000000000003, "text": " same 
like every matrix with the same number then like with the normal" }, { "end": 2596.52, "start": 2590.6800000000003, "text": " distribution then the in this case the vector would gain in norm and to account" }, { "end": 2602.16, "start": 2596.52, "text": " for that you initialize the matrices such that the vector norms approximately" }, { "end": 2606.16, "start": 2602.16, "text": " stay the same and this is why there I guess why there are different" }, { "end": 2612.96, "start": 2606.16, "text": " initializations here and you can see this at the end of the training now these" }, { "end": 2617.08, "start": 2612.96, "text": " in in different layers right here it's pretty much always the same pattern that" }, { "end": 2621.88, "start": 2617.08, "text": " they they say they just remark it so this this is what I find weird they just" }, { "end": 2627.6800000000003, "start": 2621.88, "text": " say what the graphs show they don't interpret it like I would expect" }, { "end": 2632.76, "start": 2627.6800000000003, "text": " something like oh this pattern is exactly what we would expect from our" }, { "end": 2637.04, "start": 2632.76, "text": " model because something something something right like if they claim that" }, { "end": 2642.52, "start": 2637.04, "text": " this attention is being able to be learned I just don't see why they do this" }, { "end": 2646.76, "start": 2642.52, "text": " stuff they simply point out oh yeah this this is higher here and this is higher" }, { "end": 2652, "start": 2646.76, "text": " here but I don't even see that as too interesting given that is this is how" }, { "end": 2656.1200000000003, "start": 2652, "text": " you initialize it like if you shift everything to the left of it and you" }, { "end": 2661.36, "start": 2656.1200000000003, "text": " know this is a wall so it like this piles up here then this is exactly what" }, { "end": 2667.7200000000003, "start": 2661.36, "text": " turns out I don't I don't see you know what what this is supposed to mean" }, { "end": 2671.0400000000004, "start": 2667.7200000000003, "text": " especially since they don't make any claim of what it is supposed to mean and" }, { "end": 2676.5600000000004, "start": 2671.0400000000004, "text": " the same here they say the effect of the number of heads okay we we investigate" }, { "end": 2681.68, "start": 2676.56, "text": " the effect and the number of heads on the random synthesizer models you know" }, { "end": 2687.96, "start": 2681.68, "text": " and they train the number of heads now somewhere in the text they say I remember" }, { "end": 2694.12, "start": 2687.96, "text": " they say since you know since we don't dynamically route it is very important" }, { "end": 2698.16, "start": 2694.12, "text": " for our models very crucial to have many attention heads right such that" }, { "end": 2702.48, "start": 2698.16, "text": " basically you don't have one routing pattern you have many routing patterns" }, { "end": 2707.76, "start": 2702.48, "text": " that you learn globally so they say it's very important for our model to have" }, { "end": 2712.2400000000002, "start": 2707.76, "text": " many attention heads and I guess that's what they're trying to demonstrate here" }, { "end": 2718.72, "start": 2712.2400000000002, "text": " but again they simply say what's happening they don't interpret it and" }, { "end": 2724.92, "start": 2718.72, "text": " and they don't compare it to anything they just you know put it here they just" }, { "end": 2731.4, "start": 2724.92, "text": " put the number and I don't 
like is this good is this is this bad can you compare" }, { "end": 2736.88, "start": 2731.4, "text": " it to something and also here in the so here in the original paper they do the" }, { "end": 2741.7200000000003, "start": 2736.88, "text": " same thing here as you can see the number H is the heads and they do ablate" }, { "end": 2746.52, "start": 2741.7200000000003, "text": " this but at the same time they adjust the dimensions of the key and value" }, { "end": 2751.04, "start": 2746.52, "text": " vectors such that in total they have the same amount of parameters right so they" }, { "end": 2756.6800000000003, "start": 2751.04, "text": " can really investigate is one big attention head better or worse than many" }, { "end": 2762, "start": 2756.68, "text": " small attention heads is there a trade-off and they find here that there" }, { "end": 2766.08, "start": 2762, "text": " is a bit of a trade-off like there is a sweet spot you don't want too many don't" }, { "end": 2772.52, "start": 2766.08, "text": " want too much because they get too small something like this but first we like we" }, { "end": 2776.3599999999997, "start": 2772.52, "text": " don't know whether or not have they simply changed the heads but left every" }, { "end": 2780.24, "start": 2776.3599999999997, "text": " other parameter the same or have they also adjusted the dimension because if" }, { "end": 2784.6, "start": 2780.24, "text": " they haven't adjusted the dimensions then this this increase would be" }, { "end": 2789.08, "start": 2784.6, "text": " absolutely expected because you now have more parameters and if they have" }, { "end": 2795.52, "start": 2789.08, "text": " adjusted then it can we compare this to you know something because this here is" }, { "end": 2801.48, "start": 2795.52, "text": " the this is the t5 small this is not the original transformer like is this big is" }, { "end": 2809.64, "start": 2801.48, "text": " this small and what does it say about the claim that you made that the number" }, { "end": 2814.2, "start": 2809.64, "text": " of heads is so important for your model can you validate this using this so it's" }, { "end": 2819.3199999999997, "start": 2814.2, "text": " just a bit of like this entire page here it's just they just measure some things" }, { "end": 2825.08, "start": 2819.3199999999997, "text": " and then they state them here and you're somehow supposed to guess what they mean" }, { "end": 2833.2799999999997, "start": 2825.08, "text": " by stating that here okay but that was enough for me ranting so they give some" }, { "end": 2838.64, "start": 2833.2799999999997, "text": " supplementary material right here but in essence what I like about the paper is" }, { "end": 2844.12, "start": 2838.64, "text": " sort of the thinking that goes into this thinking outside the box asking the" }, { "end": 2847.96, "start": 2844.12, "text": " fundamental questions about these models do we really need this what do they do I" }, { "end": 2853.7999999999997, "start": 2847.96, "text": " don't think it's super well investigated really from a scientific point like the" }, { "end": 2858.04, "start": 2853.7999999999997, "text": " formulation of hypotheses it simply trains these things and then make some" }, { "end": 2862.3599999999997, "start": 2858.04, "text": " claims but the claims interact you know with the number of parameters here and so" }, { "end": 2869.04, "start": 2862.3599999999997, "text": " on so and they're sort of noisy all around and of course the fact that this" }, { "end": 2874.2799999999997, 
"start": 2869.04, "text": " thing here turns out to be a fully connected layer in disguise is also" }, { "end": 2879.72, "start": 2874.2799999999997, "text": " pretty funny but I get it it's a fan it's like it's it's more it's not exactly" }, { "end": 2885.88, "start": 2879.72, "text": " the same thing but it you know yeah all right so that was my take on this paper" }, { "end": 2891.68, "start": 2885.88, "text": " if you have a different one let me know in the comments for sure I read all of" }, { "end": 2898.8, "start": 2891.68, "text": " them and at least I try and I've always succeeded so far all right I'll see you" }, { "end": 2901.52, "start": 2898.8, "text": " next time bye bye" } ]
LfUsGv-ESbc
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
[Code] How to use Facebook's DETR object detection algorithm in Python (Full Tutorial)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "facebook", "fair", "fb", "facebook ai", "object detection", "coco", "bounding boxes", "hungarian", "matching", "bipartite", "cnn", "transformer", "attention", "encoder", "decoder", "images", "vision", "pixels", "segmentation", "classes", "stuff", "things", "attention mechanism", "squared", "unrolled", "overlap", "threshold", "rcnn", "code", "pytorch", "colab", "notebook", "ipython", "python", "torch", "hub", "torchvision", "bounding box", "image", "computer vision" ]
Watch my as I struggle my way up the glorious path of using the DETR object detection model in PyTorch. Original Video on DETR: https://youtu.be/T35ba_VXkMY Their GitHub repo: https://github.com/facebookresearch/detr My Colab: https://colab.research.google.com/drive/1Exoc3-A141_h8GKk-B6cJxoidJsgOZOZ?usp=sharing OUTLINE: 0:00 - Intro 0:45 - TorchHub Model 2:00 - Getting an Image 6:00 - Image to PyTorch Tensor 7:50 - Handling Model Output 15:00 - Draw Bounding Boxes 20:10 - The Dress 22:00 - Rorschach Ink Blots 23:00 - Forcing More Predictions 28:30 - Jackson Pollock Images 32:00 - Elephant Herds Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Howdy ho, how's it going? So today we are going to try out DETR, the end-to-end object detection with transformers from Facebook AI Research. And they have a GitHub repo and they pretty much give you everything, like the model, the pre-trained weights and so on. So today we're going to check out how easy it is to get started with that. So in order to do that they have like a colab, but we won't look at it too much. I've glanced at it and we'll basically see how far we can go without looking at it too much and how easy that is. So what I've done is I've spun up a colab that I will share at the end, and I've imported torch and just loaded the model so you don't have to wait for that to happen. So I've loaded that up and now we have it in the cache. So now we can basically go ahead and load an image into the model and try to detect objects in the image. So first of all this is super easy, right? You simply load this from torch hub. It's kind of like the TensorFlow hub. You simply give the name of the model, you say I want the pre-trained please. Shag-a-boom! You now have a model. So if we look at that model, this is going to be this entire DETR model right here with all the transformer and ResNet and whatnot. Okay, this is almost a bit too much right here. So what we want is an image. So let's go find an image. Where better to find an image than Google? So let's find an image of dogs, because dogs is one of the classes in this COCO dataset. This one's nice, right? Okay, so we want the image address. We want to load it in here somehow. So let's say the URL is... Let's make this into some sort of like an input thing where we can paste the URL right here. Okay, there we go. So we have this right here and that's the URL. All right, no, that's not the URL at all. Is it? Cool, better. Now we need to load this. For that we're gonna use the requests library. Always a pleasure. Requests, requests. So the way to load a binary file is you can put the URL here and you can say streamed here. I glanced this from the other thing, and the raw entry will get you the bytes. No, oh sorry. Get URL, streamed. Stream. Yeah, so this will get you sort of the bytes of the image, and then you just say image.open, and of course we need the image from the PIL library, the Python Imaging Library. So import image. We got that and we can open that image up, and with a bit of luck... yeah, yeah. So this model expects, I think the COCO dataset is 640 by 480 images, but if you can see right here, and we're going to take a quick glance at their transforms, they resize it to 800, so we're gonna steal that part right here. Some people last time found it really funny that I called copy pasting "to go serage", so from now on we'll just call it "seraging". What we also need are the class labels, because that's defined in the COCO dataset, right? So these are the class labels. Let's take those. And okay, so this T here, these are torchvision transforms. We're gonna need that. So if you don't know torchvision, it's kind of an addition to PyTorch that just helps you with images and has a lot of datasets, and these transforms are really helpful because... so let's call this image... because you can, you know, resize, but they have much more, like random cropping and rotating images and so on, pretty much everything you need for pre-training. And this here is just the standard ImageNet, I believe the ImageNet normalization, so these are the means and these are the standard deviations, so you can see that.
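(To keep track of what we have so far, here it is as one rough sketch. This isn't the exact notebook: the torch hub entry point is the one from the DETR repo, the URL is just a placeholder for whatever image address you paste in, and the 800 by 600 resize stands in for the transforms we just glanced at.)

```python
import requests
import torch
import torchvision.transforms as T
from PIL import Image

# Load the pre-trained DETR model straight from torch hub.
model = torch.hub.load('facebookresearch/detr', 'detr_resnet50', pretrained=True)

# Fetch an image; stream=True plus .raw gives a file-like object of the
# raw bytes that PIL can open directly.
url = 'https://example.com/dogs.jpg'   # placeholder: paste any image address
img = Image.open(requests.get(url, stream=True).raw)
img = img.resize((800, 600))           # PIL resize takes (width, height)

# Tensor conversion plus the standard ImageNet normalization
# (per-channel means and standard deviations).
transform = T.Compose([
    T.ToTensor(),
    T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
```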
So this is what the ImageNet normalization looks like. So now, in terms of the transforms right here, for example, what we need is the resize to 800, and I believe if you rescale the 640 to 800 you get 600 here. Fairly sure. And then let's display it, just because we can. It's a bit squished, but we don't care. And let's put that up here so we only need to execute it once. Nice. So from now on it should be a breeze. So what these transforms do is they resize the image. We don't need that anymore. They make it into a tensor and then they normalize by that. So if we run our image through this, because our image right now is this PIL image, right? So our image is this PIL image, but if we run it through the transforms then we'll get a tensor. So that's pretty cool. So the model, as it is a deep learning model, it expects batches. So we'll unsqueeze that in the first dimension and then we get batches. So shape, let's see... we don't have unsqueeze. No, of course we don't. So this is one image of three channels of 600 by 800. So the y and x coordinates, I guess, are shifted. Yes, in PyTorch. Cool. So we'll call this our image tensor. Now we just need to put it into the model. So model, we put that in there. And since we don't... let's actually, up here, put the model in eval mode. I don't know if that's already done, but you know, you can never be sure enough that the batch norms aren't... but I think it probably doesn't have batch norms. Okay, you're not utilizing the GPU. We'll do that. We'll do that. Thanks. So, how do we use the GPU? We put our model on the GPU. Model equals model.cuda. Yes, yes, yes. I think so. This is gonna work. Okay. We're gonna come back to this later. So we forward our image, of course we also need that on the GPU. And this worked. This worked. This worked. Nice. Okay. And since this is just for evaluation, we should probably go with no grad right here, because we don't need this whole gradient stuff if we do that. Okay. I'm dumb. There you go. And nothing happens, of course, because we need to capture the output somehow. Let's look at that. Output. Wow. Wow. Just wow. So the output is a dictionary, right, because we get back class labels and bounding boxes. So let's look at the pred boxes. Let's look at that tensor. That's a tensor. Very nice. Let's look at its shape. Let's not print giant tensors anymore. Cool. So since this was a batch of one, we should probably go with the zero. And you can see right here, there are a hundred bounding boxes and each one has four numbers. And if you go with the other thing that's in there, the logits, then you'll see that there should also be a hundred logits, and hello, there should be a hundred logits, and each one is of size 92, because there are 92 different classes. 92. We'll see about that. Well, one is going to be the nothing class, right? By the way, how many classes do we have? We have 91 classes. Okay. Cool. We can deal with that. All right. So what are we going to do next? What we want to do is, for each of the logit predictions, we want to find which class it corresponds to. So what we're going to do is we're going to take the argmax of the last dimension. So you can see here, almost all of these things correspond to class 91, and class 91 is not in our classes, because our classes are only length 91. So that must be the nothing class. So what we can technically do is, for logits and boxes in... let's just zip them together. And like this. Okay. Class is... oops. Class is the logits argmax.
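(Continuing the sketch from above, the forward pass plus this decoding loop would look roughly like this. Here CLASSES stands for the list of 91 COCO label strings from the notebook, so the extra last column, index 91, is the nothing class.)

```python
# Put the model on the GPU in eval mode, add a batch dimension to the image
# tensor, and run a forward pass without gradients (inference only).
model = model.cuda().eval()
img_tensor = transform(img).unsqueeze(0).cuda()   # shape (1, 3, 600, 800)

with torch.no_grad():
    outputs = model(img_tensor)

logits = outputs['pred_logits'][0]   # (100, 92): 100 predictions, 92 classes
boxes = outputs['pred_boxes'][0]     # (100, 4): one normalized box each

for logit, box in zip(logits, boxes):
    cls = logit.argmax().item()
    if cls >= len(CLASSES):          # index 91 is the "nothing there" class
        continue
    print(CLASSES[cls], box)
```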
If that's 92, or let's say, if that's larger than the length of our classes, we'll just skip it for now. Okay. So that should work somehow. And if not, then our label should be the class index right here. So let's just see what the detector detects right here. It detects nothing. Why does it detect nothing? That doesn't seem good. What are we doing wrong? We zip together the logits... oh yeah, of course, we still need the zeroth entry. There we go. Cool. So, so we can delete this. And now, finally, beautiful dogs, two dogs detected. Excellent. So now, for each of these dogs, we want the bounding box. Okay. So now we somehow need to think of how we are going to draw this on an image. And well, let's actually make a copy of that image, because I don't really trust myself, and then at the end of this, we're just going to display that image, right. Now actually, the reason I make a copy is because in this pillow library, you can actually draw on these images, and we're going to use that to draw these bounding boxes. So for that, we need an ImageDraw, if I remember correctly. And I think later, we also want some text, so we need an ImageFont. Yes. All right. So let's draw a bounding box right here. So first of all, let's look at that bounding box. Let's call this box "box", print box dot shape and break right here. What's happening? Let's not do this right now. So this is a box of size four. Now this could be two things. It could be x zero, y zero, x one, y one, so the two corner points, kind of the boundaries, or it could be x, y, width, height. Now from the paper, I know that they predict the center and the width and the height. So I'm going to go with that, and I'm just going to guess that it's like x, y, w, h, and not some other way around. If this is a bad guess, then yeah, we'll see. We can just print out one of these boxes. And honestly, no clue, that looks reasonable. Oh, by the way, we should scale that up. Yeah. So these are normalized coordinates, probably between zero and one. So we should scale that up. So we should probably scale the x coordinates by 800 and the y by 600. So let's do it. So first of all, we scale our box by 800 in the x, and here is a y, and the width is the x direction and this is the y direction. Boom. Okay. We should probably get that on CPU. We'll just hack together a bunch of things right here. Okay, so now this isn't correct. So our x and y and w and h are going to be this box. So now we need to actually draw on the image. We're going to do that. So let's first go: x zero, x one is x minus w half, x plus w half; y zero, y one is the same for y, with h, plus h half. Coolio. Now we need an ImageDraw object. So I think you draw on this image. So whatever you draw on the draw object will end up on the image. So we can use that to draw a bounding box, and let's just quickly look it up. So, PIL Python draw rectangle, maybe. There we go. Okay, so there's this rectangle. Yeah, there's the rectangle function. And you can see, you put in a shape, x y here, and width height, like this. Wait, for real? We wouldn't even need to transform it. I'm pretty sure you can go x... I thought I remember you could do the different thing as well. But it's called rectangle. Okay, so let's do that. So draw rectangle, and we'll go... we'll go x zero, or we'll go x, y, width, height. Let's display that down here. Yeah, that looks nothing like we want. But it's, you know, it's a start. Maybe actually we need the other thing here. We need x zero, y zero, x one, y one. Yes, yes, doggy. Okay, we still have the break in here. Now we get both dogs. Nice.
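(As one rough helper, the scaling and corner conversion we just hacked together could look like this. The 800 and 600 are the resized image dimensions from before; CLASSES, logits and boxes come from the sketches above, and box_to_corners is just a name I'm making up for it.)

```python
from PIL import ImageDraw

def box_to_corners(box, img_w=800, img_h=600):
    """DETR boxes are (center_x, center_y, width, height), normalized to
    [0, 1]. Scale to pixels, convert to the (x0, y0, x1, y1) corners PIL wants."""
    cx, cy, w, h = box.cpu().tolist()
    cx, w = cx * img_w, w * img_w
    cy, h = cy * img_h, h * img_h
    return (cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2)

img2 = img.copy()                # draw on a copy, don't trust ourselves
draw = ImageDraw.Draw(img2)
for logit, box in zip(logits, boxes):
    if logit.argmax().item() >= len(CLASSES):   # skip "no object" predictions
        continue
    draw.rectangle(box_to_corners(box))         # default styling for now
```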
Okay, let's do... I think fill. Yes. Red. And let's go for width five or so. Five seems like a good width. Oh god, five is a terrible width. Oh, it's not fill. I think it's outline. Yeah, yeah, yeah. Okay. Okay. Let's still go with five. Cool, we got our dogs. Now we need to put like some snappy text labels. I think there is actually a PIL ImageDraw text. I think that exists, because I've seen this font thing. Yeah, exactly. So you need the font thing. Get a font in there. And then, yeah, exactly, you could put a text like this. Okay, so you probably need the x and y coordinates of the text. So let's do that. W dot text. And let's just go with x and y right here, put it right in the middle. And the text is going to be our label, of course. And we want the fill, that's now going to be the color of the text. Let's go with white, and the font... we're going to load some font right here. Font dot... how are we doing this? True type, true type. Okay. Ah, no, no cheating. Let's just go with regular fonts. It won't look as fancy, but we'll be fine. So where, where is our text? You see it? I don't see it. Red, let's make it red. Yes, there we go. Okay, so it wasn't red enough. This should work. Did we just not see it? I'm dumb enough. Cool. So we have two dogs. How easy was that? Actually, we wasted the most time with like bounding boxes and stuff. Absolutely cool. Right. Okay, so now we can have some fun with it. I'm going to scale this down for a bit, because you don't need to see the actual code anymore so much, so you can see the image more. So we'll go to the images. And the first thing I want to do is the dress. What does this think of the dress? Okay, so we'll copy that, and we'll go into our colab and just paste this right here. But a boom, but a beam, sounds nice. And what is wrong? The size of a tensor must match the size of a tensor. We did something wrong. Transform image. Our image is this. Maybe this is like an RGBA image. I think if this is RGBA, we should just convert it to like an RGB. Pretty sure you can do something like this right here. This should work. If it has an alpha channel, then that will remove it. Yes, now it works. Okay, let's see what the model thinks of this. Okay, apparently there's a car and there's a surfboard and there's a person and there's a person. Nice, see? Well, we didn't figure out whether the dress was blue or white or gold. It was just a person. Now you could actually threshold by how sure you are of a given class. But where's the fun in that? So let's go further. Let's do some Rorschach inkblots, because those are always lots and lots of fun. So which one should we go for? This one looks like fun. So we'll put this into here. And it's astonishing, right? This COCO dataset, it only has these 90 classes. Like, it doesn't have anything else. So it's a cake. And this here, what is it? Okay, we'll have to go maybe with blue. What is it? Stop sign. Okay, but so you might think, what if we want more? Like, what if we want more predictions? So there is a hack, right? Right now the model can always assign mass to this not-a-class thing, like right here, this class 91, in order for it to say, I don't think there's anything there. But generally we have 100 predictions, right? So you see where this is going. So yes, let's change it up a bit. And let's go here. Let's first extract these tensors and boxes.
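(The hack we're about to build, sketched out in one piece: cut off the no-object column, softmax, take each prediction's best-class probability, and keep the top five. Same assumed names as in the sketches above.)

```python
logits = outputs['pred_logits'][0]            # (100, 92)
boxes = outputs['pred_boxes'][0]              # (100, 4)

# Drop the last column (the no-object class) and renormalize with a softmax,
# so every one of the 100 predictions has to commit to a real class.
probs = logits[:, :len(CLASSES)].softmax(-1)  # (100, 91)
scores, _ = probs.max(-1)                     # confidence of the best class

topk = scores.topk(5)                         # five most confident predictions
logits = logits[topk.indices, :len(CLASSES)]  # (5, 91)
boxes = boxes[topk.indices]                   # (5, 4)
```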
Okay, so we have the boxes and the logits. Okay, so we got that. What we want to do is basically... we want to basically just remove the last class before we do the argmax, and thereby we want to force the model to make a prediction. It won't be a very good prediction, because of course this is only the second highest class, and it's arguable how much that counts. But still, it'll do something. So this must be done in the logits, right? So we'll look at the logits. And the logits are of shape 100. So we have 100 predictions of 92 classes. Now the first thing we want to do is just remove the last class. So let's go everything here until the last class. Alright, so now we have 91. Actually, let's make it more generic: whatever, however many classes there are. Okay. So we don't have this class anymore. So now, if we do the softmax over the last thing, we technically get 91, but now they're normalized, so they add up to one. So it's kind of a probability distribution. Next, we want to find the max over this, and that will give us a max output. So we don't want to plot all the 100 predictions, because that would just be like squares all over the place, and we'd have no clue what's happening. So this max output right here... what we're trying to find is, let's say, the five best predictions or so, the five ones where the model is most confident. It's not really a good metric, but you know. So these are the probability values of all of the 100 predictions. So what we want is like the top K. Okay, so let's go with five. And again, we'll get like a top K output. Let's call that top K. And I think it also has like values and indices. Yes. So now we simply need to filter from the logits and the boxes where these top ones are. So we'll filter the logits. We'll filter the logits by those top K indices, and we'll also filter the... I am not very gifted today... boxes. By the way, I'm using a colab just because it's nice to kind of play around with a model, because if I were to use a file, I'd have to restart and reload the model over and over again. Just not as nice. Right. So now we have the logits and the boxes. And if we do that right now, we always get the top five predictions. How nice is that? And you can see the top five predictions are probably still cake, cake, cake, cake. Just to verify that. And we can print its shape. See, this is what I don't like about this stuff. Yes. Okay. So we just have five predictions of 92 things. And we don't want the 92, we've already said. So we just want the 91. Let's actually put that here. Okay. So now we have five by 91. And now it should give us the top five. Ah, there we go. So many cakes and many stop signs. That's fine. That's cool. So the ultimate test right here is going to be... yes... the human adversarial example. Let's check it out. So we'll put in a Jackson Pollock image and we'll see what the model says. Now we're actually forcing it to make predictions, right? So it can't escape. It will need to do something. Okay, I made another mistake. I would need to copy the image address right here. Like this. That's what happens when you're not an idiot. You get the actual image. So what does the model think of our pretty image? Okay. I can't even read that. So let's make this into white. Bird. Bird. Bird. Okay. Lots of birds in this image. Clearly, clearly lots of birds in this image. Let's try another one. Let's go with this. This one. Yes. Yes. Absolutely. Love it. So we copy image address.
And beam boom. More birds. Wow. There are a lot of birds in these Pollock images. Just so many birds. Okay, let's try one last one. How about this one? This one is a bit more human friendly, right? Put it in here. Bang. And... and... okay, we get some detections. There's a clock right here. There is a... what's that? House? Horses? Let's print... let's print the labels, just so we know what they are. Cake, horse, car, horse and clock. Okay. So I see the clock, like this here is clearly a clock. Then this rectangle on the right side must be something. Let's put this to red as well. Now that's terrible. White. Back to white. How about back to white? Okay, clock. We got horse right here, and house, probably. And the entire image is, again, a cake. Yes. Okay. So as you can see, it is a pretty, pretty good system. But of course, it is only these 90 classes. But for now, it's pretty cool. And it works pretty well, and just the easiness with which you can get this stuff... elephants in Kruger National Park... just the easiness is astonishing. You can just load it up, kind of have this bit of a notebook, and with very few lines of code, you can put something together that detects these bounding boxes. Lots of elephants. And remember, we only have the top five elephants right here. So what happens if we go for more? Where is our top K? So here we can maybe say the top 15 predictions. And as always, if we want to let the model make its own decision, we can simply revert back and add back the no-class label. All right, with that, I hope you liked this video. If you did, then maybe tell YouTube that you liked it, share it out, and I will share this notebook in the description for you to find and play around with. All right, thanks for watching. Bye bye.
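(For reference, here is the drawing part from the middle of the video as one rough, self-contained loop, with the same assumed names as above: logits and boxes are the filtered tensors, CLASSES the COCO labels, and box_to_corners the little helper from earlier.)

```python
from PIL import ImageDraw, ImageFont

img2 = img.copy()
draw = ImageDraw.Draw(img2)
font = ImageFont.load_default()   # no cheating, just a regular font

for logit, box in zip(logits, boxes):
    label = CLASSES[logit.argmax().item()]
    x0, y0, x1, y1 = box_to_corners(box)
    draw.rectangle((x0, y0, x1, y1), outline='red', width=5)
    # put the label roughly in the middle of the box
    draw.text(((x0 + x1) / 2, (y0 + y1) / 2), label, fill='red', font=font)

img2   # the last expression in a colab cell displays the annotated image
```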
[ { "end": 8.64, "start": 0, "text": " Howdy ho, how's it going? So today we are going to try out the DETR, the end-to-end object detection" }, { "end": 14.64, "start": 8.64, "text": " with transformers from Facebook AI research. And they have a github repo and they pretty much give" }, { "end": 19.44, "start": 14.64, "text": " you everything like the model, the pre-trained weights and so on. So today we're going to check" }, { "end": 26.64, "start": 19.44, "text": " out how easy it is to get started with that. So in order to do that they have like a colab but" }, { "end": 33.28, "start": 26.64, "text": " we won't look at it too much. I've glanced at it and we'll basically see how far can we go" }, { "end": 39.760000000000005, "start": 33.84, "text": " without looking at it too much and how easy is that. So what I've done is I've spun up a colab" }, { "end": 44.64, "start": 39.760000000000005, "text": " that I will share at the end and I've imported torch and just loaded the model so you don't have" }, { "end": 52.56, "start": 44.64, "text": " to wait for that to happen. So I've loaded that up and now we have it in the cache. So now we can" }, { "end": 59.120000000000005, "start": 52.56, "text": " basically go ahead and load an image into the model and try to detect objects in the image. So" }, { "end": 65.68, "start": 59.120000000000005, "text": " first of all this is super easy, right? You simply load this from torch hub. It's kind of like the" }, { "end": 70.88, "start": 65.68, "text": " the tensorflow hub. You simply give the name of the model. You say I want the pre-trained please." }, { "end": 76.80000000000001, "start": 70.88, "text": " Shag-a-boom! You now have a model. So if we look at that model this is going to be this entire" }, { "end": 84.72, "start": 76.8, "text": " entire DETR model right here with all the transformer and ResNet and whatnot. Okay this" }, { "end": 89.6, "start": 84.72, "text": " is almost a bit too much right here. So what we want is an image. So let's go find an image." }, { "end": 97.36, "start": 91.44, "text": " Where better to find an image than Google? So let's find an image of dogs because dogs is one of the" }, { "end": 103.6, "start": 97.36, "text": " classes in this Coco dataset. This one's nice, right? Okay so we want the image address. We want" }, { "end": 112.47999999999999, "start": 103.6, "text": " to load it in here somehow. So let the URL is... Let's make this into some sort of" }, { "end": 121.19999999999999, "start": 114.47999999999999, "text": " like an input thing where we can paste the URL right here. Okay there we go." }, { "end": 133.84, "start": 121.2, "text": " So we have this right here and that's the URL. All right no that's not the URL at all. Is it?" }, { "end": 145.44, "start": 138.48000000000002, "text": " Cool better. Now we need to load this. For that we're gonna use the requests library." }, { "end": 157.28, "start": 145.44, "text": " Always a pleasure. Requests, requests. So the way to load a binary file is you can" }, { "end": 166.64, "start": 158.4, "text": " put the URL here and you can say streamed here. I glanced this from the other thing and the raw" }, { "end": 179.2, "start": 166.64, "text": " entry will get you the bytes. No, oh sorry. Get URL streamed. Stream. 
Yeah so this will get you the" }, { "end": 187.83999999999997, "start": 179.2, "text": " sort of the bytes of the image and then use just say image.open and of course we need the image from" }, { "end": 200.32, "start": 187.84, "text": " the pill library, the python image library. So import image. We got that and we can open that" }, { "end": 210.24, "start": 200.32, "text": " image up and with a bit of luck. Yeah yeah. So this model expects I think Coco dataset is" }, { "end": 216.96, "start": 210.24, "text": " 640 by 480 images but they if you can see right here and we're going to take a quick glance at" }, { "end": 225.20000000000002, "start": 216.96, "text": " their transforming they resize it to 800 so we're gonna we're gonna steal that part right here." }, { "end": 236, "start": 227.68, "text": " People last time were some found it really funny that I called copy pasting to go serage. So" }, { "end": 242.32, "start": 236, "text": " we'll from now on we'll call it just seraging. What we also need are the class labels because" }, { "end": 248.48, "start": 242.32, "text": " that's in defining the Coco dataset right. So these are the class labels. Let's take those" }, { "end": 256.32, "start": 249.35999999999999, "text": " and okay so this T here these are torch vision transforms. We're gonna need that so from" }, { "end": 259.44, "start": 257.92, "text": " let's say" }, { "end": 266, "start": 262, "text": " so if you don't know torch vision it's kind of an addition to PyTorch" }, { "end": 270.88, "start": 266, "text": " that just helps you with with images and has a lot of datasets and these transforms they're really" }, { "end": 280.64, "start": 270.88, "text": " helpful because so let's call this image because you can you know resize but they have much more" }, { "end": 285.52, "start": 280.64, "text": " like random cropping and rotating images and so on pretty much everything you need for pre-training" }, { "end": 290.4, "start": 285.52, "text": " and this here is just the standard image net I believe the image net normalization so these" }, { "end": 295.84, "start": 290.4, "text": " are the means and these are the standard deviations so you can see that. So this is" }, { "end": 309.28, "start": 295.84, "text": " what the image net antivirus looks like. So now in terms of pans right here for example" }, { "end": 302.38, "start": 311.28, "text": " What we need are the fx" }, { "end": 311.9, "start": 302.38, "text": " 800 and I believe if you rescale the 640 to 800 you get 600 here." }, { "end": 316.06, "start": 311.9, "text": " Fairly sure." }, { "end": 320.52, "start": 316.06, "text": " And then let's display it just because we can." }, { "end": 323.36, "start": 320.52, "text": " It's a bit squished but we don't care." }, { "end": 328.46, "start": 323.36, "text": " And let's put that up here so we only need to execute it once." }, { "end": 330.5, "start": 328.46, "text": " Nice." }, { "end": 334.02, "start": 330.5, "text": " So from now on it should be a breeze." }, { "end": 337.98, "start": 334.02, "text": " So what these transforms do is they resize the image." }, { "end": 340.46, "start": 337.98, "text": " We don't need that anymore." }, { "end": 346.06, "start": 340.46, "text": " They make it into a tensor and then they normalize by that." }, { "end": 352.06, "start": 346.06, "text": " So if we run our image through this, because our image right now is this pill image, right?" 
}, { "end": 363.58, "start": 352.06, "text": " So our image is this pill image but if we run it through the transforms then we'll get" }, { "end": 365.38, "start": 363.58, "text": " a tensor." }, { "end": 367.44, "start": 365.38, "text": " So that's pretty cool." }, { "end": 371.36, "start": 367.44, "text": " So the model as it is a deep learning model it expects batches." }, { "end": 376.3, "start": 371.36, "text": " So we'll unsqueeze that in the first dimension and then we get batches." }, { "end": 382.26, "start": 376.3, "text": " So shape, let's see, we don't have unskies." }, { "end": 385.98, "start": 382.26, "text": " No, of course we don't." }, { "end": 391.78000000000003, "start": 385.98, "text": " So this is a one image of three channels of 600 by 800." }, { "end": 395.46000000000004, "start": 391.78000000000003, "text": " So this is the y-index coordinates I guess are shifted." }, { "end": 397.86, "start": 395.46000000000004, "text": " Yes, in PyTorch." }, { "end": 398.86, "start": 397.86, "text": " Cool." }, { "end": 404.6, "start": 398.86, "text": " So we'll call this our image tensor." }, { "end": 407.3, "start": 404.6, "text": " Now we just need to put it into the model." }, { "end": 411.92, "start": 407.3, "text": " So model, we put that in there." }, { "end": 416.54, "start": 411.92, "text": " And since we don't, let's actually up here put the model in eval mode." }, { "end": 423.94, "start": 416.54, "text": " I don't know if that's already done but you know you can never be sure enough that the" }, { "end": 428.66, "start": 423.94, "text": " batch norms aren't, but I think it probably doesn't have batch norms." }, { "end": 431.98, "start": 428.66, "text": " Okay, you're not utilizing the GPU." }, { "end": 433.06, "start": 431.98, "text": " We'll do that." }, { "end": 434.06, "start": 433.06, "text": " We'll do that." }, { "end": 435.06, "start": 434.06, "text": " Thanks." }, { "end": 440.66, "start": 435.06, "text": " So, how do we use the GPU?" }, { "end": 442.7, "start": 440.66, "text": " We put our model on the GPU." }, { "end": 445.3, "start": 442.7, "text": " Model equals model.cuda." }, { "end": 451.3, "start": 445.3, "text": " Yes, yes, yes." }, { "end": 452.3, "start": 451.3, "text": " I think so." }, { "end": 453.3, "start": 452.3, "text": " This is gonna work." }, { "end": 454.3, "start": 453.3, "text": " Okay." }, { "end": 459.22, "start": 454.3, "text": " We're gonna come back to this later." }, { "end": 466.02000000000004, "start": 459.22, "text": " So we forward our image, of course we also need that on the GPU." }, { "end": 468.82000000000005, "start": 466.02000000000004, "text": " And this worked." }, { "end": 469.82000000000005, "start": 468.82000000000005, "text": " This worked." }, { "end": 470.82000000000005, "start": 469.82000000000005, "text": " This worked." }, { "end": 471.82000000000005, "start": 470.82000000000005, "text": " Nice." }, { "end": 472.82000000000005, "start": 471.82000000000005, "text": " Okay." }, { "end": 478.98, "start": 472.82000000000005, "text": " And since this is just for evaluation, we should probably go with no grad right here" }, { "end": 483.42, "start": 478.98, "text": " because we don't need this whole gradient stuff if we do that." }, { "end": 484.62, "start": 483.42, "text": " Okay." }, { "end": 487.3, "start": 484.62, "text": " I'm dumb." }, { "end": 488.86, "start": 487.3, "text": " There you go." 
}, { "end": 494.90000000000003, "start": 488.86, "text": " And nothing happens of course because we need to capture the output somehow." }, { "end": 496.66, "start": 494.90000000000003, "text": " Let's look at that." }, { "end": 497.66, "start": 496.66, "text": " Output." }, { "end": 498.66, "start": 497.66, "text": " Wow." }, { "end": 499.90000000000003, "start": 498.66, "text": " Wow." }, { "end": 500.90000000000003, "start": 499.90000000000003, "text": " Just wow." }, { "end": 507.42, "start": 500.90000000000003, "text": " So the output is a dictionary, right, because we get back class labels and bounding boxes." }, { "end": 511.72, "start": 507.42, "text": " So let's look at the pred boxes." }, { "end": 515.26, "start": 511.72, "text": " Let's look at that tensor." }, { "end": 516.26, "start": 515.26, "text": " That's a tensor." }, { "end": 517.26, "start": 516.26, "text": " Very nice." }, { "end": 521.02, "start": 517.26, "text": " Let's look at its shape." }, { "end": 523.86, "start": 521.02, "text": " Let's not print giant tensors anymore." }, { "end": 525.62, "start": 523.86, "text": " Cool." }, { "end": 530.54, "start": 525.62, "text": " So since this was a batch of one, we should probably go with the zero." }, { "end": 535.86, "start": 530.54, "text": " And you can see right here, there is a hundred bounding boxes and each one has four numbers." }, { "end": 542.66, "start": 535.86, "text": " And if you go with the other thing that's in there, the logits, then you'll see that" }, { "end": 553.38, "start": 542.66, "text": " there are also should be a hundred logits and hello, there should be a hundred logits" }, { "end": 560.38, "start": 553.38, "text": " and each one is of size 92 because there are 92 different classes." }, { "end": 562.78, "start": 560.38, "text": " 92." }, { "end": 564.4599999999999, "start": 562.78, "text": " We'll see about that." }, { "end": 568.86, "start": 564.4599999999999, "text": " Well one is going to be the nothing class, right?" }, { "end": 572.7, "start": 568.86, "text": " By the way, how many classes do we have?" }, { "end": 575.26, "start": 572.7, "text": " We have 91 classes." }, { "end": 576.26, "start": 575.26, "text": " Okay." }, { "end": 577.3000000000001, "start": 576.26, "text": " Cool." }, { "end": 579.46, "start": 577.3000000000001, "text": " We can deal with that." }, { "end": 580.46, "start": 579.46, "text": " All right." }, { "end": 584.22, "start": 580.46, "text": " So what are we going to do next?" }, { "end": 594.82, "start": 584.22, "text": " What we want to do is for each of the logit predictions, we want to find which class it" }, { "end": 596.04, "start": 594.82, "text": " corresponds to." }, { "end": 601.62, "start": 596.04, "text": " So what we're going to do is we're going to take the argmax of the last dimension." }, { "end": 608.78, "start": 601.62, "text": " So you can see here, almost all of these things correspond to class 91 and class 91 is not" }, { "end": 611.9399999999999, "start": 608.78, "text": " in our classes because our class is only length 91." }, { "end": 614.6999999999999, "start": 611.9399999999999, "text": " So that must be the nothing class." }, { "end": 625.3399999999999, "start": 614.6999999999999, "text": " So what we can technically do is for logits and boxes in, let's just zip them together." }, { "end": 635.4200000000001, "start": 625.34, "text": " And like this." }, { "end": 637.46, "start": 635.4200000000001, "text": " Okay." }, { "end": 639.94, "start": 637.46, "text": " Class is oops." 
}, { "end": 646.38, "start": 639.94, "text": " Class is the logits argmax." }, { "end": 652.86, "start": 646.38, "text": " If that's 92 or let's say, if that's larger than the length of our classes, we'll just" }, { "end": 655.58, "start": 652.86, "text": " skip it for now." }, { "end": 657.78, "start": 655.58, "text": " Okay." }, { "end": 661.42, "start": 657.78, "text": " So that should work somehow." }, { "end": 672.0600000000001, "start": 661.42, "text": " And if not, then our label should be the class index right here." }, { "end": 676.64, "start": 672.0600000000001, "text": " So let's just see what the detector detects right here." }, { "end": 683.9399999999999, "start": 676.64, "text": " It detects nothing." }, { "end": 691.02, "start": 683.9399999999999, "text": " Why does it detect nothing?" }, { "end": 696.42, "start": 691.02, "text": " That doesn't seem good." }, { "end": 700.7, "start": 696.42, "text": " What are we doing wrong?" }, { "end": 708.22, "start": 700.7, "text": " We zip together the logits." }, { "end": 714.74, "start": 708.22, "text": " Oh yeah, of course, we still need the zero with entry." }, { "end": 720.1800000000001, "start": 714.74, "text": " We are cool." }, { "end": 726.94, "start": 720.1800000000001, "text": " So so so so we can delete this." }, { "end": 733.0200000000001, "start": 726.94, "text": " And now finally, beautiful dogs, two dogs detected." }, { "end": 734.0200000000001, "start": 733.0200000000001, "text": " Excellent." }, { "end": 737.7, "start": 734.0200000000001, "text": " So now for each of these dogs, we want the bounding box." }, { "end": 738.7, "start": 737.7, "text": " Okay." }, { "end": 744.22, "start": 738.7, "text": " So now we somehow need to think of how are we going to draw this on an image." }, { "end": 750.62, "start": 744.22, "text": " And well, let's, let's actually make a copy of that image, because I don't really trust" }, { "end": 752.3000000000001, "start": 750.62, "text": " myself." }, { "end": 757.9399999999999, "start": 752.3, "text": " And then at the end of this, we're just going to display that image, right." }, { "end": 763.02, "start": 757.9399999999999, "text": " Now actually, the reason I make a copy is because in these in this pillow library, you" }, { "end": 764.5, "start": 763.02, "text": " can actually draw on these images." }, { "end": 767.2199999999999, "start": 764.5, "text": " And we're going to use that to draw these bounding boxes." }, { "end": 774.06, "start": 767.2199999999999, "text": " So for that, we need an image draw, if I remember correctly." }, { "end": 777.02, "start": 774.06, "text": " And I think later, we also want some text." }, { "end": 780.42, "start": 777.02, "text": " So we need an image font." }, { "end": 782.4599999999999, "start": 780.42, "text": " Yes." }, { "end": 784.26, "start": 782.4599999999999, "text": " All right." }, { "end": 793.3399999999999, "start": 784.26, "text": " So let's draw a bounding box right here, where, so first of all, let's look at that bounding" }, { "end": 796.18, "start": 793.3399999999999, "text": " box." }, { "end": 805.38, "start": 796.18, "text": " Let's call this box box, print box dot shape and break right here." }, { "end": 806.38, "start": 805.38, "text": " What's happening?" }, { "end": 813.74, "start": 806.38, "text": " Let's not do this right now." }, { "end": 816.66, "start": 813.74, "text": " So this is a boxes of size four." }, { "end": 818.26, "start": 816.66, "text": " Now this could be two things." 
}, { "end": 822.52, "start": 818.26, "text": " It could be x zero, y zero, x one, y one." }, { "end": 828.02, "start": 822.52, "text": " So the two corner points are the kind of the boundaries, or it could be x, y width height." }, { "end": 832.48, "start": 828.02, "text": " Now from the paper, I know that they predict the center and the width and the height." }, { "end": 837.7, "start": 832.48, "text": " So I'm going to go with that, and I'm just going to guess that it's like x, y, w, h," }, { "end": 841.1800000000001, "start": 837.7, "text": " and not some other way around." }, { "end": 845.1, "start": 841.1800000000001, "text": " If this is a bad guess, then yeah, we'll see." }, { "end": 847.58, "start": 845.1, "text": " We can just print out one of these boxes." }, { "end": 850.94, "start": 847.58, "text": " And honestly, no clue that looks reason." }, { "end": 853.38, "start": 850.94, "text": " Oh, by the way, we should scale that up." }, { "end": 854.38, "start": 853.38, "text": " Yeah." }, { "end": 856.72, "start": 854.38, "text": " So these are normalized coordinates, probably between zero and one." }, { "end": 858.38, "start": 856.72, "text": " So we should scale that up." }, { "end": 865.9, "start": 858.38, "text": " So we should probably the x coordinates, which is scale by 800 and the y by 600." }, { "end": 867.9399999999999, "start": 865.9, "text": " So let's do it." }, { "end": 879.9, "start": 867.9399999999999, "text": " So first of all, we scale our box by 800 in the x and here is a y and the width is the" }, { "end": 883.18, "start": 879.9, "text": " x direction and this is the y direction." }, { "end": 884.18, "start": 883.18, "text": " Boom." }, { "end": 885.22, "start": 884.18, "text": " Okay." }, { "end": 889.82, "start": 885.22, "text": " We should probably get that on CPU." }, { "end": 892.38, "start": 889.82, "text": " We'll just hack together a bunch of things right here." }, { "end": 894.5, "start": 892.38, "text": " Okay, so now this isn't the correct." }, { "end": 902.6800000000001, "start": 894.5, "text": " So we so our x and y and w and h are going to be this box." }, { "end": 905.62, "start": 902.6800000000001, "text": " So now we need to actually draw on the image." }, { "end": 908.32, "start": 905.62, "text": " We're going to do that." }, { "end": 920.2600000000001, "start": 908.32, "text": " So let's first go x zero x one is x minus w half x plus w half y zero y one is the same" }, { "end": 926.5400000000001, "start": 920.2600000000001, "text": " for y with h plus h half." }, { "end": 928.1, "start": 926.5400000000001, "text": " Coolio." }, { "end": 930.38, "start": 928.1, "text": " Now we need an image draw object." }, { "end": 936.7600000000001, "start": 930.38, "text": " So I think draw on this image." }, { "end": 940.34, "start": 936.76, "text": " So whatever you draw on the draw object will end up on the image." }, { "end": 944.54, "start": 940.34, "text": " So we can use that to draw a bounding box and let's just quickly look it up." }, { "end": 951.38, "start": 944.54, "text": " So pill Python draw rectangle, maybe." }, { "end": 952.62, "start": 951.38, "text": " There we go." }, { "end": 955.62, "start": 952.62, "text": " Okay, so there's this rectangle." }, { "end": 959.7, "start": 955.62, "text": " Yeah, there's the rectangle function." }, { "end": 966.94, "start": 959.7, "text": " And you can see you put in a shape x y here and with height like this." 
}, { "end": 971.4200000000001, "start": 966.94, "text": " Wait for real, we wouldn't even have to need to transform it." }, { "end": 973.3000000000001, "start": 971.4200000000001, "text": " I'm pretty sure you can go x." }, { "end": 980.26, "start": 973.3000000000001, "text": " I thought I remember you could do the different thing as well." }, { "end": 981.26, "start": 980.26, "text": " But it's called rectangle." }, { "end": 982.5, "start": 981.26, "text": " Okay, so let's do that." }, { "end": 998.94, "start": 982.5, "text": " So draw rectangle and we'll go we'll go x zero or we'll go x, y with height." }, { "end": 1002.7, "start": 998.94, "text": " Let's display that down here." }, { "end": 1009.3, "start": 1002.7, "text": " Yeah, that looks that looks nothing like we want." }, { "end": 1013.4399999999999, "start": 1009.3, "text": " But it's you know, it's a start." }, { "end": 1016.74, "start": 1013.4399999999999, "text": " Maybe actually we need the other thing here." }, { "end": 1024.3799999999999, "start": 1016.74, "text": " We need x zero, y zero, x one, y one." }, { "end": 1029.02, "start": 1024.3799999999999, "text": " Yes, yes, doggy." }, { "end": 1032.7, "start": 1029.02, "text": " Okay, we still have the break in here." }, { "end": 1035.5, "start": 1032.7, "text": " Now we get both dogs." }, { "end": 1037.5, "start": 1035.5, "text": " Nice." }, { "end": 1043.22, "start": 1037.5, "text": " Okay, let's do I think fill." }, { "end": 1044.22, "start": 1043.22, "text": " Yes." }, { "end": 1045.72, "start": 1044.22, "text": " Red." }, { "end": 1049.06, "start": 1045.72, "text": " And let's go for width five or so." }, { "end": 1050.38, "start": 1049.06, "text": " Five seems like a good width." }, { "end": 1054.46, "start": 1050.38, "text": " Oh god, five is a terrible width." }, { "end": 1058.7, "start": 1054.46, "text": " Oh, it's not fill." }, { "end": 1060.7, "start": 1058.7, "text": " I think it's it's outline." }, { "end": 1062.7, "start": 1060.7, "text": " Yeah, yeah, yeah." }, { "end": 1063.7, "start": 1062.7, "text": " Okay." }, { "end": 1064.7, "start": 1063.7, "text": " Okay." }, { "end": 1067.6200000000001, "start": 1064.7, "text": " Let's still go with five." }, { "end": 1069.78, "start": 1067.6200000000001, "text": " Cool, we got our dogs." }, { "end": 1073.22, "start": 1069.78, "text": " Now we need to put like some snappy text labels." }, { "end": 1078.94, "start": 1073.22, "text": " I think there is actually a pill image draw text." }, { "end": 1084.18, "start": 1078.94, "text": " I think that exists because I've this font thing." }, { "end": 1085.18, "start": 1084.18, "text": " Yeah, exactly." }, { "end": 1087.74, "start": 1085.18, "text": " So you need the font thing." }, { "end": 1090.74, "start": 1087.74, "text": " Get a font in there." }, { "end": 1094.9, "start": 1090.74, "text": " And then Yeah, exactly." }, { "end": 1096.54, "start": 1094.9, "text": " You could put a text like this." }, { "end": 1102.7, "start": 1096.54, "text": " Okay, so you probably need the x and y coordinates of the text." }, { "end": 1104.98, "start": 1102.7, "text": " So let's do that." }, { "end": 1106.94, "start": 1104.98, "text": " W dot text." }, { "end": 1111.78, "start": 1106.94, "text": " And let's just go with x and y right here, put it right in the middle." }, { "end": 1115.02, "start": 1111.78, "text": " And the text is going to be our label, of course." }, { "end": 1120.02, "start": 1115.02, "text": " And we want the fill that's now going to be the color of the text." 
}, { "end": 1124.9, "start": 1120.02, "text": " Let's go with white and the font." }, { "end": 1130.62, "start": 1124.9, "text": " We're going to load some font right here." }, { "end": 1131.62, "start": 1130.62, "text": " Font dot." }, { "end": 1133.86, "start": 1131.62, "text": " How are we doing this?" }, { "end": 1135.66, "start": 1133.86, "text": " True type, true type." }, { "end": 1136.66, "start": 1135.66, "text": " Okay." }, { "end": 1139.86, "start": 1136.66, "text": " Ah, no, no cheating." }, { "end": 1141.34, "start": 1139.86, "text": " Let's just go with regular fonts." }, { "end": 1149.06, "start": 1141.34, "text": " It won't look as fancy, but we'll be fine." }, { "end": 1158.94, "start": 1149.06, "text": " So where where is our text?" }, { "end": 1159.94, "start": 1158.94, "text": " You see it?" }, { "end": 1162.94, "start": 1159.94, "text": " I don't see it." }, { "end": 1175.62, "start": 1162.94, "text": " Red, let's make it red." }, { "end": 1178.54, "start": 1175.62, "text": " Yes, there we go." }, { "end": 1181.8999999999999, "start": 1178.54, "text": " Okay, so it wasn't red enough." }, { "end": 1182.8999999999999, "start": 1181.8999999999999, "text": " This should work." }, { "end": 1184.58, "start": 1182.8999999999999, "text": " Did we just not see it?" }, { "end": 1185.58, "start": 1184.58, "text": " I'm dumb enough." }, { "end": 1186.58, "start": 1185.58, "text": " Cool." }, { "end": 1187.58, "start": 1186.58, "text": " So we have two dogs." }, { "end": 1188.58, "start": 1187.58, "text": " How easy was that?" }, { "end": 1193.5, "start": 1188.58, "text": " Actually, we wasted the most time with like bounding boxes and stuff." }, { "end": 1194.82, "start": 1193.5, "text": " Absolutely cool." }, { "end": 1195.8999999999999, "start": 1194.82, "text": " Right." }, { "end": 1199.94, "start": 1195.8999999999999, "text": " Okay, so now we can have some fun with it." }, { "end": 1204.26, "start": 1199.94, "text": " I'm going to scale this down for a bit because you don't need to see the actual code anymore" }, { "end": 1207.18, "start": 1204.26, "text": " so much so you can see the image more." }, { "end": 1209.46, "start": 1207.18, "text": " So we'll go to the images." }, { "end": 1214.66, "start": 1209.46, "text": " And the first thing I want to do is the dress." }, { "end": 1217.22, "start": 1214.66, "text": " What does this think of the dress?" }, { "end": 1221.3400000000001, "start": 1217.22, "text": " Okay, so we'll copy that." }, { "end": 1228.38, "start": 1221.3400000000001, "text": " And we'll go into our colab and just paste this right here." }, { "end": 1236.8600000000001, "start": 1228.38, "text": " But a boom, but a beam sounds nice." }, { "end": 1239.26, "start": 1236.86, "text": " And what is wrong?" }, { "end": 1242.9399999999998, "start": 1239.26, "text": " The size of a tensor must match the size of a tensor." }, { "end": 1251.9399999999998, "start": 1242.9399999999998, "text": " We do something wrong." }, { "end": 1253.3799999999999, "start": 1251.9399999999998, "text": " Transform image." }, { "end": 1261.1, "start": 1253.3799999999999, "text": " Our image is this." }, { "end": 1264.6999999999998, "start": 1261.1, "text": " Maybe this is like an RGBA image." }, { "end": 1271.7, "start": 1264.7, "text": " I think if this is RGBA, we should just convert it to like an RGB." }, { "end": 1277.54, "start": 1271.7, "text": " Pretty sure you can do something like this right here." }, { "end": 1278.54, "start": 1277.54, "text": " This should work." 
}, { "end": 1285.14, "start": 1278.54, "text": " If it has an alpha channel, then that will remove it." }, { "end": 1289.38, "start": 1285.14, "text": " Yes, now it works." }, { "end": 1292.3400000000001, "start": 1289.38, "text": " Okay, let's see what the model thinks of this." }, { "end": 1299.6599999999999, "start": 1292.34, "text": " Okay, apparently there's a car and there's a surfboard and there's a person and there's" }, { "end": 1301.4599999999998, "start": 1299.6599999999999, "text": " a person." }, { "end": 1303.86, "start": 1301.4599999999998, "text": " Nice, see?" }, { "end": 1309.6599999999999, "start": 1303.86, "text": " Well we didn't figure out whether the dress was blue or white or gold." }, { "end": 1312.54, "start": 1309.6599999999999, "text": " It was just a person." }, { "end": 1321.34, "start": 1312.54, "text": " Now you could actually like threshold by how sure you are of a given class." }, { "end": 1324.02, "start": 1321.34, "text": " But where's the fun in that?" }, { "end": 1325.8999999999999, "start": 1324.02, "text": " So let's go further." }, { "end": 1333.8999999999999, "start": 1325.8999999999999, "text": " Let's do some Rorschach inkblots, because those are always lots and lots of fun." }, { "end": 1338.58, "start": 1333.8999999999999, "text": " So which one should we go for?" }, { "end": 1342.54, "start": 1338.58, "text": " This one looks like fun." }, { "end": 1351.74, "start": 1342.54, "text": " So we'll put this into here." }, { "end": 1353.26, "start": 1351.74, "text": " And it's astonishing, right?" }, { "end": 1355.98, "start": 1353.26, "text": " This Cocoa data set, it only has these 90 classes." }, { "end": 1359, "start": 1355.98, "text": " Like it doesn't have anything else." }, { "end": 1362.8999999999999, "start": 1359, "text": " So it's a cake." }, { "end": 1364.8999999999999, "start": 1362.8999999999999, "text": " And this here, what is it?" }, { "end": 1370.34, "start": 1364.8999999999999, "text": " Okay, we'll have to go maybe with blue." }, { "end": 1371.54, "start": 1370.34, "text": " What is it?" }, { "end": 1373.62, "start": 1371.54, "text": " Pop sign." }, { "end": 1378.26, "start": 1373.62, "text": " Okay, but so you might think, what if we want more?" }, { "end": 1380.46, "start": 1378.26, "text": " Like what if we want more predictions?" }, { "end": 1381.98, "start": 1380.46, "text": " So there is a hack, right?" }, { "end": 1387.34, "start": 1381.98, "text": " Right now the model can always assign mass to this not a class thing, like right here," }, { "end": 1390.3, "start": 1387.34, "text": " this class 91." }, { "end": 1394.12, "start": 1390.3, "text": " In order for it to say, I don't think there's anything there." }, { "end": 1397.3799999999999, "start": 1394.12, "text": " But generally we have 100 predictions, right?" }, { "end": 1400.86, "start": 1397.3799999999999, "text": " So you see where this is going." }, { "end": 1409.6599999999999, "start": 1400.86, "text": " So yes, let's change it up a bit." }, { "end": 1413.6999999999998, "start": 1409.6599999999999, "text": " And let's go here." }, { "end": 1420.74, "start": 1413.6999999999998, "text": " Let's first extract these tensors and boxes." }, { "end": 1429.78, "start": 1420.74, "text": " Okay, so we have the boxes and this and logits and boxes." }, { "end": 1433.06, "start": 1429.78, "text": " Okay, so we got that." 
}, { "end": 1439.26, "start": 1433.06, "text": " What we want to do is basically we want to filter the, we want to basically just remove" }, { "end": 1442.26, "start": 1439.26, "text": " the last class before we do the argmax." }, { "end": 1447.7, "start": 1442.26, "text": " And thereby we want to force the model to make a prediction." }, { "end": 1451.74, "start": 1447.7, "text": " It won't be a very good prediction, because of course, this is only the second highest" }, { "end": 1454.3, "start": 1451.74, "text": " class and it's arguable how much that counts." }, { "end": 1458.78, "start": 1454.3, "text": " But still, it'll do something." }, { "end": 1462.42, "start": 1458.78, "text": " So this must be done in the logits, right?" }, { "end": 1466.62, "start": 1462.42, "text": " So we'll look at the logits." }, { "end": 1468.94, "start": 1466.62, "text": " And the logits are of shape 100." }, { "end": 1471.46, "start": 1468.94, "text": " So we have 100 predictions of 92 classes." }, { "end": 1474.36, "start": 1471.46, "text": " Now the first thing we want to do is just remove the last class." }, { "end": 1479.22, "start": 1474.36, "text": " So let's go everything here until the last class." }, { "end": 1481.3799999999999, "start": 1479.22, "text": " Alright, so now we have 91." }, { "end": 1485.5, "start": 1481.3799999999999, "text": " Actually, let's make it more generic." }, { "end": 1488.42, "start": 1485.5, "text": " Whatever is in however many classes are okay." }, { "end": 1490.96, "start": 1488.42, "text": " So we don't have this class anymore." }, { "end": 1499.5600000000002, "start": 1490.96, "text": " So now if we do the softmax over the last thing, we can technically we get 91." }, { "end": 1502.42, "start": 1499.5600000000002, "text": " But now they're normalized, so they add up to one." }, { "end": 1507.5, "start": 1502.42, "text": " So it's kind of a probability distribution." }, { "end": 1519.18, "start": 1507.5, "text": " Next, we want to find the max over this, and that will give us a max output." }, { "end": 1524.26, "start": 1519.18, "text": " So we don't want to plot all the 100 predictions, because that would just be like squares all" }, { "end": 1527.46, "start": 1524.26, "text": " over the place, and we'd have no clue what's happening." }, { "end": 1538.5, "start": 1527.46, "text": " So this max output right here, what we're trying to find is we're trying to find a," }, { "end": 1543.54, "start": 1538.5, "text": " let's say the five best predictions or so the five ones where the model thinks where" }, { "end": 1545.6200000000001, "start": 1543.54, "text": " the model is most confident." }, { "end": 1550.26, "start": 1545.6200000000001, "text": " It's not really a good metric, but you know." }, { "end": 1556.94, "start": 1550.26, "text": " So these are the probability values of all of the 100 predictions." }, { "end": 1559.42, "start": 1556.94, "text": " So what we want is like the top K." }, { "end": 1564.38, "start": 1559.42, "text": " Okay, so let's go with five." }, { "end": 1568.42, "start": 1564.38, "text": " And again, we'll get like a top K output." }, { "end": 1572.98, "start": 1568.42, "text": " Let's call that top K." }, { "end": 1577.9, "start": 1572.98, "text": " And I think it also has like values and indices." }, { "end": 1578.9, "start": 1577.9, "text": " Yes." }, { "end": 1590.46, "start": 1578.9, "text": " So now we simply need to filter from the logits and the boxes where these top ones are." 
}, { "end": 1601.5800000000002, "start": 1590.46, "text": " So we'll filter the logits." }, { "end": 1616.1, "start": 1601.58, "text": " We'll filter the logits by that top K indices, and we'll also filter the I am not very gifted" }, { "end": 1618.54, "start": 1616.1, "text": " today." }, { "end": 1622.78, "start": 1618.54, "text": " Boxes." }, { "end": 1627.34, "start": 1622.78, "text": " By the way, I'm using a colab just because it's nice to kind of play around with a model," }, { "end": 1631.8999999999999, "start": 1627.34, "text": " because if I were to use a file, I'd have to restart and reload the model over and over" }, { "end": 1632.8999999999999, "start": 1631.8999999999999, "text": " again." }, { "end": 1633.8999999999999, "start": 1632.8999999999999, "text": " Just not as nice." }, { "end": 1634.8999999999999, "start": 1633.8999999999999, "text": " Right." }, { "end": 1639.02, "start": 1634.8999999999999, "text": " So now we have the logits and the boxes." }, { "end": 1644.12, "start": 1639.02, "text": " And if we do that right now, we get always the top five predictions." }, { "end": 1645.8999999999999, "start": 1644.12, "text": " How nice is that?" }, { "end": 1652.74, "start": 1645.8999999999999, "text": " And you can see the top five predictions are probably still KKKKKKK." }, { "end": 1659.42, "start": 1652.74, "text": " Just to verify that." }, { "end": 1664.6200000000001, "start": 1659.42, "text": " And we can put its shape." }, { "end": 1670.94, "start": 1664.6200000000001, "text": " See, this is what I don't like about this stuff." }, { "end": 1671.94, "start": 1670.94, "text": " Yes." }, { "end": 1672.94, "start": 1671.94, "text": " Okay." }, { "end": 1677.22, "start": 1672.94, "text": " So we just have five predictions of 92 things." }, { "end": 1680.84, "start": 1677.22, "text": " And we don't want the 92 we've already said." }, { "end": 1684.36, "start": 1680.84, "text": " So we just want the 91." }, { "end": 1695.22, "start": 1684.36, "text": " Let's actually put that here." }, { "end": 1698.6999999999998, "start": 1695.22, "text": " Okay." }, { "end": 1700.62, "start": 1698.6999999999998, "text": " So now we have five by 91." }, { "end": 1701.8999999999999, "start": 1700.62, "text": " And now it should give us the top five." }, { "end": 1703, "start": 1701.8999999999999, "text": " Ah, there we go." }, { "end": 1707.1799999999998, "start": 1703, "text": " So many cakes and many stop signs." }, { "end": 1708.1799999999998, "start": 1707.1799999999998, "text": " That's fine." }, { "end": 1709.1799999999998, "start": 1708.1799999999998, "text": " That's cool." }, { "end": 1714.26, "start": 1709.18, "text": " So the ultimate test right here is going to be." }, { "end": 1718.8200000000002, "start": 1714.26, "text": " Yes." }, { "end": 1723.22, "start": 1718.8200000000002, "text": " The human adversarial example." }, { "end": 1724.5, "start": 1723.22, "text": " Let's check it out." }, { "end": 1731.5, "start": 1724.5, "text": " So we'll put in a Jackson Pollock image and we'll see what the model says." }, { "end": 1734.42, "start": 1731.5, "text": " Now we're actually forcing it to make predictions, right?" }, { "end": 1737.14, "start": 1734.42, "text": " So it can't escape." }, { "end": 1740.3400000000001, "start": 1737.14, "text": " It will need to do something." }, { "end": 1742.5, "start": 1740.3400000000001, "text": " Okay, I made another mistake." }, { "end": 1750.3200000000002, "start": 1742.5, "text": " I would need to copy the image address right here." 
}, { "end": 1752.0800000000002, "start": 1750.3200000000002, "text": " Like this." }, { "end": 1755.8200000000002, "start": 1752.0800000000002, "text": " That's what happens when you're not an idiot." }, { "end": 1757.7800000000002, "start": 1755.8200000000002, "text": " You get the actual image." }, { "end": 1762.22, "start": 1757.7800000000002, "text": " So what does the model think of our pretty image?" }, { "end": 1763.22, "start": 1762.22, "text": " Okay." }, { "end": 1765.0200000000002, "start": 1763.22, "text": " I can't even read that." }, { "end": 1770.1, "start": 1765.02, "text": " So let's make this into white." }, { "end": 1772.1399999999999, "start": 1770.1, "text": " Bird." }, { "end": 1774.1399999999999, "start": 1772.1399999999999, "text": " Bird." }, { "end": 1775.1399999999999, "start": 1774.1399999999999, "text": " Bird." }, { "end": 1776.1399999999999, "start": 1775.1399999999999, "text": " Okay." }, { "end": 1777.42, "start": 1776.1399999999999, "text": " Lots of birds in this image." }, { "end": 1779.9, "start": 1777.42, "text": " Clearly, clearly lots of birds in this image." }, { "end": 1781.94, "start": 1779.9, "text": " Let's try another one." }, { "end": 1789.34, "start": 1781.94, "text": " Let's go with this." }, { "end": 1790.34, "start": 1789.34, "text": " This one." }, { "end": 1791.34, "start": 1790.34, "text": " Yes." }, { "end": 1792.34, "start": 1791.34, "text": " Yes." }, { "end": 1793.34, "start": 1792.34, "text": " Absolutely." }, { "end": 1794.34, "start": 1793.34, "text": " Love it." }, { "end": 1800.8999999999999, "start": 1794.34, "text": " So we copy image address." }, { "end": 1814.74, "start": 1800.8999999999999, "text": " And beam boom." }, { "end": 1815.74, "start": 1814.74, "text": " More birds." }, { "end": 1816.74, "start": 1815.74, "text": " Wow." }, { "end": 1821.22, "start": 1816.74, "text": " There's a lot of birds in these Pollock images." }, { "end": 1823.02, "start": 1821.22, "text": " Just so many birds." }, { "end": 1826.9, "start": 1823.02, "text": " Okay, let's try one last." }, { "end": 1833.1399999999999, "start": 1826.9, "text": " How about this one?" }, { "end": 1838.78, "start": 1833.1399999999999, "text": " This one is a bit more human friendly, right?" }, { "end": 1844.42, "start": 1838.78, "text": " Put it in here." }, { "end": 1847.46, "start": 1844.42, "text": " Bang." }, { "end": 1849.98, "start": 1847.46, "text": " And and." }, { "end": 1853.42, "start": 1849.98, "text": " Okay, we get some detections." }, { "end": 1855.74, "start": 1853.42, "text": " There's a clock right here." }, { "end": 1858.22, "start": 1855.74, "text": " There is a." }, { "end": 1859.22, "start": 1858.22, "text": " What's that?" }, { "end": 1860.22, "start": 1859.22, "text": " House?" }, { "end": 1861.22, "start": 1860.22, "text": " Horses?" }, { "end": 1864.7, "start": 1861.22, "text": " Let's print." }, { "end": 1867.22, "start": 1864.7, "text": " Let's print the labels." }, { "end": 1869.6200000000001, "start": 1867.22, "text": " So just so we know what they are." }, { "end": 1873.42, "start": 1869.6200000000001, "text": " Cake, horse, car, horse and clock." }, { "end": 1874.42, "start": 1873.42, "text": " Okay." }, { "end": 1876.42, "start": 1874.42, "text": " So I see the clock." }, { "end": 1879.94, "start": 1876.42, "text": " Like this here is clearly a clock." }, { "end": 1889.02, "start": 1879.94, "text": " Then this rectangle on the right side must be something." 
}, { "end": 1893.3400000000001, "start": 1889.02, "text": " Let's put this to read as well." }, { "end": 1894.3400000000001, "start": 1893.3400000000001, "text": " Now that's terrible." }, { "end": 1895.3400000000001, "start": 1894.3400000000001, "text": " White." }, { "end": 1898.3400000000001, "start": 1895.3400000000001, "text": " Back to white." }, { "end": 1900.9, "start": 1898.3400000000001, "text": " How about back to white?" }, { "end": 1903.8600000000001, "start": 1900.9, "text": " Okay, clock." }, { "end": 1911.4199999999998, "start": 1903.86, "text": " We got horse right here and house probably." }, { "end": 1915.86, "start": 1911.4199999999998, "text": " And the entire image is again a cake." }, { "end": 1917.9399999999998, "start": 1915.86, "text": " Yes." }, { "end": 1919.24, "start": 1917.9399999999998, "text": " Okay." }, { "end": 1925.1399999999999, "start": 1919.24, "text": " So as you can see, it is a pretty, pretty good system." }, { "end": 1929.1399999999999, "start": 1925.1399999999999, "text": " But of course, it is only these 90 classes." }, { "end": 1931.84, "start": 1929.1399999999999, "text": " But it's for now it's a it's pretty cool." }, { "end": 1937.58, "start": 1931.84, "text": " And it works pretty well and just the easiness with which you get which which you can get" }, { "end": 1945.4199999999998, "start": 1937.58, "text": " this stuff elephants in Kruger National Park." }, { "end": 1947.98, "start": 1945.4199999999998, "text": " Just the easiness is astonishing." }, { "end": 1956.06, "start": 1947.98, "text": " You can just load it up, kind of have this bit of a notebook and with a bit of like a" }, { "end": 1963.1399999999999, "start": 1956.06, "text": " very few lines of code, you can put something together that detects these bounding boxes." }, { "end": 1964.1399999999999, "start": 1963.1399999999999, "text": " Lots of elephants." }, { "end": 1967.26, "start": 1964.1399999999999, "text": " And remember, we only have the top five elephants right here." }, { "end": 1969.84, "start": 1967.26, "text": " So what happens if we go for more?" }, { "end": 1971.28, "start": 1969.84, "text": " Where is our top K?" }, { "end": 1975.52, "start": 1971.28, "text": " So here we can maybe say the top 15 predictions." }, { "end": 1982.06, "start": 1975.52, "text": " And as always, if we want to make the model to make its own decision, we can simply revert" }, { "end": 1986.78, "start": 1982.06, "text": " back and add back the no class label." }, { "end": 1990, "start": 1986.78, "text": " All right, with that, I hope you like this video." }, { "end": 1995.94, "start": 1990, "text": " If you did, then maybe tell YouTube that you liked it, share it out." }, { "end": 2002.1399999999999, "start": 1995.94, "text": " And I will share this notebook in the description for you to find and play around with." }, { "end": 2003.1399999999999, "start": 2002.1399999999999, "text": " All right, thanks for watching." }, { "end": 2012.3400000000001, "start": 2003.14, "text": " Bye bye." } ]
SY5PvZrJhLE
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
GPT-3: Language Models are Few-Shot Learners (Paper Explained)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "transformers", "attention", "nlp", "natural language processing", "gpt3", "gpt-3", "gpt2", "gpt-2", "openai", "language model", "mlm", "autoregressive", "heads", "bert", "turing", "microsoft", "question answering", "news", "glue", "superglue", "sota", "preplexity", "corpus", "common crawl", "wikipedia", "natural questions", "boolq", "math", "strings", "context", "deep language", "zero shot", "few shot", "training data" ]
#gpt3 #openai #gpt-3 How far can you go with ONLY language modeling? Can a large enough language model perform NLP task out of the box? OpenAI take on these and other questions by training a transformer that is an order of magnitude larger than anything that has ever been built before and the results are astounding. OUTLINE: 0:00 - Intro & Overview 1:20 - Language Models 2:45 - Language Modeling Datasets 3:20 - Model Size 5:35 - Transformer Models 7:25 - Fine Tuning 10:15 - In-Context Learning 17:15 - Start of Experimental Results 19:10 - Question Answering 23:10 - What I think is happening 28:50 - Translation 31:30 - Winograd Schemes 33:00 - Commonsense Reasoning 37:00 - Reading Comprehension 37:30 - SuperGLUE 40:40 - NLI 41:40 - Arithmetic Expressions 48:30 - Word Unscrambling 50:30 - SAT Analogies 52:10 - News Article Generation 58:10 - Made-up Words 1:01:10 - Training Set Contamination 1:03:10 - Task Examples https://arxiv.org/abs/2005.14165 https://github.com/openai/gpt-3 Abstract: Recent work has demonstrated substantial gains on many NLP tasks and benchmarks by pre-training on a large corpus of text followed by fine-tuning on a specific task. While typically task-agnostic in architecture, this method still requires task-specific fine-tuning datasets of thousands or tens of thousands of examples. By contrast, humans can generally perform a new language task from only a few examples or from simple instructions - something which current NLP systems still largely struggle to do. Here we show that scaling up language models greatly improves task-agnostic, few-shot performance, sometimes even reaching competitiveness with prior state-of-the-art fine-tuning approaches. Specifically, we train GPT-3, an autoregressive language model with 175 billion parameters, 10x more than any previous non-sparse language model, and test its performance in the few-shot setting. For all tasks, GPT-3 is applied without any gradient updates or fine-tuning, with tasks and few-shot demonstrations specified purely via text interaction with the model. GPT-3 achieves strong performance on many NLP datasets, including translation, question-answering, and cloze tasks, as well as several tasks that require on-the-fly reasoning or domain adaptation, such as unscrambling words, using a novel word in a sentence, or performing 3-digit arithmetic. At the same time, we also identify some datasets where GPT-3's few-shot learning still struggles, as well as some datasets where GPT-3 faces methodological issues related to training on large web corpora. Finally, we find that GPT-3 can generate samples of news articles which human evaluators have difficulty distinguishing from articles written by humans. We discuss broader societal impacts of this finding and of GPT-3 in general. Authors: Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, Dario Amodei Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Hello there! Today we're looking at Language Models are Few-Shot Learners by Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah and a whole slew of authors from OpenAI. This paper, also called GPT-3, just came out recently. GPT-3 is a language model, and it comes out of a succession of language models from OpenAI. The paper is basically an investigation into what you can do with giant language models. This language model is an order of magnitude larger than any language model anyone has ever built, and it can do some absolutely crazy things. We'll go over the architecture, over what the model does, and over the experimental results. It turns out that if you train a language model on enough data, it is able to solve NLP tasks it has never seen, just out of the box, and we're going to look into this very cool formulation of the problem. As you can see, the paper is 40 pages long without the appendix; it needs its own table of contents, which is crazy, so we're going to skip a fair bit. First of all, what is a language model? For those of you who don't know, I've done a bunch of videos about language models, and specifically about transformer language models; you can find them in my natural language processing playlist. Let's take this sentence as an example: "humans do not require large supervised datasets to learn most language tasks". A language model is a model where, if you cross out a portion at the end, it can tell you what comes next. So you would input the first part and it would tell you that the next word is "datasets". That's basically all a language model does, and once you've trained one, you can generate word after word after word from it, or you can ask it which word is most likely to come next. So a language model is nothing but a model that can generate language in a probabilistic way, and the cool thing about language models is that you can train them on any sort of text data, and that's what they do here. They train a language model on giant amounts of data. Specifically, they use this Common Crawl dataset, which they filter down for quality and which is basically a crawl of the entire internet, together with two books datasets, the WebText dataset, and English Wikipedia. They throw all of this text scraped from the internet together and train a language model on it. The language model is called GPT-3, and they train various sizes of it; we'll get into how it's built in a second, but just compare it to a language model like BERT. Look at how many flops BERT required to train, and note this is a log scale: GPT-3 is several orders of magnitude bigger and is trained for way longer on this text, so naturally it is going to be a lot better at language modeling. You can see right here the sizes of the models they trained. Remember, the previous largest language model, Microsoft's Turing-NLG, had something like 17 billion parameters, so it would be comparable to one of the mid-sized models here, whereas GPT-3 has 175 billion parameters, which is absolutely crazy.
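To make the next-word idea concrete, here is a minimal sketch of querying an autoregressive language model for its next-token distribution. GPT-3 itself is not publicly downloadable, so this uses the small GPT-2 weights through the Hugging Face transformers library as a stand-in; that substitution is my assumption, not something from the paper:

```python
# Minimal next-token prediction with GPT-2 as a stand-in for GPT-3.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

context = "Humans do not require large supervised"
inputs = tokenizer(context, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits            # (1, seq_len, vocab_size)

next_token_probs = logits[0, -1].softmax(-1)   # distribution over the next token
top = next_token_probs.topk(5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(idx.item())!r}: {p.item():.3f}")
```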
This is an order of magnitude higher than anything that has ever existed. And if you look at the last GPT, the GPT-2 model (if you remember, I made a video about it when it was deemed "too dangerous to be released"; well, now it has been released), that one clocked in at about 1.5 billion parameters, so it is comparable to the GPT-3 XL model right here. They train these multiple sizes to estimate the effect of model size, and you can see the largest model has 96 attention layers, each layer has 96 attention heads, and each head is 128-dimensional, and it trains on batches of size 3.2 million. That is the batch size. Absolutely crazy. They train this on a giant distributed cluster that is apparently provided by Microsoft. So how does this model look? It is a transformer model, and the paper doesn't even contain a description of a transformer; it's just assumed you know what that is. I have made several videos on transformer models, especially on things like Attention Is All You Need and BERT, but for those who don't know: if I have a transformer and I want to build a language model from it, I input what's called a context, which is the text I already have, and the transformer is just several layers of attention mechanisms. An attention mechanism is basically a way for information to be routed between the different tokens, and as you go up the layers, the information is routed around and the model can make various inferences; at the end, the model is supposed to come up with the next word that goes here. Specifically, in this paper they use subword tokens, as is common in NLP right now, but essentially this is an autoregressive language model. It's not like BERT, it's not bidirectional; it is autoregressive, it goes from left to right and always produces the next word. It is like GPT-2; they even say "we use the same model and architecture as GPT-2", they just have more layers, wider layers, and more data to train on. So how do they train it? As we already said, simply in a language modeling way, just next-word prediction, that's it. It's not even something fancy like BERT's masked language modeling.
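As a sanity check on these shape numbers, here is a rough back-of-envelope parameter count. The 12·L·d² rule of thumb (four d-by-d attention projections plus a feed-forward block with 4x expansion per layer, ignoring embeddings, biases and layer norms) is an approximation I'm bringing in, not something stated in the video:

```python
# Rough transformer parameter count from the shapes quoted in the paper.
n_layers = 96
n_heads = 96
head_dim = 128
d_model = n_heads * head_dim                   # 12288

attn_params = 4 * d_model**2                   # Q, K, V and output projections
ffn_params = 2 * d_model * (4 * d_model)       # two linear layers, 4x expansion
total = n_layers * (attn_params + ffn_params)  # ~12 * L * d^2

print(f"~{total / 1e9:.0f}B parameters")       # ~174B, close to the reported 175B
```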
The interesting part is what you do for the individual tasks. With something like BERT, you would first pre-train: that is the language modeling phase, where you teach BERT about the English language by feeding it a lot of data. Second, you'd have a step called fine-tuning. In fine-tuning, you have the task you're actually interested in; let's say that task is sentiment classification. In sentiment classification you have a sentence, like "blah blah blah", and you want to know whether it carries a positive sentiment (is it a happy sentence?) or a negative one. You would have a database of labeled instances: a bunch of sentences, and for each one you'd know whether it is positive or negative. Then you'd have a smaller test set, and you would take the pre-trained model, train it on this dataset in a supervised machine learning way, and test it on the test set. This is called fine-tuning, and that's what they display here: in fine-tuning, the model is trained via repeated gradient updates using a large corpus of example tasks. The example task right here could be translating to French, so your training database for the translation task would contain pairs like "sea otter => loutre de mer", and you'd actually change your model, you'd do a gradient update. If you're in the NLP world this seems very natural, but they are going to argue in a second that this isn't the only way to teach a model a task. So you take your pre-trained model and fine-tune it on this task, and if you have a different task, say question answering, you have a different dataset with a train and test split, and you take the pre-trained model, fine-tune it on that dataset, and evaluate it on that test set. This leaves you with as many models as you have tasks, and for each one you need a big training dataset in order to perform well. Sometimes we have that, sometimes we don't. What they are interested in is taking the pre-trained model and directly evaluating it on the test data, in a sort of zero-shot fashion (though it is not quite zero-shot, as they will argue). In the true zero-shot setting, you just take the language model you pre-trained and input the following text: what they call a task description, and a prompt. That is the input, and you simply ask the model, as a language model, to predict the next word: just what comes here. What you're counting on is that in the training data the model has seen a structure like this often enough to understand what's going on; that somewhere on the internet there was the structure "translate something to something", followed by a word, so it realizes that the answer goes here as the next word. Basically, what you're asking it is: if you were to find this text on a website, or on Wikipedia, or in any of the books datasets, what would be the next word in that piece of text?
And you kind of hope that, if you've trained a good language model, this is enough to actually produce the French translation here. Before, I said language modeling teaches the model the English language; that's actually not true, since the Common Crawl corpus also contains many foreign languages, so you basically train a general model of the internet. Now they contrast this with what they call one-shot learning. In one-shot learning, you not only have the task description (and this is just a string; you don't specifically tell the model that this is now a translation task, you simply input it as a string) and the prompt, you also have one example. And the example is where they say it's not exactly zero-shot: the example comes from the training dataset of the task you're interested in, but the important part is that you never train on it. You never do an explicit gradient update on that example; you simply put it in the context. So you input the string "Translate English to French:", new line, "sea otter => loutre de mer", new line, "cheese =>", into the model as a language model, and you ask it what the next word is. I hope this is clear. This is what they call one-shot generalization, and by one-shot they simply mean you provide this one example in the context of the model as a language model. The advantage is immediately clear: you only have to train one model, and at inference time you can just put the task description and the sort-of training data for the task into its evaluation context, together with the task itself. If the model really does what they claim, it would understand the prompt, understand what it means to translate from English to French, look at the example and say: ah, that's what you want me to do. Okay. Then it would generalize to this input: from the task description and the example, I get what you want; the next word here is the French word for cheese (which I don't remember right now). The way the language model is going to interpret this is slightly different, as we said before: it is going to ask, if I were to find the following text on a website somewhere ("Translate English to French", new line, "sea otter goes to loutre de mer", new line, "cheese goes to..."), what would be the next word on that website? That's what the model sees. You have to differentiate between what the human wants and what the model sees; the model is just a language model that determines, if I were to see this text somewhere, what would be the most likely next word. So you have to phrase your tasks in a way that makes sense in that framing. They also have the few-shot setting, where you provide not just one example but a bunch of examples in the context, to tell the model more about what it should do.
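Assembling such a context is just string concatenation. Here is a small sketch of how these zero-, one- and few-shot prompts are typically built; the function name and the "=>" separator are my own illustration of the format the paper's figures show:

```python
# Build a zero/one/few-shot prompt as a plain string.
def build_prompt(task_description, examples, query):
    """examples: (input, output) pairs placed in the context, never trained on."""
    lines = [task_description]
    for src, tgt in examples:
        lines.append(f"{src} => {tgt}")
    lines.append(f"{query} =>")              # the model continues from here
    return "\n".join(lines)

prompt = build_prompt(
    "Translate English to French:",
    [("sea otter", "loutre de mer")],        # one-shot: a single demonstration
    "cheese",
)
print(prompt)
# Translate English to French:
# sea otter => loutre de mer
# cheese =>
```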
Now, this doesn't only work in a free mode where you just ask what the next word is. With the exact same model, you can also give it a couple of possibilities, a few candidate words (I believe I typed things like "fromage" and "hôtel" here), and you can basically restrict it to only produce one of those things. In translation this might not be the way to go, but if you have yes/no questions, you can restrict it to those two answers. In a lot of NLP tasks you have answer options given for a question, and you can restrict the model accordingly; you always have to go with the task at hand. But this is in essence what the model does, and this, I think, is one of the core ideas of this paper, if you take anything from it: there is no new architecture right here, there is no new wisdom in training. They train in a standard way, in a standard language modeling fashion, a standard transformer architecture; this one just happens to be ginormous. The usual approach would fine-tune and end up with one model per task, and you need a big dataset per task. But with such a large language model, it basically already knows how to do these tasks, as long as we formulate them in a language-model way, and they will show that this works surprisingly well throughout the paper. Now we get into the experimental results, first of all on language modeling. As you can see, as you go up with the parameters (the more yellow curves), your validation loss goes down and down and down, and I believe this is a log scale as well, so this is the log probability, the perplexity. This follows a trend, a power law: as you scale up the model and the compute the model gets (and for these big language models we know you have to scale up model size, compute time, and dataset size in the same fashion for them to make these gains), the model gets better and better. The question, of course, is how far we can go with this, but for now it seems to hold quite well that you can make improvements on language modeling just by scaling up your model. So where do we go from here? They formulate the individual tasks. They have pure language modeling tasks, like "Alice was friends with Bob. Alice went to visit her friend, ___": what's the next word? Bob. Or "George bought some baseball equipment, a ball, a glove, and a ___", and the next word should be "bat". But let's go into the tasks, and one of them is question answering. In question answering, you get either just a pure question, or a context and a question, and they test the setting where you just get the question, something like "who is the Queen of England?", and the model has to either produce the result directly or choose, as a language model, which of a bunch of given answers is most likely. And as you can see, as you scale up the language model, the zero-shot, one-shot and few-shot predictions improve.
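Restricting the model to a fixed answer set usually means scoring each candidate completion under the language model and taking the most likely one. Here is a sketch of that idea, again with GPT-2 as a publicly available stand-in; the example candidates preview the COPA task discussed later in the video:

```python
# Score fixed answer candidates by their log-likelihood under a causal LM.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def completion_logprob(context, completion):
    """Sum of log-probs of the completion tokens given the context.
    The completion should start with a space so BPE tokenization stays clean."""
    ctx_ids = tokenizer.encode(context)
    full_ids = tokenizer.encode(context + completion)
    with torch.no_grad():
        logits = model(torch.tensor([full_ids])).logits.log_softmax(-1)
    # the logit at position p-1 predicts the token at position p
    return sum(logits[0, p - 1, full_ids[p]].item()
               for p in range(len(ctx_ids), len(full_ids)))

context = "The man broke his toe because he"
candidates = [" dropped a hammer on his foot", " got a hole in his sock"]
print(max(candidates, key=lambda c: completion_logprob(context, c)))
```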
In the few-shot setting, you give 64 different examples from the training set in the context. So your context is going to look something like this (they have examples at the bottom of the paper; I haven't looked at the QA task specifically, but it is going to be something like this): a task description, "answer the following questions", then your examples; in zero-shot that's zero of them, in one-shot it's one. So for example: "Q: who climbed Everest first? A: Hillary" (I think it was Hillary, I don't remember), then "Q: how tall is the Empire State Building? A:" and some number, and at the end the actual question, "who is the Queen of England?", and then you ask the model to predict the next word right here. And you do this in a closed-book setting, which means the model has no access to Wikipedia or anything like that; usually these systems can go and query Wikipedia, but this system doesn't. You just want to know what the model has learned about the world by simply absorbing giant amounts of text. If the fact that the Queen of England is Elizabeth II is present somewhere in the training data, it should complete this correctly, and it performs surprisingly well. As you can see here, it manages to outperform a fine-tuned state-of-the-art model, a model that has been built for question answering and fine-tuned on it, simply by having seen a lot of language. These are the results on the open-domain QA tasks, and you see that the few-shot setting outperforms this open-domain system, where open-domain means the model can actually go and look at some Wikipedia page. So this is pretty cool, but there are other tasks, like Natural Questions, where it underperforms compared to the open-domain systems, and they say this is mainly because Natural Questions is very much about factual, Wikipedia-specific knowledge, and the model is apparently not as good at that. But it's still impressive that the model can do this out of the box. Okay, before we go further into the experiments, I want to state a hypothesis; it's not an uncommon one. These giant language models are just transformers, layer after layer after layer, with their connections in here, and what I think is happening is that they are simply storing the training data in these connections. Usually you'd think of storing training data in some explicit module, maybe a database module in the neural network that the model learns to query; but ultimately, if you train a neural network, what you have is data, and you train a function with parameters on that data, and what you're doing is distilling the data into those parameters. You hope to learn some regularities from it, but ultimately the information about your training data influences, or determines, your final parameters. Now, I can imagine the following: if you have such a giant neural network with so many weights, like
175 billion weights, then you can actually store the training data pretty efficiently in that model. And when you ask this model to do something, these people sort of argue that it has learned the language tasks, learned to reason over language, and so on; what I think is happening, much more, is that it simply goes to the training data, which it has stored in its weights, pulls out the five to ten to fifty training examples that are most relevant to what you put in, and interpolates. You go to the training data, pull out a bunch of training samples that are relevant to the current context, and integrate those into the next word that's going to come out. And I think if you look at this paper in these terms: you always input a context, and the context is split into a task description, then k different examples, and then a prompt; the prompt is like half of one of these example boxes, the "blah blah blah turns to" part without the right side. I think what the model does is take all of this, go to its own training data stored in its weights, filter the training data, basically take out the things that pattern-match, that regex-match in a fuzzy way, to this context, and then interpolate those training examples to come up with the answer. I don't think there is reasoning happening here, and if you go through the paper with this view, a lot of things actually make sense. I also think this says something about explainable machine learning. People often think that if I input an image into a classifier and out comes a class, say "car", then explainability should mean determining which part of the image (the wheels? the hood?) was responsible for that determination. What I think we should do, especially in these language models, is this: when the model predicts the next word, we should have a method of determining which of the training examples the model used to interpolate, given this context. If you find that, for example, this weight and this weight and this weight were very responsible for making this prediction happen, I'm pretty sure you can, somehow during training, build an index of which five training examples had the most influence on that particular weight, or on this combination of weights, and then you can go backwards and say: you made this decision right here, model; please tell me which of the training data samples were responsible for it. Actually, I'm pretty sure something like this already exists; I'm never the first one to think of these things (though if I am, cite me, cite the channel). It is an interesting way to think about this model, and an interesting way to think about what explainability would even mean in a model like this.
And my argument is: since it interpolates the training data, the interpretability should come from which training samples it interpolates. Okay, let's go to translation. In translation, as we said, they simply input the task, then the few examples, and then the prompt. What you can see right here is that, again, as the model goes up in parameters, the performance generally increases, and you can also see that the performance is pretty good whenever the target language is English; this makes sense, because a large part of the corpus they train on is English, so as an English language model it should be pretty good at producing English, and it's not as good when asked to go in the other direction. What you also see is that it doesn't make much of a difference which language you translate from, as long as you go to English; but it very much matters which language you go to, if you start from English. Sometimes they are on par with state-of-the-art unsupervised methods, and other times they outperform them. These methods are unsupervised in the sense that they don't have a supervised training set going, say, from English to French, but they are built with translation in mind, so they are task-specific even without a supervised training set. This model, by contrast, just learns whatever it learns from language modeling, and just because it has seen some websites where both languages appear, it can now translate reasonably well. The results here are a bit noisy, but it is still interesting to see that it sometimes even gets close to the supervised methods, though the authors say they are not familiar with the translation literature and are not sure these numbers are meaningful. The next thing is the Winograd schemas, a classic NLP task that involves determining which word a pronoun refers to, when the pronoun is grammatically ambiguous but semantically unambiguous to a human. These are human-produced sentences where a pronoun could refer to multiple things. You can see right here that this model will outperform a fine-tuned BERT-large, but will not outperform a fine-tuned RoBERTa-large, so it is at least competing with the fine-tuned models that were made specifically for that task. Again, this is pretty interesting, and you also see that for the larger models it starts to make a difference whether you give zero, one, or more examples. Okay, so now we get into the more interesting things: this physical question answering, which involves a bit of common-sense reasoning. There are the ARC questions, multiple-choice science questions collected from third to ninth grade exams, and PhysicalQA, which asks common-sense questions about how the physical world works and is intended as a probe of grounded understanding of the world. So it has
questions, as I understand it, like: if I drop a ball, will it fall on the ground, or where will it fall, something like this. And they say they can outperform a fine-tuned state-of-the-art model on this, if they just go high enough in model size. You can also see that there isn't much of a difference between the zero-, one- and few-shot settings; zero-shot is even higher than one-shot here, so this is probably just noise. But then you find out that they have an asterisk here, and this means that this is a potentially contaminated dataset. They found a significant overlap between this dataset and their training dataset, and they only realized it too late, because there was a bug in their deduplication code, and then they couldn't change it anymore, because this model is so large that they couldn't restart the training; they had already spent so much money and energy on it. This is crazy. I think these language models are getting so large that we should think of building them more like we built the International Space Station: a project where humanity sort of collaborates in one big effort, and you build it once, and whatever you have, you have. So these good numbers here are, or could be, influenced by this contamination, and I think that's what's happening right here, even though they will make the case that this contamination isn't really an issue. I can give you reasons to believe it may actually be one: on the other datasets, the fine-tuned state-of-the-art models outperform GPT-3 quite a bit, and the fact that providing one or many demonstrations doesn't actually change much kind of tells me that the model sort of already knows what the answer is and doesn't really need demonstrations; demonstrations don't help if you have the training data, or the test data, stored. So they have a few other tasks right here where they perform well, but not particularly better than the state of the art, and they perform especially poorly on reading comprehension; sorry, that is CoQA. In reading comprehension you have abstractive, multiple-choice and span-based answer formats, in both dialogue and single-question settings: you basically have to read a piece of text and then answer a question about it. Now, this is something where I think you cannot really interpolate the training data super well, so you can't really just pattern-match and interpolate, because you have to do actual reasoning, and I think that's why the model performs poorly here. They also measure on SuperGLUE, which is an NLP benchmark, and here too it doesn't outperform a fine-tuned state-of-the-art model on these tasks, but it does slightly outperform a fine-tuned BERT model; the BERT model is fine-tuned on these things, whereas GPT-3 isn't. But notice the tasks on which it does well and on which it doesn't, compared to the state-of-the-art model. For example, on BoolQ it doesn't do particularly well: the state of the art is 91 and GPT-3 only gets 76, which is quite a large difference. I actually have the
SuperGLUE benchmark open here, and you can see this BoolQ task. An example would be: "is France the same time zone as the UK?", and then there is a passage, and you need to reason from this passage about whether the answer is true or false. This is very much not language modeling; this is reasoning, and that's why the model is doing poorly here. Whereas on another task, this COPA right here, the model is doing almost as well as a fine-tuned state-of-the-art model, and I have to stress: this model has never actually learned this task in a supervised way, it's simply a language model. I have the COPA task right here, and these are the examples. One example is: the premise is "the man broke his toe. What was the cause of this?", and you have two candidates: "he got a hole in his sock" or "he dropped a hammer on his foot". The way you phrase this for the model is: you give the premise as the context, and then you simply ask the model, since it is a language model, which of the two continuations is more probable; and of course it is going to select the thing that happened more often in the training data. And "broke his toe" being caused by a hammer is entirely conceivable for a language model to know: with enough training data, it can sort of pull from the training data examples where "hammer", "foot" and "broke toe" appear together a bunch of times, while "hole in sock" would be rather unrelated. So as long as these questions are not constructed adversarially, specifically so that a language model can't solve them, the model is going to perform pretty well. So it is very interesting to see that if you view this model as interpolating the training data, it suddenly makes sense where it's good and where it isn't. That was SuperGLUE; and it performs particularly poorly on NLI, which is the ability to understand the relationship between two sentences: the model classifies whether the second sentence logically follows from the first, contradicts it, or is neutral. So the reasoning part of this model is just not there; it is simply recalling the training data and doing language modeling. Now they say: we can test this with synthetic and qualitative tasks. So they invent their own tasks, which is now pretty easy: since you don't have to fine-tune the model, you don't have to generate an actual training set for a task, you can focus on generating a test set. They do something like arithmetic; they say, okay, can we come up with a bunch of arithmetic tasks, for example two-digit addition. What the model sees is simply a string as context. If this is one-shot learning, you would input "add the following numbers" as a string, then a new line, then one example, like "what is 11 plus 12?" together with the answer, 23, and then the prompt: "what is 48 plus 76?", and then you ask what the next word, the next string token, is.
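If you want to probe this yourself, generating such arithmetic prompts is trivial. Here is a small sketch of a test harness (my own construction, not the paper's evaluation code) that builds few-shot addition prompts and the expected string answer:

```python
# Build few-shot two-digit addition prompts and the expected string answer.
import random

def addition_prompt(k_shots, rng):
    """Few-shot context of k solved additions plus one unsolved query."""
    lines = ["Add the following numbers."]
    for _ in range(k_shots):
        a, b = rng.randint(10, 99), rng.randint(10, 99)
        lines.append(f"Q: What is {a} plus {b}? A: {a + b}")
    a, b = rng.randint(10, 99), rng.randint(10, 99)
    lines.append(f"Q: What is {a} plus {b}? A:")
    return "\n".join(lines), str(a + b)

rng = random.Random(0)
prompt, answer = addition_prompt(k_shots=3, rng=rng)
print(prompt)                 # feed this to the language model...
print("expected:", answer)    # ...and compare its next tokens to this string
```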
Now, the inference here is this: to the model, these are all strings; the model basically has no clue how to do math, these numbers are just string tokens to it. So if the model can do this, it must have learned some kind of reasoning ability; it must have learned to perform some logic inside. They go into two-digit addition, three-digit addition, four-digit addition, five-digit addition, and even multiplication and subtraction, and the results are right here. As you can see, the lower-parameter models perform pretty poorly, but as you go up in parameters, the big model performs really well in the two-digit range, with accuracies of 80, 90, 100 percent, and really well on three-digit addition and subtraction too. But as soon as you get to four digits, or to two-digit multiplication and so on, the performance drops. They say that's because multiplication is harder, and it is, computationally; but I disagree that the two-digit addition shows the model has learned something about the world, because I think it simply recalls the training data. Look at the two-digit addition: with zero-shot you already get 76 percent, but with one-shot you get 99 percent, and with few-shot you get 100 percent. If you interpret this model as filtering the training data to pattern-match, then it makes a lot of sense that the examples give you such an improvement: if you have a bunch of examples of the form "48 plus 72 equals 120", more and more of them, all of a sudden this looks like a table. Now, they say: we made sure that these particular strings were not in our training data; these strings never appeared. But I have an issue with this deduplication, because what can appear is a table, and in tables you often have columns where another column is the sum of the columns on the left. If you are asked to pattern-match, you'll naturally find websites where the columns exactly match your examples, and then you'll find the sum right there; if you filter for websites that match the scheme in your examples, you'll find all the websites with a table on them where one column is the addition of the others. And I can actually do that. I went and typed in a bunch of these things, "98 plus 45 is 143", "18 plus 55 is 73", and (Google makes it hard because they localize and personalize everything, but you can still find it) what you're going to find is tables and tables and tables and tables. I then went to DuckDuckGo, which doesn't really personalize results to me, and the first thing I find when I type in just these numbers is "math skip counting missing sequence number", a website where the answers are basically already given. Look at that: all the model has to do is recall this particular training example, given the samples it already has in its context, and it will basically be able, in quotes, to "perform addition". Here is financial data, and another one where you have to subtract stuff. So I'm pretty sure all the model is doing here is interpolating the training data, and that is also why it performs worse if you up the digits: longer numbers are simply less frequent in the training data.
Multiplication is, first of all, less frequent, and second of all, it results in larger numbers, which are also less frequent. So this explains a lot, and that is why I have my issues with people saying this shows some reasoning; I don't think it does. The same thing holds for the word scrambling tasks. By the way, they check the overlap and say that only 17 matches, 0.8 percent, of the arithmetic problems are in their training data; I'd say no, you haven't searched well enough, and the rest of their deduplication is also pretty weak, because they just look for 13-gram overlaps between the training data and the test data. In the word scrambling tasks, they scramble words and ask the model to unscramble them. For example, the word "inevitably" is scrambled; they give anagrams with cycled letters, they do random insertions into the word, or they reverse the word. And you can see right here that, again, as the model size goes up, this improves, and they again say this might indicate some kind of reasoning. I think it is just learning the language: learning that letters make up a word, and that letters correspond to, or are associated with, word pieces, and it always unscrambles to English. Here is a thought: if you unscramble words, you always end up with an English word, so all the model has to do is check which word has the highest overlap in word pieces. A good check would be to ask it to scramble words instead, and count it correct for any valid scrambling of the word. Instead of going from scrambled to unscrambled, which you can solve simply by knowing the English language, with basically no clue what the task is, you could ask it to go from the word to a scrambling, given a few examples; then it would really need to understand what the task is, that it is supposed to scramble a word, and it would need to learn that from the examples in its context. As far as I can see, they don't do that, and again I think it's recalling the training data; I sketch what both directions would look like right after this section. Then there are the SAT analogies, from the SAT, the test that US high-schoolers take to get into college. A typical example (and I find this pretty hilarious) is the following: "audacious is to boldness as: sanctimonious is to hypocrisy, anonymous is to identity, remorseful is to misdeed, deleterious is to result, or impressionable is to temptation". I'm not a native speaker, but this is a hard question, and you have to remember these high-schoolers are stressed; this is very much a time-based test, so you need to make decisions quickly, while the model is basically able to sift through its entire training data in the time it takes the GPUs to perform inference. But it's still funny that GPT-3 achieves 65 percent in the few-shot setting, 59 percent in the one-shot setting and 53 percent in the zero-shot setting, whereas the average score among college applicants was 57 percent. So it outperforms the average college applicant, which is pretty funny; but then again, you would expect a language model to have a pretty good grasp of these kinds of synonyms and relations between words, because these are just statistical associations between words.
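Coming back to the word scrambling for a moment: the corruption types are easy to reproduce, and so is the inverse probe suggested above. A sketch (my own construction, not the paper's code; the paper's random insertion uses punctuation and space characters, which I mimic here):

```python
# Generate the three word-corruption tasks: cycled letters, random insertion, reversal.
import random

def cycled(word, rng):
    k = rng.randrange(1, len(word))
    return word[k:] + word[:k]              # cycle the letters around

def random_insertion(word, rng):
    out = []
    for ch in word:
        out.append(ch)
        if rng.random() < 0.5:
            out.append(rng.choice(" .,!"))  # random punctuation/space between letters
    return "".join(out)

def reversed_word(word):
    return word[::-1]

rng = random.Random(7)
w = "inevitably"
print(cycled(w, rng), "|", random_insertion(w, rng), "|", reversed_word(w))

# The inverse probe suggested above: ask the model to *produce* a scrambling,
# so that knowing English alone no longer solves the task.
prompt = f"Scramble the word.\nhello => {reversed_word('hello')}\n{w} =>"
print(prompt)
```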
So yes, I found this to be pretty funny. And the last thing, the one everyone is freaking out over, is the news article generation, where they basically give the model the beginning of a news article and then let humans decide whether the article was written by a machine or by a human. They say: "mean human accuracy at detecting articles that were produced by the 175 billion parameter model was barely above chance at 52 percent". Human ability to detect model-generated text appears to decrease as model size increases; there appears to be a trend towards chance accuracy with model size, and human detection of GPT-3 output is close to chance. So what they do is give the model the title and the subtitle of an article, and then the word "article", and the model is supposed to complete the rest of the article. You can also do this in a few-shot setting, such that the model, given a few examples, basically knows it is supposed to produce a news article. Now, there are two ways you can think of this. First way: the model has learned the language so well that it writes coherent text; it has learned to reason, to keep context, and so on. Second way: the model sees this input, sees the few-shot examples it has before in the context, filters its training data down to news articles, filters it even more, to just the news articles that pertain to the topics or words that appear here, and then interpolates those few training examples to produce this output. Now, they argue the second isn't really possible, because they have actually checked that this news article is not in the training data. But I have simply taken a random substring of the generated article, "voted to strengthen the ban on the ordination of", and put it into Google, and there you go: I find a book with "voted to strengthen prohibitions to ban LGBTQ people from being ordained as ministers". It's not the same article, but it's talking about the same incident the generated article talks about, and it is using the same language; presumably someone read the original article and, since copy-pasting would not be cool, rewrote it in their own words, keeping largely the same thing. The Associated Press also has a different article, with a different title, about the same thing, and also with the same language: "voted to strengthen the faith's divisive bans on same-sex marriage and ordination of LGBT clergy". So the argument "this article wasn't in the training data" is just not something I buy in this case. The article as such wasn't there, but many articles about this topic were, and I think the model just interpolates these. They say this was the hardest article for the humans to decide on, and this other one here was the easiest: the one titled something like "Star's tux promise draws Megyn Kelly's sarcasm".
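His Google experiment is essentially a fuzzy substring search, while the paper's contamination check is strict 13-gram matching. Here is a toy sketch of such a check over an in-memory corpus (not OpenAI's pipeline), which also shows why paraphrased near-duplicates slip through it:

```python
# Toy 13-gram contamination check, as criticized above.
def ngrams(text, n=13):
    toks = text.lower().split()
    return {tuple(toks[i:i + n]) for i in range(len(toks) - n + 1)}

def contaminated(test_text, corpus_texts, n=13):
    corpus_grams = set()
    for doc in corpus_texts:
        corpus_grams |= ngrams(doc, n)
    return bool(ngrams(test_text, n) & corpus_grams)

corpus = ["the assembly voted to strengthen the ban on the ordination of lgbtq "
          "clergy and to reaffirm its bans on same sex marriage"]
test = ("the delegates voted to strengthen the ban on the ordination of lgbtq "
        "ministers at the annual national meeting yesterday")

# A ten-token span is shared verbatim, but no 13 consecutive tokens line up,
# so the strict 13-gram filter reports the pair as clean.
print(contaminated(test, corpus))   # False
```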
Now they say one article was the hardest for the humans to judge and one was the easiest. The easiest, titled something like "Star's Tux Promise Draws Megyn Kelly's Sarcasm", says that a year ago Joaquin Phoenix made headlines when he appeared on the red carpet at the Golden Globes wearing a tuxedo with a paper bag over his head that read "I am a shape-shifter". You would guess that Joaquin Phoenix would do something like this. But their human raters were US-based, and the generated text goes on to say that Megyn Kelly was not impressed and that she let him have it on the Tonight Show. The Tonight Show is not Megyn Kelly's show, and US-based people would know something like this and immediately feel that it's wrong. So I think this text was interpolated from a bunch of different news articles about the event, and the interpolation made it so that this person appears on a show she isn't on, and the humans noticed. That doesn't change the fact that the model probably just went to the training data, filtered a bunch of articles about these words, and mashed them together. It is a good language model, it's very good at grammar, so it can interpolate different passages of text. And I feel the really useful application of this will be as a fuzzy search engine: I could, for example, input my machine learning research ideas, and the output would be sort of an abstract of a paper that is a mash-up of other papers on the same topic. You can think of many applications, but I don't think we have built something really intelligent here.

What this is, though, is pretty cool. They give examples like the one where they make up a word and ask the model to use the word in a sentence: to "screeg" something is to swing a sword at it; an example of a sentence that uses the word screeg is... And of course what the model is going to do is take this, filter the training data for all the instances where this kind of construction appears ("an example of a sentence that uses the word X is", which is mostly dictionaries), and while it cannot know that word, it can interpolate from all this data. The cool thing is that it actually conjugates the made-up word: "we screeghed at each other for several minutes and then we went outside and ate ice cream". You can see how this comes to be, but I think it would really be fun to have a model that tells us which training data samples were used here.

It can also correct English grammar, which is pretty obvious, though again the input is always framed as poor English, good English, poor English, good English, and so on, ending with a final poor English input, and the good English output is what the model is asked to produce (see the sketch below). By the way, I'm fairly sure this final output shouldn't be printed in boldface in the paper: boldface marks what is given to the model, and the model is only asked to produce the completion, otherwise I'd actually be impressed. They themselves write that nothing task-specific is provided aside from the examples as conditioning and the "poor English input / good English output" framing, so the final good English output should not be bold. Authors, if you're listening, this should not be bold, thank you. But again, the target is always the good English, and if the model really understood the task it should also be able to do the inverse: produce something poor from something good, because then you would eliminate the possibility that it's just a good English language model.
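For concreteness, here is roughly what that few-shot framing looks like as a raw prompt string; the example sentences are illustrative placeholders, not copied from the paper. Everything up to the final "Good English output:" marker is context, and the model is only asked to continue from there.

```python
# Illustrative few-shot prompt for the grammar-correction framing.
examples = [
    ("I eated the purple berries.", "I ate the purple berries."),
    ("He go to school every days.", "He goes to school every day."),
]
prompt = ""
for poor, good in examples:
    prompt += f"Poor English input: {poor}\nGood English output: {good}\n"
# The final line is left open; the language model's continuation after
# the colon is taken as the corrected sentence.
prompt += "Poor English input: The weathers is nice today.\nGood English output:"
print(prompt)
```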
As it stands, the model can produce the corrected sentence without having a clue what the task is: you condition on the poor-English input, and the model outputs nearly the same sentence, which is likely simply because that sentence is already almost there, just rendered in better English, because it is a good English language model.

They do measure this overfitting, the degree to which their test data appears in the Common Crawl corpus. They give a conservative bound on how many percent of each data set is clean, and then measure how much the performance moves up or down if you evaluate only on the clean portion. But again, their deduplication is weak: they do n-gram deduplication, whereas I think you should do much more fuzzy, meaning-level deduplication, at least if you then want to argue that the model has learned to reason. If you simply want to argue that the model is a good language model, fine. Also, I would expect contamination to be all-or-nothing for a given test set: the Natural Questions data set is constructed from Wikipedia pages, so if the Wikipedia page is in the corpus, either the entire thing is clean or none of it is; likewise, if the Winograd data set somehow leaked into the Common Crawl corpus, either all of it is clean or none of it is. I just have a problem with the fact that there are so many in-between values right here. So no, I'm not convinced by this deduplication. I still think this is a cool thing, but I think it's mostly a training-data filter and interpolator rather than something that actually reasons.

They go through some of the limitations, and the broader impact statement is like five pages long; the gist is that bad people could take the model and do bad things, and that's pretty much it. What I do appreciate is that at the bottom they have basically all the results, but also a lot of task descriptions, showing how they framed each task, with more outputs on their website. You can see for each task what the model sees and what it is asked to produce. For SQuAD, for example, you have the context and the question (so the context is actually in there, which I didn't know) and the model is asked to complete the answer. So you can look at how the model sees the tasks and maybe evaluate for yourself how difficult you think they are.

All right, I hope this was informative. It is a long paper, and therefore a long video. If you're still here and haven't subscribed yet, maybe do so if you liked this and want more. Leave a like, tell me in the comments what you think, whether this is actually AGI or not, and I'll see you next time. Bye bye.
[ { "end": 4.6000000000000005, "start": 0, "text": " Hello there! Today we're looking at language models are few-shot learners by" }, { "end": 12.040000000000001, "start": 4.6000000000000005, "text": " Tom B Brown, Benjamin Mann, Nick Ryder and Melanie Sabaya and a whole slew of" }, { "end": 19.44, "start": 12.040000000000001, "text": " authors from OpenAI. This paper also called GPT-3 just came out recently." }, { "end": 26.64, "start": 19.44, "text": " GPT-3 is a language model and it comes out of a succession of" }, { "end": 30.96, "start": 26.64, "text": " language models of OpenAI. This paper is basically an investigation into what" }, { "end": 35.52, "start": 30.96, "text": " you can do with giant language models. Now this language model is an order of" }, { "end": 41.04, "start": 35.52, "text": " magnitude larger than anyone has ever built a language model and it can do" }, { "end": 46.519999999999996, "start": 41.04, "text": " some absolutely crazy things. So we'll basically go over the architecture, over" }, { "end": 51.480000000000004, "start": 46.519999999999996, "text": " what the model does and over the experimental results. It turns out that" }, { "end": 58.31999999999999, "start": 51.48, "text": " if you train a language model on enough data it is able to solve NLP tasks that" }, { "end": 64.08, "start": 58.31999999999999, "text": " it has never seen just out of the box. We're going to look into this very" }, { "end": 69.02, "start": 64.08, "text": " cool kind of formulation of the problem. As you can see here the paper is 40" }, { "end": 74.24, "start": 69.02, "text": " pages long without the appendix. It needs its own table of contents which is crazy." }, { "end": 79.4, "start": 74.24, "text": " So we're going to skip a fair bit of things. First of all what is a" }, { "end": 84.5, "start": 79.4, "text": " language model? For those of you who don't know I've done a bunch of videos and you" }, { "end": 88.52000000000001, "start": 84.5, "text": " can see those in my natural language processing playlist about language" }, { "end": 93.2, "start": 88.52000000000001, "text": " models and specifically about transformer language models. So a language model" }, { "end": 98.24000000000001, "start": 93.2, "text": " let's just take an example this sentence right here. Just the sentence as such" }, { "end": 103.2, "start": 98.24000000000001, "text": " like third humans do not require large supervised" }, { "end": 107.88000000000001, "start": 103.2, "text": " datasets to learn most language tasks. This is an English sentence and a" }, { "end": 113.03999999999999, "start": 107.88, "text": " language model would be a model that if you cross out a portion from the end" }, { "end": 119.72, "start": 113.03999999999999, "text": " here like this right here it would be able to tell you what comes next. So in" }, { "end": 125.32, "start": 119.72, "text": " a language model you would input this part right here and it will tell you the" }, { "end": 130.64, "start": 125.32, "text": " next word is datasets. So that's basically all the language model does and" }, { "end": 135.56, "start": 130.64, "text": " once you've trained one you can basically generate word after word after" }, { "end": 141.08, "start": 135.56, "text": " word from it or you can ask it a question like which word is most likely" }, { "end": 146.44, "start": 141.08, "text": " to come next or more likely. 
So a language model is nothing but a model" }, { "end": 151.2, "start": 146.44, "text": " that can kind of generate language in a probabilistic way and the cool thing" }, { "end": 156.32, "start": 151.2, "text": " about language models is that you can train it on any sort of text data and" }, { "end": 162.8, "start": 156.32, "text": " that's what they do here. So they train a language model on giant amounts of data" }, { "end": 168.08, "start": 162.8, "text": " specifically right here they go into the datasets they use. They use this" }, { "end": 176.48000000000002, "start": 168.08, "text": " common crawl dataset which they filter down for quality and this is" }, { "end": 182.92000000000002, "start": 176.48000000000002, "text": " basically a crawl of the entire internet if you will together with these books" }, { "end": 188.84, "start": 182.92000000000002, "text": " datasets and the web text dataset and the Wikipedia dataset. So they throw all" }, { "end": 192.24, "start": 188.84, "text": " of this text that they scrape from the internet together and then train a" }, { "end": 201.08, "start": 192.24, "text": " language model on that. Now the language model right here is called" }, { "end": 206.4, "start": 201.08, "text": " GPT-3 and they train various sizes of it and we'll get into how it's built in a" }, { "end": 213.32000000000002, "start": 206.4, "text": " second but just compare this to a language model like BERT. BERT required" }, { "end": 220.24, "start": 213.32000000000002, "text": " this much flops to train and this is a log scale so this is right here" }, { "end": 225.84, "start": 220.24, "text": " this is several orders of magnitude larger and bigger model and is trained" }, { "end": 231.12, "start": 225.84, "text": " for way longer on this text so naturally it is going to be a lot better at" }, { "end": 237.72, "start": 231.12, "text": " language modeling. You can see right here the size of these models that they" }, { "end": 244.60000000000002, "start": 237.72, "text": " trained on. Remember the previous largest language model the Turing NLG of" }, { "end": 248.60000000000002, "start": 244.60000000000002, "text": " Microsoft had something like 17 billion parameters so it would be comparable to" }, { "end": 257.24, "start": 248.6, "text": " this right here whereas GPT-3 has 175 billion parameters which this is" }, { "end": 261.56, "start": 257.24, "text": " absolutely crazy. This is an order of magnitude higher than anything that's" }, { "end": 268.24, "start": 261.56, "text": " ever existed and if you look at the last GPT the GPT-2 model that if you remember" }, { "end": 272.96, "start": 268.24, "text": " I've made a video about it is too dangerous to be released well now it has" }, { "end": 278.48, "start": 272.96, "text": " been released but was too dangerous to be released it clocked in at about 1.5" }, { "end": 285.64000000000004, "start": 278.48, "text": " billion parameters so compared to this GPT-3 XL model right here they train" }, { "end": 289.6, "start": 285.64000000000004, "text": " these multiple models to basically estimate the effect of the model size" }, { "end": 297.16, "start": 289.6, "text": " and you can see here the largest model has 96 attention layers. Each layer" }, { "end": 306.72, "start": 297.16, "text": " has 96 attention heads and each head is 128 dimensional and it trains on batches" }, { "end": 312.52000000000004, "start": 306.72, "text": " of size 3.2 million. 
This is the batch size absolutely crazy so they train" }, { "end": 319.16, "start": 312.52000000000004, "text": " this on a giant distributed cluster that apparently is provided by Microsoft and" }, { "end": 326.32000000000005, "start": 319.16, "text": " yes crazy crazy things. So how does this model look? This model is a transformer" }, { "end": 331.04, "start": 326.32000000000005, "text": " model and right here we don't even have like a description of a transformer" }, { "end": 335.72, "start": 331.04, "text": " model it's just assumed you know what that is. I have made several videos on" }, { "end": 340.08000000000004, "start": 335.72, "text": " transformer models and especially things like attention is all you need or BERT" }, { "end": 344.92, "start": 340.08000000000004, "text": " or something like this but for those who don't know if I have a transformer" }, { "end": 349.44000000000005, "start": 344.92, "text": " model and I want to build a language model from it let's take this sentence" }, { "end": 355.56, "start": 349.44000000000005, "text": " right here I would input a what's called a context which is the thing I already" }, { "end": 360.20000000000005, "start": 355.56, "text": " have right I would input that into a transformer model and a transformer" }, { "end": 365.12, "start": 360.20000000000005, "text": " model is just several layers of attention mechanism. Now an attention" }, { "end": 369.6, "start": 365.12, "text": " mechanism is basically a way where information is routed in between the" }, { "end": 375.8, "start": 369.6, "text": " different tokens right here and as it goes up the layer basically the the" }, { "end": 379.96, "start": 375.8, "text": " information is routed around and the model can make various inferences and at" }, { "end": 386, "start": 379.96, "text": " the end the model is supposed to come up with the next word that you're going to" }, { "end": 392.04, "start": 386, "text": " put here. Specifically in this paper they use sub words like word piece tokens" }, { "end": 397.44, "start": 392.04, "text": " like it is common in NLP right now but essentially this is an autoregressive" }, { "end": 401.52000000000004, "start": 397.44, "text": " language model so it's not like BERT it's not bi-directional it is" }, { "end": 406.24, "start": 401.52000000000004, "text": " autoregressive it goes from left to right always produces the next word it" }, { "end": 412.24, "start": 406.24, "text": " is like GPT-2 they even say this they say we use the same model and" }, { "end": 420.88, "start": 412.24, "text": " architecture as GPT-2 they just have more layers and wider layers and more" }, { "end": 429.48, "start": 420.88, "text": " data to train it on. So how do they train it? Okay that's we already said they" }, { "end": 435.08, "start": 429.48, "text": " train it in simply in simply a language modeling way just next word prediction" }, { "end": 439.71999999999997, "start": 435.08, "text": " that's it okay it so it's not even something fancy like BERT. 
The" }, { "end": 445.52, "start": 439.71999999999997, "text": " interesting part is when you do the now the the single tasks so what you usually" }, { "end": 451.59999999999997, "start": 445.52, "text": " did with something like BERT so with something like BERT you would do first" }, { "end": 457.08, "start": 451.59999999999997, "text": " pre-train so there you would this is the language modeling right here this" }, { "end": 461.52, "start": 457.08, "text": " pre-training phase where you teach BERT about the English language by just" }, { "end": 468.24, "start": 461.52, "text": " feeding it a lot of data and then second you had a step called fine-tuning fine I" }, { "end": 474.59999999999997, "start": 468.24, "text": " can't even write tuning so on the second one you'd have something like the task" }, { "end": 477.6, "start": 474.6, "text": " you're actually interested in and let's say the task you're actually interested" }, { "end": 481.72, "start": 477.6, "text": " in is sentiment classification so in sentiment classification you have like a" }, { "end": 488.40000000000003, "start": 481.72, "text": " sentence like blah blah blah and you want to know is that a positive" }, { "end": 492.8, "start": 488.40000000000003, "text": " sentiment like is a happy sentence or is it a sad sentence and you would have a" }, { "end": 498.6, "start": 492.8, "text": " database of labeled instances of that so in this database you'd have a bunch of" }, { "end": 502.92, "start": 498.6, "text": " sentences and for each one you would know is it good is it is it positive or" }, { "end": 508.56, "start": 502.92, "text": " is it negative and then you'd have like a smaller test set right here and you" }, { "end": 513.2, "start": 508.56, "text": " would you would train you would basically take this pre-trained model" }, { "end": 518.36, "start": 513.2, "text": " train it on this data set in a supervised machine learning way and then" }, { "end": 522.8000000000001, "start": 518.36, "text": " test it on this test set right here this is called fine-tuning that's what they" }, { "end": 530, "start": 522.8000000000001, "text": " display here so in fine-tuning the model is trained via repeated gradient updates" }, { "end": 536.56, "start": 530, "text": " using a large corpus of example tasks all right so the example task right here" }, { "end": 540.44, "start": 536.56, "text": " could be translating to French so in your training database of the" }, { "end": 545.28, "start": 540.44, "text": " translation task would be this would be sea otter is called Luther de Mer and" }, { "end": 552.12, "start": 545.28, "text": " in and and then you'd actually change your model you do a gradient update I" }, { "end": 557.44, "start": 552.12, "text": " mean if if you're in the NLP world this seems very natural but they are going to" }, { "end": 563, "start": 557.44, "text": " argue in a second that this isn't the only way that you can teach a model a" }, { "end": 568.5600000000001, "start": 563, "text": " task right so this this seems very natural right you're going to change" }, { "end": 572.48, "start": 568.5600000000001, "text": " your model you take your pre-trained model and you're going to fine-tune it" }, { "end": 576, "start": 572.48, "text": " on this task and if you have a different task right if you have now" }, { "end": 580.72, "start": 576, "text": " question answering task you're going to have a different data set right here" }, { "end": 586.96, "start": 580.72, "text": " with a train and test data set and you're going to 
take the pre-trained" }, { "end": 592.0400000000001, "start": 586.96, "text": " model and then fine-tune it on that data set and evaluate it on that test set so" }, { "end": 596.8000000000001, "start": 592.0400000000001, "text": " this gives you basically with as many models as you have tasks and you for" }, { "end": 601.64, "start": 596.8000000000001, "text": " each one you need a big big training data set in order to perform well" }, { "end": 605.88, "start": 601.64, "text": " sometimes we have this sometimes we don't what they are interested in is" }, { "end": 611.36, "start": 605.88, "text": " basically to take the pre-trained model and directly go and evaluate it on the" }, { "end": 616.5600000000001, "start": 611.36, "text": " test data set in a sort of a zero-shot fashion now it is not zero shot as they" }, { "end": 622.0799999999999, "start": 616.56, "text": " will argue so what are they doing in a true zero-shot fashion you would just" }, { "end": 627.92, "start": 622.0799999999999, "text": " take your your language model that you pre-trained and you just input the" }, { "end": 633.88, "start": 627.92, "text": " following text you input what they call a task description and a prompt so this" }, { "end": 639.52, "start": 633.88, "text": " is the input and you simply ask the model as a language model to predict the" }, { "end": 644.04, "start": 639.52, "text": " next work it's just what comes here now what you're counting on is basically" }, { "end": 649.4, "start": 644.04, "text": " that in the training data the model has seen a structure like this enough to" }, { "end": 653.36, "start": 649.4, "text": " understand what's going on so that in the training data somewhere in the" }, { "end": 658.28, "start": 653.36, "text": " internet there was the structure of translate something to something and" }, { "end": 663.48, "start": 658.28, "text": " then there would be a word here of something and you know it kind of has to" }, { "end": 667.8, "start": 663.48, "text": " realize that this goes here like that the next word so basically what you're" }, { "end": 675.52, "start": 667.8, "text": " asking it is if you were to find this text on a website or on Wikipedia or in" }, { "end": 681.8, "start": 675.52, "text": " any of the books data set if you were to find this piece of text what would be" }, { "end": 688.8399999999999, "start": 681.8, "text": " the next word in that piece of text and you kind of hope that this this is" }, { "end": 695.88, "start": 688.8399999999999, "text": " enough if you've trained a good language model that this is enough to to to" }, { "end": 700, "start": 695.88, "text": " actually produce the French translation here now before I realize I've said the" }, { "end": 703.56, "start": 700, "text": " language modeling is to teach the model the English language actually not true" }, { "end": 708.2, "start": 703.56, "text": " in this common crawl corpus you also have many foreign languages so you" }, { "end": 716.56, "start": 708.2, "text": " basically teach you the general model of the internet now they trend they they" }, { "end": 722.92, "start": 716.56, "text": " contrast this to what they call one-shot learning so in one-shot learning you not" }, { "end": 726.76, "start": 722.92, "text": " only do you have the task description right here and this is this is a string" }, { "end": 730.5999999999999, "start": 726.76, "text": " right you don't specifically tell the model that this is now a translation" }, { "end": 734.76, "start": 730.5999999999999, "text": " task you 
simply input this as a string so not only do you have the task" }, { "end": 740.76, "start": 734.76, "text": " description and the prompt right here but you also have one example and the" }, { "end": 746.7199999999999, "start": 740.76, "text": " example and this is where they this is where they bring in the where they say" }, { "end": 751.28, "start": 746.7199999999999, "text": " it's not exactly zero shot where's my little drawing here so the example is" }, { "end": 758.12, "start": 751.28, "text": " going to come from the training data set of the task that you're interested in" }, { "end": 765.28, "start": 758.12, "text": " but the important part is you never train on it you never explicitly train" }, { "end": 769.52, "start": 765.28, "text": " on that example you simply put it in the context so you simply put this string so" }, { "end": 776.8399999999999, "start": 769.52, "text": " translate English to French new line see order is Luther de Mer new line cheese" }, { "end": 782.72, "start": 776.84, "text": " is what you simply input that string into the model as a language model and" }, { "end": 789.32, "start": 782.72, "text": " you ask it what's the next word right here okay so I hope I hope this is clear" }, { "end": 795.32, "start": 789.32, "text": " this is what they call kind of one-shot generalization and by one shot they" }, { "end": 801.08, "start": 795.32, "text": " basically mean you simply provide this thing in the context of the model as a" }, { "end": 807.0400000000001, "start": 801.08, "text": " language model now the the advantage here is immediately clear that you only" }, { "end": 813.44, "start": 807.0400000000001, "text": " have to train one model then and then basically at inference time you can just" }, { "end": 820.64, "start": 813.44, "text": " input the task description and the sort of training data for the task into its" }, { "end": 828.48, "start": 820.64, "text": " its evaluation context and the task itself and it will if if it is if it" }, { "end": 834.4, "start": 828.48, "text": " really does what they claim it does it would be able to sort of understand the" }, { "end": 838.5600000000001, "start": 834.4, "text": " prompt here understand what it means to translate from English to French it" }, { "end": 844.12, "start": 838.5600000000001, "text": " would look at this example and say oh that's what you want me to do okay and" }, { "end": 849.6800000000001, "start": 844.12, "text": " then it would be able to generalize to this input right here to say okay from" }, { "end": 854.48, "start": 849.6800000000001, "text": " the task description and the example I so I get I get what you want me to do I" }, { "end": 861.8000000000001, "start": 854.48, "text": " will the next word here is cheese what's cheese in French I don't remember from" }, { "end": 868.6, "start": 861.8000000000001, "text": " a from a now the way the language model is going to interpret that is slightly" }, { "end": 872.08, "start": 868.6, "text": " different as we said before the way the language model is going to interpret is" }, { "end": 878.24, "start": 872.08, "text": " if you were to find the following text on a website somewhere the text is" }, { "end": 882.88, "start": 878.24, "text": " called translating which to French new line see order goes to loot the name new" }, { "end": 888.64, "start": 882.88, "text": " line cheese goes to what would be the next word on that website so that's what" }, { "end": 891.68, "start": 888.64, "text": " the model sees right you have to differentiate between 
what the human" }, { "end": 895.52, "start": 891.68, "text": " wants and what the model sees the model is just a language model that is going" }, { "end": 899.92, "start": 895.52, "text": " to take the next that is just going to determine if I were to see this text" }, { "end": 904.56, "start": 899.92, "text": " somewhere what will be the most likely next word so you have to phrase your" }, { "end": 910.4, "start": 904.56, "text": " tasks in a way that makes sense in that thing and they also have this few shot" }, { "end": 914.72, "start": 910.4, "text": " thing where you not only provide one context but you provide a bunch of" }, { "end": 922.0799999999999, "start": 914.72, "text": " context to basically tell the model more of what it what it should do right now" }, { "end": 927.52, "start": 922.0799999999999, "text": " this doesn't only work in a free mode where you basically say what's the next" }, { "end": 930.88, "start": 927.52, "text": " word here what you can also do if you have such a language hold with the exact" }, { "end": 935.4, "start": 930.88, "text": " same model you can give it basically a couple of possibilities so you can give" }, { "end": 942.84, "start": 935.4, "text": " it it's you can say like it's either shop or it's from us or it's hotel I" }, { "end": 949, "start": 942.84, "text": " think that has like this so you can you can basically restrict it to only" }, { "end": 954.1999999999999, "start": 949, "text": " produce one of these three things so in translation this might not be you know" }, { "end": 960.24, "start": 954.1999999999999, "text": " the the way to go but in if you have like yes no answers questions you can" }, { "end": 964.96, "start": 960.24, "text": " restrict it to that so in a lot of these NLP tasks you have some options given" }, { "end": 969, "start": 964.96, "text": " for a given question and you can also restrict it so don't you know you always" }, { "end": 974.24, "start": 969, "text": " have to go with the task at hand but this is in essence what the model does" }, { "end": 980.2800000000001, "start": 974.24, "text": " and this is I think this is the new well not the new per se but this is one of" }, { "end": 984.32, "start": 980.2800000000001, "text": " the core ideas of this paper if you take anything from it there's no new" }, { "end": 988.9200000000001, "start": 984.32, "text": " architecture right here there's no new wisdom in training they train in a" }, { "end": 993.4000000000001, "start": 988.9200000000001, "text": " standard way in a standard language modeling fashion a standard transformer" }, { "end": 997.52, "start": 993.4, "text": " architecture this just happens to be ginormous okay this right here this" }, { "end": 1001.76, "start": 997.52, "text": " thing where they say most of these things would fine-tune and then" }, { "end": 1007.24, "start": 1001.76, "text": " basically end up with one model per task and you need a big data set per task but" }, { "end": 1013.64, "start": 1007.24, "text": " we simply can do this since we have such a large language model it is basically" }, { "end": 1018, "start": 1013.64, "text": " already basically already knows how to do these tasks as long as we formulate" }, { "end": 1023.8, "start": 1018, "text": " them in a language model way we can have the model perform these tasks and they" }, { "end": 1029.96, "start": 1023.8, "text": " will show that this works surprisingly well throughout this paper now we get" }, { "end": 1036.64, "start": 1029.96, "text": " into the experimental results right 
here and the experimental results first of" }, { "end": 1043.48, "start": 1036.64, "text": " all on language modeling as you can see here they basically say as you go up" }, { "end": 1048.24, "start": 1043.48, "text": " with the parameters you see the more yellow ones are the parameters you go" }, { "end": 1053.72, "start": 1048.24, "text": " into your validation loss goes down and down and down and down and I believe" }, { "end": 1059.6, "start": 1053.72, "text": " this is sort of a log scale as well so this is the log probability so the" }, { "end": 1067.56, "start": 1059.6, "text": " perplexity and that the this basically follows a trend oh no this is a log" }, { "end": 1074.04, "start": 1067.56, "text": " scale this this is a log scale it follows a trend where as you scale up" }, { "end": 1078.3999999999999, "start": 1074.04, "text": " the model and as you scale up the compute that the model gets and we know" }, { "end": 1082, "start": 1078.3999999999999, "text": " for these big language models we basically know you have to scale up model" }, { "end": 1088.1599999999999, "start": 1082, "text": " size compute time and data set size in the same fashion for them to make these" }, { "end": 1094.76, "start": 1088.1599999999999, "text": " gains but if you do that it follows like a a power law where as you scale up" }, { "end": 1097.8799999999999, "start": 1094.76, "text": " these things the model basically gets better and better and better and the" }, { "end": 1103.08, "start": 1097.8799999999999, "text": " question of course is you know how far how far can we go with this but for now" }, { "end": 1107.64, "start": 1103.08, "text": " it seems to hold quite well that you can just make improvements by scaling up" }, { "end": 1116.68, "start": 1107.64, "text": " your model on language modeling at least so where do we where do we basically go" }, { "end": 1122.32, "start": 1116.68, "text": " from here so before we dive into the actual results of the individual tasks" }, { "end": 1126.72, "start": 1122.32, "text": " so now they're going to formulate these individual tasks so they have like pure" }, { "end": 1130.6399999999999, "start": 1126.72, "text": " language modeling tasks right here like Alice was friends with Bob Alice went to" }, { "end": 1135.08, "start": 1130.6399999999999, "text": " visit her friend and then it's like what's the next word okay is Bob and" }, { "end": 1139.76, "start": 1135.08, "text": " George bought some baseball equipment a ball a glove and a what's the next word" }, { "end": 1147.8799999999999, "start": 1139.76, "text": " and I guess this should be hat sorry bat right here but we're going to go into" }, { "end": 1155.4, "start": 1147.88, "text": " the into the tasks and one of them is for example question answering so in" }, { "end": 1160.2, "start": 1155.4, "text": " question answering you simply get either you get just a pure question or a" }, { "end": 1168.64, "start": 1160.2, "text": " context and a question and they do the facts they test where a situation where" }, { "end": 1172.72, "start": 1168.64, "text": " you just get the question so you just get I don't know who is the queen of" }, { "end": 1179.08, "start": 1172.72, "text": " England or something like this and the model is simply to produce either the" }, { "end": 1185.08, "start": 1179.08, "text": " results direct or to choose from a bunch of answers which one is the most likely" }, { "end": 1192.1200000000001, "start": 1185.08, "text": " as a language model and as you can see as you scale up the 
language model the" }, { "end": 1197.68, "start": 1192.1200000000001, "text": " zero shot one shot and few shot predictions so in few shot you give 64" }, { "end": 1203.3600000000001, "start": 1197.68, "text": " different examples from the training set in the context so you always have so" }, { "end": 1208.2, "start": 1203.3600000000001, "text": " your context is going to look something like this and they have examples at the" }, { "end": 1213.0800000000002, "start": 1208.2, "text": " bottom and haven't looked at the QA task but the the example is going to be" }, { "end": 1217.28, "start": 1213.0800000000002, "text": " something like this you have a task description like answer the following" }, { "end": 1223.48, "start": 1217.28, "text": " questions answer the question and then you have your examples in zero shot" }, { "end": 1228.96, "start": 1223.48, "text": " that's zero and one shot it's one that's what I'd like and then you say how tall" }, { "end": 1239.32, "start": 1228.96, "text": " who sorry who I don't know who climbed Everest the first the rest the first and" }, { "end": 1246.84, "start": 1239.32, "text": " then you say Hillary I think it was Hillary no I don't remember and then you" }, { "end": 1252.88, "start": 1246.84, "text": " say I don't know how how tall is the Empire State building and then you have" }, { "end": 1260.24, "start": 1252.88, "text": " like some number here and at the end you say what was it was it was a question" }, { "end": 1264.92, "start": 1260.24, "text": " from before I don't know who is the Queen of England yeah who is the Queen" }, { "end": 1271.68, "start": 1264.92, "text": " of England and then you ask the model to predict the next word right here okay" }, { "end": 1279.0400000000002, "start": 1271.68, "text": " and you do this in a closed book setting which means you have no access to" }, { "end": 1283.72, "start": 1279.04, "text": " Wikipedia or whatever like usually these systems they can go and query Wikipedia" }, { "end": 1289.04, "start": 1283.72, "text": " but this system doesn't so you just you just want to know what has the model" }, { "end": 1294.92, "start": 1289.04, "text": " learned about the world by simply absorbing giant amounts of text so if" }, { "end": 1299.84, "start": 1294.92, "text": " somewhere in the training data the fact that the Queen of England is Elizabeth" }, { "end": 1305.3999999999999, "start": 1299.84, "text": " the second is present it should complete this right here and it performs" }, { "end": 1311.64, "start": 1305.4, "text": " surprisingly well as you can see here so it manages to outperform a fine-tuned" }, { "end": 1316.0800000000002, "start": 1311.64, "text": " state-of-the-art model that is actually that is fine-tuned on question" }, { "end": 1319.72, "start": 1316.0800000000002, "text": " answering right this has it has been built for question answering and this" }, { "end": 1327.48, "start": 1319.72, "text": " model outperforms it by simply having a lot of language so this here is the" }, { "end": 1336.92, "start": 1327.48, "text": " results on on these open domain QA tasks and you you see right here it the this" }, { "end": 1343.88, "start": 1336.92, "text": " this few shot it outperforms this open domain and open domain means that the" }, { "end": 1356.32, "start": 1343.88, "text": " model can go and look at some Wikipedia page and yeah so so this is pretty cool" }, { "end": 1361.28, "start": 1356.32, "text": " but there are other things like the natural questions where it under" }, { "end": 
1367.3999999999999, "start": 1361.28, "text": " performs compared to this open domain thing and they say this is mainly due to" }, { "end": 1372.56, "start": 1367.3999999999999, "text": " the natural questions being like it's very much about factual Wikipedia" }, { "end": 1377.52, "start": 1372.56, "text": " knowledge and so on maybe like the question we just made maybe is more of a" }, { "end": 1383.04, "start": 1377.52, "text": " natural question type of thing and since and the model is apparently not as good" }, { "end": 1388.3999999999999, "start": 1383.04, "text": " at that but it's still impressive that the model is able to do this out of the" }, { "end": 1395.68, "start": 1388.3999999999999, "text": " box okay so before I said something like before we go into the experiments I want" }, { "end": 1401.32, "start": 1395.68, "text": " the following so I have like some sort of hypothesis it's not it's an it's not" }, { "end": 1407.56, "start": 1401.32, "text": " an uncommon hypothesis that basically these things these giant language models" }, { "end": 1411.72, "start": 1407.56, "text": " right they're just these transformers layer after layer after layer with their" }, { "end": 1417.88, "start": 1411.72, "text": " connections in here what I think is happening is they are simply storing the" }, { "end": 1423.24, "start": 1417.88, "text": " training data right they are simply storing the training data in these" }, { "end": 1428.08, "start": 1423.24, "text": " connections right here so usually you think of storing the training data in" }, { "end": 1432.16, "start": 1428.08, "text": " some form of maybe we have like some module right here some database module" }, { "end": 1437.28, "start": 1432.16, "text": " in the neural network and it learns to query the module but ultimately if you" }, { "end": 1443.16, "start": 1437.28, "text": " train a neural network what you have is data and you train a function with" }, { "end": 1449.3999999999999, "start": 1443.16, "text": " parameters on that data and ultimately what you're doing is you're distilling" }, { "end": 1455.28, "start": 1449.3999999999999, "text": " the data into these parameters and you kind of hope to learn some regularities" }, { "end": 1460.08, "start": 1455.28, "text": " from it but ultimately the information about your training data influences or" }, { "end": 1465.68, "start": 1460.08, "text": " determines your final parameters of your function now I can imagine that if you" }, { "end": 1472, "start": 1465.68, "text": " have such a giant neural network with so many weights like 17 sorry 170 billion" }, { "end": 1478.48, "start": 1472, "text": " weights that you can pretty efficiently actually store the training data in that" }, { "end": 1486.04, "start": 1478.48, "text": " model and when you ask this model now to do something what it basically does is" }, { "end": 1491.04, "start": 1486.04, "text": " what these people sort of argue is that it has learned these language tasks is" }, { "end": 1495.92, "start": 1491.04, "text": " learned to reason over language and so on what I think is happening much more" }, { "end": 1501.84, "start": 1495.92, "text": " is it will simply go to the training data since it has stored the entire" }, { "end": 1507.2, "start": 1501.84, "text": " training data in its weights and it will sort of pull out the five to ten to fifty" }, { "end": 1513.48, "start": 1507.2, "text": " training examples that are most relevant to what you put in and it will sort of" }, { "end": 1517.56, "start": 1513.48, "text": " 
interpolate right you go to the training data and it'll pull out a bunch of" }, { "end": 1521.8, "start": 1517.56, "text": " training samples that are relevant to the context you put in right now and" }, { "end": 1526.9199999999998, "start": 1521.8, "text": " then it will sort of integrate those into the next word that's going to come" }, { "end": 1533.3999999999999, "start": 1526.9199999999998, "text": " out right here and I think if you look at this paper in terms of this so you" }, { "end": 1538.84, "start": 1533.3999999999999, "text": " always write you input a context and the context is split into a task" }, { "end": 1545.52, "start": 1538.84, "text": " description and then it is split into k different examples and then it is it is" }, { "end": 1549.44, "start": 1545.52, "text": " it has a prompt sorry this year this is the prompt so the task description is" }, { "end": 1553.56, "start": 1549.44, "text": " please translate from English to French and the k different things are k" }, { "end": 1557.8799999999999, "start": 1553.56, "text": " different translations and then the prompt is you know what what you should" }, { "end": 1563.34, "start": 1557.8799999999999, "text": " do so it's like half of a K half of one of these boxes right here so these boxes" }, { "end": 1567.4, "start": 1563.34, "text": " are have blah blah blah turns to blah blah blah and then the prompt is simply" }, { "end": 1574.6399999999999, "start": 1567.4, "text": " without the the right side I think what it does is it will simply take all of" }, { "end": 1581.16, "start": 1574.64, "text": " this and it will go to its own training data which it has stored in its weights" }, { "end": 1587, "start": 1581.16, "text": " and it will filter the training data and basically take out the the things that" }, { "end": 1593, "start": 1587, "text": " sort of pattern match sort of regex match in a fuzzy way to this context and" }, { "end": 1597.6000000000001, "start": 1593, "text": " then it will kind of interpolate these training examples in order to come up" }, { "end": 1604.9199999999998, "start": 1597.6, "text": " with the answer I don't think there is reasoning happening here and we're going" }, { "end": 1610.36, "start": 1604.9199999999998, "text": " to if you go through the paper with this view then you can a lot of things" }, { "end": 1615.9199999999998, "start": 1610.36, "text": " actually make sense and I actually I think that we need we need what we need" }, { "end": 1620.74, "start": 1615.9199999999998, "text": " when think people think of like explainable machine learning they often" }, { "end": 1624.12, "start": 1620.74, "text": " think that if I'm going to input something like I'm going to input an" }, { "end": 1630.2399999999998, "start": 1624.12, "text": " image into a classifier and it comes out a certain class car I like the" }, { "end": 1635.36, "start": 1630.2399999999998, "text": " explainability should be which part of this image was it the wheels was it the" }, { "end": 1639.6399999999999, "start": 1635.36, "text": " the hood which part of the image which part of the input image is responsible" }, { "end": 1644.04, "start": 1639.6399999999999, "text": " for making that determination what I think in especially in these language" }, { "end": 1649.56, "start": 1644.04, "text": " models what we should do is if the model predicts something right here the next" }, { "end": 1655.24, "start": 1649.56, "text": " word I think we should somehow have a method of determining which of the" }, { "end": 1661.08, "start": 
1655.24, "text": " training examples that the model used to interpolate given this context" }, { "end": 1666.36, "start": 1661.08, "text": " because I'm pretty sure these training is you will find so if you'll find that" }, { "end": 1670.84, "start": 1666.36, "text": " for example this weight and this weight and this weight was very responsible for" }, { "end": 1676.72, "start": 1670.84, "text": " making this prediction happen I'm pretty sure you can somehow during training" }, { "end": 1682.08, "start": 1676.72, "text": " build an index of which of the which five training examples had most influence" }, { "end": 1685.96, "start": 1682.08, "text": " on that particular weight or on this combination of weights and then you can" }, { "end": 1691.88, "start": 1685.96, "text": " sort of go backwards and say you made this decision right here model please" }, { "end": 1696.68, "start": 1691.88, "text": " tell me which of the training data samples were responsible for making that" }, { "end": 1702.08, "start": 1696.68, "text": " decision actually pretty sure that already exists like I'm never the first" }, { "end": 1709.76, "start": 1702.08, "text": " one to think of these things though if I am site me site the channel no but just" }, { "end": 1715.1999999999998, "start": 1709.76, "text": " an interesting way to think about this model and an interesting way to think" }, { "end": 1719.6799999999998, "start": 1715.1999999999998, "text": " about kind of what does what would explain ability even mean in a model" }, { "end": 1725.1999999999998, "start": 1719.6799999999998, "text": " like this and my argument is since it interpolates the training data the" }, { "end": 1730.1599999999999, "start": 1725.1999999999998, "text": " interpret ability should come from the fact of which training samples does it" }, { "end": 1737.28, "start": 1730.16, "text": " interpolate okay let's go to translation so in translation as we said they simply" }, { "end": 1748, "start": 1737.28, "text": " input the like the task and then the few examples and then and then the output" }, { "end": 1753.68, "start": 1748, "text": " okay and you can see right here what you can see is that again as the model goes" }, { "end": 1760.16, "start": 1753.68, "text": " up in parameters the performance generally increases and also you can see" }, { "end": 1765.5600000000002, "start": 1760.16, "text": " that the performance is pretty good every time that this model goes to" }, { "end": 1771.72, "start": 1765.5600000000002, "text": " English so it goes if it if the target language is English which sort of makes" }, { "end": 1777.0800000000002, "start": 1771.72, "text": " sense because like a large part of the corpus they train on is English so being" }, { "end": 1782.5600000000002, "start": 1777.0800000000002, "text": " an English language model it should be pretty good if it is asked to produce" }, { "end": 1787.12, "start": 1782.56, "text": " English and it's not as good if it is asked to go into the different direction" }, { "end": 1793.8799999999999, "start": 1787.12, "text": " now what you also see is that it is not really a difference whether you translate" }, { "end": 1800.08, "start": 1793.8799999999999, "text": " from from which language you translate but if you go to English but it very" }, { "end": 1807.6, "start": 1800.08, "text": " much matters to which language you go if it is from English so this sort of makes" }, { "end": 1812.9599999999998, "start": 1807.6, "text": " sense in that it is just trained on a lot of English data and 
right here" }, { "end": 1821.28, "start": 1812.9599999999998, "text": " sometimes they are on par with the with the state-of-the-art supervised methods" }, { "end": 1825.1599999999999, "start": 1821.28, "text": " and also other times they outperform these methods right here and these" }, { "end": 1829.1599999999999, "start": 1825.1599999999999, "text": " methods are unsupervised but are specifically so they don't have a" }, { "end": 1833.6799999999998, "start": 1829.1599999999999, "text": " supervised training data set that goes let's say from English to French but" }, { "end": 1839.3200000000002, "start": 1833.68, "text": " they are built with this in mind that they need to translate later so they are" }, { "end": 1844.52, "start": 1839.3200000000002, "text": " sort of task specific but don't have a supervised training set and this model" }, { "end": 1853.3600000000001, "start": 1844.52, "text": " right here it just learns whatever it learns and it it just it just does it" }, { "end": 1857.2, "start": 1853.3600000000001, "text": " just does this this language model learning and at the end just because it" }, { "end": 1862.88, "start": 1857.2, "text": " has seen some websites where language of both things appear it can now translate" }, { "end": 1873.7600000000002, "start": 1862.88, "text": " reasonably well okay now yeah so the results here are a bit noisy but it is" }, { "end": 1876.48, "start": 1873.7600000000002, "text": " still interesting to see that it sometimes even gets close to the" }, { "end": 1882.3200000000002, "start": 1876.48, "text": " supervised thing though they say that they are not familiar with the literature" }, { "end": 1889.22, "start": 1882.3200000000002, "text": " and are not sure that these models that these numbers are you know good okay okay" }, { "end": 1897.4, "start": 1889.22, "text": " the next thing is these um Winograd schemes where you do have where is the" }, { "end": 1903.6000000000001, "start": 1897.4, "text": " text here is a classic NLP task that involves determining which word a" }, { "end": 1909.92, "start": 1903.6000000000001, "text": " pronoun refers to when the pronoun is grammatically ambiguous but semantically" }, { "end": 1916.68, "start": 1909.92, "text": " unambiguous to a human so these are sort of human produced sentences where" }, { "end": 1923.04, "start": 1916.68, "text": " it's kind of a pronoun could refer to multiple things I don't have a example" }, { "end": 1931.96, "start": 1923.04, "text": " present but where do we have the right here you can see that this model will" }, { "end": 1939.88, "start": 1931.96, "text": " out produce a fine-tuned large but will not out produce a fine-tuned Roberto" }, { "end": 1947.3200000000002, "start": 1939.88, "text": " large so it is going to it is going to come it is competing at least with the" }, { "end": 1953.3200000000002, "start": 1947.3200000000002, "text": " fine-tuned models that were made specifically for that task right again" }, { "end": 1960.1200000000001, "start": 1953.3200000000002, "text": " this is pretty pretty interesting and you also see that the larger models here" }, { "end": 1964.68, "start": 1960.1200000000001, "text": " it starts to make a difference whether or not you give it one zero or one or" }, { "end": 1976.92, "start": 1964.68, "text": " more examples okay so we'll get into we'll get into the the more interesting" }, { "end": 1986.3600000000001, "start": 1976.92, "text": " things right here in this thing right here where is it yes this is the kind of" }, { "end": 
1995.1599999999999, "start": 1986.36, "text": " a physical physical question physical QA where it is a bit of common-sense" }, { "end": 2004.8799999999999, "start": 1995.1599999999999, "text": " reasoning so you're asked to I don't yeah these are like science questions" }, { "end": 2010.6, "start": 2004.8799999999999, "text": " multiple-choice questions collected from a third to ninth grade exams and the" }, { "end": 2019.1599999999999, "start": 2010.6, "text": " physical QA is physical QA asks common-sense question about how the" }, { "end": 2024.32, "start": 2019.1599999999999, "text": " physical word work world works and is intended as a probe of grounded" }, { "end": 2030.24, "start": 2024.32, "text": " understanding of the world so it has questions as I understand it it has" }, { "end": 2035.9599999999998, "start": 2030.24, "text": " questions like if a drop a ball will it fall on the ground or where will it fall" }, { "end": 2042.6000000000001, "start": 2035.96, "text": " or something like this and they say that they can outperform a fine-tuned state" }, { "end": 2048.92, "start": 2042.6000000000001, "text": " of the art model on this if they go just high enough and you can also see that" }, { "end": 2055.6, "start": 2048.92, "text": " there isn't much of a difference between zero one and few shot the methods of" }, { "end": 2060.56, "start": 2055.6, "text": " this model right here even though zero shot is even higher than one shot so" }, { "end": 2066.56, "start": 2060.56, "text": " this is probably just noise but then you find out that they have an asterisk here" }, { "end": 2075.08, "start": 2066.56, "text": " and this means that this is potentially a contaminated data set so they have" }, { "end": 2079.56, "start": 2075.08, "text": " potential contamination issue so what they found was there was a significant" }, { "end": 2085.72, "start": 2079.56, "text": " overlap between the data set this data set and their training data set and" }, { "end": 2092.16, "start": 2085.72, "text": " they even they only realized this too late because there was a bug in their" }, { "end": 2099.08, "start": 2092.16, "text": " deduplication code and then they couldn't change it anymore like I because" }, { "end": 2103.9599999999996, "start": 2099.08, "text": " this model is so large that they couldn't restart the training because" }, { "end": 2108.3599999999997, "start": 2103.9599999999996, "text": " they've already spent like so much money and energy on it this is crazy I think" }, { "end": 2112.24, "start": 2108.3599999999997, "text": " these language models are getting so large that we should building them we" }, { "end": 2118.04, "start": 2112.24, "text": " should more think of it like we built the the International Space Station or" }, { "end": 2122.8799999999997, "start": 2118.04, "text": " something like this where it's a project where humanity sort of collaborates or" }, { "end": 2126.3199999999997, "start": 2122.8799999999997, "text": " there's a big effort and you build it once and whatever you have you have" }, { "end": 2135.3999999999996, "start": 2126.3199999999997, "text": " right so these these good numbers here are simply or not simply are because or" }, { "end": 2139.9599999999996, "start": 2135.3999999999996, "text": " could be influenced by this contamination and I think that's what's" }, { "end": 2143.92, "start": 2139.96, "text": " happening right here even though they will make the case that this" }, { "end": 2150.16, "start": 2143.92, "text": " contamination isn't really an issue 
I can probably show you that it might be" }, { "end": 2156.32, "start": 2150.16, "text": " it may be actually is an issue because on the other data sets at the the fine" }, { "end": 2165.88, "start": 2156.32, "text": " tuned state-of-the-art model outperform the GPT-3 quite a bit so and also the" }, { "end": 2170, "start": 2165.88, "text": " the fact that the you know if you provide a demonstration or many" }, { "end": 2173.84, "start": 2170, "text": " demonstrations it doesn't actually change that much it kind of tells me" }, { "end": 2177.6, "start": 2173.84, "text": " that the model sort of already knows what the answer is and doesn't really" }, { "end": 2181.44, "start": 2177.6, "text": " need demonstrations because it doesn't help if you have the training data" }, { "end": 2192.04, "start": 2181.44, "text": " stored or the test data you don't really have to get demonstrations right so they" }, { "end": 2197.84, "start": 2192.04, "text": " have a few other a few other things right here where on these coca tasks they" }, { "end": 2204.08, "start": 2197.84, "text": " perform pretty poorly compared to others or poorly let's say they perform well" }, { "end": 2213.88, "start": 2204.08, "text": " but not particularly more well than a state-of-the-art and they perform" }, { "end": 2217.88, "start": 2213.88, "text": " especially poorly on the reading comprehension sorry that's the that's" }, { "end": 2225.28, "start": 2217.88, "text": " the cocoa so in reading comprehension what you have to do is abstractive" }, { "end": 2230.28, "start": 2225.28, "text": " multiple choice and span based answer formats in both dialogue and single" }, { "end": 2235.52, "start": 2230.28, "text": " question setting so basically have to read a piece of text like this and then" }, { "end": 2241.6400000000003, "start": 2235.52, "text": " answer a question about the piece of text now this is something where I think" }, { "end": 2248.92, "start": 2241.64, "text": " you cannot really interpolate the training data super well and therefore" }, { "end": 2252.8399999999997, "start": 2248.92, "text": " so you can't really just pattern match and interpret because you have to do" }, { "end": 2259.52, "start": 2252.8399999999997, "text": " actual reasoning and I think that's why the model performs poorly here they do" }, { "end": 2267.8599999999997, "start": 2259.52, "text": " measure this on on super glue which is a NLP benchmark and also here you can see" }, { "end": 2273.76, "start": 2267.86, "text": " it doesn't outperform a fine-tuned state-of-the-art model on these tasks" }, { "end": 2280.08, "start": 2273.76, "text": " but it does outperform a fine-tuned BERT model slightly the BERT model is fine" }, { "end": 2284.84, "start": 2280.08, "text": " tuned on these things whereas GPT-3 isn't but notice the tasks in which it" }, { "end": 2290.04, "start": 2284.84, "text": " does well and in which it doesn't do well compared to the state-of-the-art" }, { "end": 2296.9, "start": 2290.04, "text": " model so for example in the bool queue it doesn't do particularly well right" }, { "end": 2301.32, "start": 2296.9, "text": " the state-of-the-art is 91 and only has 76 that's quite a large difference and" }, { "end": 2307.12, "start": 2301.32, "text": " actually have the glue benchmark open here and you can see this is the" }, { "end": 2314.12, "start": 2307.12, "text": " bool queue so an example here would be is France the same time zone as the UK" }, { "end": 2319.28, "start": 2314.12, "text": " and then there is like a passage 
and you need to reason about from this passage" }, { "end": 2326.56, "start": 2319.28, "text": " about whether or not this answer is true or false okay this this is very much not" }, { "end": 2331.36, "start": 2326.56, "text": " language modeling this is reasoning and that's why the model is doing poorly" }, { "end": 2336.52, "start": 2331.36, "text": " here whereas in another thing you see these for example this copa right here" }, { "end": 2342.32, "start": 2336.52, "text": " the model is doing almost as good as a fine-tuned state-of-the-art and I have to" }, { "end": 2347.2799999999997, "start": 2342.32, "text": " stress this model has never actually learned this task in a supervised way" }, { "end": 2353.72, "start": 2347.2799999999997, "text": " it's simply a language model and I have this copa task right here and these are" }, { "end": 2359.9199999999996, "start": 2353.72, "text": " the examples so one example is the premise the man broke his toe what was" }, { "end": 2364.9199999999996, "start": 2359.9199999999996, "text": " the cause of this and you have two different things that it could be either" }, { "end": 2370.7599999999998, "start": 2364.9199999999996, "text": " he got a hole in his sock or he dropped a hammer on his foot and the way you" }, { "end": 2374.7599999999998, "start": 2370.7599999999998, "text": " phrase it in this model is you would give the premise as the context and then" }, { "end": 2379.24, "start": 2374.7599999999998, "text": " you simply ask the model since it's a language model which of these two things" }, { "end": 2386, "start": 2379.24, "text": " is more probable to come and of course it is going to select the thing that" }, { "end": 2393, "start": 2386, "text": " kind of happened more often in the training data and you know broke his toe" }, { "end": 2398.2799999999997, "start": 2393, "text": " the cause of breaking his toe that is a hammer this is entirely conceivable that" }, { "end": 2403.8399999999997, "start": 2398.2799999999997, "text": " a language model would know this and with enough training data could sort of" }, { "end": 2408.3999999999996, "start": 2403.8399999999997, "text": " pull from the training data examples where hammer on foot and broke toe" }, { "end": 2414.96, "start": 2408.4, "text": " appear a bunch of times and hole in sock would be rather unrelated so as long as" }, { "end": 2419.38, "start": 2414.96, "text": " these questions are not too adversarial constructed specifically that a language" }, { "end": 2424.28, "start": 2419.38, "text": " model can't solve them there the model is going to perform pretty well right" }, { "end": 2430, "start": 2424.28, "text": " here right so it is very interesting to see that if you view this as" }, { "end": 2434, "start": 2430, "text": " interpolating the training data it suddenly makes sense where it's good and" }, { "end": 2446.6, "start": 2434, "text": " where it isn't good so this was the super glue and and NLI it is performing" }, { "end": 2453.12, "start": 2446.6, "text": " particularly poorly on NLI which is the ability to understand the relationship" }, { "end": 2458.88, "start": 2453.12, "text": " between two sentences right so where the model classifies whether the second" }, { "end": 2462.68, "start": 2458.88, "text": " sentence logically follows from the first contradicts the first or is" }, { "end": 2469.96, "start": 2462.68, "text": " possibly true neutral okay so this is the reasoning part of this model is not" }, { "end": 2475.3199999999997, "start": 2469.96, "text": " given 
it is simply recalling the training data and doing language modeling" }, { "end": 2481.12, "start": 2475.3199999999997, "text": " now they say oh we can test this we can test this with synthetic and qualitative" }, { "end": 2485.8799999999997, "start": 2481.12, "text": " tasks so they invent some own tasks since you know now it's pretty easy since" }, { "end": 2489.3199999999997, "start": 2485.8799999999997, "text": " you don't have to fine-tune the model you don't have to turn to generate an" }, { "end": 2496.04, "start": 2489.32, "text": " actual training set for a task so you can focus on generating a test set and" }, { "end": 2504.1200000000003, "start": 2496.04, "text": " and you know that's what they do so they do something like arithmetic so they say" }, { "end": 2509.4, "start": 2504.1200000000003, "text": " okay can we come up with a bunch of arithmetic tasks for example to digit" }, { "end": 2514.92, "start": 2509.4, "text": " addition so what the model would see would so this is an example and what the" }, { "end": 2522.36, "start": 2514.92, "text": " model would see is simply this as a context right here for the prompt and if" }, { "end": 2530.04, "start": 2522.36, "text": " you give it examples so if this is like one-shot learning you would input add" }, { "end": 2535.08, "start": 2530.04, "text": " the following numbers the following numbers as a string right then a new" }, { "end": 2544.28, "start": 2535.08, "text": " line and then you would give it one example like what is 11 plus 12 and with" }, { "end": 2550.6800000000003, "start": 2544.28, "text": " the answer together with the answer answer is I don't even know 23 and then" }, { "end": 2559.76, "start": 2550.6800000000003, "text": " you the prompt goes here so what is 48 plus 76 and then you ask what is the" }, { "end": 2566.1600000000003, "start": 2559.76, "text": " next word right here what is the next string token that comes here now the the" }, { "end": 2571.48, "start": 2566.1600000000003, "text": " inference here is that if the model manages to do this it can't simply" }, { "end": 2575.56, "start": 2571.48, "text": " because these are all strings the model basically has no clue how to do math" }, { "end": 2579.64, "start": 2575.56, "text": " these are numbers to the model these are just tokens as strings and the" }, { "end": 2584, "start": 2579.64, "text": " inference is if the model can do this it must have learned you know some kind of" }, { "end": 2590.48, "start": 2584, "text": " reasoning ability it must have learned to like perform some logic inside so" }, { "end": 2594.12, "start": 2590.48, "text": " they go into two-digit addition three-digit addition four-digit" }, { "end": 2601.52, "start": 2594.12, "text": " addition five-digit addition and even multiplication and subtraction and the" }, { "end": 2609.6, "start": 2601.52, "text": " results are right here so as you can see the lower parameter models they perform" }, { "end": 2614.68, "start": 2609.6, "text": " pretty poorly but as you go up the parameters the big model is performing" }, { "end": 2621.48, "start": 2614.68, "text": " really well in the two-digit range is performing also really well so accuracy" }, { "end": 2627.4, "start": 2621.48, "text": " of look that accuracy 80 90 percent in three-digit addition and subtraction but" }, { "end": 2631.2, "start": 2627.4, "text": " then if as soon as you get to the four-digit or the two-digit multiplication" }, { "end": 2637.3, "start": 2631.2, "text": " and so on the performance drops now they say 
that's because multiplication is" }, { "end": 2642.32, "start": 2637.3, "text": " harder and you know it's is logically very computationally you know but the" }, { "end": 2647.08, "start": 2642.32, "text": " two-digit addition and so on model has learned something about the world I" }, { "end": 2658, "start": 2647.08, "text": " disagree because so here's the because what you will do is you will simply and" }, { "end": 2664.12, "start": 2658, "text": " this you simply recall the training data so look at the two-digit addition with" }, { "end": 2669.36, "start": 2664.12, "text": " zero shot you already get 76 percent but with one shot you get 99 percent and" }, { "end": 2675.84, "start": 2669.36, "text": " with few shot you get a hundred percent so if you interpret this model as simply" }, { "end": 2682.96, "start": 2675.84, "text": " filtering the training data to pattern match then it makes a lot of sense that" }, { "end": 2688.76, "start": 2682.96, "text": " the one shot would like the examples here would give you a much improvement" }, { "end": 2696.36, "start": 2688.76, "text": " because if you have a bunch of examples where please add right add and then oh I" }, { "end": 2703.08, "start": 2696.36, "text": " erased our example again so you have like 48 plus 72 equals blah blah blah you" }, { "end": 2709.44, "start": 2703.08, "text": " have these of this if you give more and more example all of a sudden this looks" }, { "end": 2715.84, "start": 2709.44, "text": " like a table and they say we made sure that the strings here these particular" }, { "end": 2719.88, "start": 2715.84, "text": " strings were not in our training data right so these strings never appeared" }, { "end": 2725.3199999999997, "start": 2719.88, "text": " but I just have an issue with this d duplication stuff because what can" }, { "end": 2734.56, "start": 2725.32, "text": " appear actually is not the what can appear is a table and in table often you" }, { "end": 2739.8, "start": 2734.56, "text": " have columns and then another column will be the sum of these columns on the" }, { "end": 2744.6800000000003, "start": 2739.8, "text": " left and if you are asked to pattern match you'll naturally find websites" }, { "end": 2748.56, "start": 2744.6800000000003, "text": " right if you have a few of these examples you'll find websites where the" }, { "end": 2754.1200000000003, "start": 2748.56, "text": " columns exactly refer to these things and then you'll find the sum here and if" }, { "end": 2759.8199999999997, "start": 2754.12, "text": " you filter for websites that appear to match your scheme in the examples you'll" }, { "end": 2764.56, "start": 2759.8199999999997, "text": " find all the website with a table on them where the column one column is an" }, { "end": 2770.7999999999997, "start": 2764.56, "text": " addition of the others and I can actually do that so I went and I typed in" }, { "end": 2779.24, "start": 2770.7999999999997, "text": " just a bunch of these things so 98 plus 45 is 143 18 plus 55 is 70 I believe at" }, { "end": 2784.8399999999997, "start": 2779.24, "text": " least and I can find now Google makes it hard because they localize and" }, { "end": 2789.68, "start": 2784.8399999999997, "text": " everything but you can still find what you're going to find our tables and" }, { "end": 2797.8799999999997, "start": 2789.68, "text": " tables and tables and tables and now I actually went to doc.go to basically say" }, { "end": 2802.9199999999996, "start": 2797.8799999999997, "text": " you know they they don't you 
know really personalize it to me and what's the" }, { "end": 2807.7599999999998, "start": 2802.9199999999996, "text": " first thing I find when I type in just these numbers is math skip counting" }, { "end": 2815.0800000000004, "start": 2807.76, "text": " missing sequence number and a website where basically the answers are already" }, { "end": 2820.1600000000003, "start": 2815.0800000000004, "text": " given look at that so all the model has to do is recall this particular training" }, { "end": 2826.6800000000003, "start": 2820.1600000000003, "text": " example from the samples it already has right and it will it will basically be" }, { "end": 2831.48, "start": 2826.6800000000003, "text": " able in quotes to perform addition like this is financial data and another one" }, { "end": 2837.1600000000003, "start": 2831.48, "text": " where you have to subtract stuff right so I'm pretty sure all the model is doing" }, { "end": 2843.92, "start": 2837.16, "text": " here is interpolating the training data and that's also why it performs worse if" }, { "end": 2850.8799999999997, "start": 2843.92, "text": " if you up the digits because longer digit numbers are simply less frequent" }, { "end": 2857.2, "start": 2850.8799999999997, "text": " in the in in the training data multiplication is first of all less" }, { "end": 2861.2799999999997, "start": 2857.2, "text": " frequent and second of all it also results in larger numbers which are less" }, { "end": 2870.6400000000003, "start": 2861.28, "text": " frequent right so it explains a lot so I yeah I have my issues with people" }, { "end": 2877, "start": 2870.6400000000003, "text": " saying yeah this this shows some reasoning I don't think it does the same" }, { "end": 2882.52, "start": 2877, "text": " thing here with word scramble so in word scramble they have different things you" }, { "end": 2889.6800000000003, "start": 2882.52, "text": " see okay they they they look whether or not only 17 matches 0.8 percent of the" }, { "end": 2893.7999999999997, "start": 2889.68, "text": " math things are in their training data is like no you haven't searched well" }, { "end": 2899.44, "start": 2893.7999999999997, "text": " enough and the rest of their deduplication by the way is also pretty" }, { "end": 2904.3599999999997, "start": 2899.44, "text": " weak I would say because they just look for like 13 gram overlaps between the" }, { "end": 2911, "start": 2904.3599999999997, "text": " training data and the in the and their their test data so they have these word" }, { "end": 2917.52, "start": 2911, "text": " scrambling tasks where they basically scramble words and they ask the model to" }, { "end": 2923.8, "start": 2917.52, "text": " unscramble it for example this word is inevitably scrambled so they always you" }, { "end": 2928.32, "start": 2923.8, "text": " know they give like anagrams and they give random insertion into the word like" }, { "end": 2935.92, "start": 2928.32, "text": " this word right here or they reverse the word and they say so this I think this" }, { "end": 2944.48, "start": 2935.92, "text": " is the thing at the very beginning but if you can see right here also as the" }, { "end": 2949.72, "start": 2944.48, "text": " model goes up then this this improves and they also say well this means maybe" }, { "end": 2956.2, "start": 2949.72, "text": " some kind of reasoning but I think this is just it's learning the language and" }, { "end": 2963.44, "start": 2956.2, "text": " it's learning that you know the the words in in sorry that the letters make" }, { 
"end": 2969.2400000000002, "start": 2963.44, "text": " up a word and the letters correspond to word pieces or are associated with word" }, { "end": 2975.12, "start": 2969.24, "text": " pieces and it always learns to English a good task to check this would actually" }, { "end": 2979.7999999999997, "start": 2975.12, "text": " be to scramble words so if you unscramble words you always end up with" }, { "end": 2983.4799999999996, "start": 2979.7999999999997, "text": " an English word so all it has to do is basically check which word has the" }, { "end": 2989, "start": 2983.4799999999996, "text": " highest overlap in word pieces but you could do something like please scramble" }, { "end": 2993.2, "start": 2989, "text": " this word and then always count it correctly when any of the scrambling of" }, { "end": 2999.22, "start": 2993.2, "text": " the words so instead of going from this to this which you can simply solve by" }, { "end": 3004.64, "start": 2999.22, "text": " knowing the English language but you would have basically no clue what the" }, { "end": 3008.52, "start": 3004.64, "text": " task is that you don't have to understand that as a model you could ask" }, { "end": 3013.04, "start": 3008.52, "text": " it to go from this to this given a few examples right then it would really need" }, { "end": 3018.3599999999997, "start": 3013.04, "text": " to understand what the task is that it's supposed to actually scramble a word and" }, { "end": 3023.6, "start": 3018.3599999999997, "text": " would need to learn that from its context given examples but they as far" }, { "end": 3030.52, "start": 3023.6, "text": " as I see they don't do that and again I think it's recalling the the training" }, { "end": 3037.68, "start": 3030.52, "text": " data the this is a sat analogy so the SAT or this test that the US high" }, { "end": 3044.44, "start": 3037.68, "text": " schoolers take to get into college and the the this they say a typical example" }, { "end": 3052.48, "start": 3044.44, "text": " this is dying on me no it's scrolled okay a typical example is the following" }, { "end": 3059.92, "start": 3052.48, "text": " this I find I find pretty hilarious all Dacius is to boldness as sanctimonious" }, { "end": 3064.96, "start": 3059.92, "text": " is to hypocrisy anonymous is to identity remorseful is to misdeed" }, { "end": 3069.96, "start": 3064.96, "text": " deleterious is to result or impressionable is to temptation this is a" }, { "end": 3075.4, "start": 3069.96, "text": " as as a okay I'm not a native speaker but this is a hard question right and" }, { "end": 3080, "start": 3075.4, "text": " you have to you know see that these these high schoolers they're stressed" }, { "end": 3083.8, "start": 3080, "text": " like this is very much a time-based test so you need to make a decision quickly" }, { "end": 3087.92, "start": 3083.8, "text": " while the model of course is basically able to sift through its entire training" }, { "end": 3092.8, "start": 3087.92, "text": " data in the time it takes the GPUs to perform inference but it's still funny" }, { "end": 3101.24, "start": 3092.8, "text": " that GPT-3 achieves 50 65 percent in the few shot setting and 59 percent in the" }, { "end": 3106.24, "start": 3101.24, "text": " one shot setting 53 percent is zero shot setting whereas the average score among" }, { "end": 3110.9199999999996, "start": 3106.24, "text": " college applicants was 57 percent so it outperforms the average college applicant" }, { "end": 3114.8399999999997, "start": 3110.9199999999996, "text": " 
it's pretty funny but you would expect the language model to have a pretty good" }, { "end": 3120.56, "start": 3114.8399999999997, "text": " grasp of these kind of synonyms and relations between words because these" }, { "end": 3127.68, "start": 3120.56, "text": " are just absolutely statistical associations between words so yeah this" }, { "end": 3132.6, "start": 3127.68, "text": " I found this to be pretty pretty funny and the last thing and this is what" }, { "end": 3138.92, "start": 3132.6, "text": " everyone's freaking out over is this news article generation where basically" }, { "end": 3146.72, "start": 3138.92, "text": " they give it the beginning of a few of a news article and then they let humans" }, { "end": 3152.04, "start": 3146.72, "text": " decide whether or not the news article is written by a machine or by a human" }, { "end": 3159.44, "start": 3152.04, "text": " and they say here by contrast mean human accuracy at detecting articles that were" }, { "end": 3165.2000000000003, "start": 3159.44, "text": " produced by the 175 billion parameter model it was barely above chance at 52" }, { "end": 3171.36, "start": 3165.2000000000003, "text": " percent human abilities to detect model generated text appear to decrease as" }, { "end": 3176.4, "start": 3171.36, "text": " model size increases there appears to be a trend towards chance accuracy with" }, { "end": 3184.32, "start": 3176.4, "text": " model size and human detection of GPT-3 is close to chance okay so what they do" }, { "end": 3190.1600000000003, "start": 3184.32, "text": " is they give and they have some examples right here they give the model the" }, { "end": 3196.44, "start": 3190.1600000000003, "text": " following input the title the subtitle of an article and then this word article" }, { "end": 3200.6400000000003, "start": 3196.44, "text": " and the model is supposed to complete the rest of the article right here and" }, { "end": 3205.36, "start": 3200.6400000000003, "text": " you can also you know give do this in a few short setting such that the model" }, { "end": 3211.32, "start": 3205.36, "text": " basically knows that it's if you give it a few a few examples the model knows it" }, { "end": 3218.44, "start": 3211.32, "text": " is supposed to produce a news article right okay so there are two two ways" }, { "end": 3223.96, "start": 3218.44, "text": " that you can think of this first way the model has learned the language so well" }, { "end": 3228.88, "start": 3223.96, "text": " and it writes code it has learned to write coherent language and so on it's" }, { "end": 3235.32, "start": 3228.88, "text": " learned to reason keep context and blah blah blah okay second way the model sees" }, { "end": 3242.34, "start": 3235.32, "text": " this thing right here it sees the few you know K few shot examples that it has" }, { "end": 3248.4, "start": 3242.34, "text": " before in the context it will take them filter the training data to in this case" }, { "end": 3252.0800000000004, "start": 3248.4, "text": " it just sees news articles so do just news articles it will take this thing" }, { "end": 3256.44, "start": 3252.0800000000004, "text": " filter the training data even more to just the news articles that pertain" }, { "end": 3262.96, "start": 3256.44, "text": " largely to topics or words that appear in here and then lastly will interpolate" }, { "end": 3267.52, "start": 3262.96, "text": " the few training examples to produce this thing now they argue that this" }, { "end": 3273.4, "start": 3267.52, "text": " isn't really 
possible because they have actually checked that this news article" }, { "end": 3282.12, "start": 3273.4, "text": " is not in the training data but I have simply gone and taken a I've really" }, { "end": 3286.28, "start": 3282.12, "text": " taken a random substring here I've taken this substring voted to strengthen the" }, { "end": 3293.6400000000003, "start": 3286.28, "text": " ban on the ordination of just this substring and I've put it into Google and" }, { "end": 3300.88, "start": 3293.6400000000003, "text": " Babidi bah I find a book with voted to strengthen prohibitions to ban LGBTQ" }, { "end": 3305.76, "start": 3300.88, "text": " people from being ordained and ministers so it's you know I find this it's not" }, { "end": 3310.5, "start": 3305.76, "text": " the same article but it's talking about the same incident the article talks" }, { "end": 3315.1600000000003, "start": 3310.5, "text": " about and it is using the same language probably read the article and the" }, { "end": 3319.92, "start": 3315.16, "text": " author is like I can't really you know copy paste that would be you know not" }, { "end": 3325.72, "start": 3319.92, "text": " really cool so I'll just kind of you know write it in my own words but largely" }, { "end": 3331.72, "start": 3325.72, "text": " the same thing the Associated Press here also a different article you know see" }, { "end": 3338.8399999999997, "start": 3331.72, "text": " different title than this one right here but about the same thing and also with" }, { "end": 3343.48, "start": 3338.8399999999997, "text": " the same language right here voted to stay to strengthen the faiths divisive" }, { "end": 3350.2400000000002, "start": 3343.48, "text": " bands on same-sex marriage and ordination of LGBT clergy and generally" }, { "end": 3355.6, "start": 3350.2400000000002, "text": " so the argument this article wasn't in the training data is just not really" }, { "end": 3364.4, "start": 3355.6, "text": " something I buy in this in this case so I think it the article as such wasn't" }, { "end": 3369, "start": 3364.4, "text": " there but many articles about this topics were and I think this will just" }, { "end": 3374.32, "start": 3369, "text": " interpolate these now they say this was the hardest article for the humans to" }, { "end": 3382.24, "start": 3374.32, "text": " decide and this here was the easiest so it's it says I don't know" }, { "end": 3387.12, "start": 3382.24, "text": " Starr talks promise draws Megyn Kelly's sarcasm and it says a year ago joking" }, { "end": 3389.76, "start": 3387.12, "text": " Phoenix made headlines when he appeared on the red carpet at the Golden Globes" }, { "end": 3393.36, "start": 3389.76, "text": " wearing a tuxedo with a paper bag over his head that read I'm a shape-shifter" }, { "end": 3396.88, "start": 3393.36, "text": " blah blah you you would guess that joking Phoenix would do something like" }, { "end": 3401.2400000000002, "start": 3396.88, "text": " this but they say their human raiders were US based right and you see right" }, { "end": 3405.08, "start": 3401.2400000000002, "text": " here it says Megyn Kelly was not impressed and she let him have it on the" }, { "end": 3410.36, "start": 3405.08, "text": " tonight show now that tonight show is not what Megyn Kelly is and US based" }, { "end": 3415.6800000000003, "start": 3410.36, "text": " people would I guess know something like this and would immediately feel like" }, { "end": 3426, "start": 3415.6800000000003, "text": " this is wrong so I think this thing is 
interpolated from is interpolated from a" }, { "end": 3431.48, "start": 3426, "text": " bunch of different news articles about this and the interpolation just let it" }, { "end": 3436.4, "start": 3431.48, "text": " like made it such that this person is on this show which that they aren't and the" }, { "end": 3440.76, "start": 3436.4, "text": " humans noticed right but it doesn't change the fact that it probably just" }, { "end": 3445, "start": 3440.76, "text": " went to the training data filtered a bunch of articles about these words and" }, { "end": 3449.08, "start": 3445, "text": " then interpolated like mash them together it is a good language model" }, { "end": 3453.48, "start": 3449.08, "text": " right it can grammar it's very good at grammar so we can interpolate different" }, { "end": 3460.8, "start": 3453.48, "text": " passages of text and I feel that the the really really useful application of this" }, { "end": 3465.2, "start": 3460.8, "text": " will be sort of as a search engine as a fuzzy search engine so now I can like" }, { "end": 3471.6, "start": 3465.2, "text": " input for example my my machine learning research ideas and what will output will" }, { "end": 3475.4, "start": 3471.6, "text": " be sort of an abstract of a paper that is kind of a mush together of other" }, { "end": 3481.28, "start": 3475.4, "text": " papers on the same thing and that that you know you can think of many" }, { "end": 3487.2400000000002, "start": 3481.28, "text": " applications I don't think we have built something really intelligent here and" }, { "end": 3492.96, "start": 3487.2400000000002, "text": " what this is this is though is pretty cool they they give examples like this" }, { "end": 3497.28, "start": 3492.96, "text": " here where they make up a word and then ask the model to use the word in a" }, { "end": 3503.6000000000004, "start": 3497.28, "text": " sentence so to scree is something sorry to screech something is to swing a" }, { "end": 3508, "start": 3503.6000000000004, "text": " sword at it an example of a sentence that uses the word screech is and of" }, { "end": 3511.96, "start": 3508, "text": " course the model what's the model is going to do is it's going to take this" }, { "end": 3517.2, "start": 3511.96, "text": " it's going to filter the training data for all of the instances where sort of" }, { "end": 3521.32, "start": 3517.2, "text": " this construction appears like an example of using the words which is" }, { "end": 3526.2, "start": 3521.32, "text": " mostly so dictionaries then it's going to not know that word but it's you can" }, { "end": 3531.56, "start": 3526.2, "text": " interpolate it from interpolated from all this data right here and the so the" }, { "end": 3537, "start": 3531.56, "text": " cool thing is it actually conjugates the word we screed at each other for several" }, { "end": 3543.8, "start": 3537, "text": " minutes and then we went outside and ate ice cream so you can see how this is" }, { "end": 3548.4, "start": 3543.8, "text": " comes to be but I think it would really be fun to have a model that tells us" }, { "end": 3554.56, "start": 3548.4, "text": " which training data samples were used here it can also correct English grammar" }, { "end": 3562.84, "start": 3554.56, "text": " which is pretty obvious though again it can never correct so the the input" }, { "end": 3569.08, "start": 3562.84, "text": " always here is poor English good English poor English good English poor good poor" }, { "end": 3574.4, "start": 3569.08, "text": " English and then good English 
and that's what the model is asked to to output and" }, { "end": 3581.92, "start": 3574.4, "text": " I'm actually not sure pretty sure this here shouldn't be bold I'm fairly sure" }, { "end": 3585.28, "start": 3581.92, "text": " this shouldn't be bold this is given to the model the model is only asked to" }, { "end": 3593.6800000000003, "start": 3585.28, "text": " produce this otherwise I'd be I'd be actually impressed but yes nothing task" }, { "end": 3597.4, "start": 3593.6800000000003, "text": " specific is provided aside from the examples from few example as" }, { "end": 3602.96, "start": 3597.4, "text": " conditioning and the poor English input good English output framing so the good" }, { "end": 3607.2400000000002, "start": 3602.96, "text": " English output thing here should not be in boldface authors if you're listening" }, { "end": 3615.1200000000003, "start": 3607.2400000000002, "text": " this should not be bold thank you okay but again it is always as you can" }, { "end": 3619.7999999999997, "start": 3615.12, "text": " see it's too good English it's always the target is the good English whereas" }, { "end": 3625.52, "start": 3619.7999999999997, "text": " if the model really understood the task it should also be able to do the inverse" }, { "end": 3629.2, "start": 3625.52, "text": " it should be able to to produce something poor from something good" }, { "end": 3633.7599999999998, "start": 3629.2, "text": " because then you eliminate the fact that it's just a good English language model" }, { "end": 3640.16, "start": 3633.7599999999998, "text": " right because it can basically produce something like this without having a" }, { "end": 3645.48, "start": 3640.16, "text": " clue what the task is it will simply you condition on this input and it will" }, { "end": 3650.8399999999997, "start": 3645.48, "text": " simply output this sentence because it's very likely because it's already almost" }, { "end": 3656.16, "start": 3650.8399999999997, "text": " here and it will output it in better English because it's a good language" }, { "end": 3665.2799999999997, "start": 3656.16, "text": " model right it's it's a good English language model so yeah that so they" }, { "end": 3670.6000000000004, "start": 3665.28, "text": " measure this overfitting the degree to which their training to which their test" }, { "end": 3676.0400000000004, "start": 3670.6000000000004, "text": " data is in this common crawl thing and they say they have a conservative bound" }, { "end": 3681.0400000000004, "start": 3676.0400000000004, "text": " on how many percent of the data in the data set are clean and as you can see" }, { "end": 3686.2000000000003, "start": 3681.0400000000004, "text": " here they measure then how much the performance differs to to up or down if" }, { "end": 3691.36, "start": 3686.2000000000003, "text": " you only evaluate on the clean portion of this data set but again their" }, { "end": 3696.1200000000003, "start": 3691.36, "text": " deduplication is so weak they do like n-gram deduplication whereas I think you" }, { "end": 3700.6800000000003, "start": 3696.1200000000003, "text": " should really like in the news articles you should really do much more fuzzy" }, { "end": 3707.04, "start": 3700.6800000000003, "text": " deduplication much more of a meaning deduplication if you then want to argue" }, { "end": 3710.6400000000003, "start": 3707.04, "text": " that the model has learned to reason like if you simply want to argue that" }, { "end": 3716.88, "start": 3710.6400000000003, "text": " the model is 
a good language model fine right but yeah and also look at this" }, { "end": 3722.88, "start": 3716.88, "text": " like I would expect of a data set a test data set if you know if you have like a" }, { "end": 3726.6400000000003, "start": 3722.88, "text": " natural questions data set is constructed from Wikipedia pages and you" }, { "end": 3732.2000000000003, "start": 3726.6400000000003, "text": " have the Wikipedia page in there you can either either the entire thing is clean" }, { "end": 3738.48, "start": 3732.2000000000003, "text": " or none of it is clean and also these we know grad data set if this data set" }, { "end": 3742.1600000000003, "start": 3738.48, "text": " somehow leaked into the common crawl corpus either the entire thing is clean" }, { "end": 3747.08, "start": 3742.16, "text": " or none of it is clean I just have kind of problems with the fact that there are" }, { "end": 3756.3599999999997, "start": 3747.08, "text": " so many in between things right here and yeah so I'm not I'm not convinced here" }, { "end": 3763.3999999999996, "start": 3756.3599999999997, "text": " that this deduplication I still think it's a cool thing but I don't I think" }, { "end": 3769, "start": 3763.3999999999996, "text": " it's mostly a training data filter and interpolator rather than actual" }, { "end": 3774.44, "start": 3769, "text": " reasoning and they go through some of the limitations here and the broader" }, { "end": 3780.84, "start": 3774.44, "text": " input this broader impact statement is like five pages long and yeah okay you" }, { "end": 3789.04, "start": 3780.84, "text": " can do you can you know bad people take the model to do bad things okay and" }, { "end": 3794.44, "start": 3789.04, "text": " that's pretty much it so what I appreciate here is at the bottom they" }, { "end": 3799.7200000000003, "start": 3794.44, "text": " have basically all the results but also a lot of tasks descriptions like how" }, { "end": 3805.4, "start": 3799.7200000000003, "text": " they framed each tasks more outputs and they give more outputs on their website" }, { "end": 3809.32, "start": 3805.4, "text": " right so you can see here how each of the tasks was framed where you always" }, { "end": 3814.04, "start": 3809.32, "text": " have this is what this here is what the model sees and then this is what it's" }, { "end": 3821.92, "start": 3814.04, "text": " asked to produce right so you have this for for all many of these things and so" }, { "end": 3828.4, "start": 3821.92, "text": " on squad you have this context and the question okay so the the context is" }, { "end": 3833.56, "start": 3828.4, "text": " actually in there I didn't know that but you have the context and the question" }, { "end": 3838.8, "start": 3833.56, "text": " and the model is asked to complete something right here so you can look at" }, { "end": 3843.64, "start": 3838.8, "text": " how the model sees tasks and maybe you can evaluate for yourself how you think" }, { "end": 3850, "start": 3843.64, "text": " how difficult you think these tasks are all right I hope this was informative it" }, { "end": 3854.6, "start": 3850, "text": " is a long paper therefore it is a long video if you're still here and haven't" }, { "end": 3862.32, "start": 3854.6, "text": " subscribed yet do maybe if you like this if you want more leave it a like tell me" }, { "end": 3867.04, "start": 3862.32, "text": " in the comments what you think of it whether you think it's actually a GI or" }, { "end": 3881.64, "start": 3867.04, "text": " not and I'll see you next 
time bye bye" } ]
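To make the few-shot setup described in the segments above concrete, here is a minimal sketch of how such a two-digit-addition prompt can be built. The exact wording and formatting used to evaluate GPT-3 may differ from this; the task description, the helper name, and the extra example pair are illustrative only.

```python
# A minimal, illustrative sketch of the few-shot arithmetic prompt format
# discussed above: a task description, K worked examples, and a final query
# that the language model is asked to complete.

def build_addition_prompt(examples, query):
    """Builds a few-shot prompt from (a, b) example pairs and one query pair."""
    lines = ["Add the following numbers."]
    for a, b in examples:
        lines.append(f"Q: What is {a} plus {b}? A: {a + b}")
    a, b = query
    lines.append(f"Q: What is {a} plus {b}? A:")  # the model must complete this line
    return "\n".join(lines)

# Uses the numbers from the transcript: 11 + 12 = 23 as a demonstration,
# then 48 + 76 as the actual query.
print(build_addition_prompt(examples=[(11, 12), (34, 57)], query=(48, 76)))
```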
T35ba_VXkMY
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
DETR: End-to-End Object Detection with Transformers (Paper Explained)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "facebook", "fair", "fb", "facebook ai", "object detection", "coco", "bounding boxes", "hungarian", "matching", "bipartite", "cnn", "transformer", "attention", "encoder", "decoder", "images", "vision", "pixels", "segmentation", "classes", "stuff", "things", "attention mechanism", "squared", "unrolled", "overlap", "threshold", "rcnn" ]
Object detection in images is a notoriously hard task! Objects can be of a wide variety of classes, can be numerous or absent, they can occlude each other or be out of frame. All of this makes it even more surprising that the architecture in this paper is so simple. Thanks to a clever loss function, a single Transformer stacked on a CNN is enough to handle the entire task! OUTLINE: 0:00 - Intro & High-Level Overview 0:50 - Problem Formulation 2:30 - Architecture Overview 6:20 - Bipartite Match Loss Function 15:55 - Architecture in Detail 25:00 - Object Queries 31:00 - Transformer Properties 35:40 - Results ERRATA: When I introduce bounding boxes, I say they consist of x and y, but you also need the width and height. My Video on Transformers: https://youtu.be/iDulhoQ2pro Paper: https://arxiv.org/abs/2005.12872 Blog: https://ai.facebook.com/blog/end-to-end-object-detection-with-transformers/ Code: https://github.com/facebookresearch/detr Abstract: We present a new method that views object detection as a direct set prediction problem. Our approach streamlines the detection pipeline, effectively removing the need for many hand-designed components like a non-maximum suppression procedure or anchor generation that explicitly encode our prior knowledge about the task. The main ingredients of the new framework, called DEtection TRansformer or DETR, are a set-based global loss that forces unique predictions via bipartite matching, and a transformer encoder-decoder architecture. Given a fixed small set of learned object queries, DETR reasons about the relations of the objects and the global image context to directly output the final set of predictions in parallel. The new model is conceptually simple and does not require a specialized library, unlike many other modern detectors. DETR demonstrates accuracy and run-time performance on par with the well-established and highly-optimized Faster RCNN baseline on the challenging COCO object detection dataset. Moreover, DETR can be easily generalized to produce panoptic segmentation in a unified manner. We show that it significantly outperforms competitive baselines. Training code and pretrained models are available at this https URL. Authors: Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, Sergey Zagoruyko Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Hi there! Today we're going to look at End-to-End Object Detection with Transformers by Nicolas Carion, Francisco Massa and others at Facebook AI Research. So on a high level this paper does object detection in images using first a CNN and then a transformer to detect objects, and it does so via a bipartite matching training objective. And this leaves you basically with an architecture that is super simple compared to the previous architectures that had all kinds of engineering hurdles and thresholds and hyperparameters. So really excited for this. As always, if you like content like this, consider leaving a like, comment or subscribing. Let's get into it. So let's say you have a picture like this here and you're supposed to detect all the objects in it, and also where they are and what they are. This task is called object detection. So a good classifier here would say there's a bird right here, so this is a bird, and then this here is also a bird. Right? These bounding boxes can be overlapping, so this is, you see, the first problem (that bird... why is that green? Never mind). Okay, and those are the only two objects. So there are a number of very difficult things here. First of all you need to sort of detect the objects. You need to know how many there are; it's not always the same in each image. There can be multiple objects of the same class. There can be multiple objects of different classes. They can be anywhere, of any size. They can be overlapping, in the background, small, or across the entire image. They can occlude each other partially. So this is a very, very difficult problem, and previous work has done a lot of engineering on this, like building detectors where you kind of want to classify every single pixel, and then you get like two detections right here that are very close for the same class, and you say ah, that must maybe be the same instance, right? So there's only one thing here and not two things, and so on. So there used to be very complicated architectures that solve these problems, and this paper here comes up with a super simple architecture, and we'll kind of go from the high level down to the implementation of each of the parts. So what does this paper propose? How do we solve a task like this? First of all we take the image, the image here without the labels of course, and we put it through a convolutional neural network encoder. Since this is an image task it's, you know, kind of understandable that we do this, mostly because CNNs just work so well for images. So this gives us this set of image features, and I think this vector here is not really representative of what's happening. So let's actually take this picture right here and draw it in kind of an angled way. What we'll do with the CNN is we'll simply sort of scale it down but give it multiple channels. So here it's three channels, right? It's red, green and blue, like this. Three channels, but we'll scale it down while making it more channels. So, yeah, more channels. Okay, but it's still sort of an image right here; it still has the image form. So the CNN basically gives us this thing, which is sort of a higher-level representation of the image with many more feature channels, but still kind of the information of where in the image those features are. This is going to be important in a second, because now this thing, this set of image features, goes into a transformer encoder-decoder, and this is sort of the magic thing here as a component.
We'll look into that in a second, but what they get out right here is this set of box predictions. So out come these boxes; each of these boxes here is going to consist of a tuple, and the tuple is going to be the class and the bounding box. So an example for this could be: bird at x equals 2, y equals 5 (as the errata for this video notes, a bounding box actually needs a width and a height as well, not just x and y). That's an example. Another example of this could also be: there is nothing at x equals 7, y equals 9. So the nothing class is a valid class right here, and that's also important. But safe to say, there is this set of box predictions, and that is basically your output. These things are your output. If you have those things, you can draw these bounding boxes, you can assign the labels. The question is: how do you train it? Now what you're given is a database of images, and these images, as you see here on the right, already have these bounding boxes drawn in by human annotators, and also labels. So this here would be annotated with bird and this here would be annotated with bird. But it doesn't annotate the nothing classes and so on. So the question is, how do you compare the two? Can you simply say, okay, if the first one here is the bird and the second one is this bird, then it's good? But then, you know, the ordering shouldn't matter. You simply care whether you have the correct bounding boxes; you don't care whether you output them in the correct order. And also, what if your classifier does something like this? It outputs those two boxes we see here, but it also outputs this here and says bird, or like one that is slightly off and says bird, and so on. So how do you deal with all of these cases? The way that this paper deals with all of these cases is with their bipartite matching loss, this thing right here. So how does it work? Let's say your... where can we go? Let's say your classifier, so here is an image. I have to wait for this to catch up. Here is an image, and we put it through this entire pipeline and we get a set of predictions, and they're going to be class bounding box, class bounding box, class bounding box. Now the first thing you need to know is that there is always the same number of predictions. This size here is always fixed, that's large N. That's kind of the maximum of predictions. Since you can always predict either a class or the nothing class, in this case you could predict anywhere from zero to five objects in the scene. And then the second thing is, from your database you get out an image with its bounding box annotations that are made by human labellers. Let's say these two. And you also do class bounding box, class bounding box. But now you see we only have two instances, so here we just pad with the nothing class. So I don't know what the bounding box should be for the nothing class; it doesn't really matter. Nothing, no bounding box; nothing, no bounding box; nothing, no bounding box. So your ground-truth labels, if you will, are also of size N. So you always compare N things here on the left, which your classifier output, with N things on the right. Now, as we already said, the question is how you deal with this. You can't simply compare one by one, because the ordering should not be important. But also, you don't want to encourage your classifier, if the one bird is very prominent, to say: here's a bird, here's a bird, there's a bird right here, hey, hey, there's a bird, there's a bird, there's a bird, basically just because the signal for that bird is stronger, and to basically ignore the other bird.
What you want to do is encourage your classifier such that, if it has already detected an object, it shouldn't detect it again in a slightly different place. The way you do this is with this bipartite matching loss. So at the time when you compute a loss, you go here and you compute what's called a matching. Now what you have to provide is a loss function. So there's a loss function L, and L will take two of these things: L will take the red, the predicted thing of your model, and L will take one of the true underlying things, and L will compute a number and say how well these two agree. So you can say, for example, if either of them is the nothing class, then I have no loss, like I don't care about them, that gives you no loss. But if the two classes agree and the two bounding boxes agree, then it's very good, right? Then we maybe even give like some negative loss, or give loss zero. But if the bounding boxes agree but the classes don't agree, then you say that's bad. Or the other way around, if the classes agree but the bounding boxes don't. And if everything disagrees, it's the worst. What you're basically saying is: if these two were to correspond to each other, if the thing on the left were the prediction for the thing on the right (which we don't know; it could be that the thing on the right refers to the bird on the right and the thing on the left refers to the bird on the left, so it would be natural that the bounding boxes are the same), what would the loss be? How well would they do? And now you compute this bipartite matching, and I guess it's a minimum matching in this case. What you want is to find an assignment of things on the left to things on the right, a one-to-one assignment. This is an example of a one-to-one assignment: everything on the left is assigned exactly one thing on the right, such that the total loss is minimized. So you're going to say: I'm going to align the things on the left with the things on the right such that it's maximally favorable. I give you the maximum benefit of the doubt by aligning these things. So, in the best possible case, what's the loss? This is somehow clear. You're trying to find the assignment from the left to the right that is basically the best case for this output right here, where you really say: oh okay, here you output a bird very close to the bird here in the ground-truth labels, that's this here, so I'm going to connect these two, because that gives the model the most benefit of the doubt. And the loss that you have at the end of that matching, so this loss here would only count wherever these connections are, that loss is going to be your training loss. So this solves the problems we had before. It is not dependent on the order, because if you reorder the things, your minimum matching will simply swap with it. If you output the same bird multiple times, only one of these is going to be assigned. So if this here is that bird, only one of them, only this one maybe, is going to be assigned to that one, and the other ones can't be assigned to that one, are forced to be assigned to a different one, let's say this one here, and are going to incur a loss. So you encourage your model to output, let's say, diverse bounding boxes, different bounding boxes for things.
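To make this matching step concrete, here is a minimal, self-contained sketch (not the authors' code) of such a minimum bipartite matching using SciPy's linear_sum_assignment, an implementation of the Hungarian algorithm that, as mentioned next, the paper also uses. The pairwise loss here is a simplified stand-in: the negative predicted probability of the true class, plus an L1 distance between boxes for non-"nothing" targets; all numbers are toy values.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Toy setup: N = 3 predictions vs. N = 3 (padded) ground-truth slots.
# Each prediction: class probabilities over {bird, nothing} and a box (x, y, w, h).
pred_probs = np.array([[0.9, 0.1],    # confident bird
                       [0.7, 0.3],    # another bird
                       [0.2, 0.8]])   # mostly "nothing"
pred_boxes = np.array([[0.20, 0.50, 0.1, 0.1],
                       [0.70, 0.50, 0.1, 0.1],
                       [0.50, 0.50, 0.2, 0.2]])

# Ground truth: two birds, plus one slot padded with "nothing" (class index 1).
gt_classes = np.array([0, 0, 1])
gt_boxes = np.array([[0.21, 0.52, 0.1, 0.1],
                     [0.69, 0.48, 0.1, 0.1],
                     [0.00, 0.00, 0.0, 0.0]])  # box for "nothing" is ignored

N = len(gt_classes)
cost = np.zeros((N, N))
for i in range(N):          # predictions
    for j in range(N):      # ground-truth slots
        cost[i, j] = -pred_probs[i, gt_classes[j]]            # classification cost
        if gt_classes[j] != 1:                                # no box cost for "nothing"
            cost[i, j] += np.abs(pred_boxes[i] - gt_boxes[j]).sum()

# Hungarian algorithm: one-to-one assignment minimizing the total cost.
rows, cols = linear_sum_assignment(cost)
print(list(zip(rows.tolist(), cols.tolist())))  # here: [(0, 0), (1, 1), (2, 2)]
```

Only the matched pairs then contribute to the training loss, which is exactly the "benefit of the doubt" idea described above.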
So this solves these problems, and it's very clever. And there are algorithms to compute these minimum matchings; they use the Hungarian algorithm, which will give you exactly such a matching. Again, this is possible because you have N things on each side, and N in effect is the maximum number of objects that you can detect at once. I guess if there are fewer, you can simply pad right here, and then the model of course is encouraged to come up with an equal number of no-class predictions. Because if it outputs a prediction when it shouldn't, if it already predicts two things and these are assigned to these two things, and then it outputs one more thing, it is going to be penalized: it should output three things with no class, but it has output one too many with a class. Okay, so this is a pretty cool thing. Again, it relies on the fact that you have N on both sides, but you can make N so large that it basically covers all of the cases. So you can make N like 50, so you can detect up to 50 things in a scene. Alright, that's the algorithm at a high level. They do show their loss here. You see, the loss ultimately is going to be over this matching right here; that's the minimum bipartite assignment that basically minimizes this total loss over your prediction and label matchings. And the loss they come up with here, and I said you have to give the algorithm a loss, is this one. They kind of go into how they do it; I don't think it's super important. The loss on the class labels, I think, is going to be a cross-entropy loss, like in usual classification. And the loss that says whether two bounding boxes agree is a mixture of the L1 loss, which compares two bounding boxes, and this IoU loss, which is not dependent on the scale of the bounding boxes; it kind of computes what fraction of the two bounding boxes overlaps. But in any case, the loss basically consists of saying how much do the labels agree and how much do the bounding boxes agree. Again, this is only possible because you compute this matching first; otherwise you would have no clue which predictions to compare to which other predictions. So let's look at this architecture a bit more in detail. As we said, you have this thing they call the backbone, which is a convolutional neural network, and with that you put in some positional encodings. Now, I already said you should look at these features right here as just smaller feature versions of the image, but they still have some image nature. Then they are flattened before they are put into the transformer encoder, because the transformer is naturally a sequence-processing unit; it takes in just a sequence of vectors right here. And since an image is not a sequence, what you'll do is: if you have your image features, and we said we have a bunch of channels, let's say we have four channels, with their height and width, so H, W and C, you're going to unroll and flatten that into one sequence. So this is height times width: you basically unroll across these axes right here into this one axis, and the other axis is the channel size. So basically you have a sequence here of C-dimensional feature vectors that you then put into your encoder, and your encoder will now transform this sequence into an equally long sequence, yet again of features.
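As a quick illustration of this flattening step, here is a hedged PyTorch sketch (the shapes and layer counts are chosen arbitrarily, and the positional encodings mentioned above are omitted for brevity): a batch of CNN feature maps of shape (batch, C, H, W) is unrolled into a sequence of H·W feature vectors of dimension C, which is what a standard transformer encoder consumes.

```python
import torch
import torch.nn as nn

batch, C, H, W = 2, 256, 16, 16          # illustrative backbone output shape
features = torch.randn(batch, C, H, W)    # stand-in for what the CNN backbone produces

# Unroll the spatial axes into one sequence axis: (batch, C, H, W) -> (H*W, batch, C)
seq = features.flatten(2).permute(2, 0, 1)

encoder_layer = nn.TransformerEncoderLayer(d_model=C, nhead=8)
encoder = nn.TransformerEncoder(encoder_layer, num_layers=6)

out = encoder(seq)      # an equally long sequence of C-dimensional features
print(out.shape)        # torch.Size([256, 2, 256]) = (H*W, batch, C)
```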
And the good thing about a transformer... because why do you use a transformer? The good thing about the transformer is that in such a sequence (and I've done videos on transformers; you can mainly look at the video on Attention Is All You Need if you want to understand this more fully) this thing can basically have attention. So it has attention layers; it can attend from each position to each position in a one-shot manner. So as it transforms this representation up the transformer layers, at each step it can basically aggregate information from everywhere in the sequence to anywhere else, and therefore it's very powerful if you have a sequence and you need sort of global connections across the sequence. This is very good for language processing, because in a sentence, let's look at this sentence: "The input images are batched together, applying blah blah blah blah..." and then there's the word "they", and you need to know that "they" refers to the input images. But you see, this is very, very far away in the sentence, so you need a model that makes use of long-range dependencies. And they make the case that in such a task right here you also need the long-range dependencies, because these bounding boxes, as you see right here, can be quite large. So if you have an image, you need this part here to communicate with this and this and this part, basically anywhere in the bounding box, and these bounding boxes can be quite large, so the transformer architecture actually makes sense here. Now I want to go a bit later into why I think it actually makes even more sense for bounding box detection, but right now I just want to keep going through this architecture right here. So, if my computer here decides to come back... yes, we can go on. What we'll get out is yet another... so in here we put this thing down here into the transformer encoder, and we get an equally sized, equally shaped sequence out of the transformer encoder, and you see that this thing here goes as a side input into this transformer decoder. So the transformer encoder here is just a bit more of a feature mapping, technically; just for the architecture you could think of skipping it, but of course it's going to go better with the transformer encoder. The transformer decoder now does something similar, but you see it has the encoder as a side input. This is not like BERT; BERT is an encoder-only transformer, whereas this is much like the original Attention Is All You Need transformer that has an encoder and then a decoder that has the encoder output as a side input, basically as conditioning information. What does the decoder do? Again, since it's a transformer, it's going to take a sequence and output a sequence. The sequence it takes is right here, what they call object queries, and this also is different from the Attention Is All You Need paper: they don't do it autoregressively, they just do it in one shot. What does that mean? It means that you start with a sequence here of four things (this is this big N) and you output a sequence of four things, and it's important to see what they're going to end up as: these things then go directly through a classifier that outputs these class label and bounding box outputs. So each of these things is going to, after transformation, end up being one of these bounding boxes, either defining an object or saying that there isn't an object somewhere. You see here, this bounding box refers to this bird, and this bounding box refers to this bird; so each of these things is going to be one bounding box.
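Each output vector is turned into one (class, box) tuple by small prediction heads. Here is a minimal sketch of what such heads can look like; the exact head architecture in the paper differs somewhat (they use a deeper MLP for the boxes), and all dimensions below are illustrative.

```python
import torch
import torch.nn as nn

d_model, num_queries, batch = 256, 100, 2   # illustrative dimensions
num_classes = 91                            # object classes; one extra slot is "nothing"

class_head = nn.Linear(d_model, num_classes + 1)  # class logits, incl. the nothing class
bbox_head = nn.Sequential(                         # small MLP regressing (x, y, w, h)
    nn.Linear(d_model, d_model), nn.ReLU(),
    nn.Linear(d_model, 4),
)

decoder_out = torch.randn(num_queries, batch, d_model)  # stand-in for the decoder output
logits = class_head(decoder_out)                 # (100, 2, 92)
boxes = bbox_head(decoder_out).sigmoid()         # (100, 2, 4), normalized to [0, 1]
print(logits.shape, boxes.shape)
```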
to this bird this bounding box refers to this bird so each of these things is going to be one bounding box and the what they call object queries is the question of course is what do you input here right I actually I want to transform this image information that comes from the left here I want to transform that into the bounding boxes what do I input here and the answer is you just input at the start you just input n random vectors because what's that gonna give you is basically n outputs you want an outputs because you want n of these bounding box classifications so you need n things and if I input n things into a transformer it's going to give me n things as an output and then in each step I can simply condition on the information that comes in the images and it it'll give me right then I can incorporate that information it's a very deep learning way of thinking about it actually so that you just need the information somewhere in there and I need n things now they go more into detail into this transformer architecture help help in a helpful fashion in the appendix and we'll go there quickly so this I think here makes more sense so the image features come in here right and you see this is just a transformer stack an encoder stack of multi-head self-attention and feed forward in instance wise or like token wise feed forward network and then that information is taken and is given as conditioning information over here now in here as I said you input these object queries which at the beginning are just n random vectors and what you're going to do you are also going to feature encode them and then you combine it with this image information so ultimately if you think of this one of these things one of these things is going to be a vector right and then that vector is going to be transformed and then it will have as it is transformed it will have the opportunity to basically look at features that come from here the arrow is in the wrong direction so you have already taken the image and you've transformed it into a feature representation which is also a vector right you have the features of the image right here now as you transform this vector this object query Q you have the opportunity to look at the image features right and that's how do you get the image information in there so the image features will come in here transform that through attention so this is an attention mechanism on the image and then what you will output is a bounding box and a class label it's really hard to explain I would guess you need to understand really what attention mechanisms are and of course the crucial part of course is what what's this what do you input at the beginning and these object queries aren't actually random as I said they are learned so what you're going to do is you're going to learn in dependent of the input image you're going to learn n different object queries and these object queries now it's very it's very interesting because these object queries are sort of going to be different it's like you have different people that can ask the input image different questions right and this they have so their end is 100 but they show 20 of these object queries that they learn and so they have visualization of all bounding box predictions on all images so it's it's sort of like you have n different people at your disposal and you train these n different people to kind of ask different questions of the input image okay you say this person up here will always ask irrespective of what the input images will always 
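In code, these "people" are nothing more than n learned embedding vectors fed to the decoder in one shot. Here is a hedged sketch of that idea; note that the actual implementation injects the learned queries as positional embeddings at every decoder layer rather than feeding them directly as the input sequence, but this simplified version shows the mechanism, and the batch size and encoder shape below are illustrative.

```python
import torch
import torch.nn as nn

d_model, num_queries, batch = 256, 100, 2     # the paper's N is 100; batch is illustrative

# The object queries are N learned vectors, shared across all images:
query_embed = nn.Embedding(num_queries, d_model)

decoder_layer = nn.TransformerDecoderLayer(d_model=d_model, nhead=8)
decoder = nn.TransformerDecoder(decoder_layer, num_layers=6)

memory = torch.randn(16 * 16, batch, d_model)  # stand-in for the encoder output sequence

# One-shot decoding: all N queries go in at once (no autoregression).
# Each query cross-attends to the image features and self-attends to the other queries.
queries = query_embed.weight.unsqueeze(1).repeat(1, batch, 1)  # (N, batch, d_model)
out = decoder(queries, memory)                 # (100, 2, 256): one vector per "person"
print(out.shape)
```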
You'd say: this person up here, irrespective of what the input image is, will always ask, "Hey, input image, what's on your bottom left? That's what I'm really interested in. Sometimes I'm a bit interested in what's over here, but mainly I want to know what's on your bottom left." Whereas this person right here is more interested in what's in the center. The different colors refer to different sizes of bounding boxes, so the person on the top left is mainly interested in, I think, small bounding boxes on the bottom left, while the person here mostly asks, "What's in the center? What's large in the center? Give me large things in the center." And this person right here is really interested in stuff on the right side of the image. So you see, in order to get a diversity of bounding box predictions, you train n different people to ask different questions of the input image.

And this asking of questions is exactly what an attention mechanism is (see the small sketch after this passage). Let's take this one person; I keep saying person, but these are vectors, the learned object queries. This person will first simply ask, "What's on the right side?" It runs an attention mechanism over that part of the image features, gets back some signal, and transforms it together with its own signal. Then it can ask again. Because, you see, this person is interested in multiple things; at first it focuses on these regions, but then it says, "Now I know more; I see there is actually something on the right side." So in the higher layers it can go back and ask the image more questions by sending out the Q vectors of the attention mechanism, and it gets back the V vectors of the image features that correspond to those queries. Layer by layer, each person can ask ever more refined questions about whatever that particular person is interested in.

And since you have these different people asking different questions, you end up learning the people in such a way that, across the data set, together they cover every possible image pretty well. Again, what they're interested in initially does not depend on the picture; you learn this in a global manner. That's the best way I have of describing it: you basically learn n people, each one interested in different things, different classes and different regions of the image.
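As a tiny numerical illustration of this question-asking (a sketch with made-up shapes; this is just the generic scaled dot-product attention computation, not the paper's exact module):

```python
import torch

d = 256
queries = torch.randn(100, d)    # the n "people": learned object query vectors
img_feats = torch.randn(49, d)   # a flattened 7x7 grid of image features

# Each person compares their question against every image location (Q @ K^T)...
scores = queries @ img_feats.t() / d ** 0.5   # (100, 49)
attn = scores.softmax(dim=-1)                 # where each person "looks"

# ...and gets back a weighted sum of the image features (the V vectors).
answers = attn @ img_feats                    # (100, d)
```

Stacking several such layers is what lets each person ask progressively more refined questions.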
Each of these people then outputs their best guess of what is where, based on what they're interested in. One person might say, "I'm the person that cares about the left side of things, so I'm going to output that there is a bird right here." Now, since this is a transformer and everything can attend to everything, these people can actually communicate with each other as they incorporate information from the image. In each layer they can do both: take in information from the image and talk to each other, and then in the next layer do it again, and again, and again. Thereby they can sort of say, "You already got the left side, I'll take the right side; you already got the bird class, I'll take the elephant class," and so on. So you see how the architecture of the transformer is also very conducive to this bounding box prediction, in that the different queries can attend to, and therefore communicate with, each other. I hope that sort of makes sense.

Before we get into the experiments, I want to list a third reason why the transformer, especially the encoder, might make a giant amount of sense here, since you unroll the image into height times width. Think about what the transformer does: as we said, it has this notion of attention, where any point in the sequence can gather information from any other point in the sequence, and this, usually cited as one of the downsides of transformers, is done via a quadratic attention mechanism. If I take just one feature channel, then this axis is height times width of the image, the entire image unrolled into one vector, and here I unroll it again, height times width. The matrix I can build between the two, the attention matrix, tells me which parts of the sequence attend to which other parts.

So if you have an image of, say, the digit three, and you really want to figure out whether it is a three, then the bow up here must communicate with the bow down here; they need to share information: there's a bow here, a bow there, and a spiky thing here, so it must be a three. The top bow sits near the beginning of the unrolled sequence. First of all, each point will attend to itself, so you get fairly high values along the diagonal, maybe 10, 10, 11, 11, 12. But this part at the beginning of the sequence also needs to attend to the end, so we'd put a high value, say an 11, there as well, and the other way around (it doesn't need to be symmetrical, by the way). In any case, this is going to be an (H times W) squared matrix, because everything can attend to everything; that's the attention mechanism.

Why do I think this is so good for bounding boxes? Imagine you actually have this matrix, height times width by height times width. Every single point in it defines a bounding box, because along this dimension the point corresponds to one location in the image, and along that axis it corresponds to another location. In the attention matrix, an entry simply means these two points need to communicate; but two pixel locations are exactly what defines a bounding box. The fact that this happens in the exact same matrix could mean that transformers, run across sequences of these height-times-width unrolled images, are uniquely well suited to bounding box prediction tasks.
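Here is what that unrolling looks like in code (again a hedged sketch with made-up sizes): the encoder's self-attention over the flattened image is an (H*W) x (H*W) matrix, and every entry in it pairs two pixel locations.

```python
import torch

H, W, C = 7, 7, 256
feature_map = torch.randn(C, H, W)   # stand-in for the CNN backbone output

seq = feature_map.flatten(1).t()     # (H*W, C): the image unrolled into a sequence
scores = seq @ seq.t() / C ** 0.5    # the (H*W, H*W) attention matrix
attn = scores.softmax(dim=-1)

# attn[i, j] relates location i = (y1, x1) to location j = (y2, x2);
# two pixel locations are precisely the two corners of a bounding box.
```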
I'm actually a bit astounded, because when I first read just the title, this immediately popped into my mind: oh yes, of course, they're going to predict the bounding boxes straight from such a matrix. What I thought this was going to be is: you output an actual matrix like this and then simply classify each point. Here you classify whether, in this direction, there is a bird; and if you have two points like this, you also classify whether in that direction there is a bird, and that naturally defines a bounding box. Or you could take this matrix and just classify individual points in it to be bounding boxes, because each point already defines one. So I think these quadratic structures are uniquely suited here; someone must have thought of this, and if not, cite the YouTube channel. It would be funny, the first paper ever to have to cite a YouTube channel.

So, transformers seem to be a good idea for these kinds of things. How do they do? Of course they do well: they are on par with other, apparently much more complex architectures, these Faster R-CNN models. They do, however, train forever; I think they train for about six days on eight GPUs, which is not that much if you compare it to language models on hundreds of TPUs, but still. I don't want to go through the numbers of the experiments, but what is pretty cool is that they can now visualize this attention. You can see right here that if they look at a particular point in the image and visualize the attention, it actually attends to the instance itself. Overlapping and partially occluded things are usually the problem cases for detection algorithms, but you can see that the attention stays on the parts of the image that make up the instance in the back, the attention for the other instance stays on its parts, and neither bleeds into the other. That is one impressive thing about these architectures.

The other thing they show is that it can generalize to many, many instances: it has never seen 24 giraffes in one image, yet it can absolutely detect giraffe, giraffe, giraffe, all of them. And some of the coolest images, I find, are these attention visualizations here: even within the bounding box of the front elephant, the attention on the foot of the back elephant is assigned to the blue bounding box; the blue bounding-box person is attending to that back foot. That means these models really learn things like occlusion. I have a hard time describing it, but you can see it visually: it clearly learns that these are two instances occluding each other, and that one instance can appear within the bounding box of the other. The same goes for the zebras partially occluding each other, where even the back foot of this zebra is correctly attributed.

All in all, that is pretty cool, and they take it a step further: with this architecture, they say, we can pretty easily do pixel-wise classification. This is the COCO stuff-and-things data set. I don't know which one is the stuff and which one is the things; I think things are the objects and stuff is sky, mountains and so on. It's a classification task where you have to label every single pixel. What they do is run the image through their detector, detect the instances, take the attention maps of the instances, and then scale those up with what is essentially a CNN in reverse, since the image had been scaled down earlier. Then they can simply classify each pixel: remember we had these different people, each caring about different things in the image; each of these people classifies the pixels they feel responsible for, and then you merge all of these people's predictions into one final prediction.
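A hedged sketch of that merging step (my reading of the pipeline; names and shapes are illustrative, not the authors' implementation): each query contributes one upscaled mask map, and every pixel goes to whichever query claims it most strongly.

```python
import torch

num_queries, H, W = 100, 200, 300
mask_logits = torch.randn(num_queries, H, W)   # one upscaled map per query/"person"

owner = mask_logits.argmax(dim=0)              # (H, W): the winning query per pixel
# Mapping each query index to its predicted class label then yields the
# final pixel-wise stuff-and-things segmentation.
```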
And again, this gives pretty impressive results. I mean, this is fun; it looks like it just works. They do quantitative analysis too, of course, but I'm simply impressed by the examples right here.

Alright, that was sort of it. I really enjoyed reading this paper; the simplicity is pretty cool. Not only do they have code in the paper to show how ridiculously easy it is to get this to run (this is all you need in PyTorch), they also have actual code and, as I understand it, pre-trained models: there is a model zoo where they give you the pre-trained models to play with, you can even load it from Torch Hub yourself (a rough loading sketch is below), and you can train it yourself; they have a colab. It's all there. Alright, if you enjoyed this video, consider leaving a like and subscribing, and I'll see you next time. Bye bye.
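For completeness, loading a pretrained model from Torch Hub should look roughly like this (the repo and entrypoint names are my best guess from the model zoo; treat them as assumptions and check the repository):

```python
import torch

# Entrypoint name assumed from the DETR model zoo; verify against the repo.
model = torch.hub.load('facebookresearch/detr', 'detr_resnet50', pretrained=True)
model.eval()

img = torch.randn(1, 3, 800, 1066)   # dummy tensor standing in for a real image
with torch.no_grad():
    out = model(img)
# out should be a dict with 'pred_logits' (per-query class scores) and
# 'pred_boxes' (normalized boxes), one entry per object query.
```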
together into this prediction and again this gives pretty" }, { "end": 2403.3199999999997, "start": 2396.12, "text": " pretty impressive results I am I mean this is this is fun this looks like it" }, { "end": 2408.52, "start": 2403.3199999999997, "text": " sort of works haven't they do quantitative analysis of course but I'm" }, { "end": 2411.7999999999997, "start": 2408.52, "text": " just impressed by the examples right here" }, { "end": 2417.3599999999997, "start": 2411.7999999999997, "text": " alright that was sort of it I really enjoyed reading this papers the" }, { "end": 2422, "start": 2417.3599999999997, "text": " simplicity is pretty cool they do have not only do they have code in the paper" }, { "end": 2428.16, "start": 2422, "text": " to show how ridiculously easy it is to get this to run this is all you need in" }, { "end": 2433.64, "start": 2428.16, "text": " pytorch but they do actually have code and as I understand they also have" }, { "end": 2438.4, "start": 2433.64, "text": " pre-trained models so they have this model zoo right here where they give you" }, { "end": 2442.68, "start": 2438.4, "text": " the pre-trained models so you can play with it and you can even load it from" }, { "end": 2448.08, "start": 2442.68, "text": " torch hub yourself and you can train it yourself they have a collab all is there" }, { "end": 2453.2, "start": 2448.08, "text": " all right again if you enjoyed this video consider leaving a like" }, { "end": 2482.08, "start": 2453.2, "text": " subscribing and I'll see you next time bye bye" } ]
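As a quick aside on the quadratic-attention argument in the transcript above: here is a minimal toy sketch (my own illustration, not code from the paper or its repository) of why every entry of the attention matrix over an unrolled H x W image corresponds to a pair of pixel locations, which is exactly what defines a bounding box.

import torch

# Unroll an H x W feature map into a sequence of H*W tokens.
H, W, C = 8, 8, 16
feats = torch.randn(H * W, C)

# Single attention head with no learned projections, for simplicity.
q = k = feats
attn = torch.softmax(q @ k.T / C ** 0.5, dim=-1)  # shape (H*W, H*W): quadratic

# Entry (i, j) relates token i to token j; mapping the flat indices back to
# 2D coordinates yields two pixel locations, i.e. two corners of a candidate box.
i, j = 3, 50
corner_a = divmod(i, W)  # (row, col) of token i
corner_b = divmod(j, W)  # (row, col) of token j
print(attn.shape, corner_a, corner_b)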
a-VQfQqIMrE
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
mixup: Beyond Empirical Risk Minimization (Paper Explained)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "classifier", "dnn", "cnn", "high dimensions", "class boundaries", "mixing", "interpolation", "latent", "beta", "regularizer", "regularization", "generalization", "adversarial examples", "smooth" ]
Neural Networks often draw hard boundaries in high-dimensional space, which makes them very brittle. Mixup is a technique that linearly interpolates between data and labels at training time and achieves much smoother and more regular class boundaries. OUTLINE: 0:00 - Intro 0:30 - The problem with ERM 2:50 - Mixup 6:40 - Code 9:35 - Results https://arxiv.org/abs/1710.09412 Abstract: Large deep neural networks are powerful, but exhibit undesirable behaviors such as memorization and sensitivity to adversarial examples. In this work, we propose mixup, a simple learning principle to alleviate these issues. In essence, mixup trains a neural network on convex combinations of pairs of examples and their labels. By doing so, mixup regularizes the neural network to favor simple linear behavior in-between training examples. Our experiments on the ImageNet-2012, CIFAR-10, CIFAR-100, Google commands and UCI datasets show that mixup improves the generalization of state-of-the-art neural network architectures. We also find that mixup reduces the memorization of corrupt labels, increases the robustness to adversarial examples, and stabilizes the training of generative adversarial networks. Authors: Hongyi Zhang, Moustapha Cisse, Yann N. Dauphin, David Lopez-Paz Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Hi there! Today we'll look at Mixup: Beyond Empirical Risk Minimization by Hongyi Zhang, Moustapha Cisse, Yann N. Dauphin and David Lopez-Paz. So this paper is actually pretty simple, but it introduces a technique that apparently helps with training classifiers, and I have seen it used in practice, so there must be at least something to it. It is ultimately very simple. So usually you input a data point X into your neural network in deep learning. So f of X, that's your neural network. Your neural network has parameters theta. You get some output y hat, and along with the X you also have a y, a true label, and then you have a loss function that compares what you output with your true label, and then you just try to make that loss smaller. You want to adjust your parameters so next time you see data point X, its output will be a little closer to the true label y. And we call this empirical risk minimization, because you don't actually think that your X comes from some data distribution D, like the space of all natural images or the space of all of language, but what you actually have is a data set of a finite amount of data that you can sample X and Y from. So instead of minimizing your true risk, you minimize your empirical risk, the empirical risk minimization right here. Now what's the problem with that? The problem is that you can get overly confident about your data points and nothing else, and that will hurt your generalization. So if you have a data point, let's say right here, and another one right here, your network is basically, so this is class 1, this is class 2, your network is going to maybe make decision boundaries like this, and like this, where it says okay, here is class 1 and here is class 2. But it's very conceivable that here it says, here is class 4, and over here is class 7, and right here there is class 9, and by the way, here, class 4 again. So the empirical risk minimization leaves everything in between the data points open. Now what this paper proposes is that we should not only train our classifier on these data points, but on all the data points sort of in between the two. And these are the mixup data points. So this data point here might be constructed, if this is A and this is B, from 0.1 times B, right, plus 0.9 times A, because it's mostly A and it's a little bit B. And now you think, what are the labels here if A belongs to class 1 and B belongs to class 2? And of course the label of this data point is 0.1 times the class of B, which is 2, plus 0.9 times the class of A, which is 1. Ultimately, because what you do is you input a class like class number 2, if you want to input this into a machine learning model, you don't just say it's class number 2. What you input is a distribution that basically has zeros everywhere. So these small things, they're 0, 0, 0, 1, 0. And this 1 here is at class number 2. So this would be class number 1, class number 2, class number 3, right? You input a distribution like this if you want to express class number 2. Now in our sample right here, what we would input as a label is simply a mix between class 1 and class 2: so 0.9 of class 1, 0.1 of class 2, and then 0 everywhere else. So this would be our label for the data point that we construct right here. This would be our, sorry, the top one would be our data point. Formally, you take two data points and you mix them using this lambda mixing factor. That'll give you a new data point that's in between the other data points.
And you take the two corresponding labels and you mix them accordingly as well. And that will give you the label for that data point. And now your model will learn to basically smoothly interpolate. So you will teach your model. So the model on the left here is class number 1, right? That's class number 1. The thing on the right is class number 2. This here is half of class 1 and half of class 2. So the model basically learns a smooth interpolation where the situation that's here on top is probably not going to happen anymore. But what it would do is it would sort of create these iso lines around class 2 and around class 1 where it's sort of smoothly getting less and less sure about the class of the data points. But on the way, it is always either class 1 or class 2. And they say that can help the generalization performance. And it's visible. Why? Right? It's just the only thing that's not clear from the beginning is that this kind of interpolation actually makes sense. Because this means we sort of linearly interpolate between two images. So if we have two images, we just take half of one and half of the other. And that will be not a natural image. It will be kind of a blurry thing. Otherwise, you know, all our problems would be solved. And we could just linearly classify things. But in any case, in practice, it actually seems to help. Probably because interpolations of two images, linear interpolations, are still much more like something like a natural image than any random noise you could come up with. So they say this in code right here. Code is pretty simple. Simply want to mix the two things. And the mixing factor, this lambda here, comes from a beta distribution. And they use a beta, I believe, of 0.4 or something. Just want to quickly show you. This is the red line here. So the red line, as you can see, mostly, most of the time, they're going to either sample the thing on the very left or the thing on the very right. That means they either sample the first or the second data point. But some of the time, they actually sample something in the middle. And it's fairly uniform in the middle. So it appears like a good distribution to sample from if you want to sample these mixing coefficients. And by adjusting the actual number of alpha and beta here, you can determine how many times you sample the original data points versus how many times you sample something in the middle. OK. On this toy data set right here, they showcase what mixup can do. So in a classic model, you have the orange and the green data points. And blue is basically where the classifier believes it's class one. You see this very hard border here. It's quite a hard border. Now, you only have two classes here. And so the hard border is sort of a problem in itself, because if you think of, for example, adversarial examples, all they have to do is basically get over that one inch. And the classifier is already super duper sure it's the orange class. Whereas if you use mixup, your border is much, much, much more fuzzy. It's like, yeah, it's only really sure here. And out here everywhere. But in the middle, it's sort of like, I don't know. And so that's kind of a more desirable situation. Now, of course, this here works particularly in this in this linear 2D setting. But as we can see, the same reasoning applies to sort of higher, higher layers and higher dimensionality data points. I have seemed to lost the ability to zoom. Oh, no, it's back. OK. And that's basically it for this paper. This is all they do. 
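Since the method really is this small, here is a minimal training-step sketch of mixup as described above (the paper ships its own snippet; the variable names here are my own, and alpha = 0.4 follows the value mentioned in the video):

import numpy as np
import torch
import torch.nn.functional as F

def mixup_step(model, x1, y1, x2, y2, alpha=0.4):
    # Draw the mixing coefficient lambda from a Beta(alpha, alpha) distribution.
    lam = float(np.random.beta(alpha, alpha))
    # Linearly interpolate the two input batches.
    x = lam * x1 + (1.0 - lam) * x2
    pred = model(x)
    # Mixing the one-hot labels is equivalent to mixing the two losses.
    loss = lam * F.cross_entropy(pred, y1) + (1.0 - lam) * F.cross_entropy(pred, y2)
    return loss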
They propose this method and then they test it. They say something interesting here, that mixup converges to the classical method as alpha approaches zero. So that would push your beta distribution basically in the middle all the way down, and you would only sample from the very left or the very right. So you can smoothly interpolate between this mixing and the classic method. So their main results are: they apply this to classifiers. And what I like is, since a GAN also contains a classifier (the discriminator is a classifier), they also apply it to GANs, and they outperform and stabilize the classic training on GANs. They show that it's more robust towards adversarial attacks, because it's not so sure about intermediate things. And they generally outperform other methods. But also they do this nice investigation here where they measure the prediction error on in-between data. And what it means is, they say a prediction is counted as a miss if it does not belong to YI or YJ. So you have a sample right here, XI, and a sample right here, XJ. And you look at what the classifier says in between the two data points. So you just interpolate the two data points and just measure what the classifier says. And whenever the classifier says either YI or YJ, either label of those two data points, you count it as correct, and you only count it as incorrect if it says something else. And you can see here, if you train with the classic method ERM, these errors happen much more often. That's exactly the situation I pointed out at the beginning, where in the high dimensions it can occur that all sorts of decision boundaries sneak here in between the two data points. And by interpolating between them during training, you much reduce that. You reduce that effect a lot. Now, they also say that with the norm of the gradients of the model with respect to inputs in between training data, the same thing happens: the norm of the gradients in the middle is also much, much lower. And this investigation I find pretty cool. I have to say I've seen mixup in practice, so it might be useful. I've read a paper, I believe it was the Big Transfer paper, where they basically say it is useful if you have, for example, little data and a big model, so you can sort of regularize the model. And it is also useful to know that they did test this with dropout. So they compared it with dropout, and the conclusion is basically that this is something else than dropout. So it's not doing the same thing. Dropout, of course, means you drop out some of the dimensions in intermediate activations, and that sort of gives you a noisy version of the data point. This here can actually be combined with dropout, which means that it gives you an additional benefit. You see right here, most of the best numbers happen when you use mixup plus dropout. So it seems to be just an additional regularization on top of dropout. Pretty cool, pretty cool investigation also. All right. So if you like this, I invite you to read the paper. If you like the video, please subscribe and like and comment. And yeah, have a nice day. Bye bye.
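A small sketch of the in-between prediction-error probe discussed above (my own reconstruction, not the authors' evaluation code): interpolate two training points and count a miss whenever the classifier predicts neither of the two source labels.

import torch

@torch.no_grad()
def in_between_miss_rate(model, xi, yi, xj, yj, steps=16):
    misses, total = 0, 0
    for lam in torch.linspace(0.0, 1.0, steps):
        # Predictions on points along the line between xi and xj.
        pred = model(lam * xi + (1.0 - lam) * xj).argmax(dim=-1)
        # A miss is a prediction that matches neither source label.
        misses += ((pred != yi) & (pred != yj)).sum().item()
        total += pred.numel()
    return misses / total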
[ { "end": 6.7, "start": 0, "text": " Hi there! Today we'll look at Mix-Up Beyond Empirical Risk Minimization by Hongyi Cheng," }, { "end": 11.66, "start": 6.7, "text": " Mustafa Sis, Jan Endauphin and David Lopez-Paz." }, { "end": 22, "start": 11.66, "text": " So this paper is actually pretty simple, but it introduces a technique that apparently helps with training classifiers," }, { "end": 29.5, "start": 22, "text": " and I haven't seen it used in practice, so there must be at least something to it." }, { "end": 32.5, "start": 29.5, "text": " It is ultimately very simple." }, { "end": 41, "start": 32.5, "text": " So usually you input a data point X into your neural network in deep learning." }, { "end": 48, "start": 41, "text": " So f of X, that's your neural network. Your neural network has parameters theta." }, { "end": 56, "start": 48, "text": " You get some output y hat, and along with the X you also have a y, a true label," }, { "end": 62, "start": 56, "text": " and then you have a loss function that compares what you output with your true label," }, { "end": 65, "start": 62, "text": " and then you just try to make that loss smaller." }, { "end": 75, "start": 65, "text": " You want to adjust your parameters so next time you see data point X, its output will be a little closer to the true label y." }, { "end": 82, "start": 75, "text": " And we call this empirical risk minimization," }, { "end": 90, "start": 82, "text": " because you don't actually think that your X comes from some data distribution D," }, { "end": 94, "start": 90, "text": " like the space of all natural images or the space of all of language," }, { "end": 104, "start": 94, "text": " but what you actually have is a data set of a finite amount of data that you can sample X and Y from." }, { "end": 116, "start": 104, "text": " So instead of minimizing your true risk, you minimize your empirical risk, the empirical misc-renimization right here." }, { "end": 118, "start": 116, "text": " Now what's the problem with that?" }, { "end": 124, "start": 118, "text": " The problem is that you can get overly confident about your data points and nothing else," }, { "end": 126, "start": 124, "text": " and that will hurt your generalization." }, { "end": 132, "start": 126, "text": " So if you have a data point, let's say right here, and another one right here," }, { "end": 138, "start": 132, "text": " your network is basically, so this is class 1, this is class 2," }, { "end": 143, "start": 138, "text": " your network is going to maybe make decision boundaries like this, and like this," }, { "end": 146, "start": 143, "text": " where it says okay, here is class 1 and here is class 2." }, { "end": 155, "start": 146, "text": " But it's very conceivable that here it says, here is class 4, and over here is class 7," }, { "end": 161, "start": 155, "text": " and right here through is class 9, and by the way, here, class 4 again." }, { "end": 170, "start": 161, "text": " So the empirical risk minimization leaves everything in between the data points open." }, { "end": 179, "start": 170, "text": " Now what this paper proposes is that we should not only train our classifier on these data points," }, { "end": 185, "start": 179, "text": " but on all the data points sort of in between the two." }, { "end": 190, "start": 185, "text": " And this is the mix-up data points." 
}, { "end": 195, "start": 190, "text": " So this data point here might be constructed, if this is A and this is B," }, { "end": 208, "start": 195, "text": " from 0.1 times B, right, and plus 0.9 times A, because it's mostly A and it's a little bit B." }, { "end": 214, "start": 208, "text": " And now you think, what are the labels here if A belongs to class 1 and B belongs to class 2?" }, { "end": 222, "start": 214, "text": " And of course the label of this data point is 0.1 times the class of B, which is 2," }, { "end": 227, "start": 222, "text": " plus 0.9 times the class of A, which is 1." }, { "end": 233, "start": 227, "text": " Ultimately, because what you do is you input a class like class number 2," }, { "end": 238, "start": 233, "text": " if you want to input this into a machine learning model, you don't just say it's class number 2." }, { "end": 245, "start": 238, "text": " What you input is a distribution that basically has zeros everywhere." }, { "end": 250, "start": 245, "text": " So these small things, they're 0, 0, 0, 1, 0." }, { "end": 252, "start": 250, "text": " And this here is at class number 2." }, { "end": 255, "start": 252, "text": " So this would be class number 1, class number 2, class number 3, right?" }, { "end": 261, "start": 255, "text": " You input a distribution like this if you want to express class number 2." }, { "end": 269, "start": 261, "text": " Now in our sample right here, what we would input as a label is simply a mix between class 1," }, { "end": 278, "start": 269, "text": " so 0.9, so 0.9 of class 1, 0.1 of class 2, and then 0 everywhere else." }, { "end": 284, "start": 278, "text": " So this would be our label for the data point that we construct right here." }, { "end": 288, "start": 284, "text": " This would be our, sorry, the top one would be our data point." }, { "end": 297, "start": 288, "text": " Formally, you take two data points and you mix them using this lambda mixing factor." }, { "end": 301, "start": 297, "text": " That'll give you a new data point that's in between the other data points." }, { "end": 305, "start": 301, "text": " And you take the two corresponding labels and you mix them accordingly as well." }, { "end": 308, "start": 305, "text": " And that will give you the label for that data point." }, { "end": 314, "start": 308, "text": " And now your model will learn to basically smoothly interpolate." }, { "end": 316, "start": 314, "text": " So you will teach your model." }, { "end": 320, "start": 316, "text": " So the model on the left here is class number 1, right?" }, { "end": 321, "start": 320, "text": " That's class number 1." }, { "end": 323, "start": 321, "text": " The thing on the right is class number 2." }, { "end": 330, "start": 323, "text": " This here is half of class 1 and half of class 2." }, { "end": 335, "start": 330, "text": " So the model basically learns a smooth interpolation where the situation that's here on top" }, { "end": 337, "start": 335, "text": " is probably not going to happen anymore." }, { "end": 344, "start": 337, "text": " But what it would do is it would sort of create these iso lines around class 2" }, { "end": 351, "start": 344, "text": " and around class 1 where it's sort of smoothly getting less and less sure about the class of the data points." }, { "end": 354, "start": 351, "text": " But on the way, it is always either class 1 or class 2." }, { "end": 357, "start": 354, "text": " And they say that can help the generalization performance." 
}, { "end": 359, "start": 357, "text": " And it's visible. Why? Right?" }, { "end": 367, "start": 359, "text": " It's just the only thing that's not clear from the beginning is that this kind of interpolation actually makes sense." }, { "end": 372, "start": 367, "text": " Because this means we sort of linearly interpolate between two images." }, { "end": 375, "start": 372, "text": " So if we have two images, we just take half of one and half of the other." }, { "end": 377, "start": 375, "text": " And that will be not a natural image." }, { "end": 379, "start": 377, "text": " It will be kind of a blurry thing." }, { "end": 382, "start": 379, "text": " Otherwise, you know, all our problems would be solved." }, { "end": 385, "start": 382, "text": " And we could just linearly classify things." }, { "end": 389, "start": 385, "text": " But in any case, in practice, it actually seems to help." }, { "end": 393, "start": 389, "text": " Probably because interpolations of two images, linear interpolations," }, { "end": 401, "start": 393, "text": " are still much more like something like a natural image than any random noise you could come up with." }, { "end": 406, "start": 401, "text": " So they say this in code right here." }, { "end": 407, "start": 406, "text": " Code is pretty simple." }, { "end": 410, "start": 407, "text": " Simply want to mix the two things." }, { "end": 414, "start": 410, "text": " And the mixing factor, this lambda here, comes from a beta distribution." }, { "end": 418, "start": 414, "text": " And they use a beta, I believe, of 0.4 or something." }, { "end": 421, "start": 418, "text": " Just want to quickly show you. This is the red line here." }, { "end": 427, "start": 421, "text": " So the red line, as you can see, mostly, most of the time," }, { "end": 433, "start": 427, "text": " they're going to either sample the thing on the very left or the thing on the very right." }, { "end": 437, "start": 433, "text": " That means they either sample the first or the second data point." }, { "end": 441, "start": 437, "text": " But some of the time, they actually sample something in the middle." }, { "end": 445, "start": 441, "text": " And it's fairly uniform in the middle." }, { "end": 451, "start": 445, "text": " So it appears like a good distribution to sample from if you want to sample these mixing coefficients." }, { "end": 456, "start": 451, "text": " And by adjusting the actual number of alpha and beta here," }, { "end": 464, "start": 456, "text": " you can determine how many times you sample the original data points versus how many times you sample something in the middle." }, { "end": 466, "start": 464, "text": " OK." }, { "end": 472, "start": 466, "text": " On this toy data set right here, they showcase what mixup can do." }, { "end": 476, "start": 472, "text": " So in a classic model, you have the orange and the green data points." }, { "end": 480, "start": 476, "text": " And blue is basically where the classifier believes it's class one." }, { "end": 482, "start": 480, "text": " You see this very hard border here." }, { "end": 484, "start": 482, "text": " It's quite a hard border." }, { "end": 486, "start": 484, "text": " Now, you only have two classes here." }, { "end": 494, "start": 486, "text": " And so the hard border is sort of a problem in itself, because if you think of, for example, adversarial examples," }, { "end": 499, "start": 494, "text": " all they have to do is basically get over that one inch." 
}, { "end": 505, "start": 499, "text": " And the classifier is already super duper sure it's the orange class." }, { "end": 509, "start": 505, "text": " Whereas if you use mixup, your border is much, much, much more fuzzy." }, { "end": 513, "start": 509, "text": " It's like, yeah, it's only really sure here." }, { "end": 516, "start": 513, "text": " And out here everywhere." }, { "end": 520, "start": 516, "text": " But in the middle, it's sort of like, I don't know." }, { "end": 524, "start": 520, "text": " And so that's kind of a more desirable situation." }, { "end": 530, "start": 524, "text": " Now, of course, this here works particularly in this in this linear 2D setting." }, { "end": 540, "start": 530, "text": " But as we can see, the same reasoning applies to sort of higher, higher layers and higher dimensionality data points." }, { "end": 543, "start": 540, "text": " I have seemed to lost the ability to zoom." }, { "end": 545, "start": 543, "text": " Oh, no, it's back." }, { "end": 546, "start": 545, "text": " OK." }, { "end": 549, "start": 546, "text": " And that's basically it for this paper." }, { "end": 550, "start": 549, "text": " This is all they do." }, { "end": 554, "start": 550, "text": " They propose this method and then they test it." }, { "end": 561, "start": 554, "text": " They say something interesting here that mixup converges to the classical method as alpha approaches zero." }, { "end": 565, "start": 561, "text": " So that would push your beta distribution basically in the middle all the way down." }, { "end": 570, "start": 565, "text": " And you would only sample from the very left or the very right." }, { "end": 577, "start": 570, "text": " So you can smoothly interpolate between this mixing and the classic method." }, { "end": 584, "start": 577, "text": " They so their main results are we apply this to classifiers." }, { "end": 588, "start": 584, "text": " And what I like is, since again, is also a classifier." }, { "end": 589, "start": 588, "text": " So the discriminator is a classifier." }, { "end": 595, "start": 589, "text": " They also apply it to GANs and they outperform on stabilized the classic training on GANs." }, { "end": 604, "start": 595, "text": " They show that it's more robust towards adversarial attacks because it's not so sure about intermediate things." }, { "end": 608, "start": 604, "text": " And they generally outperform other methods." }, { "end": 617, "start": 608, "text": " But also they do this nice investigation here where they measure the prediction error of in between data." }, { "end": 624, "start": 617, "text": " And what it means is they say a prediction is counted as a miss if it does not belong to YI or YJ." }, { "end": 629, "start": 624, "text": " So you have a sample right here, XI and a sample right here XJ." }, { "end": 635, "start": 629, "text": " And you look at what the classifier says in between the two data points." }, { "end": 639, "start": 635, "text": " So you just interpolate the two data points and just measure what the classifier says." }, { "end": 646, "start": 639, "text": " And whenever the classifier either says YI or YJ, either either label of those two data points," }, { "end": 652, "start": 646, "text": " you count it as correct and you only counted as incorrect if it says something else." }, { "end": 659, "start": 652, "text": " And you can see here if you train with the classic method ERM, these errors happen much more often." 
}, { "end": 665, "start": 659, "text": " That's exactly the situation I pointed out at the beginning where in the high dimensions," }, { "end": 671, "start": 665, "text": " it can occur that all sorts of decision boundaries sneak here in between the two data points." }, { "end": 681, "start": 671, "text": " And by interpolating between them during training, you sort of much reduce that." }, { "end": 686, "start": 681, "text": " You reduce that effect a lot." }, { "end": 695, "start": 686, "text": " Now, this they also say that the gradient norm of the gradients of the model with respect to input in between training data," }, { "end": 702, "start": 695, "text": " it happens the same thing. The norm of the gradients in the middle is also much, much lower." }, { "end": 711, "start": 702, "text": " And this investigation I find pretty cool. I have to say I've seen mix up in practice, so it might be useful." }, { "end": 716, "start": 711, "text": " I've read a paper where they basically say, oh, it was a big transfer paper." }, { "end": 722, "start": 716, "text": " Yeah, where they basically say it is useful if you have, for example, if you have little data and a big model," }, { "end": 728, "start": 722, "text": " so you can sort of regularize the model and is also useful to know that they did test this with dropout." }, { "end": 734, "start": 728, "text": " So they compared it with dropout. And the conclusion is basically that this is something else than dropout." }, { "end": 743, "start": 734, "text": " So it's not doing the same thing. Dropout, of course, it means you drop out some of the data points in intermediate activations." }, { "end": 747, "start": 743, "text": " And that sort of gives you a noisy version of the data point." }, { "end": 754, "start": 747, "text": " This here can actually be combined with dropout, which means that it gives you an additional benefit." }, { "end": 760, "start": 754, "text": " You see right here, most of the best numbers happen when you use mix up plus dropout." }, { "end": 766, "start": 760, "text": " So it seems to be just an additional regularization on top of dropout." }, { "end": 774, "start": 766, "text": " Pretty cool, pretty cool investigation also. All right. So if you like this, I invite you to read the paper." }, { "end": 781, "start": 774, "text": " If you like the video, please subscribe and like and comment. And yeah, have a nice day. Bye bye." } ]
l5he9JNJqHA
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
A critical analysis of self-supervision, or what we can learn from a single image (Paper Explained)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "investigation", "linear probes", "usefulness", "representations", "intermediate", "hidden layers", "self-supervised", "rotnet", "crop", "augmentation", "color jitter", "dataset" ]
Does self-supervision really need a lot of data? How low can you go? This paper shows that a single image is enough to learn the lower layers of a deep neural network. Interestingly, more data does not appear to help as long as enough data augmentation is applied. OUTLINE: 0:00 - Overview 1:40 - What is self-supervision 4:20 - What does this paper do 7:00 - Linear probes 11:15 - Linear probe results 17:10 - Results 22:25 - Learned Features https://arxiv.org/abs/1904.13132 Abstract: We look critically at popular self-supervision techniques for learning deep convolutional neural networks without manual labels. We show that three different and representative methods, BiGAN, RotNet and DeepCluster, can learn the first few layers of a convolutional network from a single image as well as using millions of images and manual labels, provided that strong data augmentation is used. However, for deeper layers the gap with manual supervision cannot be closed even if millions of unlabelled images are used for training. We conclude that: (1) the weights of the early layers of deep networks contain limited information about the statistics of natural images, that (2) such low-level statistics can be learned through self-supervision just as well as through strong supervision, and that (3) the low-level statistics can be captured via synthetic transformations instead of using a large image dataset. Authors: Yuki M. Asano, Christian Rupprecht, Andrea Vedaldi Thumbnail Image: https://commons.wikimedia.org/wiki/File:Golden_Gate_Bridge_during_blue_hour_(16_x_10).jpg Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Alright, today we'll look at A Critical Analysis of Self-Supervision, or What We Can Learn From a Single Image, by Yuki M. Asano, Christian Rupprecht and Andrea Vedaldi. This paper, I really was excited when I saw this paper, because the premise is so cool and the experiments look very promising. So we'll take a look. Basically: we show that three different and representative methods, BiGAN, RotNet and DeepCluster, so these are self-supervision techniques, can learn the first few layers of a convolutional network from a single image as well as using millions of images and manual labels, provided that strong data augmentation is used. However, for deeper layers the gap with manual supervision cannot be closed even if millions of unlabeled images are used for training. We conclude that, first, the weights of the early layers of deep networks contain limited information about the statistics of natural images, that, second, such low-level statistics can be learned through self-supervision just as well as through strong supervision, and that, third, the low-level statistics can be captured via synthetic transformations instead of using a large image data set. As I said, I was kind of excited when I saw this, and what they're talking about is self-supervision. So really quickly for those who don't know: self-supervision is a technique where, if you have images but no labels and you would still like to learn something from them, you can do a pre-training step for your network. So basically you have your neural network F, and what you would like to do is you would like F of X to be close to Y for X and Y in your training data set. But if you have a much larger data set of just X's (here you have pairs of X and Y, there just X's) that are sort of similar to the X in your labeled data set, you can kind of get your network used to the data by doing this self-supervision. So what you would do is you would sort of come up with your own labels for the data points, and one way is this, we'll just take this RotNet as an example. So what you'll do is you'll input an image, so maybe an image of the number 3 in handwritten digit recognition, but you flip it to its side. So it's the number 3 right here, and then you ask the network to come up with an answer. Is it upright? Is it flipped to the right? Is it flipped to the left? Or is it turned on its head? Which one is it? And of course you, who did the transformation, know the correct label. So this is how you come up with sort of fake labels for your data, and this works surprisingly well. And what this paper basically says is that you do not actually need this giant database here. It's actually sometimes sufficient to have one single image to do this on. Now the claim is a bit of a cheat, I have to say, but we'll go into that. And further they say, okay, one single image is enough to learn the features of the lower layers of the neural network, because those usually tend to extract low-level features that, you know, you can learn from a single image, but for the higher-level features you really need the supervised data set. It's not enough to have these self-supervision techniques, and even if you have many, many, many self-supervision samples, so if you actually do have this giant data set, it still doesn't help you for the higher layers. Almost causing one to question this "you have a giant data set of unlabeled things" notion that is often presented, including by me. Okay, so what do they do?
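Before that question gets answered, the rotation pretext task just described fits in a few lines; this is a minimal sketch of my own, not the paper's code, with the rotation index serving as the fake label.

import torch

def rotnet_batch(images):
    # images: tensor of shape (B, C, H, W); each one is rotated by a random
    # multiple of 90 degrees, and the rotation index becomes the pseudo-label.
    labels = torch.randint(0, 4, (images.shape[0],))
    rotated = torch.stack([
        torch.rot90(img, k=int(k), dims=(-2, -1))
        for img, k in zip(images, labels)
    ])
    return rotated, labels  # train a 4-way classifier on these pairs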
They take either single images or just very few images, so they also have a setting where they take 10 images. For the single image, they hand-select it. So they hand-select the following three images. So this image right here they select because it's very crowded, there's a lot going on. There's people, there's objects, there is lighting and so on. There's, you know, houses, these lines, there's perspective. So that's why they select image A. Image B here they also experiment with; image B is a drawn image, as you can see there's also lots of stuff going on, but they basically want to research how a natural image compares to an artificial image. And then in C they have this as sort of a control, because there are lots of parts here where there's not much going on, compared to here, where in most of the image there's lots of stuff going on, and this image, the image number C or letter C, has large areas where there's nothing going on. Okay, so these are the single images. Now, why I say it's a bit of a cheat is that these images are actually super large. So for ImageNet and for CIFAR-10, this might be one of the samples of the CIFAR-10 or ImageNet classifier. Now of course CIFAR-10 is a lot smaller than ImageNet, but still, for ImageNet these are, you know, there are many pictures here, not just one. So to say this is from a single image is technically true, but then if you split it into multiple images it's technically not true. So it would have been fun to see what actually happens with a single image when you downscale it. But okay, so how do they investigate this? They have this five-layer neural network right here, so these five convolutional, I'm gonna guess there's five convolutional layers; after each convolutional layer I think there is some batch norm and ReLU, and then on to the next convolutional layer and so on, and then at the end, maybe there is, or there's also some pooling here, max pool; at the end there is going to be some linear classifier that classifies it into either a ten or a hundred or a thousand classes, whatever you want. The way they investigate this is through so-called linear probes. Now, linear probes are somewhat of a technique to inspect how much each of the layers learns. So if we again draw our network right here, and this is the input X, right, so you have the hidden representation one, hidden representation two, hidden representation three, and here you output it to the y hat, and that you compare with the y from your data set, right, the X is from your data set and the Y is from your data set. Now a linear probe wants to investigate how useful a given hidden representation is to classify the output. So what a linear probe would do is it would take the hidden representation here and learn one single linear classifier to classify that hidden representation, to come up with a y hat given h1, or something like this. So the important part here is that this is linear, right, this is a linear classifier. You do nothing more, you take the representation, and instead of this entire giant neural network on top of it, you simply build a linear classifier. And you can build these linear probes from any layer right here. You can build a linear classifier on top of this, on top of this, and then you basically look how good is your linear classifier when trained on that hidden representation that the network comes up with, and that's how you estimate how much information about the target, or let's say, no, how optimal the representation already is, because at the end of the network, right, you do have a linear classifier.
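That probing procedure can be sketched as follows (a minimal reconstruction under my own naming, not the authors' code): the backbone up to some layer k stays frozen, and only one linear layer is trained on top of its output.

import torch
import torch.nn as nn
import torch.nn.functional as F

def train_linear_probe(backbone, loader, feat_dim, n_classes, epochs=10):
    probe = nn.Linear(feat_dim, n_classes)
    opt = torch.optim.SGD(probe.parameters(), lr=0.01)
    for _ in range(epochs):
        for x, y in loader:
            with torch.no_grad():           # the backbone is never updated
                h = backbone(x).flatten(1)  # frozen hidden representation h_k
            loss = F.cross_entropy(probe(h), y)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return probe  # its test accuracy scores how linearly classifiable h_k is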
So at some point this representation must go into a form where it is linearly well classifiable, and the assumption is basically that these layers of the neural network successively make a representation that is more and more linearly classifiable. And that is a strong assumption, right, and this paper here uses linear probes exclusively, and that is a bit worrisome to me, because I have my troubles with this linear probe approach: this strong assumption that more linearly classifiable is better just rubs me the wrong way, right. We know that the information content about the label can never increase from layer to layer, so any information about the label that is in H1, sorry, that is in H2, must also have been present in H1. So technically, if we just built the correct classifier, we could predict from H1 just as well as from H2, right, because we're actually doing it, we're building the neural network. But the fact that we cannot predict linearly as well using H1, so the fact that this classifier here performs worse than this classifier here because H1 is a less optimal representation in a linear sense, it's just meh. And then to use that to estimate, oh, how useful is a representation, you're equating usefulness with linear classifiability, and with that I disagree. A representation can be extremely useful if the following layers manage to do something useful with it, and that can be something completely different from, or it can even be the opposite of, the linear classifiability, right. So this is kind of my problem here, and they don't do a good job of convincing me otherwise; they don't employ different techniques other than these linear probes. In any case, when they do this linear probe, you can see right here the percent supervised performance, so that's how much, how much percent of supervised performance do you get. Oh, single-image self-supervision: we show that several self-supervision methods can be used to train the first few layers of deep neural networks using a single training image, such as this image A, B or even C, provided that sufficient data augmentation is used. So what they do here is they use this self-supervision, then they take the signal from the convolutional layer one, the hidden representation, that's h1 right here, they train this linear probe on it, and they see how well does it perform, and this is after the network has been self-supervised with RotNet, for example. And then they compare that to the linear probe at layer one of the supervised network, right, so you take the supervised network and you do the same thing. And there they find, okay, this RotNet and all the other techniques, they perform very well, and especially if you only do a single image they perform better, as you can see right here, I mean if I interpret this correctly. This one-image RotNet, one-image BiGAN, one-image DeepCluster, these are these top things right here, and the 100 is the comparison to the supervised performance, right, so 100 means 100% of the performance of the supervised representation. This is absolutely crazy to me. And, in fact, let's just interpret it from their perspective, right. So you also have random: so if you, I guess, if you randomly initialize a network, then with training a linear classifier on the hidden representation one, you could reach something like 60% accuracy, which is impressive, okay. But if you do the linear probe at layer two, you reach a lower accuracy; now remember, this is lower accuracy compared to the supervised performance, right.
So there are two effects at play here. The supervised performance is gonna go up, because, well, if you believe the assumption, the successive layers make the representation more and more linearly classifiable. But also it could be that, just at the same time, the self-supervised performance, the performance of the self-supervised representation, is going down. So the graph here is sort of, I don't really know how to interpret it, and it really goes down after that. That's why they say you can learn the first layers fairly well with self-supervision, even from a single image, but you cannot learn the upper layers, and they're basically just measuring this using this linear probe method compared to the supervised performance. What I would somewhat like to see is that you train, let's say you train a self-supervised network, fine, but then you freeze this layer, and then you fine-tune the rest of your network on top of that representation. That would actually give you an estimate of how useful that representation is if I had, you know, an all-powerful function approximator, which is a neural network. And then of course you're probably not going to get supervised performance, and by the way, you'd have to compare that also to supervised with and without pre-training using self-supervision, and then you actually get a good estimate of what kind of a representation these things learn. In this case, all we get out of this is this linear probe thing compared to the supervised representation, and it just seems a bit uninterpretable, honestly. And the fact that here you can go beyond 100%, you can actually be better than supervised, should already tell you that this linear probe thing might not be such a good instrument, especially in the lower layers; the lower layers will be most inaccurate with these linear probe measurements. But that's their finding, basically: they can learn the features of the lower layers, in terms of this linear probe formulation, as well as the supervised learning. Again, they never compare this to fine-tuning on top of these representations, or compare it to self-supervision plus supervision, which I would really expect. All right, so they say they do lots and lots of data augmentation; since of course they only have a single image, they basically supercharge data augmentation, and they show that this helps. Now I don't want to actually go into the very details of what they're doing, because they just have different methods of augmentation, they just have different networks, but here are the results. So this is on ImageNet: if we use full supervision, we use the entire data set, and we do these linear probe evaluations, we get a 20% accuracy after layer 1, 36% after layer 2, and so on; this goes up as we go through the layers, so this kind of gives credence to the hypothesis that these layers sort of make the representation more linear. Then they have a bunch of scattering and random networks and k-means pre-training, which doesn't get you a lot, but that's what they compare it to, basically the self-supervision to just the scattering transforms and things like that. But then they get into their methods, and here we'll look at, for example, this RotNet. So if you train on just one image, this image A, of course, if you have one image, then you get this much of the layer one. Now, okay, so now that I see this here, they have this column right here which uses the full data set. What I think this is
is the self-supervised training using this many images, so what if you do RotNet self-supervision on this many. It could also be the performance after supervised training, after pre-training with this method, but I think it is the performance just after self-supervision, again with no fine-tuning on top, and then evaluating these linear probes. That's why this number is lower than this number right here. But astonishingly, after you do it with just one image, you get a higher number, and if you do it with a thousand images, you get an even higher number, but if you do it with many more images, you somehow don't get a higher number. This all seems a bit weird, honestly. It basically means that, okay, it is more important to augment the same thing over and over and over in different ways than it is to incorporate different images. I mean, there's ways I can believe that, but I'm not sure. But you basically see that after a while the performance, compared first of all to the supervised method, so yes, if you look for example here, up here, drops dramatically. And even if you have the full one, now I'm convinced that this is just self-supervision using the full data set, even if you have the full data set but only do self-supervision, your performance still suffers compared to the supervised training. So that's why they claim, they have these two claims: you can learn the first-layer representations fairly well with self-supervision, that's comparing this number to this number; you can do so even from a single image, that's comparing this number to this number, right, and noticing that it's almost the same, these two numbers are almost the same, actually one is a bit higher. You can learn that fairly well, but if you go down the layers, you will basically suffer with your single image and with your full-image-set self-supervision, so you need the supervised signal to learn the features of these later layers. And that's all evaluated with these linear probe things, yeah, so those are their main claims right here. And they kind of analyze image A and image B, so they come to the conclusion that image A works much better because it's natural, and image B is not working so well, but this depends on the self-supervision used, and image C still apparently works quite well, even though it has these large areas of nothing, which, all of this is a bit weird, but it's definitely cool to see these results. Now again, I would like to see something like: you freeze these representations, and then you actually train a neural network on top of that and look how that performs. That would actually be an interesting thing, though maybe they've done this and I'm just unaware. Right here they look at the filters that these methods have learned just from self-supervision on a single image, and you can see these are the types of filters that we would see using even supervised learning; if you look at the filters, they turn out to look pretty much like this. Of course I can't decide if these particular things are good or bad filters or not; they do some qualitative analysis. And here they have fine-tuning, okay, ah, fine-tuning experiments: the pre-trained models' first two convolutions are left frozen or replaced by the scattering transform, and the network is retrained using the ImageNet training set. Okay, here we go. So if you do this fully supervised, you get to a 59.4; now okay, this seems very low accuracy, honestly, even for ImageNet, but maybe this is their thing. But if they do this on top of these self-supervised methods
they do get a fairly good accuracy right here. I would have liked this evaluation to be applied in the table above instead of these linear probes, which just seem kind of wonky. But you can see that it is possible to learn the features of the lower layers using just a single image. Now, how exactly you would put this into a training procedure — how exactly you would make use of this during training, if you already know that it's not going to help for the deeper layers — I'm not so sure, because you always have your own data set, right? So you always have at least that many images that you can do self-supervised training on. But they're certainly interesting results. And with that, I think I'm going to leave it at that. Thanks for listening, I hope you enjoyed this, and bye bye.
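Since the whole evaluation above hinges on linear probes, here is a minimal sketch of what that protocol looks like in practice. This is an illustrative assumption, not the paper's actual code: the backbone architecture, layer choice and hyperparameters below are placeholders. The one essential ingredient is that the backbone stays frozen and only a single linear map is trained on its features.

import torch
import torch.nn as nn

# Stand-in for the network up to the probed layer (e.g. conv1..convK of a
# self-supervised or random network); its weights are never updated here.
backbone = nn.Sequential(
    nn.Conv2d(3, 64, 3, padding=1), nn.BatchNorm2d(64), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(64, 128, 3, padding=1), nn.BatchNorm2d(128), nn.ReLU(),
    nn.AdaptiveAvgPool2d(4),  # pool the probed features to a fixed size
)
for p in backbone.parameters():
    p.requires_grad = False
backbone.eval()

# The linear probe: one affine map from flattened features to class logits.
num_classes = 1000
probe = nn.Linear(128 * 4 * 4, num_classes)
optimizer = torch.optim.SGD(probe.parameters(), lr=0.01, momentum=0.9)
criterion = nn.CrossEntropyLoss()

def probe_step(images, labels):
    with torch.no_grad():              # features come from the frozen backbone
        feats = backbone(images).flatten(1)
    loss = criterion(probe(feats), labels)
    optimizer.zero_grad()
    loss.backward()                    # gradients only reach the probe
    optimizer.step()
    return loss.item()

# One step on a random batch, just to show the shapes line up:
probe_step(torch.randn(8, 3, 64, 64), torch.randint(0, num_classes, (8,)))

# The fine-tuning evaluation the transcript asks for would differ only in
# that `probe` is replaced by the full remaining network, trained jointly.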
[ { "end": 6.4, "start": 0, "text": " Alright, today we'll look at a critical analysis of self-supervision or what we" }, { "end": 12.48, "start": 6.4, "text": " can learn from a single image by Yuki M. Asano, Christian Ruprecht and Andrea" }, { "end": 23.16, "start": 12.48, "text": " Vidaldi. This paper, I really was excited when I saw this paper because the" }, { "end": 29.92, "start": 23.16, "text": " outset is so cool and the experiments have a very promising. So we'll take a" }, { "end": 36.2, "start": 29.92, "text": " look. Basically we show that three different and representative methods," }, { "end": 43.2, "start": 36.2, "text": " byGAN, RotNet and DeepCluster, so this is self-supervision techniques, can learn" }, { "end": 48.52, "start": 43.2, "text": " the first few layers of a convolutional network from a single image as well as" }, { "end": 53.760000000000005, "start": 48.52, "text": " using millions of images and manual labels, provided that strong data" }, { "end": 60.68, "start": 53.76, "text": " augmentation is used. However, for deeper layers the gap with manual supervision" }, { "end": 65.72, "start": 60.68, "text": " cannot be closed even if millions of unlabeled images are used for training." }, { "end": 70.28, "start": 65.72, "text": " We conclude that first the weights of the early layers of deep networks" }, { "end": 74.12, "start": 70.28, "text": " contain limited information about statistics of natural images, that" }, { "end": 78.84, "start": 74.12, "text": " second such low-level statistics can be learned through self-supervision just" }, { "end": 84.2, "start": 78.84, "text": " as well as through strong supervision and that three the low-level statistics" }, { "end": 87.86, "start": 84.2, "text": " can be captured via synthetic transformations instead of using large" }, { "end": 97.32000000000001, "start": 87.86, "text": " image data set. As I said, I was kind of excited when I saw this and what" }, { "end": 101.32000000000001, "start": 97.32000000000001, "text": " they're talking about is self-supervision. So really quickly for" }, { "end": 106.60000000000001, "start": 101.32000000000001, "text": " those who don't know, self-supervision is a technique where if you have images but" }, { "end": 110.83999999999999, "start": 106.6, "text": " no labels and you would still like to learn something from them, you can do" }, { "end": 115.91999999999999, "start": 110.83999999999999, "text": " a pre-training step for your network. So basically you have your neural" }, { "end": 121.96, "start": 115.91999999999999, "text": " network F and what you would like to do is you would like F of X to be close to" }, { "end": 128.48, "start": 121.96, "text": " Y for X and Y to in your training data set. But you can if you have a much" }, { "end": 134.44, "start": 128.48, "text": " larger data set of just X's, right here you have pairs of X and Y of just X," }, { "end": 140.52, "start": 134.44, "text": " that are sort of similar to the X in your label data set. You can" }, { "end": 146.72, "start": 140.52, "text": " kind of get your network used to the data by doing this self-supervision." }, { "end": 151.16, "start": 146.72, "text": " So what you would do is you would sort of come up with your own labels for the" }, { "end": 158.16, "start": 151.16, "text": " data points and one way is this, we'll take just this rot net as an example. 
So" }, { "end": 164.64, "start": 158.16, "text": " what you'll do is you'll input an image, so maybe an image of the number 3 in" }, { "end": 170, "start": 164.64, "text": " handwritten digit recognition, but you flip it to its side." }, { "end": 174.2, "start": 170, "text": " So it's the number 3 right here and then you ask the network to come up with" }, { "end": 179.12, "start": 174.2, "text": " an answer. Is it upright? Is it flipped to the right? Is it flipped to the left? Or" }, { "end": 184.32, "start": 179.12, "text": " is it turned on its head? Which one is it? And of course you who did the" }, { "end": 188.44, "start": 184.32, "text": " transformation know the correct label. So this is how you come up with sort" }, { "end": 193.12, "start": 188.44, "text": " of fake labels for your data and this works surprisingly well and what this" }, { "end": 199.79999999999998, "start": 193.12, "text": " paper basically says is that you do not actually need this giant database here." }, { "end": 205.18, "start": 199.79999999999998, "text": " It's actually sufficient sometimes if you have one single image where you do" }, { "end": 213.68, "start": 205.18, "text": " this on. Now the claim is a bit of a cheat I have to say, but we'll go into" }, { "end": 219.24, "start": 213.68, "text": " that. And further they say okay it's enough to have one single image is to" }, { "end": 223.56, "start": 219.24, "text": " learn the features of the lower layers of the neural network because they" }, { "end": 228.36, "start": 223.56, "text": " usually tend to extract low level features that you know you can learn" }, { "end": 232.64000000000001, "start": 228.36, "text": " from a single image but the higher level features you really need the supervised" }, { "end": 238.04000000000002, "start": 232.64000000000001, "text": " data set. It's not enough to have these self-supervision techniques and" }, { "end": 244.4, "start": 238.04, "text": " even if you have many many many self-supervision samples, so if you" }, { "end": 247.64, "start": 244.4, "text": " actually do have this giant data set it still doesn't help you for the higher" }, { "end": 253.35999999999999, "start": 247.64, "text": " layers. Almost causing to question this you have a giant data set of unlabeled" }, { "end": 263.71999999999997, "start": 253.35999999999999, "text": " things notion that is often presented including by me. Okay so what do they do?" }, { "end": 270.44000000000005, "start": 263.72, "text": " They take either single images or just very few images so they also" }, { "end": 275.08000000000004, "start": 270.44000000000005, "text": " have a setting where they take 10 images. For the single image they hand select" }, { "end": 280.36, "start": 275.08000000000004, "text": " it. So they hand select the following three images. So this image right here" }, { "end": 285.76000000000005, "start": 280.36, "text": " they select because it's very crowded there's a lot going on. There's" }, { "end": 293.08000000000004, "start": 285.76000000000005, "text": " people, there's objects, there is lighting and so on. There's you know houses, these" }, { "end": 300.08, "start": 293.08, "text": " lines, there's perspective. So that's why they select image A. 
Image B" }, { "end": 304.52, "start": 300.08, "text": " here they also experiment with image B is a drawn image as you can see there's" }, { "end": 310.47999999999996, "start": 304.52, "text": " also lots of stuff going on but they basically want to to research how does a" }, { "end": 317.53999999999996, "start": 310.47999999999996, "text": " natural image compared to a artificial image. And then in C they have this as" }, { "end": 321.88, "start": 317.53999999999996, "text": " sort of a control because there is lots of parts here where there's not much" }, { "end": 326.2, "start": 321.88, "text": " going on compared to here and most of the image there's lots of stuff going on" }, { "end": 332.76, "start": 326.2, "text": " and this image on the image number C or letter C has large areas where there's" }, { "end": 338.88, "start": 332.76, "text": " nothing going on. Okay so these are the single images. Now why I say it's a bit" }, { "end": 344.68, "start": 338.88, "text": " of a cheat is that these images are actually super large. So for" }, { "end": 352.44, "start": 344.68, "text": " ImageNet and for C410 this might be one of the samples of the C410" }, { "end": 357.12, "start": 352.44, "text": " or ImageNet classifier. Now of course C410 is a lot smaller than ImageNet" }, { "end": 362.4, "start": 357.12, "text": " but still for ImageNet these are you know there are many pictures here not" }, { "end": 368.32, "start": 362.4, "text": " just one. So to say this is from a single image it's technically true but" }, { "end": 374.71999999999997, "start": 368.32, "text": " then if you split it into multiple images it's technically not true. So it" }, { "end": 378.76, "start": 374.71999999999997, "text": " would have been fun to see what actually happens with a single image when you" }, { "end": 384.68, "start": 378.76, "text": " downscale it. But okay so how do they investigate this? They have this five layer" }, { "end": 388.76, "start": 384.68, "text": " neural network right here so this five convolutional I'm gonna guess there's" }, { "end": 393.12, "start": 388.76, "text": " five convolutional layer after each convolutional layer I think there is" }, { "end": 399.8, "start": 393.12, "text": " some batch norm and relu and then to the next convolutional layer and so on and" }, { "end": 405.28000000000003, "start": 399.8, "text": " then at the end maybe there is or there's also some pooling here max pool" }, { "end": 411.64, "start": 405.28000000000003, "text": " at the end there is going to be some linear classifier that classifies it into" }, { "end": 416.12, "start": 411.64, "text": " either a ten or a hundred or a thousand classes whatever you want. The way they" }, { "end": 423.28000000000003, "start": 416.12, "text": " investigate this is through linear probes so-called. Now linear probes are" }, { "end": 430.12, "start": 423.28000000000003, "text": " somewhat of a technique to inspect how much each of the layers learn. So if we" }, { "end": 435.92, "start": 430.12, "text": " again draw our network right here and this is the input X right so you have" }, { "end": 440.2, "start": 435.92, "text": " the hidden representation one hidden representation two hidden representation" }, { "end": 447.03999999999996, "start": 440.2, "text": " three and here you output it to the y hat and that you compare with the y from" }, { "end": 451.88, "start": 447.03999999999996, "text": " your data set right the X is from your data set and the Y is from your data set." 
}, { "end": 458.28, "start": 451.88, "text": " Now linear probe wants to investigate how useful a given hidden representation" }, { "end": 462.84, "start": 458.28, "text": " is to classify the output. So what a linear probe would do is it would take" }, { "end": 469.52, "start": 462.84, "text": " the hidden representation here and learn one single linear classifier to classify" }, { "end": 477.24, "start": 469.52, "text": " that hidden representation to come up with a y hat given h1 or something like" }, { "end": 484, "start": 477.24, "text": " this. So the important part here is that this is linear right this is a this is a" }, { "end": 491.44, "start": 484, "text": " linear classifier. You do nothing more you take the representation and instead" }, { "end": 495.44, "start": 491.44, "text": " of this entire giant neural network on top of it you simply build a linear" }, { "end": 500.44, "start": 495.44, "text": " classifier and you can build these linear probes from any layer right here." }, { "end": 505.12, "start": 500.44, "text": " You can build a linear classifier on top of this on top of this and then you" }, { "end": 511.52, "start": 505.12, "text": " basically look how good is your linear classifier when trained on that" }, { "end": 516.12, "start": 511.52, "text": " hidden representation that the network comes up with and that's how you" }, { "end": 524.16, "start": 516.12, "text": " estimate how much information about the target or let's say no how how optimal" }, { "end": 529.28, "start": 524.16, "text": " the representation already is because at the end of the network right you do have" }, { "end": 533.9599999999999, "start": 529.28, "text": " a linear classifier. So at some point this representation must go into a form" }, { "end": 539.68, "start": 533.9599999999999, "text": " where it is now linearly well classifiable and the assumption is" }, { "end": 545.36, "start": 539.68, "text": " basically that these layers of the neural network successively make a" }, { "end": 552.56, "start": 545.36, "text": " representation that is more and more linearly classifiable and that is a" }, { "end": 560.04, "start": 552.56, "text": " strong assumption right and this paper here uses linear probes" }, { "end": 565.28, "start": 560.04, "text": " exclusively and that is a bit worrisome to me because I have my troubles with" }, { "end": 571.04, "start": 565.28, "text": " these linear probe approach because this strong assumption that more linearly" }, { "end": 577.52, "start": 571.04, "text": " classifiable is better it just rubs me in the wrong way right. 
We know that the" }, { "end": 583.68, "start": 577.52, "text": " information content can never increase from layer to layer about the" }, { "end": 590.68, "start": 583.68, "text": " label so any information about the label that is in H1 sorry that is in H2 must" }, { "end": 595, "start": 590.68, "text": " also have been present in H1 so technically if we just built the correct" }, { "end": 601.4, "start": 595, "text": " classifier we could predict from H1 just as well as from H2 right because we're" }, { "end": 605.84, "start": 601.4, "text": " actually doing it we're building the neural network but the fact that we" }, { "end": 612.8000000000001, "start": 605.84, "text": " cannot predict linearly as well using H1 so the fact that this classifier here" }, { "end": 620.2800000000001, "start": 612.8000000000001, "text": " performs worse than this classifier here because H1 is a less optimal" }, { "end": 626.2, "start": 620.2800000000001, "text": " representation in a linear sense it's and it's just meh and the fact that I" }, { "end": 632.2, "start": 626.2, "text": " mean yes but then to use that and to estimate oh how useful is a" }, { "end": 638, "start": 632.2, "text": " representation you're equating usefulness with linearly classifiable and" }, { "end": 645.08, "start": 638, "text": " that I disagree a representation can be extremely useful if the following layers" }, { "end": 649.72, "start": 645.08, "text": " manage to do something useful with it and that can be something completely" }, { "end": 657.0400000000001, "start": 649.72, "text": " different or it can even be the opposite of the linear classifiability right so" }, { "end": 664.24, "start": 657.04, "text": " this is kind of my problem here and they don't do a good work of convincing me" }, { "end": 669.64, "start": 664.24, "text": " otherwise so they don't employ different techniques other than these linear probes" }, { "end": 681.16, "start": 669.64, "text": " in any case when they do this linear probe you can see right here that the" }, { "end": 688.9599999999999, "start": 681.16, "text": " percent supervised performance so that's how much how much percent of supervised" }, { "end": 691.9599999999999, "start": 688.9599999999999, "text": " performance do you get" }, { "end": 697.8, "start": 693.16, "text": " oh single single image self-supervision we show that several self-supervision" }, { "end": 701.9599999999999, "start": 697.8, "text": " methods can be used to train the first few layers of a deep neural networks" }, { "end": 707.36, "start": 701.9599999999999, "text": " using a single training image such as this image a B or even C provided that" }, { "end": 712.64, "start": 707.36, "text": " sufficient data augmentation is used so what they do here is they use this" }, { "end": 717.8000000000001, "start": 712.64, "text": " self-supervision then they take the signal from the convolutional layer one" }, { "end": 723, "start": 717.8000000000001, "text": " the hidden representation that's h1 right here they train this linear probe" }, { "end": 730.36, "start": 723, "text": " on it and they see how how well does it perform after and this is after the" }, { "end": 735.6800000000001, "start": 730.36, "text": " network has been self supervised with rot net for example and then they" }, { "end": 743.3599999999999, "start": 735.68, "text": " compare that to the linear probe at layer one of the supervised network" }, { "end": 749.16, "start": 743.3599999999999, "text": " right so you take the supervised network and you do the same 
thing and there they" }, { "end": 758.64, "start": 749.16, "text": " find okay this rot net and all the other techniques they perform very well and" }, { "end": 766.3199999999999, "start": 758.64, "text": " especially if you only do a single image they perform better as you can see right" }, { "end": 770.8, "start": 766.3199999999999, "text": " here I mean if I interpret this correctly this one rot net one by again one deep" }, { "end": 776.92, "start": 770.8, "text": " cluster these are these top things right here and the 100 is the comparison to" }, { "end": 782.4399999999999, "start": 776.92, "text": " the supervised performance right so 100 means 100% of the performance of the" }, { "end": 791.08, "start": 782.44, "text": " supervised representation this is absolutely crazy to me and this in fact" }, { "end": 797.0400000000001, "start": 791.08, "text": " so let's just interpret it from their perspective right so you also have" }, { "end": 802.72, "start": 797.0400000000001, "text": " random so if you I guess if you randomly initialize a network then with the" }, { "end": 807.0400000000001, "start": 802.72, "text": " linear with training a linear classifier on the hidden representation one you" }, { "end": 817.56, "start": 807.04, "text": " could reach something like 60% accuracy which is impressive okay but if you do" }, { "end": 824, "start": 817.56, "text": " the linear probe at layer two you reach a lower accuracy now remember this is" }, { "end": 832.04, "start": 824, "text": " lower accuracy compared to the supervised performance right so the the" }, { "end": 837.12, "start": 832.04, "text": " there are two effects at play here the supervised performance is gonna go up" }, { "end": 842.28, "start": 837.12, "text": " because the well if you believe the assumption that the successive layers" }, { "end": 848.64, "start": 842.28, "text": " make the representation more and more linear linearly classifiable but also it" }, { "end": 853.64, "start": 848.64, "text": " could be that just at the same time the self supervised performance the" }, { "end": 859.48, "start": 853.64, "text": " performance of the self supervised representation is going down so the" }, { "end": 865.4, "start": 859.48, "text": " graph here is sort of I don't really know how to interpret it and it really" }, { "end": 871.38, "start": 865.4, "text": " goes down after that that's why they say you can learn the first layers fairly" }, { "end": 878.4200000000001, "start": 871.38, "text": " well with self supervision even from a single image but you cannot learn the" }, { "end": 884.4, "start": 878.4200000000001, "text": " upper layers and they're basically just measuring this using this linear probe" }, { "end": 889.3199999999999, "start": 884.4, "text": " method compared to the supervised performance what I would somewhat like" }, { "end": 895.4399999999999, "start": 889.3199999999999, "text": " to see is that you train let's say you train a self supervised network fine but" }, { "end": 902.28, "start": 895.4399999999999, "text": " then you freeze this layer and then you fine-tune the rest of your network on" }, { "end": 906.22, "start": 902.28, "text": " top of that representation that would actually give you an estimate of how" }, { "end": 912.3199999999999, "start": 906.22, "text": " useful is that representation if I had an you know an all-powerful function" }, { "end": 916.5200000000001, "start": 912.32, "text": " approximator which is a neural network and then of course you're probably not" }, { "end": 921.48, 
"start": 916.5200000000001, "text": " going to get supervised performance and by the way you'd have to compare that" }, { "end": 927.9200000000001, "start": 921.48, "text": " also to supervised with and without pre training using self supervision and" }, { "end": 933.6, "start": 927.9200000000001, "text": " then you actually get a good estimate of what how well what kind of a" }, { "end": 938.8000000000001, "start": 933.6, "text": " representation do these things learn in this case all we you know all we get out" }, { "end": 943.3199999999999, "start": 938.8, "text": " of this is this linear probe thing compared to the supervised" }, { "end": 949.3599999999999, "start": 943.3199999999999, "text": " representation and it just seems a bit uninterpretable honestly and the fact" }, { "end": 955.04, "start": 949.3599999999999, "text": " that here you can go beyond 100% you can actually be better than supervised" }, { "end": 960.92, "start": 955.04, "text": " should already tell you that the linear this linear probe thing might not be a" }, { "end": 968.24, "start": 960.92, "text": " good instrument to might not be such a good instrument especially in the lower" }, { "end": 972.8, "start": 968.24, "text": " layers the lower layers will be most inaccurate with these linear probe" }, { "end": 977.36, "start": 972.8, "text": " measurement but that's that's their finding basically they can learn the" }, { "end": 984.16, "start": 977.36, "text": " features of the lower layers as well in terms of this linear probe formulation" }, { "end": 990.16, "start": 984.16, "text": " as the supervised learning again they never compare this to fine-tuning on top" }, { "end": 996.6800000000001, "start": 990.16, "text": " of these representation or compare it to self supervision plus supervision which" }, { "end": 1004.56, "start": 996.68, "text": " I would really expect all right so they say they do a lots and lots of data" }, { "end": 1007.7199999999999, "start": 1004.56, "text": " augmentation since of course they only have a single image they basically" }, { "end": 1015.16, "start": 1007.7199999999999, "text": " supercharge data augmentation and they show that this helps now I don't want to" }, { "end": 1021.92, "start": 1015.16, "text": " actually go into the into the very into the very details of what they're doing" }, { "end": 1027.1599999999999, "start": 1021.92, "text": " because they just have different methods of augmentation they just have different" }, { "end": 1038.24, "start": 1027.1599999999999, "text": " networks but here are the results so if this is on on image net if we use full" }, { "end": 1043.92, "start": 1038.24, "text": " supervision we use the entire data set and we do these linear probe evaluation" }, { "end": 1051.84, "start": 1043.92, "text": " we get a 20% accuracy after layer 1 36 after layer 2 and so on this goes" }, { "end": 1055.84, "start": 1051.84, "text": " up as we go through the layer so this kind of gives credence to the hypothesis" }, { "end": 1062.84, "start": 1055.84, "text": " that these layers sort of make the representation more linear then they" }, { "end": 1071.8799999999999, "start": 1062.84, "text": " have a bunch of scattering and random networks and K means pre training which" }, { "end": 1078.9199999999998, "start": 1071.8799999999999, "text": " doesn't get you a lot like but that's what they compare it to basically the" }, { "end": 1084.8000000000002, "start": 1078.92, "text": " self supervision to just the scattering transforms and things like that but then" 
}, { "end": 1090.3600000000001, "start": 1084.8000000000002, "text": " they get into their methods and here we'll look at for example this rod net" }, { "end": 1099.88, "start": 1090.3600000000001, "text": " so if you train on just one image this image a of course if you have one image" }, { "end": 1109.64, "start": 1099.88, "text": " then you get this many this this much of the layer one now okay so now that I see" }, { "end": 1118.8000000000002, "start": 1109.64, "text": " this here they have this column right here which uses the full data set what I" }, { "end": 1129.0800000000002, "start": 1118.8000000000002, "text": " think this is is the self supervised training using this many images so what" }, { "end": 1135.6799999999998, "start": 1129.08, "text": " if you do rod net self supervision on this many it could also be the" }, { "end": 1142.04, "start": 1135.6799999999998, "text": " performance after supervised training after pre training with this method but" }, { "end": 1147.6399999999999, "start": 1142.04, "text": " I think it is the performance after just after self supervision again with no" }, { "end": 1153.6399999999999, "start": 1147.6399999999999, "text": " fine-tuning on top and then evaluating these linear probes that's why this" }, { "end": 1159.68, "start": 1153.64, "text": " number is lower than this number right here but astonishingly after you do it" }, { "end": 1167.3200000000002, "start": 1159.68, "text": " with just one image you get a higher number and if you do it with a thousand" }, { "end": 1174.0800000000002, "start": 1167.3200000000002, "text": " images you get an even higher number but if you do it with many more images you" }, { "end": 1181.8000000000002, "start": 1174.0800000000002, "text": " do you you somehow don't get a higher number this all seems a bit it seems a" }, { "end": 1189.2, "start": 1181.8, "text": " bit weird honestly basically means that okay it is more important to augment the" }, { "end": 1193.68, "start": 1189.2, "text": " same thing over and over and over in different ways than it is to incorporate" }, { "end": 1199.56, "start": 1193.68, "text": " different images I mean there's ways I can believe that but I'm not sure but" }, { "end": 1207.8, "start": 1199.56, "text": " you basically see that after a while the performance compared to the first of all" }, { "end": 1215.12, "start": 1207.8, "text": " to the supervised method so yes if you look for example here up here drops" }, { "end": 1221.72, "start": 1215.12, "text": " dramatically and even if you have the full young now I'm convinced that this" }, { "end": 1226.04, "start": 1221.72, "text": " this is just self supervision using the full data set even if you have the full" }, { "end": 1231.12, "start": 1226.04, "text": " data set but only do self supervision your performance still suffers compared" }, { "end": 1238.32, "start": 1231.12, "text": " to the supervised training so that's why they claim they have these two claims" }, { "end": 1243.6599999999999, "start": 1238.32, "text": " you can learn the first layer representations fairly well with self" }, { "end": 1250.4799999999998, "start": 1243.6599999999999, "text": " supervision that's comparing this number to this number you can do so even from a" }, { "end": 1256.8799999999999, "start": 1250.4799999999998, "text": " single image that's comparing this number to this number right and noticing" }, { "end": 1262.0800000000002, "start": 1256.88, "text": " that it's almost the same these two numbers are almost the same actually one" }, 
{ "end": 1270.5600000000002, "start": 1262.0800000000002, "text": " is a bit higher you can learn that fairly well but if you go down the layers" }, { "end": 1277.8400000000001, "start": 1270.5600000000002, "text": " you will basically suffer with your single image and with your full image" }, { "end": 1282.6000000000001, "start": 1277.8400000000001, "text": " soup self supervision so you need the supervised signal to learn the features" }, { "end": 1289.36, "start": 1282.6, "text": " of these later layers and that's all evaluated with these linear probe things" }, { "end": 1296.04, "start": 1289.36, "text": " yeah so that is their main claims right here and they kind of analyze image a" }, { "end": 1300.8, "start": 1296.04, "text": " and image B so they come to the conclusion that image a works much" }, { "end": 1307.08, "start": 1300.8, "text": " better because it's natural and image B is not working so well but this depends" }, { "end": 1317.6, "start": 1307.08, "text": " on the self supervision used and image C still apparently works quite well even" }, { "end": 1322.32, "start": 1317.6, "text": " though it has these large areas of nothing which all of this is a bit weird" }, { "end": 1327.72, "start": 1322.32, "text": " but it's definitely cool to see these results now again I would like to see" }, { "end": 1330.8799999999999, "start": 1327.72, "text": " something like you freeze these representations and then you actually" }, { "end": 1335.36, "start": 1330.8799999999999, "text": " train a neural network on top of that and look how that performs that would" }, { "end": 1340.6399999999999, "start": 1335.36, "text": " actually be an interesting thing though maybe they've done this and I'm just" }, { "end": 1350.28, "start": 1340.6399999999999, "text": " unaware right here they look at the filters that these methods have learned" }, { "end": 1354.12, "start": 1350.28, "text": " just from self supervision on a single image and you can see these are the types" }, { "end": 1359.4799999999998, "start": 1354.12, "text": " of filters that we would see using even supervised learning if you look at the" }, { "end": 1365, "start": 1359.4799999999998, "text": " filters they turn out to look pretty much like this of course I can't decide" }, { "end": 1371.6, "start": 1365, "text": " if these particular things are good or bad filters or not they do some" }, { "end": 1382.16, "start": 1371.6, "text": " qualitative analysis and here they have fine-tuning okay ah fine-tuning" }, { "end": 1388.04, "start": 1382.16, "text": " experiments the pre-trained models first two convolutions are left frozen or" }, { "end": 1393.72, "start": 1388.04, "text": " replaced by the scattering transform and the network is retrained using image" }, { "end": 1401.88, "start": 1393.72, "text": " net training set okay here we go so if you do this fully supervised you get to" }, { "end": 1413.6000000000001, "start": 1401.88, "text": " a 59.4 now okay this seems very low accuracy honestly for even like for" }, { "end": 1420.84, "start": 1413.6000000000001, "text": " image net but maybe this is their thing but if they do this on top of the on top" }, { "end": 1426.56, "start": 1420.84, "text": " of the these self supervised methods they do get a fairly good okay they get" }, { "end": 1431.6399999999999, "start": 1426.56, "text": " a fairly good accuracy right here I would have liked to have this evaluation" }, { "end": 1436.4399999999998, "start": 1431.6399999999999, "text": " right here be applied in the table above and not 
these linear probes they just" }, { "end": 1446.24, "start": 1436.4399999999998, "text": " seem kind of kind of wonky but you can see that it is possible to learn this to" }, { "end": 1451.48, "start": 1446.24, "text": " learn this using just a single image to learn the features of the lower layers" }, { "end": 1459.16, "start": 1451.48, "text": " now how you exactly would would put this into a training procedure how you" }, { "end": 1463.68, "start": 1459.16, "text": " exactly make use of this during training if you already know that it's not gonna" }, { "end": 1469.72, "start": 1463.68, "text": " help for the deeper layers I'm not so sure because at least you always have" }, { "end": 1475.68, "start": 1469.72, "text": " your own data set right so you always have at least that many images that you" }, { "end": 1481.72, "start": 1475.68, "text": " can self supervise train on but it's certainly interesting interesting" }, { "end": 1492.2, "start": 1481.72, "text": " results and with that I think I'm going to leave it at that and thanks for" }, { "end": 1507.88, "start": 1492.2, "text": " listening I hope you enjoyed this and bye bye" } ]
YrO1v7-KcXs
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Deep image reconstruction from human brain activity (Paper Explained)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "fmri", "mind reading", "thoughts", "visual cortex", "vc", "v1", "v4", "vgg", "reconstruction", "iterative", "deep dream", "microscope", "activity", "imagine", "visualize", "introspection", "human", "telepathy" ]
Can you peek into people's brains? Reading human thoughts is a long-standing dream of the AI field. This paper reads fMRI signals from a person and then reconstructs what that person's eyes currently see. This is achieved by translating the fMRI signal to features of a Deep Neural Network and then iteratively optimizing the input of the network to match those features. The results are impressive. OUTLINE: 0:00 - Overview 1:35 - Pipeline 4:00 - Training 5:20 - Image Reconstruction 7:00 - Deep Generator Network 8:15 - Results Paper: https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1006633 My Video on OpenAI Microscope (what I called Atlas): https://youtu.be/Ok44otx90D4 Abstract: The mental contents of perception and imagery are thought to be encoded in hierarchical representations in the brain, but previous attempts to visualize perceptual contents have failed to capitalize on multiple levels of the hierarchy, leaving it challenging to reconstruct internal imagery. Recent work showed that visual cortical activity measured by functional magnetic resonance imaging (fMRI) can be decoded (translated) into the hierarchical features of a pre-trained deep neural network (DNN) for the same input image, providing a way to make use of the information from hierarchical visual features. Here, we present a novel image reconstruction method, in which the pixel values of an image are optimized to make its DNN features similar to those decoded from human brain activity at multiple layers. We found that our method was able to reliably produce reconstructions that resembled the viewed natural images. A natural image prior introduced by a deep generator neural network effectively rendered semantically meaningful details to the reconstructions. Human judgment of the reconstructions supported the effectiveness of combining multiple DNN layers to enhance the visual quality of generated images. While our model was solely trained with natural images, it successfully generalized to artificial shapes, indicating that our model was not simply matching to exemplars. The same analysis applied to mental imagery demonstrated rudimentary reconstructions of the subjective content. Our results suggest that our method can effectively combine hierarchical neural representations to reconstruct perceptual and subjective images, providing a new window into the internal contents of the brain. Authors: Guohua Shen, Tomoyasu Horikawa, Kei Majima, Yukiyasu Kamitani Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Hi there! Today we're looking at Deep Image Reconstruction from Human Brain Activity by Guohua Shen, Tomoyasu Horikawa, Kei Majima and Yukiyasu Kamitani. This is like reading thoughts, so I was excited when I saw this paper. I saw it on Reddit, and it is a bit older — it's from the beginning of last year — so I'm sure there have been developments in this area. But basically, what this paper does is it has a human look at a picture, as you can see for example right up here, and it measures the fMRI activity. Then it uses what they call a feature decoder to map that fMRI activity to the features of a deep neural network, and then they reconstruct the image that is closest to those features in the neural network. By this reconstruction they get, basically, an image of what the human sees. So we're going to explore this pipeline right here. It's pretty cool, and if it works, it basically means that we can read someone's thoughts — but of course there are going to be issues and problems. First of all, this is all visual: they measure the activity in the visual cortex right here.

Let's break it down into the individual parts. The first part here, the fMRI, is a machine — we can't control that. You measure the fMRI activity, and that basically measures which of the cells in your brain use oxygen. It's functional MRI, not structural, so it measures which neurons are active, and that's how you see which parts of the brain are active. I think the resolution on these machines has gotten very good, so you can make out very fine-grained activation patterns in the neurons. And they measure the visual cortex, which is the part responsible for visual stimuli — for seeing things.

Now they need this feature decoder, because ultimately what they want is for these features to correspond to features in a neural network. This DNN here is a VGG — I think a VGG-16 or a VGG-19 network. A couple of years ago these architectures were very popular for ImageNet, and they're fairly basic. That means it's not like a super-duper Inception net where you have layers within layers and so on; they're pretty straightforward convolutional neural networks with nonlinearities and pooling: a bunch of layers, then pooling, a bunch of layers, then pooling, and so on. What you want to look at are the individual layers of this deep neural network. You basically put an image into the neural network and observe its features in the network; then you put the same image through the human and observe the fMRI features. You know that this is the same image, so you can learn a feature decoder. This is going to be another machine-learned model — I haven't actually checked; it could be just a linear regression, or a neural network — a regression that maps the fMRI signal to the features. So this is what you have to learn. Basically, they took a bunch of humans, stuck them in an MRI machine, and for a given image X they got the human fMRI data, and they got the VGG features by putting the same image X through the neural network. Now they learn a function that minimizes the error between its predictions and the network's actual features.
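A minimal sketch of that decoding step, with made-up sizes and random stand-in data. The paper's actual decoder is a particular linear-regression variant, so the plain ridge regression here is an assumption that only captures the idea: one linear map from fMRI voxel patterns to the DNN features of a given layer.

import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_train, n_voxels, n_features = 1000, 4000, 4096     # sizes are made up
X_fmri = rng.standard_normal((n_train, n_voxels))    # voxel patterns per image
Y_feat = rng.standard_normal((n_train, n_features))  # DNN features, same images

# Fit one decoder per DNN layer: fMRI pattern -> predicted layer features.
decoder = Ridge(alpha=1.0)
decoder.fit(X_fmri, Y_feat)

# At test time a new fMRI measurement yields predicted network features,
# which then become the target of the image-reconstruction step below.
x_new = rng.standard_normal((1, n_voxels))
predicted_features = decoder.predict(x_new)          # shape (1, n_features)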
So basically they run this on a test set of images, and then you end up with a function that can map fMRI features to neural-network features. Now, the second step. Let's say this works: you can give the human an arbitrary image, and the human will interpret it through their visual cortex. You measure the activity in the visual cortex, and then you can predict what the neural network's features would be if that image were given to the neural network. But now you don't give the image to the neural network. Instead, you do something like Deep Dream does: you start from a noisy image, and through iterative gradient descent — that's this arrow right here — you refine it. You try to find the image whose internal representation matches, as closely as possible, the features that you predict from the fMRI signal, because those are the features that the neural network should see. If your feature decoder is good, these are the features the neural network should output for that image. So you're basically trying to find this image right here, but you're not looking at it — you only look at the features that should be in the neural network. After a bunch of refinement steps, you hope you end up with an image that corresponds to these features, and then you can look at it.

It usually looks something like this — we're used to these kinds of things from neural networks, and I invite you to look at something like the OpenAI Atlas if you want to understand how this is done. Basically, we get the image that most faithfully corresponds to these features. This doesn't always work super well, because the neural network is sort of a dimensionality-reduction technique: there are many images that correspond to the same features, and the reconstructions often end up really weird. So what they do is they use this deep generator network as a prior. Basically, this is the generator from a GAN, a Generative Adversarial Network; this network right here is really good at producing natural-looking images. And because we have that, our task is no longer to optimize this image right here — our task is to do the exact same thing, but with the input to the deep generator network. So we're trying to find the input vector to the deep generator network such that these features right here correspond to the features we predict from the fMRI activity. Because the deep generator network is trained to produce natural-looking images, this will always give us a more or less natural-looking image, no matter what the input vector is. Thereby we constrain the optimization procedure to only output natural-looking images.

So let's see how well this works. These are the reconstructions — and I have to say, there is a training and a testing set. This procedure up here, where we learn the decoder, is done on a training set of images, and then they expose the humans to a testing set of images. So this reconstruction right here happens on images that the feature decoder wasn't trained on — but the humans are looking at them. So the human would be looking at the picture here on the left.
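Here is a toy sketch of that constrained optimization. In the real pipeline G would be a pretrained GAN generator and F a pretrained VGG truncated at the matched layer; the tiny stand-in modules, sizes and step count below are assumptions purely for illustration. The point is that the latent vector, not the pixels, is what gets optimized.

import torch
import torch.nn as nn

# Placeholders for the real pretrained components.
G = nn.Sequential(nn.Linear(100, 3 * 32 * 32), nn.Tanh(),
                  nn.Unflatten(1, (3, 32, 32)))                 # latent -> image
F = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 4096))   # image -> features
for p in list(G.parameters()) + list(F.parameters()):
    p.requires_grad = False                                     # both stay frozen

target = torch.randn(1, 4096)   # stands in for features decoded from fMRI

z = torch.zeros(1, 100, requires_grad=True)   # optimize the latent, not pixels
opt = torch.optim.Adam([z], lr=0.05)

for step in range(200):
    image = G(z)                                   # always a "natural" image
    loss = ((F(image) - target) ** 2).mean()       # match the decoded features
    opt.zero_grad()
    loss.backward()                                # gradient flows through G to z
    opt.step()

reconstruction = G(z).detach()   # this is the image shown as the brain readout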
Then, to the right, you see that as more and more iterations of this reconstruction process happen, the image gets clearer and clearer, and on the right you get pretty good-looking images for the ones on the left. Now, the researchers tout this as a success — "oh, look at this" — but, honestly: this is a leopard. This is a dog. A sheep? This is an owl. This is a dog. And, you know, this is a fish, and this is like a shell, like a mussel. You can go through these: this is a sled, and then this is a truck. So they say, wow, the accuracy is really good. They report a pixel-correlation accuracy — you just correlate pixels — but they also have human judgment, and the accuracy via human judgment is over 95%. That's crazy. But how do they do it? Basically, they give you this image right here as a human rater, and then they give you two other images — say these two — and ask: which one did it come from? If the human determines it correctly, it counts as a hit, so the baseline probability here is 50%. Basically: is this right here rather the owl, or rather the VCR? In that respect it's pretty impressive what you can read from a brain, but in no way — zero way — is this reading your thoughts. It seems to basically just reconstruct an example from the ImageNet training set. The ImageNet explorer is down right now, so I tried to look into this, but it seems to me it's just reconstructing something it knows that sort of resembles the image on the left. It is not reconstructing that image — not at all. A bit, maybe, but only vaguely.

They do some more investigation into this. First of all, here you can see what happens without the deep generator network: with unconstrained search it is even worse — you get these big pixel meshes right here. So you need this kind of prior over natural images. But I think the prior comes through a bit much here; it might be partly responsible for why the images show something other than what you actually see. They also investigate layers, and they discover that if you use more layers to reconstruct, the reconstruction gets better. According to human judgment, if you just reconstruct from the first layer, you don't get a very good reconstruction, but if you incorporate the signal across many layers of the neural network — so you're trying to match the predicted signal at many different layers — then the reconstruction gets really good. We know this from things like style transfer: you can modify how close you are to the original or to the target by choosing which layers, and how many, you reconstruct to which accuracy. So this makes sense. If you only take the first layer to match the features, you basically get this blob here, but with layers one through seven you get a pretty okay-ish thing that looks like the target. I guess these are without the deep generator network so far. But that is interesting, and I think one of the novel things here is that they actually use multiple layers of the neural network to reconstruct.
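A sketch of that multi-layer matching, again under assumed shapes and a uniform layer weighting (the paper's actual weighting may differ): the single-layer feature loss from the sketch above simply becomes a weighted sum over several layers.

import torch

def multilayer_loss(activations, targets, weights):
    # Sum feature-matching errors over several layers instead of just one.
    loss = torch.zeros(())
    for name, w in weights.items():
        loss = loss + w * ((activations[name] - targets[name]) ** 2).mean()
    return loss

layers = [f"conv{i}" for i in range(1, 8)]               # layers 1..7
acts = {n: torch.randn(1, 64, 8, 8) for n in layers}     # network activations
tgts = {n: torch.randn(1, 64, 8, 8) for n in layers}     # decoded from fMRI
weights = {n: 1.0 for n in layers}                       # uniform, assumed
print(multilayer_loss(acts, tgts, weights))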
The interesting thing — and I think this is pretty cool — is that they can now do this with these shapes. These shapes aren't natural images, and they have not been seen in training, but still, as you can see, when the human sees, for example, the plus shape, you get a pretty clear plus shape out, and that happens for a lot of these things right here. So these are actually, I would say, fairly okay-ish reconstructions of what the human sees. That is fairly neat. And for the alphabetical letters and shapes, you see that even the pixel correlation is now pretty high, and the human judgment is again high. Here the human judgment kind of makes sense, right? If you ask "is it this shape or this shape?", then it makes more sense to evaluate it like this. So for the shapes, I am fairly impressed that they can reconstruct them.

What they're now trying to do is infer imagined images. Basically, they tell a human: please imagine an image — and they show it to the human first. It's not really imagining, it's basically recalling: they show you this image, then you close your eyes, they take the image away, and you just try to imagine it. And you can see through the reconstruction process that this works out... sort of-ish. You can see that the cross here kind of comes through, and the plus sort of comes through. These are the high-accuracy samples — the ones where it actually worked — and there are also samples where it didn't work, like here, where you see it doesn't really come through. So either there really is a difference between imagining something and seeing something, or this method just isn't very good per se, or humans aren't very good at imagining — there are lots of possible explanations. And here is the same thing if you imagine these natural images: they report that if humans just recall or imagine these images, the reconstruction doesn't work at all. That might be due to the fact that in your recollection you basically just remember the important things about something — you don't remember the exact pixel values — and therefore your visual cortex doesn't respond in the same way. But it's interesting even just to think about.

I have my doubts about this entire system, so I don't want to draw too many conclusions here. Suffice it to say that sometimes it can actually read your thoughts a little: if you just think of a shape — that's the stuff up here — it can sort of, kind of, a bit make out the shape that you're thinking about. All right, this was it for this paper. I mainly wanted to show you what I found, and I have to say I'm pretty impressed with this, even though — like, this is a laptop; this is not a VCR. That is a VCR. It's more of a nearest-neighbor thing, really, than a reconstruction, I think. But that's my opinion, right? So if you liked this, give it a like, subscribe if you're still here, and I look forward to next time. Bye bye.
[ { "end": 6.72, "start": 0, "text": " Hi there! Today we're looking at deep image reconstruction from human brain activity by" }, { "end": 19.68, "start": 6.72, "text": " Gwa-wa Shen, Tomoyasu Horikawa, Kai Majima and Yuki Yazu Kamitani. This is like the reading thoughts." }, { "end": 27.82, "start": 19.68, "text": " So I was excited when I saw this paper. I saw this on reddit and it is a bit older. It is from the" }, { "end": 34.3, "start": 27.82, "text": " beginning of last year. So I'm sure there have been developments in this area. But basically what this" }, { "end": 42.480000000000004, "start": 34.3, "text": " paper does is it will have a human look at a picture as you can see for example right up here." }, { "end": 50.56, "start": 42.480000000000004, "text": " It will measure the MRI activity. Then it will use a what they call a feature decoder in order to" }, { "end": 60.480000000000004, "start": 50.56, "text": " map that MRI activity to features of a deep neural network. Then they will reconstruct the image that" }, { "end": 68.4, "start": 60.480000000000004, "text": " is closest to those features in the neural network. By reconstruction they get basically" }, { "end": 76.88, "start": 68.4, "text": " an image of what the human sees. So we're going to explore this pipeline right here. But it's pretty" }, { "end": 84.83999999999999, "start": 76.88, "text": " cool and if it works it basically means that we can read someone's thoughts. But of course there's" }, { "end": 90.44, "start": 84.83999999999999, "text": " going to be issues and problems. So first of all this is all visual. They measure the activity in" }, { "end": 102.16, "start": 90.44, "text": " the visual cortex right here. Let's break it down to the individual parts. The individual parts here," }, { "end": 109.75999999999999, "start": 102.16, "text": " the fMRI, that is a machine. We cannot control. So you measure the fMRI activity and that basically" }, { "end": 118, "start": 109.75999999999999, "text": " measures which of these cells in your brain use oxygen. It's functional fMRI, it's not structural." }, { "end": 124.36, "start": 118, "text": " So it measures which ones are active and that's how you would see which parts of the brains are" }, { "end": 130.92, "start": 124.36, "text": " active. I think the resolution on these things has gotten very very good. So you can make out very" }, { "end": 138.07999999999998, "start": 130.92, "text": " fine grained activation patterns in the neurons. And they measure the visual cortex which is the" }, { "end": 147.23999999999998, "start": 138.07999999999998, "text": " part responsible for basically visual stimuli. So for seeing things. Now they need this feature" }, { "end": 153.56, "start": 147.23999999999998, "text": " decoder but because ultimately what they want to do is they want to have these features correspond" }, { "end": 162.04, "start": 153.56, "text": " to features in a neural network. This DNN here is like a VGG, I think a VGG-16 or a VGG-19. It's" }, { "end": 168.48, "start": 162.04, "text": " called a VGG-16 network. This is a couple of years ago these architectures were very popular for" }, { "end": 177.76, "start": 168.48, "text": " ImageNet and they're fairly basic. So what that means it's not like a super duper inception net" }, { "end": 182.64000000000001, "start": 177.76, "text": " where you have like layers within layers and so on. 
They're pretty straightforward convolutional" }, { "end": 190.32, "start": 182.64, "text": " neural networks with nonlinearities and pooling here. You can see there is a bunch of layers," }, { "end": 196.64, "start": 190.32, "text": " then there's pooling, there is a bunch of layers, there's pooling and so on. So what you want to" }, { "end": 204.48, "start": 196.64, "text": " look at are these layers of the deep neural network. The individual layers and you want to" }, { "end": 213.32, "start": 204.48, "text": " basically put an image right here into the neural network and then observe its features in the" }, { "end": 218.88, "start": 213.32, "text": " neural network. Then you want to put the same image through the human and then you observe the" }, { "end": 228.12, "start": 218.88, "text": " MRI features. You know that this is the same image so you can basically learn a feature decoder. This" }, { "end": 233.12, "start": 228.12, "text": " is going to be another sort of machine learned maybe in neural network. I haven't actually read" }, { "end": 240.36, "start": 233.12, "text": " this could be just like a linear regression or a neural network. Just a regression that maps the" }, { "end": 247.08, "start": 240.36, "text": " fMRI to the features. So this is what you have to learn. Basically they took a bunch of humans," }, { "end": 255.08, "start": 247.08, "text": " stuck them in an MRI machine, got out their their fMRI data for the same image. So for a given image" }, { "end": 270.88, "start": 255.08, "text": " X they got the human fMRI data and they got the VGG. They got the features when they put the X," }, { "end": 278.44, "start": 270.88, "text": " the image through the neural network and now they learn a function that minimizes the error. So there" }, { "end": 287.56, "start": 278.44, "text": " is an error. So basically they run this on a test set of images and then you end up with a function" }, { "end": 296.15999999999997, "start": 287.56, "text": " that can basically map fMRI features to neural network features. Now the second step, now you" }, { "end": 302.24, "start": 296.15999999999997, "text": " can give the human, let's say this works, you can give the human an arbitrary image and the" }, { "end": 308.32, "start": 302.24, "text": " human will basically interpret it through its visual cortex. You measure the activity in the" }, { "end": 315.72, "start": 308.32, "text": " visual cortex and then you can predict the neural network features if that image were given to the" }, { "end": 322.84000000000003, "start": 315.72, "text": " neural network. But now you don't give the image to the neural network. Instead of what you do," }, { "end": 330.68, "start": 322.84000000000003, "text": " you do something like deep dream does. That means you start from a noisy image at the beginning and" }, { "end": 338.92, "start": 330.68, "text": " you try to find the image. So you start from this noisy image right here and through iterative" }, { "end": 346.92, "start": 338.92, "text": " gradient descent you refine this image. That's this arrow right here. You try to find the image" }, { "end": 352.96000000000004, "start": 346.92, "text": " that as closely as possible in the internal representation matches the features that you" }, { "end": 359.84000000000003, "start": 352.96000000000004, "text": " predict from the fMRI signal. Because these are the features that the neural network should see." 
}, { "end": 365.47999999999996, "start": 359.84, "text": " If your feature decoder is good, these are the features that the neural network should output" }, { "end": 372.23999999999995, "start": 365.47999999999996, "text": " for that image. So you're basically trying to find this image right here but you're not looking at" }, { "end": 378.67999999999995, "start": 372.23999999999995, "text": " it. You only look at the features that should be in the neural network. So after a bunch of steps" }, { "end": 385.44, "start": 378.67999999999995, "text": " of refining that image, you hope you basically end up with an image that corresponds to these" }, { "end": 393.76, "start": 385.44, "text": " features. Then you can look at it. It usually looks something like this. We're used to these" }, { "end": 399.68, "start": 393.76, "text": " kind of things from neural networks. If I invite you to look at something like the OpenAI Atlas," }, { "end": 407.15999999999997, "start": 399.68, "text": " if you want to understand how this is done. But basically we can get the image that most" }, { "end": 415.2, "start": 407.15999999999997, "text": " faithfully corresponds to these features. This doesn't always work super well because there" }, { "end": 421.4, "start": 415.2, "text": " are actually, since the neural network is sort of a dimensional reduction technique, there are many" }, { "end": 427.44, "start": 421.4, "text": " images that correspond to the same features and they often end up like really weird. So what they" }, { "end": 433.8, "start": 427.44, "text": " do is they have this deep generator network as a prior. Basically this is the generator from a GAN," }, { "end": 440.68, "start": 433.8, "text": " from a Generative Adversarial Network. So this network right here is really good at just producing" }, { "end": 449.56, "start": 440.68, "text": " naturally looking images. And now because we have that, our task is not going to be to start from" }, { "end": 456.64, "start": 449.56, "text": " this image right here. Our task will be to do the exact same thing but with the input to the deep" }, { "end": 462.04, "start": 456.64, "text": " generator network. So basically what we're trying to do is we're trying to find the input vector to" }, { "end": 470.08, "start": 462.04, "text": " the deep generator network such that these features right here correspond to the features that we" }, { "end": 477.32, "start": 470.08, "text": " predict from the fMRI activity. And because the deep generator network is trained to produce" }, { "end": 484.91999999999996, "start": 477.32, "text": " natural looking images, this will always give us sort of a natural looking image no matter what" }, { "end": 490.52, "start": 484.91999999999996, "text": " our input vector is. So thereby we basically constrain the optimization procedure to only" }, { "end": 499.12, "start": 490.52, "text": " output natural looking images. So let's see how well this works. So these are the reconstructions." }, { "end": 504.76, "start": 499.12, "text": " I have to say they're training set, they're training and testing set. So up here, this" }, { "end": 510.16, "start": 504.76, "text": " procedure where we learned right here, this would be done on a training set of images. And then they" }, { "end": 515.4, "start": 510.16, "text": " expose the humans to testing set of images. So this reconstruction right here would happen on" }, { "end": 522.24, "start": 515.4, "text": " images that the feature decoder wasn't trained on. But the humans are looking at it. 
So the humans" }, { "end": 528.8, "start": 522.24, "text": " would be looking at the picture here on the left. And then this would be to the right you'll see" }, { "end": 535.56, "start": 528.8, "text": " as more and more iterations of this process of reconstructing happens, the image gets basically" }, { "end": 542.3199999999999, "start": 535.56, "text": " clearer and clearer. And you can see on the right you get pretty good looking images for the ones on" }, { "end": 550, "start": 542.3199999999999, "text": " the left. Now, these researchers they tend to they tout the success like to say, oh, look, look at" }, { "end": 569.4, "start": 550, "text": " this. But you know, honestly, like this is a leopard. This is a dog sheep. This is an owl. This" }, { "end": 581.36, "start": 569.4, "text": " is a dog. So, so, you know, this is a fish. And this is a like a shell, like a muscle. So I'm and" }, { "end": 594.84, "start": 581.36, "text": " you can go through these. And basically, you know, this is a sled. And then this is a truck. So they" }, { "end": 600.96, "start": 594.84, "text": " go basically, and they say, wow, the accuracy is really good. So they do a pixel correlation" }, { "end": 605.8000000000001, "start": 600.96, "text": " accuracy, which is, you know, you just try to pixel correlate things, but they have human" }, { "end": 614.2, "start": 605.8000000000001, "text": " judgment, the accuracy via human judgment is over 95%. That's crazy. But how do they do it? So" }, { "end": 621.4000000000001, "start": 614.2, "text": " basically, they tell you, okay, see, this is the image at the beginning, no, sorry, they give you" }, { "end": 628.0799999999999, "start": 621.4, "text": " this image right here as a human radar. And then they give you two other images, they say, okay," }, { "end": 636.0799999999999, "start": 628.0799999999999, "text": " here are two images, let's say these two, which one did it come from? And if the human can can" }, { "end": 643, "start": 636.0799999999999, "text": " determine it correctly counts as a hit. So the baseline probability here is 50%. So basically," }, { "end": 652.72, "start": 643, "text": " right, is so this right here, is it rather the owl? Or is it rather the VCR? And I mean, in that" }, { "end": 659.16, "start": 652.72, "text": " respect, it's pretty impressive what you can read from a brain, but in no way, like zero way, this" }, { "end": 667.68, "start": 659.16, "text": " is reading your thoughts. This isn't like, it seems to basically just reconstruct a example from the" }, { "end": 674.0799999999999, "start": 667.68, "text": " ImageNet training set. And the ImageNet Explorer is down right now. So I, I try to look at this." }, { "end": 682.8399999999999, "start": 674.0799999999999, "text": " But it seems to me it's just kind of reconstructing something it knows that sort of bit resembles the" }, { "end": 689.4399999999999, "start": 682.8399999999999, "text": " image on the left. Yeah, but it is not it is not reconstructing that image. Not at all. Like look," }, { "end": 700.84, "start": 689.44, "text": " like a bit, but vaguely, vaguely. But they do some more investigation into this. Okay, so well," }, { "end": 705.5600000000001, "start": 700.84, "text": " first of all, here you can see what happens without this deep generator network. So when you have" }, { "end": 714.9200000000001, "start": 705.5600000000001, "text": " unconstrained search, then it is even worse, right? You get like big pixel meshes right here. 
So you" }, { "end": 722.4, "start": 714.92, "text": " need this this kind of prior over natural images. But the prior, I think the prior here comes through" }, { "end": 729.5999999999999, "start": 722.4, "text": " a bit much because I the prior might be in part responsible for why the images just show something" }, { "end": 738.64, "start": 729.5999999999999, "text": " else than you see. They go into an investigation of if and they discover if you use more layers to" }, { "end": 744.28, "start": 738.64, "text": " reconstruct, the reconstruction gets better. So here, according to human judgment, if you just" }, { "end": 750.36, "start": 744.28, "text": " reconstruct from the first layer, you don't get very good reconstruction. But if you incorporate" }, { "end": 756.12, "start": 750.36, "text": " the signal across many layers of the neural network, so you're basically trying to match to" }, { "end": 762.3199999999999, "start": 756.12, "text": " predict many layers of signal different layers, then the reconstruction gets really good. And," }, { "end": 768.88, "start": 762.3199999999999, "text": " you know, we know this from things like style transfer, you can modify how close you are to" }, { "end": 775.36, "start": 768.88, "text": " the original or to the target by basically seeing which layers and how many you reconstruct to which" }, { "end": 782.08, "start": 775.36, "text": " accuracy. So this makes kind of sense. So if you if you only take the first layer to match the" }, { "end": 787.32, "start": 782.08, "text": " features, then you get basically this blob here. But if you get layers one through seven, you get" }, { "end": 795.4, "start": 787.32, "text": " pretty pretty okay ish thing that looks like this thing. I guess these are without the deep generator" }, { "end": 802.3199999999999, "start": 795.4, "text": " network so far. But no, that is interesting. And I think the novel thing here is one of the novel" }, { "end": 811.76, "start": 802.3199999999999, "text": " thing is that they actually use multiple layers of the neural network to reconstruct. The interesting" }, { "end": 818.68, "start": 811.76, "text": " thing is, I think this is this is pretty cool. They can now do this with these shapes. So these" }, { "end": 825.04, "start": 818.68, "text": " shapes aren't natural images, and they have not been seen in training. But still, as you can see," }, { "end": 830.92, "start": 825.04, "text": " when the human sees, for example, the plus shape, it will get you pretty clear plus shape. And it" }, { "end": 837.0799999999999, "start": 830.92, "text": " happens for a lot of these things right here. So these are actually, I would say, fairly okay ish" }, { "end": 847.4399999999999, "start": 837.0799999999999, "text": " reconstructions of what the human sees. Here neuron, that is fairly neat. And for the alphabetical" }, { "end": 852.4, "start": 847.4399999999999, "text": " letters and shapes, you see again, the even the pixel correlation now is pretty high, but the human" }, { "end": 859.56, "start": 852.4, "text": " judgment again high. And here the human judgment kind of makes sense, right? If you ask, is it like" }, { "end": 868, "start": 859.56, "text": " this shape or this shape? Then it makes more sense to evaluate it like this. So the shapes, I am" }, { "end": 876.1999999999999, "start": 868, "text": " fairly impressed that they can reconstruct these shapes. And what they're now trying to do is they're" }, { "end": 885.2, "start": 876.2, "text": " trying to infer imagined images. 
So basically, they're telling a human, please imagine an image," }, { "end": 890.5600000000001, "start": 885.2, "text": " and they show it to the human. It's not really imagining, it's basically recalling, they show" }, { "end": 895.5200000000001, "start": 890.5600000000001, "text": " you this image. And then you whatever close your eyes, they take the image away, and you just try" }, { "end": 903.32, "start": 895.5200000000001, "text": " to imagine that. And you can see through the reconstruction process, that works out, you know," }, { "end": 911.0400000000001, "start": 903.32, "text": " sort of ish, right? You can see that the cross here, it kind of comes through. And this the plus" }, { "end": 919.24, "start": 911.0400000000001, "text": " kind of it sort of comes through. And so these are the high accuracy, these are actually the samples" }, { "end": 925.36, "start": 919.24, "text": " where where it worked. And there are also samples where it didn't work, like here, where you see it" }, { "end": 930.1600000000001, "start": 925.36, "text": " doesn't really come through. So there, either there is, you know, really a difference between" }, { "end": 940.3199999999999, "start": 930.16, "text": " imagining something and seeing something, or this method just isn't very good, per se, and you" }, { "end": 946.36, "start": 940.3199999999999, "text": " actually need or humans aren't really good at imagining, like there's lots of explanations. And" }, { "end": 953.8399999999999, "start": 946.36, "text": " here is the same thing if you imagine these images. Now they report that if humans just recall or" }, { "end": 961.6800000000001, "start": 953.84, "text": " imagine these images, then the reconstruction doesn't work at all. So that might be to the fact" }, { "end": 966.32, "start": 961.6800000000001, "text": " that, you know, you cannot in your recollection, you basically just remember the important things" }, { "end": 972.5600000000001, "start": 966.32, "text": " about something, you don't remember the exact pixel values, and therefore, your visual cortex" }, { "end": 978.48, "start": 972.5600000000001, "text": " doesn't respond in the same way. But I mean, it's interesting, even per se to think about it. But I" }, { "end": 984.24, "start": 978.48, "text": " have my doubts about you know, this entire system. So I don't want to make too many conclusions here" }, { "end": 992.64, "start": 984.24, "text": " about these things. Suffice to say that sometimes it actually can read your thoughts. Because if you" }, { "end": 1000, "start": 992.64, "text": " just think so this is the stuff up here, if you just think of a shape, it can sort of kind of a" }, { "end": 1008.12, "start": 1000, "text": " bit make out the shape that you're thinking about. Alright, this was it for this paper. I'm basically" }, { "end": 1014.16, "start": 1008.12, "text": " mainly wanted to show you what I found. And I'm I have to say I'm pretty impressed with this, even" }, { "end": 1025, "start": 1014.16, "text": " though like, this is a laptop. This is not a VCR. This is a VCR. It's, it's more of a nearest neighbor" }, { "end": 1035.88, "start": 1025, "text": " thing, really, than I reconstruction, I think. But that's my opinion, right? Yes. So if you like this," }, { "end": 1043.3200000000002, "start": 1035.88, "text": " give it a like, subscribe if you're still here. And I look forward to next time. Bye bye." } ]
UjJU13GdL94
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Regularizing Trajectory Optimization with Denoising Autoencoders (Paper Explained)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "rl", "reinforcement learning", "model predictive control", "dae", "denoising autoencoders", "trajectory", "trajectory optimization", "planning", "adversarial attack", "errors", "open loop", "closed loop", "joint", "probability", "derivative", "gaussian", "experience", "learned model", "world model", "model predictive", "mpc" ]
Can you plan with a learned model of the world? Yes, but there's a catch: The better your planning algorithm is, the more the errors of your world model will hurt you! This paper solves this problem by regularizing the planning algorithm to stay in high probability regions, given its experience. https://arxiv.org/abs/1903.11981 Interview w/ Harri: https://youtu.be/HnZDmxYnpg4 Abstract: Trajectory optimization using a learned model of the environment is one of the core elements of model-based reinforcement learning. This procedure often suffers from exploiting inaccuracies of the learned model. We propose to regularize trajectory optimization by means of a denoising autoencoder that is trained on the same trajectories as the model of the environment. We show that the proposed regularization leads to improved planning with both gradient-based and gradient-free optimizers. We also demonstrate that using regularized trajectory optimization leads to rapid initial learning in a set of popular motor control tasks, which suggests that the proposed approach can be a useful tool for improving sample efficiency. Authors: Rinu Boney, Norman Di Palo, Mathias Berglund, Alexander Ilin, Juho Kannala, Antti Rasmus, Harri Valpola Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Hi there, today we're looking at Regularizing Trajectory Optimization with Denoising Autoencoders by Rinu Boney, Norman Di Palo and others from various places, but a lot of the people are from Curious AI. And we actually had a discussion with Harri, who is the CEO of Curious AI, on our Machine Learning Street Talk podcast. That's another YouTube channel, for those of you who don't know, where every week or so we try to have either an interesting discussion or a guest, sort of an interview, or commentary on a talk. So if it is not out yet, I'll link to it as soon as it comes out. But if you're watching this video later, make sure to check out our conversation with Harri, because it was absolutely fantastic. And in general, if you like videos like this, consider subscribing, liking, and sharing if you're still here at the end and liked it. Okay, so this paper on a high level deals with model-based reinforcement learning. Model-based reinforcement learning means that you are using a model of the world to do reinforcement learning. So in essence, you have your reinforcement learning setup where you are an agent and you have to interact with the world, and you have to do so in many steps in a round-trip fashion: you pick an action, you act, and the world gives you back an observation. And you have to act in the world over and over such that you maximize your reward. Now what is model-based reinforcement learning? It basically means that the agent internally has a model of the world, so it sort of understands how the world works. Situations where you have an accurate model of the world are things like chess: in chess, the rules are very clear, you know how the world is going to behave if you perform a certain action. But in real-world applications it's very, very hard to actually write down a model, so people usually rely on learned models. What does that mean? You basically learn a neural network that tries to predict how the world is going to react. So this is going to be a deep neural network that you train on what you see in the world. Now trajectory optimization basically means that you now have this world model and you use it to look ahead. So you are in a state, and you can do, let's say, three different actions. You use your world model to predict how the world is going to react if you do either of those three things, and you get into three different states. And then again, after each one, you consider three further actions, and so on. So ultimately you get an overview over a planning horizon, which here we call H: you look ahead a couple of steps, and there are various ways of doing this. But ultimately you will find that one particular path is really good, so you take its first action. So trajectory optimization is about finding the best green path in this tree of possibilities that your world model gives you (a minimal code sketch of this setup follows below). Okay. Now, what do these people say? They say this procedure often suffers from exploiting inaccuracies of the learned model. What does that mean? It means that if I have a world model and it is not accurate, then the thing that tries to find the best green path, the optimizer, is effectively optimizing against this world model.
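To make this concrete, here is a minimal sketch of the setup just described: a learned dynamics model f(s, a) and a simple random-shooting planner that scores candidate action sequences under that model. All names, shapes, and the reward function are illustrative assumptions, not the paper's actual code.

```python
import torch
import torch.nn as nn

class DynamicsModel(nn.Module):
    """Learned world model f(s, a) -> next state (hypothetical architecture)."""
    def __init__(self, state_dim, action_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, state_dim),
        )

    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=-1))

def plan_random_shooting(model, reward_fn, state, horizon, n_candidates, action_dim):
    """Score random action sequences by rolling them out under the model;
    return the first action of the best sequence (MPC style)."""
    actions = torch.rand(n_candidates, horizon, action_dim) * 2 - 1  # in [-1, 1]
    states = state.expand(n_candidates, -1)
    total_reward = torch.zeros(n_candidates)
    for t in range(horizon):
        states = model(states, actions[:, t])             # imagined next states
        total_reward += reward_fn(states, actions[:, t])  # reward under the model
    return actions[total_reward.argmax(), 0]
```

Random shooting is just the simplest stand-in here; the paper considers both gradient-based and gradient-free optimizers for this inner search. Note that the planner trusts the model's rollouts completely, which is exactly what makes model errors dangerous.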
Now, if that world model is inaccurate, that can lead to devastating consequences. What do we mean by this? I'll give you an example. Take our classic room layout: you are here and you would like to go over there. You're a reinforcement learning agent, so you do some exploration: you explore a bit here, the next episode you might go there, and so on, and over time, in this framework, you're going to build a model of the world. At the beginning we won't tell you how these rooms look; maybe we only tell you there are these four outer walls, and the rest you have to discover yourself. So on and on you fill in the blanks: in your first explorations you find there's a bit of a wall here, you crash into a wall there, you see there's a wall here and here, you go further, there's no wall anywhere along this stretch, you crash again where you already knew there was a wall. So at this point you have a model of the world where there's a wall here and a wall down there. And if you now do trajectory optimization, remember, you have to go from here to there, what is going to come out? It's going to say: look, there you go, straight through, that works just fine. And that's not because you're so good at planning. I mean, you are good at planning, but your model is inaccurate here, because it has never seen this part: the entire training distribution that you trained the world model on only explored the area over here. So you see, the more efficient this planning algorithm is, the thing that finds the blue arrow, the more consequential it is when your learned world model has mistakes, because it will exactly exploit these mistakes in order to get the shortest path possible, or the highest reward in general. And they call this almost an adversarial attack on the world model, which is a pretty good way of framing it. They propose to solve this problem. They say: we propose to regularize trajectory optimization by means of a denoising autoencoder that is trained on the same trajectories as the model of the environment. We show that the proposed regularization leads to improved planning with both gradient-based and gradient-free optimizers. We also demonstrate that using regularized trajectory optimization leads to rapid initial learning in a set of popular motor control tasks, which suggests that the proposed approach can be a useful tool for improving sample efficiency. So in essence, what do they do? They say: okay, we want to regularize this using a denoising autoencoder. And I think it's best if we look at the math for doing this. The math starts off by saying you want to learn a world model. This is f: it takes in a state and an action and gives you the next state, or an approximation to it, and the parameters indicate that this is some function that you learn, like a deep neural network. You can do this in fully or partially observed environments. Now when you plan, you say: I have a planning horizon H, right?
Then I have a reward function, and the reward function gives me a reward for each state-action pair. So if I'm in a state and I do a certain action, I get some reward. This could be that you have reached the target, or how much money you've collected, or whatnot. You look H steps into the future and you want to maximize the sum of all the rewards. In the limit, if H is infinite, this reduces to, for example, reaching the target in our rooms case, but you can consider a shorter planning horizon. So you want to find the action sequence that maximizes this reward in the future, and this reward relies on your environment model. So here's the algorithm. First you collect some data; that's how you start off. Then train the dynamics model, the world model, using the data you've already collected. Then for each time step t, you optimize the trajectory: you find the best next action sequence, implement its first action, and get the new observation. You do this in a loop until the end, and at the end you add this data to D. So that's what you do: you use your world model to get the best action sequence, that's how you optimize the trajectory, and at the end of an episode, you went somewhere, you put all of this into your training data to make the world model better (a compact sketch of this outer loop follows below). Something to note here is that the world model will only learn about things that you have actually done, so there is kind of an interaction effect. That's the green area here: the world model only knows, can only accurately estimate, the world where you have been. And that's going to turn out to be the entire problem, because this blue-arrow finder can now go away from that. That's explained here: potential inaccuracies of the trained model cause substantial difficulties for the planning process. Rather than optimizing what really happens, planning can easily end up exploiting the weaknesses of the predictive model. Planning is effectively an adversarial attack against the agent's own forward model. This results in a wide gap between expectations based on the model and what actually happens. And they have an example here, an industrial control process. Imagine there's some sort of a container with a liquid in it, and there are two pipes that lead into this container, each with a valve, valve one and valve two, and there is also an output pipe with another valve. You can control these three valves, the two inputs and one output, and you have to somehow optimize the chemical reaction between the two liquids that flow in, or rather a property of it. That's highly nonlinear and has time lags: when you open a valve, it takes a while before anything happens, and then the response is very nonlinear. And you are not supposed to break the pressure limit, so you also have to let some stuff flow out. Now if you just do this with a learned model, it looks like this. First of all, here is a classic controller. People have been doing this in industry for a long time, and they basically hand-build controllers for it. You can do that, and it works out really okay-ish.
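The outer loop described above, collect data, fit the world model, plan with it, act, and aggregate the new experience, might look like this rough sketch. It builds on the hypothetical DynamicsModel and plan_random_shooting from before; the environment API and all hyperparameters are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def fit_model(model, dataset, epochs=10, lr=1e-3):
    """Supervised regression: predict next_state from (state, action)."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    s, a, s_next = (torch.stack(x) for x in zip(*dataset))
    for _ in range(epochs):
        loss = F.mse_loss(model(s, a), s_next)
        opt.zero_grad(); loss.backward(); opt.step()

def model_based_rl(env, model, reward_fn, n_episodes=50, horizon=20, n_candidates=1000):
    dataset = []  # (state, action, next_state) transitions seen so far
    for _ in range(n_episodes):
        if dataset:
            fit_model(model, dataset)  # retrain on everything collected so far
        state, done = env.reset(), False
        while not done:
            action = plan_random_shooting(model, reward_fn, state,
                                          horizon, n_candidates, env.action_dim)
            next_state, done = env.step(action)          # hypothetical env API
            dataset.append((state, action, next_state))  # grow the "green area"
            state = next_state
```

Nothing in this loop penalizes the planner for leaving the model's training distribution, and that is exactly the failure mode the industrial-control example below illustrates.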
As you can see right here, this is the product rate, what you're supposed to optimize; you're supposed to bring it to this dashed line, and it happens reasonably smoothly. You're also supposed to bring the pressure and the "A in purge" quantity to their dashed lines; I don't know exactly what these quantities are, but it's very nonlinear and very time dependent. So that works, and you see the smoothness with which the variables are manipulated. Now, if you just learn a world model and then do this trajectory optimization, basically planning-based reinforcement learning with a world model, you see right here: it works, but it's super jittery. The pressure spikes, and apparently this here is a pressure limit, so it spikes past the pressure limit. And you can see the manipulated variables go up and down and up and down, because at each step the planner completely overestimates its potential reward. It goes, wow, this is really good, but all it has done is find a weakness in the model, not a really good action per se. Now with their method, and this already gives it away, you can see that the controller super smoothly and very quickly converges to the optimal values, and the manipulated variables also change smoothly. That's an indication that the model is accurately estimating the rewards. Okay, so how do they do it? Via regularization of trajectory optimization. In essence, what do we want to regularize here? There are many things one could do to solve this, but the way this paper goes is to say: not only do we want the most return, we also want a high log probability of our taken path. So this here, as you can see, is observation, action, observation, action, and so on. This sequence is what is going to give me the reward. So G is dependent on these things, even though it's not written explicitly here. Maybe let's not call this the future; this is the plan, the plan you came up with. It directly influences G, and G is the reward you're going to get under your model. But you also want the log probability of the plan itself to be high. Now, I think there is something missing here, and that is: conditioned on your training distribution. And I think that's actually a rather crucial part; that's what this KL-style term expresses, the probability is with respect to your training data. So what you want is for the plan to basically lie in your training distribution. If the plan you're about to execute is close to your training data, then you know: I have already executed something like this before, and it's reasonable to assume that my world model has learned from this experience and is going to give me an accurate reward (a sketch of the resulting objective follows below). If we go back to our rooms example, you see that anywhere in the green area, where I have already explored, the world model is fairly good; it gives me an accurate reflection of the world. But as soon as I go outside the green area, it does not. And the green area is exactly where my training data is.
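Written out, the regularized objective being described might look as follows; the exact windowing over the plan and the trade-off weight α are simplified here relative to the paper.

```latex
% Regularized trajectory optimization (simplified sketch):
% maximize the return under the learned model f_theta, plus the
% log-probability of the plan under the training distribution p_data.
\[
\max_{a_{1:H}} \;
\underbrace{\sum_{t=1}^{H} r(s_t, a_t)}_{G,\ \text{with } s_{t+1} = f_\theta(s_t, a_t)}
\;+\; \alpha \, \log p_{\text{data}}(s_1, a_1, \dots, s_H, a_H)
\]
```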
Now if I actually take a path in the future and crash into a wall right here, you saw in the algorithm that at the end of an episode I add my trajectory to the training data for the world model, so this green part expands to include that. And now if my plan goes there again, I can trust the world model, and it is also actually correct now, because it has a wall there. So you see, with the regularization, not only do I want the biggest reward possible under my world model, I also want the plan that I'm about to execute to have a high probability under my training distribution. Okay. And the way we do this is via denoising autoencoders: we want the log probability to be high, and you get at it with a denoising autoencoder. What's a denoising autoencoder? Say you have an image, the image of a trusty cat. A denoising autoencoder is an unsupervised method; it's basically an autoencoder, a bunch of layers compressing to a hidden representation and then uncompressing again, and at the end you want to output the same thing as at the beginning. The special part about the denoising autoencoder is that you first take your input and put some noise on it. That could mean many things, but here they add Gaussian noise. So you add some noise, and that noisy version is what you feed in, and the network is supposed to reconstruct the original image. It doesn't see the original image, it only sees the noisy version, but it's supposed to produce the clean one, and you train this on your training data. What does that mean for our trajectory optimization? If I have a trajectory that I took before, the red one, I can make a noisy version of it, the black one, by putting some noise on it, and the denoising autoencoder is supposed to give me back the red one. This effectively gives me a probabilistic model of my training distribution. They then go through the math and show that these denoising autoencoders naturally output, not the log probability itself, but the gradient of the log probability. Optimal denoising theory says that for zero-mean Gaussian corruption, the optimal denoising function is g(x̃) = x̃ + σₙ² ∇_x̃ log p(x̃). That is, if you give me x̃ and tell me it has been corrupted by zero-mean Gaussian noise of scale σₙ, and you ask me for the original back, the best thing I can do is take what you gave me and add σₙ² times the gradient of the log probability at x̃, assuming I have a model of that log probability. That's the best denoising function there is. And now you have to think in reverse: if we train a denoising autoencoder, it is going to approximate this optimal function (a sketch of this appears below).
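A minimal sketch of that idea: train a denoising autoencoder on windows of trajectories from the collected data, then read off the score, the gradient of the log probability, from its denoising direction. Architecture, noise scale, and windowing are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DAE(nn.Module):
    """Denoising autoencoder over flattened trajectory windows."""
    def __init__(self, dim, hidden=256, sigma=0.1):
        super().__init__()
        self.sigma = sigma  # std of the Gaussian corruption used in training
        self.net = nn.Sequential(
            nn.Linear(dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, dim),
        )

    def forward(self, x_noisy):
        return self.net(x_noisy)

def train_dae(dae, windows, epochs=100, lr=1e-3):
    """windows: (N, dim) tensor of trajectory snippets from the replay data."""
    opt = torch.optim.Adam(dae.parameters(), lr=lr)
    for _ in range(epochs):
        noisy = windows + dae.sigma * torch.randn_like(windows)
        loss = F.mse_loss(dae(noisy), windows)  # reconstruct the clean window
        opt.zero_grad(); loss.backward(); opt.step()

def score_estimate(dae, x):
    """grad log p(x) ~= (DAE(x) - x) / sigma^2, from optimal-denoising theory."""
    return (dae(x) - x) / dae.sigma ** 2
```

The identity in score_estimate is exactly the reformulation discussed next.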
So we know the best possible denoising function has this form, and we train a denoising autoencoder, which in the optimal case converges to it. If we then reformulate, we get that DAE(x̃) − x̃, divided by the variance σₙ², gives us exactly this quantity, the gradient of the log probability. And the gradient of the log probability is exactly what we need to run gradient descent on our objective. So here is our objective again: G plus this regularization. Now, they don't regularize over the entire future but over windows of it; in essence, though, it's G plus the log probability of your plan. If you take the gradient of that, you take the gradient of the sum: the gradient of G plus the gradient of the log probability, with respect to the actions. A simple application of the chain rule tells you that you have to propagate through the input, and for that you need this quantity: the gradient of the log probability with respect to its inputs. As we just saw, the optimal denoising autoencoder outputs exactly that. So if we train a denoising autoencoder and suppose it reaches good accuracy, we obtain this quantity basically for free, and that's the entire trick (a code sketch of the resulting planning step follows below). So in essence, what does it mean? If we are in our room again with our partial model of the world, because all we've ever explored is this region, and my trajectory optimization wants to go outside of it, the regularizer simply says: no, I don't know that, I haven't seen that yet. You can basically only plan within the space where we have already been. Of course there is going to be some slack, some probability that lets you go away a bit, but not too much. So in this case, planning only happens in spaces where we've actually been; the plan might go along here and here, because over there we haven't been anywhere. And that leads me to take the first step in this direction, and not in the other direction. If I take my first step in this direction, I'm already a bit on the correct path. Whereas if I take the first step in the other direction and crash, I'm going to have to correct really hard afterwards, and that's exactly what gives you this super jittery control. If you only plan where you've already been, the probability that you have to do a 180 is much, much lower. Okay, that's about it. Let's look at the experiments. I actually want to go down, not to the industrial control process, but to the MuJoCo experiments. These are continuous control tasks; you might have seen them. One is the ant, which is basically a 3D blob with, I think, four legs, and each leg has two joints, and it just needs to walk as far as possible or reach some goal. And the half-cheetah is a 2D thing with two legs that is supposed to walk forward and not fall over. You can put force on each of the joints.
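As an aside before the results, here is how the pieces might fit together in a gradient-based planner: the action sequence is updated with the gradient of the model return plus the DAE-based score estimate of the plan. This reuses the hypothetical DynamicsModel, DAE, and score_estimate sketches from above; the windowing over the plan is omitted for brevity.

```python
import torch

def plan_with_dae(model, dae, reward_fn, state, horizon, action_dim,
                  alpha=0.1, steps=100, lr=0.05):
    """Gradient-based trajectory optimization regularized by the DAE score."""
    actions = torch.zeros(horizon, action_dim, requires_grad=True)
    opt = torch.optim.Adam([actions], lr=lr)
    for _ in range(steps):
        s, plan, ret = state, [], 0.0
        for t in range(horizon):
            s = model(s, actions[t])              # roll out under the model
            plan += [s, actions[t]]
            ret = ret + reward_fn(s, actions[t])
        x = torch.cat([v.flatten() for v in plan])  # the plan as one vector
        # Surrogate for alpha * log p(x): its gradient w.r.t. the actions is
        # (dx/da)^T * score, with score = (DAE(x) - x) / sigma^2 held fixed.
        reg = alpha * (score_estimate(dae, x).detach() * x).sum()
        loss = -(ret + reg)
        opt.zero_grad(); loss.backward(); opt.step()
    return actions.detach()[0]  # execute only the first action, MPC style
```

A gradient-free optimizer such as CEM could presumably use the same regularizer by adding the estimated log-probability term to each candidate's score, which would match the paper's claim that the regularization helps both kinds of optimizers.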
You see that their baselines are Gaussian processes, and this PETS thing, a previous approach to model-based control with a learned model. Their main method is the red one, and as you can see, it learns much faster and basically outperforms the rest in the more complicated tasks. Cartpole and the like are lower-dimensional, easier tasks, and there you can see that at least it does not hurt. Now they say something here that they don't show in the plots: if you let this run for a while, their method stops improving, whereas the baseline methods will at some point surpass it. The reason for that, and I'm not sure if it's on these exact tasks, but they mention it, and I respect that they do, is that since we only plan where we know, we do much less exploration than the others. We stick to what we know when we plan, so inherently we explore less. In our conversation with Harri, he basically said this is intended: the intention is that you do your planning where you know, and then explicitly add a component that does exploration. That way you have control over it; you can say, huh, I've never been here, now I'm in an exploration phase and I explicitly go there, rather than intermingling your planning with your exploration and relying on your planning to screw up for you to explore. Because if your planning never screws up, then you won't explore either, right? Then you will always reach your goal, or your planning will always be correct. The other methods, which don't have this explicitly, explore every time their planning screws up, and you don't want that. You want your planning to be as good as possible, and they achieve that by sticking to what they know. The next step, which is not in this paper, would be to add an explicit exploration policy that reaches areas they've never reached before. So that's the reason why they don't ultimately reach the best accuracy, but they do reach good initial accuracy much faster than the other methods, because they plan better. They have a long discussion here of remaining problems, like local minima, the planning horizon problem, open-loop versus closed-loop control, and compounding errors in planning, but I'm going to leave this out for now. I thank you for being here, and I very much invite you to check out the paper for more details. It's pretty cool and actually pretty easy to read; it's written very well. And with that, see you next time. Bye bye.
[ { "end": 6.26, "start": 0, "text": " Hi there, today we're looking at regularizing trajectory optimization with denoising autoencoders" }, { "end": 14.16, "start": 6.26, "text": " by Renu Bonet, Norman DiPaolo and others of various places, but a lot of the people are" }, { "end": 16.2, "start": 14.16, "text": " from Curious AI." }, { "end": 22.98, "start": 16.2, "text": " And we actually had a discussion with Hari, who is the CEO of Curious AI." }, { "end": 26.52, "start": 22.98, "text": " And this was on our Machine Learning Street Talk podcast." }, { "end": 31.38, "start": 26.52, "text": " So this is another YouTube channel for those of you who don't know, where every week or" }, { "end": 37.66, "start": 31.38, "text": " so we try to have either an interesting discussion or a guest, like sort of an interview or talk" }, { "end": 40.28, "start": 37.66, "text": " about comment on the talk." }, { "end": 44.3, "start": 40.28, "text": " So if it is not out yet, I'll link to it as soon as it comes out." }, { "end": 49.22, "start": 44.3, "text": " But if you're watching this video later, make sure to check out our conversation with Hari" }, { "end": 52.64, "start": 49.22, "text": " because it was absolutely fantastic." }, { "end": 57.84, "start": 52.64, "text": " And in general, if you like videos like this, consider subscribing, liking, sharing if you're" }, { "end": 61, "start": 57.84, "text": " still here at the end and liked it." }, { "end": 67.12, "start": 61, "text": " Okay, so this paper on a high level deals with model based reinforcement learning." }, { "end": 73.24000000000001, "start": 67.12, "text": " Model based reinforcement learning means that you are using a model of the world to do reinforcement" }, { "end": 74.24000000000001, "start": 73.24000000000001, "text": " learning." }, { "end": 82, "start": 74.24000000000001, "text": " So in essence, if you have your reinforcement learning setup where you are an agent, and" }, { "end": 87.24, "start": 82, "text": " you have to interact with the world, you have to do so in many steps in like a round trip" }, { "end": 88.24, "start": 87.24, "text": " fashion." }, { "end": 93.08, "start": 88.24, "text": " So you put an action you act and the world gives you back an observation." }, { "end": 99.06, "start": 93.08, "text": " And you have to act in the world over and over and over such that you will be able to" }, { "end": 101.3, "start": 99.06, "text": " maximize your reward." }, { "end": 104.34, "start": 101.3, "text": " Now what is model based reinforcement learning?" }, { "end": 111.64, "start": 104.34, "text": " Model based reinforcement learning basically means that the agent here has internally" }, { "end": 113.48, "start": 111.64, "text": " a model of the world." }, { "end": 118.72, "start": 113.48, "text": " So it sort of understands how the world works." }, { "end": 122.36, "start": 118.72, "text": " Situations where you have a accurate model of the world are things like chess." }, { "end": 127.32, "start": 122.36, "text": " So in chess, the rules are very clear, you know how the world's going to behave if you" }, { "end": 129.04, "start": 127.32, "text": " perform a certain action." }, { "end": 134.32, "start": 129.04, "text": " But in real world applications, it's very, very hard to actually make a model." }, { "end": 137.28, "start": 134.32, "text": " So people usually rely on learned models." 
}, { "end": 142.32, "start": 137.28, "text": " So what does it mean you basically learn a neural network that tries to predict how the" }, { "end": 144.38, "start": 142.32, "text": " world is going to act." }, { "end": 150.28, "start": 144.38, "text": " So this here is going to be a deep neural network that you learn from what you see in" }, { "end": 151.36, "start": 150.28, "text": " the world." }, { "end": 159.06, "start": 151.36, "text": " Now trajectory optimization basically means that you are now you now have this world model," }, { "end": 164.2, "start": 159.06, "text": " and you use it to look ahead, as I said, so you are in the state like here, and you can" }, { "end": 166.52, "start": 164.2, "text": " do let's say three different actions." }, { "end": 170.48000000000002, "start": 166.52, "text": " And you use your world model here, world." }, { "end": 175.44, "start": 170.48000000000002, "text": " And you see, you think how's the world going to react if I do either of those three things," }, { "end": 177.66000000000003, "start": 175.44, "text": " and then you get into three different states." }, { "end": 182.84, "start": 177.66000000000003, "text": " And then again, you after each one, you consider three actions, three actions here, three actions" }, { "end": 184.36, "start": 182.84, "text": " here, and so on." }, { "end": 190.12, "start": 184.36, "text": " So ultimately, you're going to kind of have an overview over a planning horizon, which" }, { "end": 196.48000000000002, "start": 190.12, "text": " here we call H, you kind of look ahead a couple of steps, or there are various ways of doing" }, { "end": 197.48, "start": 196.48, "text": " this." }, { "end": 202.35999999999999, "start": 197.48, "text": " But ultimately, you will basically find that this this path here is really good." }, { "end": 208.32, "start": 202.35999999999999, "text": " So I think I'm going to take this as a first action." }, { "end": 215.48, "start": 208.32, "text": " So trajectory optimization considers finding the best green path here in this tree of possibilities" }, { "end": 218.76, "start": 215.48, "text": " that your your world model gives you." }, { "end": 220.16, "start": 218.76, "text": " Okay." }, { "end": 223.04, "start": 220.16, "text": " Now, what does what do these people say?" }, { "end": 230.79999999999998, "start": 223.04, "text": " They say this procedure often suffers from exploiting inaccuracies of the learned model." }, { "end": 231.92, "start": 230.79999999999998, "text": " What does that mean?" }, { "end": 236.95999999999998, "start": 231.92, "text": " That basically means that if I have a world model, and it is not accurate, then it is" }, { "end": 243.32, "start": 236.95999999999998, "text": " basically, basically the thing that tries to find the best green path here, the optimizer" }, { "end": 249.64, "start": 243.32, "text": " is sort of trying to find the the best path against this world model." }, { "end": 255.04, "start": 249.64, "text": " Now, if that world model is inaccurate, that can lead to devastating consequences." }, { "end": 256.44, "start": 255.04, "text": " So what do we mean by this?" }, { "end": 258.96, "start": 256.44, "text": " I'll give you an example." }, { "end": 267.47999999999996, "start": 258.96, "text": " If you have a room, right, and the room is, let's take our classic room like this." }, { "end": 272.28, "start": 267.47999999999996, "text": " And you are here and you would like to go here." 
}, { "end": 276.91999999999996, "start": 272.28, "text": " And so you're a reinforcement learning agent, you do some exploration, right, you explore" }, { "end": 281.6, "start": 276.92, "text": " a bit here, the next episode, you might go here, and you might go here, and so on." }, { "end": 285.68, "start": 281.6, "text": " And over time, in this framework, you're going to build a model of the world." }, { "end": 290.36, "start": 285.68, "text": " So at the beginning, we won't tell you how these rooms look, you have to discover it" }, { "end": 291.56, "start": 290.36, "text": " by yourself." }, { "end": 296.12, "start": 291.56, "text": " So maybe at the beginning, we only tell you, there's these four walls, the rest you have" }, { "end": 297.36, "start": 296.12, "text": " to figure out." }, { "end": 302.16, "start": 297.36, "text": " So on and on, you're going to fill in your blanks, you do your first explorations, and" }, { "end": 305.36, "start": 302.16, "text": " you've had there's a bit of a wall here, right." }, { "end": 309.68, "start": 305.36, "text": " And there might be some wall here, I crashed into that, right, you're going into here," }, { "end": 313.92, "start": 309.68, "text": " you crash into a wall, you saw there's a wall here and here, there's a wall here, you go" }, { "end": 316.04, "start": 313.92, "text": " maybe here, oh, there's no wall." }, { "end": 320.76, "start": 316.04, "text": " So you go further, there's no wall anywhere here, you crash here, okay, we already knew" }, { "end": 322.28000000000003, "start": 320.76, "text": " there's a wall." }, { "end": 323.28000000000003, "start": 322.28000000000003, "text": " Maybe you crash here." }, { "end": 329.24, "start": 323.28000000000003, "text": " All right, so right now, you have, okay, you go here, you have a model of the world in" }, { "end": 333.68, "start": 329.24, "text": " this situation, where there's a wall here, a wall down." }, { "end": 340.3, "start": 333.68, "text": " And if you now try to do trajectory optimization, remember, you have to go from here to here." }, { "end": 344.6, "start": 340.3, "text": " If you try to do trajectory optimization, what is it going to turn out?" }, { "end": 348.48, "start": 344.6, "text": " It's going to turn out like, look, there you go." }, { "end": 350.36, "start": 348.48, "text": " That works just fine." }, { "end": 353.4, "start": 350.36, "text": " And that's not because you're so good at planning." }, { "end": 358.58, "start": 353.4, "text": " I mean, you are good at planning, but because your model is inaccurate here, because it" }, { "end": 364.03999999999996, "start": 358.58, "text": " has never seen this, your entire training distribution that you trained the world model" }, { "end": 366.91999999999996, "start": 364.03999999999996, "text": " on, only explored the area over here." }, { "end": 372.52, "start": 366.91999999999996, "text": " All right, so you see how the more efficient this planning algorithm is, like the blue" }, { "end": 377.79999999999995, "start": 372.52, "text": " arrow, the thing that finds the blue arrow, the more efficient that is, the more consequential" }, { "end": 385.03999999999996, "start": 377.79999999999995, "text": " it is when your learned world model has mistakes, because it will exactly exploit these mistakes" }, { "end": 392.04, "start": 385.04, "text": " in order to get the shortest path possible or the highest reward in that case." 
}, { "end": 397.64000000000004, "start": 392.04, "text": " And this, they call this like almost an adversarial attack on the world model, which is a pretty" }, { "end": 403.40000000000003, "start": 397.64000000000004, "text": " good way of framing it." }, { "end": 406.8, "start": 403.40000000000003, "text": " They propose actually to solve this problem." }, { "end": 412.28000000000003, "start": 406.8, "text": " They say we propose to regularize trajectory optimization by means of a denoising autoencoder" }, { "end": 416.52, "start": 412.28, "text": " that is trained on the same trajectories as the model of the environment." }, { "end": 421.44, "start": 416.52, "text": " We show that the proposed regularization leads to improve planning with both gradient based" }, { "end": 423.64, "start": 421.44, "text": " and gradient free optimizers." }, { "end": 428.73999999999995, "start": 423.64, "text": " We also demonstrate that using regularized trajectory optimization leads to rapid initial" }, { "end": 434.47999999999996, "start": 428.73999999999995, "text": " learning in a set of popular motor control tasks, which suggests that the proposed approach" }, { "end": 439.09999999999997, "start": 434.47999999999996, "text": " can be useful tool for improving sample efficiency." }, { "end": 442.48, "start": 439.1, "text": " So in essence, what do they do?" }, { "end": 450.26000000000005, "start": 442.48, "text": " They basically say, okay, we want to regularize this using a denoising autoencoder." }, { "end": 456.48, "start": 450.26000000000005, "text": " And I think it's best if we if we look at the at the math for doing this." }, { "end": 464.40000000000003, "start": 456.48, "text": " So the math here starts off as follows, saying you want to learn a world model." }, { "end": 465.40000000000003, "start": 464.40000000000003, "text": " This is F here." }, { "end": 466.48, "start": 465.40000000000003, "text": " F is the world model." }, { "end": 472.64000000000004, "start": 466.48, "text": " It takes in a state and an action and it gives you the next state or an approximation to" }, { "end": 473.64000000000004, "start": 472.64000000000004, "text": " it." }, { "end": 477.56, "start": 473.64000000000004, "text": " And the parameters here indicate that this is some sort of function that you learn like" }, { "end": 479.56, "start": 477.56, "text": " a deep neural network." }, { "end": 484.96000000000004, "start": 479.56, "text": " You can do this in fully or partially observed environments." }, { "end": 492.36, "start": 484.96000000000004, "text": " Now when you plan, what you want to do is you say I have a planning horizon H, right?" }, { "end": 498.36, "start": 492.36, "text": " Then I have a reward function, and the reward function is going to give me a reward for" }, { "end": 499.96000000000004, "start": 498.36, "text": " each state action pair." }, { "end": 505.28000000000003, "start": 499.96000000000004, "text": " So if I'm in a state and I do a certain action, I'm going to get some reward." }, { "end": 510.12, "start": 505.28000000000003, "text": " This could be you have reached the target or this could be you know how much money you've" }, { "end": 512.24, "start": 510.12, "text": " collected or whatnot." }, { "end": 517.5600000000001, "start": 512.24, "text": " So you're going to look at horizon H, you're going to look H steps into the future, and" }, { "end": 521.64, "start": 517.5600000000001, "text": " you want to maximize the sum of all the rewards here." 
}, { "end": 527.4399999999999, "start": 521.64, "text": " So in the limit, this reduces to simply like, for example, reaching the target in our rooms" }, { "end": 532.16, "start": 527.4399999999999, "text": " case if H is infinite." }, { "end": 533.96, "start": 532.16, "text": " But you can consider a lower planning horizon." }, { "end": 541.88, "start": 533.96, "text": " So you want to find the action sequence that maximizes this reward in the future." }, { "end": 549.86, "start": 541.88, "text": " And now this reward relies on your environment model." }, { "end": 553.4, "start": 549.86, "text": " So here's the algorithm." }, { "end": 556.92, "start": 553.4, "text": " First you collect some data, okay, that's how you start off." }, { "end": 563.24, "start": 556.92, "text": " That train the dynamics model, the world model, using the data you've already collected." }, { "end": 568, "start": 563.24, "text": " Then for each time step T, you want to optimize this trajectory." }, { "end": 573.02, "start": 568, "text": " So you want to find the best next action sequence and take the first action, implement the first" }, { "end": 575.84, "start": 573.02, "text": " action and get the new observation." }, { "end": 580.76, "start": 575.84, "text": " And do you do this in a loop until the end and at the end you say add this data to D." }, { "end": 582.36, "start": 580.76, "text": " So that's what you do." }, { "end": 588, "start": 582.36, "text": " You use your world model to get the best action sequence." }, { "end": 590.52, "start": 588, "text": " That's how you optimize the trajectory." }, { "end": 594.48, "start": 590.52, "text": " And then at the end of the episode, you've done an episode, right?" }, { "end": 599.6, "start": 594.48, "text": " You went somewhere, you put all of this into your training data to make the world model" }, { "end": 601.88, "start": 599.6, "text": " better." }, { "end": 608.04, "start": 601.88, "text": " Something to note here is that the world model will only learn about things that you have" }, { "end": 610.08, "start": 608.04, "text": " done, right?" }, { "end": 611.96, "start": 610.08, "text": " So there is kind of an interaction effect." }, { "end": 613.6, "start": 611.96, "text": " That's the green area here." }, { "end": 618.68, "start": 613.6, "text": " The world model only knows the paths, the world model only can accurately estimate the" }, { "end": 622.52, "start": 618.68, "text": " world where you have been." }, { "end": 629.24, "start": 622.52, "text": " And that's going to turn out to be the entire problem because these blue arrow finder can" }, { "end": 636.12, "start": 629.24, "text": " now go away from that." }, { "end": 637.6800000000001, "start": 636.12, "text": " That's explained here." }, { "end": 642.5600000000001, "start": 637.6800000000001, "text": " Potential inaccuracies of the trained model cause substantial difficulties for the planning" }, { "end": 644.38, "start": 642.5600000000001, "text": " process." }, { "end": 649.02, "start": 644.38, "text": " Rather than optimizing what really happens, planning can easily end up exploiting the" }, { "end": 652.04, "start": 649.02, "text": " weaknesses of the predictive model." }, { "end": 656.36, "start": 652.04, "text": " Planning is effectively an adversarial attack against the agent's own forward model." }, { "end": 664.24, "start": 656.36, "text": " This results in a wide gap between expectations based on the model and what actually happens." 
}, { "end": 669.76, "start": 664.24, "text": " And they have this example here where it's like an industrial control process." }, { "end": 675.24, "start": 669.76, "text": " And what you have to imagine, there's like some sort of a container here with a liquid" }, { "end": 676.6, "start": 675.24, "text": " in it." }, { "end": 683.32, "start": 676.6, "text": " And there are two pipes that lead to this container, pipe one and pipe two." }, { "end": 684.94, "start": 683.32, "text": " And there are valves here." }, { "end": 690.36, "start": 684.94, "text": " So there's this valve right here and there's this valve right here." }, { "end": 692.9000000000001, "start": 690.36, "text": " So these are valve one and valve two." }, { "end": 698.5, "start": 692.9000000000001, "text": " And there is also an output pipe right here and that's another valve right here." }, { "end": 703.7, "start": 698.5, "text": " So you can control these three valves, the two inputs and one output." }, { "end": 709.5, "start": 703.7, "text": " And you have to somehow optimize the reaction in here." }, { "end": 714.44, "start": 709.5, "text": " So this is a chemical reaction made up out of the two liquids that flow in here." }, { "end": 716.9200000000001, "start": 714.44, "text": " And you have to somehow optimize a property of that." }, { "end": 721.1, "start": 716.9200000000001, "text": " And that's highly nonlinear and has maybe like time shifts." }, { "end": 726.1400000000001, "start": 721.1, "text": " So when you open a valve, it's going to take a while and then it's very nonlinear." }, { "end": 729.46, "start": 726.1400000000001, "text": " And then you are not supposed to break the pressure limit." }, { "end": 732.82, "start": 729.46, "text": " So you have to also outflow some stuff." }, { "end": 737.34, "start": 732.82, "text": " And if you just do this with a learned model, it looks like this." }, { "end": 740.9000000000001, "start": 737.34, "text": " So first of all, here is a classic controller." }, { "end": 747.54, "start": 740.9, "text": " People have been doing this stuff in industry and they basically build controllers for it." }, { "end": 752.02, "start": 747.54, "text": " And you can do that and that works out really okay-ish." }, { "end": 756.9, "start": 752.02, "text": " As you can see right here, this is the product rate, what you're supposed to optimize." }, { "end": 760.54, "start": 756.9, "text": " And you see some sort of a smooth..." }, { "end": 764.5799999999999, "start": 760.54, "text": " You're supposed to actually bring it to this dashed line right here." }, { "end": 768.18, "start": 764.5799999999999, "text": " And this is some sort of smooth thing, right?" }, { "end": 773.54, "start": 768.18, "text": " And you're supposed to, I guess, bring the pressure here and the A in purge." }, { "end": 776.7399999999999, "start": 773.54, "text": " I don't know what these quantities are, but you're supposed to bring them to the dashed" }, { "end": 782.02, "start": 776.7399999999999, "text": " line and it's very nonlinear and very time dependent." }, { "end": 783.02, "start": 782.02, "text": " So that works." }, { "end": 786.5, "start": 783.02, "text": " And you see here kind of the smoothness by which the variables are manipulated." 
}, { "end": 794.8199999999999, "start": 786.5, "text": " Now, if you just learn a world model and then do this trajectory optimization, basically" }, { "end": 801.2600000000001, "start": 794.82, "text": " this is some sort of a planning-based reinforcement learning with a world model." }, { "end": 806.22, "start": 801.2600000000001, "text": " You see right here, it works, but it's super jittery." }, { "end": 810.38, "start": 806.22, "text": " The pressure spikes here and apparently this here is a pressure limit." }, { "end": 812.62, "start": 810.38, "text": " So it spikes the pressure limit." }, { "end": 816.38, "start": 812.62, "text": " And you can see that the manipulated variables are up and down and up and down and up and" }, { "end": 822.98, "start": 816.38, "text": " down because at each step, it basically completely overestimates its potential reward." }, { "end": 827.14, "start": 822.98, "text": " With things like, wow, this is really good, but all it does is find a weakness in the" }, { "end": 830.86, "start": 827.14, "text": " model and not a really good action per se." }, { "end": 837.34, "start": 830.86, "text": " Now with their method to already take it away, you can see that now the control task super" }, { "end": 842.26, "start": 837.34, "text": " smoothly and very quickly converges to these optimal things." }, { "end": 848.22, "start": 842.26, "text": " And you can see that the variables being manipulated are also rather smoothly manipulated." }, { "end": 856.6600000000001, "start": 848.22, "text": " And that's an indication that the model is accurately estimating their rewards." }, { "end": 860.5400000000001, "start": 856.6600000000001, "text": " Okay, so how do they do it?" }, { "end": 866.98, "start": 860.5400000000001, "text": " Via what they call trajectory, via regularization of trajectory optimization." }, { "end": 869.86, "start": 866.98, "text": " So in essence, what do we want to regularize here?" }, { "end": 875.46, "start": 869.86, "text": " There are many things we could do to solve this, but the way this paper goes is they" }, { "end": 886.6600000000001, "start": 875.46, "text": " say we not only do we want the most return, we also want a high log probability of our" }, { "end": 888.4200000000001, "start": 886.6600000000001, "text": " taken path." }, { "end": 894.7800000000001, "start": 888.4200000000001, "text": " So this here, as you can see, this is observation action and so on, observation action." }, { "end": 896.74, "start": 894.7800000000001, "text": " So this is the future." }, { "end": 902.24, "start": 896.74, "text": " This right here is the future." }, { "end": 911.1800000000001, "start": 902.24, "text": " So this sequence here is what is going to give me the reward right here." }, { "end": 915.98, "start": 911.1800000000001, "text": " So G is also dependent on these things, but it's not said explicitly here." }, { "end": 918.38, "start": 915.98, "text": " So G is dependent on your plan." }, { "end": 919.86, "start": 918.38, "text": " Maybe let's not call this the future." }, { "end": 922.8, "start": 919.86, "text": " This is the plan." }, { "end": 925.1800000000001, "start": 922.8, "text": " This is the plan you came up with." }, { "end": 929.04, "start": 925.1800000000001, "text": " So this is directly going to influence G and G is the reward you're going to get under" }, { "end": 930.04, "start": 929.04, "text": " your model." 
}, { "end": 935.2199999999999, "start": 930.04, "text": " But also you want the log probability of the plan itself to be high." }, { "end": 940.0999999999999, "start": 935.2199999999999, "text": " Now there, I think there is a bit, there is something missing here and that is conditioned" }, { "end": 943.9, "start": 940.0999999999999, "text": " on your training distribution right here." }, { "end": 947.0999999999999, "start": 943.9, "text": " And I think that's a actually rather crucial part." }, { "end": 949.8199999999999, "start": 947.0999999999999, "text": " Now that's, that's the KL thing." }, { "end": 951.78, "start": 949.8199999999999, "text": " So this is conditioned on your training." }, { "end": 960.5799999999999, "start": 951.78, "text": " So what you want is you want the plan to be basically in your training distribution." }, { "end": 967.4399999999999, "start": 960.5799999999999, "text": " So you, you want what you, you want your plan that you're going to execute." }, { "end": 973.9399999999999, "start": 967.4399999999999, "text": " If that is actually part of your training data set, then you know, I have already executed" }, { "end": 981.1, "start": 973.9399999999999, "text": " this once before and it's reasonable to assume that therefore my world model has learned" }, { "end": 985.62, "start": 981.1, "text": " from this experience and is going to give me an accurate reward." }, { "end": 993.26, "start": 985.62, "text": " If we go back to our rooms example, then up here somewhere, if we go back to our rooms" }, { "end": 999.1, "start": 993.26, "text": " example, right, you see that anywhere in the green area where I have already explored the" }, { "end": 1002, "start": 999.1, "text": " world model is fairly good, right?" }, { "end": 1005.0600000000001, "start": 1002, "text": " It's going to give me accurate reflection of the world." }, { "end": 1009.5400000000001, "start": 1005.0600000000001, "text": " But as soon as it go outside the green area, it is not." }, { "end": 1014.3199999999999, "start": 1009.54, "text": " And inside the green area is basically where my training data is." }, { "end": 1020.9, "start": 1014.3199999999999, "text": " Now if I in the future actually take a path here, crash into a wall right here, right?" }, { "end": 1025.6, "start": 1020.9, "text": " You saw in the algorithm at the end of an episode, I'm going to add my trajectory to" }, { "end": 1027.7, "start": 1025.6, "text": " the training data for the world model." }, { "end": 1031.94, "start": 1027.7, "text": " So this green part here expands to include that." }, { "end": 1039.42, "start": 1031.94, "text": " And now if I go here again, if my plan goes there again, now I can trust the world model." }, { "end": 1043.14, "start": 1039.42, "text": " But also now it has it is actually correct because it has a wall here." }, { "end": 1048.98, "start": 1043.14, "text": " So you see that the regularization basically you not only do I want the biggest reward" }, { "end": 1055.78, "start": 1048.98, "text": " possible under my world model, I also want that the plan that I'm about to execute is" }, { "end": 1059.7, "start": 1055.78, "text": " has a high probability under my training distribution." }, { "end": 1060.7, "start": 1059.7, "text": " Okay." }, { "end": 1068.8200000000002, "start": 1060.7, "text": " And the way we do this is by denoising auto encoders." }, { "end": 1074.86, "start": 1068.82, "text": " We want the log probability here to be high and you do this via a denoising auto encoder." 
}, { "end": 1077.06, "start": 1074.86, "text": " What's a denoising auto encoder?" }, { "end": 1086.06, "start": 1077.06, "text": " A denoising auto encoder is basically so if you have, for example, an image and the image" }, { "end": 1095.46, "start": 1086.06, "text": " is of a trusty cat whiskers and a denoising auto encoder is an unsupervised method where" }, { "end": 1097.76, "start": 1095.46, "text": " you have it's basically an auto encoder." }, { "end": 1104.02, "start": 1097.76, "text": " So there is a bunch of layers compressing to a hidden representation, then uncompressing" }, { "end": 1105.54, "start": 1104.02, "text": " it again." }, { "end": 1107.54, "start": 1105.54, "text": " Okay." }, { "end": 1113.62, "start": 1107.54, "text": " And at the end, you want to output the same as at the beginning." }, { "end": 1115.96, "start": 1113.62, "text": " So it's basically an auto encoder." }, { "end": 1122.3, "start": 1115.96, "text": " But the special part about the denoising auto encoder is that first, you take your input" }, { "end": 1125.06, "start": 1122.3, "text": " and you know, you put some noise on it." }, { "end": 1128.46, "start": 1125.06, "text": " So that could mean could mean anything here." }, { "end": 1133.34, "start": 1128.46, "text": " But here, what they do is they do they make some Gaussian noise on it." }, { "end": 1138.1799999999998, "start": 1133.34, "text": " Now, I can't really draw Gaussian noise here, but it would be kind of convolved with Gaussian" }, { "end": 1139.1799999999998, "start": 1138.1799999999998, "text": " Gaussian noise." }, { "end": 1142.3799999999999, "start": 1139.1799999999998, "text": " So I'm just going to add some noise like this." }, { "end": 1145.3799999999999, "start": 1142.3799999999999, "text": " So noise, noise, noise." }, { "end": 1150.72, "start": 1145.3799999999999, "text": " So there's some noise, you see, and then you feed that." }, { "end": 1153.3799999999999, "start": 1150.72, "text": " That's now what you feed in here." }, { "end": 1158.8600000000001, "start": 1153.38, "text": " And the algorithm is supposed to reconstruct this, this original image." }, { "end": 1163.7800000000002, "start": 1158.8600000000001, "text": " So the algorithm is basically supposed to take away the noise, it doesn't see the original" }, { "end": 1166.38, "start": 1163.7800000000002, "text": " image, but it's supposed to produce it." }, { "end": 1168.3400000000001, "start": 1166.38, "text": " And you do this with your training data." }, { "end": 1169.3400000000001, "start": 1168.3400000000001, "text": " What does that mean?" }, { "end": 1176.8600000000001, "start": 1169.3400000000001, "text": " Ultimately, for our trajectory optimization, it means that if I have a trajectory that" }, { "end": 1181.5, "start": 1176.8600000000001, "text": " I did before, and it maybe goes here, right?" }, { "end": 1189.34, "start": 1181.5, "text": " What I can do is I can make a noisy version of it, which would be the black one right" }, { "end": 1190.34, "start": 1189.34, "text": " here." }, { "end": 1193.1, "start": 1190.34, "text": " So I put some noise on it, some noise." }, { "end": 1196.82, "start": 1193.1, "text": " Right, it's kind of the same, but okay." }, { "end": 1201.36, "start": 1196.82, "text": " And the denoising autoencoder is supposed to give me back the red one." }, { "end": 1207.38, "start": 1201.36, "text": " This will simply give me some sort of a probabilistic model of my training distribution." 
}, { "end": 1211.16, "start": 1207.38, "text": " So they go through the math here and show that these denoising autoencoders actually" }, { "end": 1219.02, "start": 1211.16, "text": " naturally output this log probability, sorry, the gradient of the log probability." }, { "end": 1226.8600000000001, "start": 1219.02, "text": " Because optimal denoising theory says that for zero mean and Gaussian noise, the optimal" }, { "end": 1234.6200000000001, "start": 1226.8600000000001, "text": " denoising function, the optimal denoising function for zero mean Gaussian corruption" }, { "end": 1237.46, "start": 1234.6200000000001, "text": " is this thing right here." }, { "end": 1248.06, "start": 1237.46, "text": " So it is, if you give me X and you tell me X has been corrupted by zero mean Gaussian" }, { "end": 1257.74, "start": 1248.06, "text": " noise of size sigma n, then the best, and you simply tell me, give me back the original" }, { "end": 1264.7, "start": 1257.74, "text": " image, the best thing I can do is to take what you gave me and add this gradient of" }, { "end": 1272.38, "start": 1264.7, "text": " the log probability of X if I can, if I have a model of the log probability." }, { "end": 1276.82, "start": 1272.38, "text": " So that's the best thing I can do." }, { "end": 1279.5, "start": 1276.82, "text": " And that's the best denoising function." }, { "end": 1281.5, "start": 1279.5, "text": " And now you have to think a bit in reverse." }, { "end": 1290.5, "start": 1281.5, "text": " If we train a denoising autoencoder, that is going to approximate this best function" }, { "end": 1292.18, "start": 1290.5, "text": " that there is." }, { "end": 1297.8200000000002, "start": 1292.18, "text": " So we know that the best possible denoising function is this, we train a denoising autoencoder," }, { "end": 1303.0600000000002, "start": 1297.8200000000002, "text": " which in the optimal case is going to converge to the best denoising function." }, { "end": 1314.8200000000002, "start": 1303.0600000000002, "text": " So if we then reformulate and we do denoising autoencoder of X minus or X tilde minus X" }, { "end": 1319.22, "start": 1314.8200000000002, "text": " tilde, that is divided by the standard deviation." }, { "end": 1321.74, "start": 1319.22, "text": " Sorry, the variance." }, { "end": 1330.1, "start": 1321.74, "text": " That is going to give us this quantity right here, the gradient of the log probability." }, { "end": 1337.3, "start": 1330.1, "text": " And the gradient of the log probability of X is exactly what we need to run gradient" }, { "end": 1339.42, "start": 1337.3, "text": " descent on our function." }, { "end": 1343.3, "start": 1339.42, "text": " So here is our function again, G plus this regularization." }, { "end": 1347.86, "start": 1343.3, "text": " Now they don't regularize over the entire future, but over these windows." }, { "end": 1352.06, "start": 1347.86, "text": " But in essence it's G plus the log probability of your plan." }, { "end": 1356.02, "start": 1352.06, "text": " If you take the gradient of that, of course you take the gradient of the sum." }, { "end": 1363.6999999999998, "start": 1356.02, "text": " So it's the gradient of G plus the gradient of the log probability with respect to the" }, { "end": 1364.6999999999998, "start": 1363.6999999999998, "text": " actions." 
}, { "end": 1370.7199999999998, "start": 1364.6999999999998, "text": " And here simple application of the chain rule will tell you that you have to propagate through" }, { "end": 1372.3, "start": 1370.7199999999998, "text": " the input, through the X." }, { "end": 1373.9399999999998, "start": 1372.3, "text": " And you need this quantity." }, { "end": 1379.42, "start": 1373.94, "text": " The gradient of the log probability with respect to its inputs." }, { "end": 1390.3400000000001, "start": 1379.42, "text": " Now as we just saw, the optimal denoising autoencoder is going to output that thing." }, { "end": 1396.8200000000002, "start": 1390.3400000000001, "text": " So if we train a denoising autoencoder and we suppose it reaches a good accuracy, then" }, { "end": 1400.74, "start": 1396.8200000000002, "text": " we can obtain this quantity basically for free." }, { "end": 1404.22, "start": 1400.74, "text": " And that's the entire trick here." }, { "end": 1408.02, "start": 1404.22, "text": " So in essence, what does it mean?" }, { "end": 1414.38, "start": 1408.02, "text": " In essence what it means is that if we are in our room again, and we have our partial" }, { "end": 1420.58, "start": 1414.38, "text": " model of the world, let's say we have this model, because we are here and all we've ever" }, { "end": 1429.78, "start": 1420.58, "text": " explored is these things right here." }, { "end": 1434.94, "start": 1429.78, "text": " Now when I go and do my trajectory optimization, and my trajectory optimization wants to go" }, { "end": 1439.44, "start": 1434.94, "text": " here, I simply say, no, I don't know that, I haven't seen that yet." }, { "end": 1445.26, "start": 1439.44, "text": " You can only plan basically within the space where we have already been." }, { "end": 1449.58, "start": 1445.26, "text": " So you can plan like here." }, { "end": 1456.1399999999999, "start": 1449.58, "text": " So here now there is of course, there is going to be some exploration, so some probability" }, { "end": 1459.5, "start": 1456.1399999999999, "text": " that you can go away a bit, but not too much." }, { "end": 1465.18, "start": 1459.5, "text": " So in this case, it would result in the planning only to happen in spaces where we've actually" }, { "end": 1466.18, "start": 1465.18, "text": " been." }, { "end": 1471.54, "start": 1466.18, "text": " So it might go here, and then here, because okay, here we haven't been anywhere." }, { "end": 1478.24, "start": 1471.54, "text": " But then that would lead me to take the first step in this direction, and not in this direction." }, { "end": 1484.1, "start": 1478.24, "text": " And if I take my first step in this first direction, then of course, I'm going to be" }, { "end": 1486.98, "start": 1484.1, "text": " already a bit on the correct path right here." }, { "end": 1490.98, "start": 1486.98, "text": " Because if I take the first step into this direction, then after that, I'm going to have" }, { "end": 1495.22, "start": 1490.98, "text": " to, if once I crash here, I'm going to have to correct really hard." }, { "end": 1499.94, "start": 1495.22, "text": " And that's exactly what's going to give you this super jittery control." }, { "end": 1505.14, "start": 1499.94, "text": " Whereas if you only plan where you've already been, you won't, the probability that you're" }, { "end": 1510.98, "start": 1505.14, "text": " going to have to do like a 180 is going to be much, much lower." }, { "end": 1513.64, "start": 1510.98, "text": " Okay." 
}, { "end": 1519.44, "start": 1513.64, "text": " That seems like that's about it." }, { "end": 1521.64, "start": 1519.44, "text": " Let's look at the experiments." }, { "end": 1526.14, "start": 1521.64, "text": " So they're experiments." }, { "end": 1531.88, "start": 1526.14, "text": " Basically I actually want to go down here to this industry, sorry, not the industrial" }, { "end": 1536.5600000000002, "start": 1531.88, "text": " control process, but to the mojo co experiments." }, { "end": 1539.42, "start": 1536.5600000000002, "text": " So these are kind of continuous control tasks." }, { "end": 1540.42, "start": 1539.42, "text": " You might have seen it." }, { "end": 1551.98, "start": 1540.42, "text": " There's some like one is a, a, the ant here is basically this 3d and is like a blob and" }, { "end": 1555.46, "start": 1551.98, "text": " it has I think four legs and each leg has two joints." }, { "end": 1560.14, "start": 1555.46, "text": " And it just needs to walk as far as possible or reach some sort of goal." }, { "end": 1567, "start": 1560.14, "text": " And the half cheetah is like a 2d thing where I think it's something like this." }, { "end": 1572.6, "start": 1567, "text": " It also has these two legs and it's supposed to walk forward and not fall over." }, { "end": 1579.06, "start": 1572.6, "text": " And you can put force basically on each of the, of the joints here." }, { "end": 1584.72, "start": 1579.06, "text": " So you see that their baselines are Gaussian processes." }, { "end": 1593.7, "start": 1584.72, "text": " And this pets thing is a previous baseline to do, to also do model based control with" }, { "end": 1596.04, "start": 1593.7, "text": " a learned model." }, { "end": 1602.72, "start": 1596.04, "text": " And here they, there's is the main, their main one is the red one." }, { "end": 1606.72, "start": 1602.72, "text": " And as you can see that it goes much faster." }, { "end": 1613.28, "start": 1606.72, "text": " Well it basically outperforms the rest in these high, in these more complicated tasks." }, { "end": 1619.3999999999999, "start": 1613.28, "text": " And then card pole or something like this is, is lower dimensional, easier tasks." }, { "end": 1623.04, "start": 1619.3999999999999, "text": " And you can see that at least it does not hurt." }, { "end": 1630.84, "start": 1623.04, "text": " Now they make, they say here something they don't, they don't show in the plots." }, { "end": 1639.74, "start": 1630.84, "text": " They say that if you let this run for a while, then basically the, their method doesn't make" }, { "end": 1641.8, "start": 1639.74, "text": " any improvement anymore." }, { "end": 1647.82, "start": 1641.8, "text": " Whereas the baseline methods will sort of at some point surpass it." }, { "end": 1653.02, "start": 1647.82, "text": " And the reason that is, and I'm not sure if it's on this exact task, but they mentioned" }, { "end": 1661.2, "start": 1653.02, "text": " that which it's, it's I respect so far is because they say since we only plan where" }, { "end": 1665.82, "start": 1661.2, "text": " we know, where did I draw it?" }, { "end": 1672.6399999999999, "start": 1665.82, "text": " Since we only plan where we know, we basically do much less exploration than others." }, { "end": 1676, "start": 1672.6399999999999, "text": " We kind of stick to what we know when we plan." 
}, { "end": 1680.68, "start": 1676, "text": " So inherently we do less exploration and in our conversation with Hari, he basically said" }, { "end": 1684.48, "start": 1680.68, "text": " this, this is intended." }, { "end": 1690.48, "start": 1684.48, "text": " And the base, the intention is that you want to do your planning where you know, and then" }, { "end": 1694.2, "start": 1690.48, "text": " explicitly add a component that does exploration." }, { "end": 1700.96, "start": 1694.2, "text": " So you have control over, so you can basically say, huh, I, I've never been here sort of." }, { "end": 1707.18, "start": 1700.96, "text": " Now you would be in an exploration phase, you would explicitly go there rather than" }, { "end": 1715.2, "start": 1707.18, "text": " intermingle your planning with your exploration and basically rely on your planning to screw" }, { "end": 1717.5600000000002, "start": 1715.2, "text": " up and you're exploring." }, { "end": 1724.1200000000001, "start": 1717.5600000000002, "text": " Because if your plan, if you're planning never screws up, then you won't explore either," }, { "end": 1725.1200000000001, "start": 1724.1200000000001, "text": " right?" }, { "end": 1728.04, "start": 1725.1200000000001, "text": " Then you will always reach your goal or your planning will always be correct." }, { "end": 1733.0800000000002, "start": 1728.04, "text": " And these other methods that don't have this explicitly, they explore every time their" }, { "end": 1735.2, "start": 1733.0800000000002, "text": " planning screws up and you don't want that." }, { "end": 1738.24, "start": 1735.2, "text": " You want your planning to be as good as possible." }, { "end": 1740.8, "start": 1738.24, "text": " And they do that by sticking to what they know." }, { "end": 1745.14, "start": 1740.8, "text": " And then they the next step, which is not in this paper would be to add an explicit" }, { "end": 1750.72, "start": 1745.14, "text": " exploration policy to reach areas they've never reached before." }, { "end": 1757.48, "start": 1750.72, "text": " Okay, so that's the reason why they don't ultimately reach the best accuracy, but they" }, { "end": 1766.44, "start": 1757.48, "text": " do reach a the initial accuracy much faster than the other tasks, because they plan better." }, { "end": 1774.18, "start": 1766.44, "text": " They have a long discussion here of what still problems are like local minima or the planning" }, { "end": 1780.44, "start": 1774.18, "text": " horizon problem, open loop versus closed loop compounding errors in planning." }, { "end": 1783.18, "start": 1780.44, "text": " But I'm going to leave this out for now." }, { "end": 1785.6, "start": 1783.18, "text": " And I thank you for being here." }, { "end": 1789.24, "start": 1785.6, "text": " I very much invite you to check out the paper for more details." }, { "end": 1791.48, "start": 1789.24, "text": " It's pretty cool, pretty easy to read, actually." }, { "end": 1793.76, "start": 1791.48, "text": " It's very written very well." }, { "end": 1795.8799999999999, "start": 1793.76, "text": " And with that, see you next time." }, { "end": 1816.2, "start": 1795.88, "text": " Bye bye." } ]
wcHQ3IutSJg
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
[News] The NeurIPS Broader Impact Statement
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "neurips", "conference", "ethics", "society", "impact", "statement", "submission", "authors", "accept", "reject", "flag", "review", "double blind" ]
For the first time, all authors submitting to the NeurIPS conference are forced to write a statement about the broader impact of their research on society. The messaging around this and how exactly this can influence the paper acceptance process is highly confusing. OUTLINE: 0:00 - Intro 0:30 - VentureBeat Article 1:35 - Official Communication 9:55 - Special Ethics Reviewers 11:00 - Unofficial Communication 22:55 - Conclusion Sources: https://neurips.cc/Conferences/2020/CallForPapers https://neurips.cc/Conferences/2020/PaperInformation/ReviewerGuidelines https://neurips.cc/Conferences/2020/PaperInformation/NeurIPS-FAQ https://medium.com/@NeurIPSConf/getting-started-with-neurips-2020-e350f9b39c28 https://venturebeat.com/2020/02/24/neurips-requires-ai-researchers-to-account-for-societal-impact-and-financial-conflicts-of-interest/ https://medium.com/@NeurIPSConf/a-note-for-submitting-authors-48cebfebae82 https://medium.com/@BrentH/suggestions-for-writing-neurips-2020-broader-impacts-statements-121da1b765bf https://acm-fca.org/2018/03/29/negativeimpacts/ https://medium.com/@operations_18894/a-guide-to-writing-the-neurips-impact-statement-4293b723f832 https://gdpr-info.eu/ Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
As many of you might be familiar with, the NeurIPS 2020 conference now requires authors to include a section in their submissions discussing the broader impact of their work, including possible societal consequences, both positive and negative. That was announced in the Getting Started with NeurIPS 2020 announcement on Medium by the conference organizers. Shortly after that, in an email to VentureBeat, Michael Littman, the communications chair of NeurIPS 2020, told VentureBeat that these statements will be published with each paper. However, they'll appear only in the camera ready versions of the papers, so they do not compromise the double blind nature of the reviewing process. But he then goes on to say that reviewers' and area chairs' assessment will be done on the basis of technical contributions only. However, if a paper is flagged for potential ethical concerns, then the paper will be sent to another set of reviewers with expertise in ethics and machine learning. The final acceptance of these papers is contingent on the positive assessment by this second set of reviewers as well. So this seems a bit odd. For one, the broader impact statement is only published after the double blind reviewing process is over. But the papers will be assessed based on their ethical and societal impact. So maybe the assessment will have nothing to do with this statement. Let's dive in a bit deeper. In the NeurIPS 2020 FAQ: do I have to complete the broader impact section? The answer is yes. Please include the section. But they say, however, if your work is very theoretical or is general enough that there is no particular application foreseen, then you are free to write that a broader impact discussion is not applicable. So until now, I genuinely feel that the conference organizers view this as some sort of experiment. And it is reasonable: if it doesn't apply to you, which is probably the case for most people, then you can simply write, this does not apply to our research. Can my submission be rejected solely on the basis of the broader impact section? Answer, no. Reviewers will be asked to rate a submission based on the evaluation criteria, not your broader impact section. They will also be asked to check whether the broader impact section is adequately addressed. So reviewers will be able to check the broader impact section, which isn't there. Or is it there, during the double blind reviewing process? But they only have to say whether it's adequately addressed and they will not be able to reject a paper on that basis. Again, they repeat: the authors can simply state this work does not present any foreseeable societal consequences if the authors feel that this is the case. If this is not the case, the conference asks of the authors to discuss along the lines of positive potential impacts and negative potential impacts of the submission. So far, so good. Let's actually look at these evaluation criteria that they ask reviewers to grade the paper by, which, as they say, have nothing to do with the broader impact section. Papers that violate the style, have already been published, or have fatal flaws may be rejected on that basis. Other submissions will be judged on the basis of their technical quality, novelty, potential impact and clarity. But one could still think that the potential impact here is a potential technical impact. It has nothing to do with this broader impact section. That has nothing to do with you being accepted or rejected.
They go on to say submissions will also be considered on ethical grounds, regardless of scientific quality or contribution. And they say a submission may be rejected for ethical considerations. Now, again, one could say that they don't look at your broader impact statement: if they feel that there is an ethical violation, they reject it. But we've already heard before that if the reviewers feel that there is an ethical consideration, which may include the broader impact section, they can flag the paper and that will go to a second set of reviewers. And these reviewers can actually reject your paper. So it seems like there is a bit of a mixed message here. The entire question sort of hinges on who makes the decision and based on what. And one of the questions is what kind of decisions the reviewers make. So where else better to go than the reviewer guidelines? Question 11 to the reviewers: Have the authors adequately addressed the broader impact of their work, including potential negative ethical and societal implications of their work? And: indicate whether you believe the broader impact section was adequate. So it feels like the reviewers are simply to evaluate whether this has been done with enough work, and not necessarily whether they agree with the broader impact section or not. The question here is: if the reviewers think that this has not been done with enough adequacy, but don't necessarily see an ethical problem, or actually do, can it be rejected on the basis that it has not been done adequately? The entire writing here seems like it should. But then also it seems like the reviewers' assessment should have nothing to do with the broader impact section. Question 12 of the reviewer guidelines says: does the submission raise potential ethical concerns? Note that this is a different question from question 11, where you're simply asked to judge adequacy. The reviewer guidelines say, note that your rating should be independent of this. If the AC also shares this concern, dedicated reviewers with expertise at the intersection of ethics and machine learning will further review the submission. And your duty is to flag the papers. It now seems that reviewers are to consider the adequacy of the statement, but not its content, and to forward its content to another set of reviewers, which contradicts that the reviewers don't see the statement or that the statement can't influence the review. And it also contradicts the statement that your paper cannot be rejected based on the broader impact section. Namely, if the second set of reviewers read your broader impact section and find it doesn't address their concerns, they can in fact reject your paper based on that. I guess someone arguing against that would say that these people could also reject it just because they think it's ethically problematic. But if the paper has a broader impact section, I think they are going to look at that with some sort of an open mind and at least be influenced by that. In a note for submitting authors, the conference organizers again released a statement saying that the broader impact statement should include a statement about the foreseeable positive impact as well as potential risks and associated mitigations of the proposed research. Authors can also declare that a broader impact statement is not applicable to their work if they believe this to be the case. And they again repeat: reviewers will also confirm whether the broader impact statement is inadequate. But this assessment will not affect the overall rating.
However, reviewers have the option to flag a paper for ethical concerns, which may relate to the content of the broader impact section. The paper will be sent for additional review to a pool of emergency reviewers with expertise in machine learning and ethics who will provide an assessment solely on the basis of ethical considerations. We expect very few, if any, papers to need such further assessment. So the official communication makes a divide between, on one hand, adequacy and, on the other hand, real ethical concerns. And their message is basically: reviewers will judge the adequacy, flag the ethical concerns, and then special reviewers will be able to reject based on ethical concerns. Now, what's not really clear is the messaging that reviewers should not base their judgment on the broader impact section. But then where does this adequacy rating go into the process of rejecting or accepting a paper? And with them saying there will only be a few, they expect only a few, it seems like it's sort of an experiment that they do this year. Oh, hey, NeurIPS organizer. Well, hello. So you've decided to make everyone include a broader impact statement, but the broader impact statement will only be visible after the reviewing process. Correct. But the reviewers should check its adequacy during the review process. Yes, while they can't see it. Correct. But it should in no way influence their judgment and you can't be rejected because of that. That is correct. But if it is found to be inadequate or problematic, it is sent to a second set of reviewers, which on the basis of the paper and the broader impact statement will decide if the paper is of ethical concern. Yes. And if the paper is of enough ethical concern and the broader impact statement doesn't convince the special reviewers otherwise, they will be able to reject that paper. That is indeed correct. So how are you saying that the broader impact statement has no influence on your score and you can't be rejected because of it? Well, as we said, no one's able to see it until the paper is released. Obviously. Let's talk about these special reviewers. And for that, we broadly have to talk about incentives. Just imagine for a second that this expectation comes to fruition, that no paper is actually flagged, or any paper that is flagged to this committee will come back with a clear no, this is not really an ethical concern or a reason to discard this paper as a scientific contribution. One might almost think that then this program will be abolished in the next year because it's useless. So the more problems the special reviewers find, the more justified their position is. I wonder where that leads. I guess we are very dependent on pretty much every single person among these special reviewers being some sort of super honest person that has no incentives and also no strong opinions on these things, and generally has gone into ML ethics just out of interest and not to actually make an impact. I'm sure that will work out just fine. Now, the official NeurIPS website actually links to a blog post by Brent Hecht, Suggestions for Writing the NeurIPS 2020 Broader Impact Statements. So we can reasonably assume that this is at least in agreement with the organizers of the conference. Brent here says understanding the societal impacts of your work is going to be hard. It is going to take lots of effort to write NeurIPS broader impact statements. Tons of work has already been done for you.
Check out the literature from communities that have studied societal impacts of AI for a long while. Even better, bring a social scientist onto your research team. Remember, though, they don't work for free. Hire them into your company, give them subawards, recruit them as PhD students through interdisciplinary programs. So are you saying the more problems these people find, the more of them will get hired? And again, look at these statements in general. It seems to really be about how much work you are able to put into this. So here's your average PhD student. Now, they pretty much already have to write their papers by themselves or in very, very small teams because they need first authorship. And they have to do all their experiments and they don't have enough resources. And now they're also asked to spend a considerable amount of time not only writing this very hard statement, but also reading up on all the literature that there is to read up on. Or alternatively, if you don't want to do that, well, just hire someone, of course, because budgets and salaries in universities and for PhD students are notoriously loose. We can just hire someone that does that. I don't really think it's possible for single PhD students or research labs at universities to just hire someone or put someone full time on this additional required work that is put onto them. I wonder who will actually be able to put additional people on this, such that they surely end up with beautiful, well-researched broader impact statements to justify even the most ethically concerning research. I'm wondering who, it's just on the tip of my tongue, but I'm going to have to leave that for another time. For people who do more theoretical work, it is going to be more difficult. Wait a minute. I thought the official communication was you're very free to leave it out if you don't think this applies to you. But here it's basically saying it's going to be more work for you. Find something that is both rigorous and practical for your research. And the argument is: to get funding for any theoretical work, someone had to make an argument about positive societal impacts at some point. Not true. Some universities simply get money and academic freedom. If that argument is possible, it is probably also possible to make a rigorous statement about some negative societal impacts. You might be tempted to write boilerplate low information statements. Don't do this. It will undermine the rigor in the rest of your paper. The public will roll its eyes and reviewers may, and often should, call you out. Now, wait a minute. I thought the reviewers are only supposed to judge the adequacy and it should absolutely have no influence on their judgment of the paper. But the underpinning of this text here is basically that if you as a reviewer feel that this hasn't been done adequately, you sort of should let this swap over into your assessment of the rigor and adequacy of the technical contributions in the paper. Because they're kind of the same, right? And they also say that this might spark a conversation, specifically that the author response period will be a decent opportunity to have a bit of a dialogue between author and reviewer on the impact statement. So this basically means that I as a reviewer now have to write in my review something about the broader impact statement and not only judge its adequacy in a special field for it.
And then the author is forced to spend a bunch of their very, very, very valuable author response on rebutting the reviewer's assessment of their broader impact statement. Our proposal's view is that it's not your job as a reviewer to judge submissions for their impact. Rather, you should evaluate the rigor with which they disclose their impacts. So again, it's about putting work into it. If you go to the full proposal behind this blog post, you'll find the following snippets. So there is a list of expected outcomes of introducing such mandatory broader impact statements. Now, have a look at these expected outcomes. We expect that action on the above recommendations will lead to a number of desirable outcomes. And they're all positive outcomes. Now, haven't we been discussing for the last few minutes that it's always important to assess the positive and the negative outcomes of your actions and of your releases? How ironic that none of the organizers of the conference, nor any of these people communicating, were forced to release a broader impact statement discussing the negative consequences that their actions would have on the community and the greater society. They go on to give a list of examples of how you could discuss such positive and negative aspects of technology. One of them is social media. And I would agree that there are ethical considerations if you invent social media. We all know that social media can be some sort of a dopamine feedback loop addiction and have negative consequences in society that are not readily visible. But it goes on. Crowdwork: a researcher who invents a new crowdwork framework likely motivates her work by highlighting the problem the framework solves. But they go on to say that crowdwork also has negative externalities, such as incentivizing very low pay. And the researcher should find ways to engineer her crowdwork framework such that these externalities are structurally mitigated, and/or she might advocate for minimum wage laws to be adapted. So this researcher working out a problem in crowdworking now has to basically solve millennia old problems in structural economics that have thousands of moving parts and no clear consensus on how to solve them. But the best example, and this is the example they actually tell you to look at if your work is more theoretical or you don't really think it has an impact, is the following: storage and computation. Recent advances in storage systems and graphical processing unit processing afford the easy storage of massive amounts of data and the real time computation on these data. This has incentivized corporations to collect every possible data point about their users, save this data indefinitely and strive to monetize this data in new ways. While allowing for impressive new capabilities, this trend also presents tremendous risks to privacy. Researchers working in storage and GPU processing should consider these and other foreseeable potential risks in their papers. They should also enumerate technological and policy means by which these risks might be mitigated, e.g. technologies to automate General Data Protection Regulation requirements, and improvements to GDPR-like policies. That is absolutely mad. So here you are making a GPU chip more powerful and you're asked to think ahead about the fact that this can be used to mine data. And not only that, now you're also required to propose improvements to GDPR-like policies.
The GDPR, only an 88 page, very fine print legal document that, in addition to all the literature about AI governance, our poor PhD student is now also required to read, understand and be able to improve. How far does this chain of causality go? How far ahead do you have to think? This gets ridiculous. It's 200,000 BC and Nuno in his cave just invented fire. Well, fire can be used to cook food, can be used to have less disease, can be used to settle down, expand civilization, build educational facilities, build up a culture, a scientific method, enable massive progress, industrialization, general improvement in health, wealth, education and happiness of society, which ultimately leads to some people building GPUs and saving your data and analyzing all your things to serve you ads of better kitty stickers. How could Nuno in his cave do this to us? Where is his broader impact statement about the invention of fire for the future data collection algorithms on GPUs? Look, I'm not saying that you should not consider the downstream effects of your inventions. Of course you should. But at some point it gets ridiculous for most of the work handed into a conference like NeurIPS. Either the downstream effects are so far away that it is almost impossible to foresee them, or, as with any technology, you can use it for good and for bad. And it is going to be with the application of this technology, and not its invention, where the good and the bad come in. And what most people are going to do is simply come up with things that mean absolutely nothing and generally make not a lot of difference. Meanwhile, it gives a big advantage to big institutions that can spend a lot of time and effort on crafting very rigorous, adequate statements. Another release, called A Guide to Writing the NeurIPS Impact Statement, is not linked by NeurIPS, but as they say, was written in communication with some of the organizers of the conference. So it's reasonable to assume they also largely agree with these positions here. It says you should discuss, read and reflect: time permitting, impact assessment will benefit from broad intellectual reflection, discuss potential impacts, follow public discussion, read case studies and read the scholarly literature. On tech governance, of course, time permitting. But then again, if it's not rigorous enough, a reviewer might be getting the idea that the rest of your paper isn't rigorous enough. So maybe time must permit for this one. And they again say, think about impacts even for theoretical work. So the official communication always says that if you don't feel this applies to you, you're very free to write: this doesn't apply to me. But the unofficial communication says if this doesn't apply, you're doing something wrong. And by the way, we'll evaluate the rest of your paper based on the amount of work you put into that statement. Ultimately, these statements are just going to boil down to: you can do good and bad things with any technology, as is visible in this example they give here. Pluribus, a superhuman AI for multiplayer poker. They say they intentionally choose to broaden the focus of their broader impact assessment: depending on who can use this scientific advance, such as criminals or well motivated citizens, this technology may be socially harmful or beneficial. If access to this capability is mostly available to the wealthy, it could plausibly promote concentration of wealth. And further, on the other side, increased skill could increase total welfare.
Gee, if that doesn't apply to every single technology ever, I don't know. Again, my general assessment of this is not that it is absolutely wrong or useless to do this. It is just shifting the balance a bit more onto large institutions who can actually afford to put a lot of time and work into crafting beautiful statements. And in general, I don't think it's that big of a deal, but I also don't think it's going to help very much to just force everyone to do this. I guess we'll see how it turns out. In this VentureBeat article, they link to someone named Joseph Redmon saying, I stopped doing CV research because I saw the impact my work was having. I love the work, but the military applications and privacy concerns eventually became impossible to ignore, which I respect a lot. But I would ask, did Joseph Redmon realize this after being forced to write a broader impact statement or at some other point? That was my two cents. If you like videos like this and paper analysis and other things, then subscribe, like wherever these buttons are, share it with your friends and see you next time.
[ { "end": 18, "start": 0, "text": " As many of you might be familiar with, the NURIPS 2020 conference now requires authors to include a section in their submissions discussing the broader impact of their work, including possible societal consequences, both positive and negative." }, { "end": 25, "start": 18, "text": " That was announced in the Getting Started with NURIPS 2020 announcement on Medium by the conference organizers." }, { "end": 38, "start": 25, "text": " Shortly after that, in an email to VentureBeat, Michael Lippman, the communications chair of NURIPS 2020, told VentureBeat that these statements will be published with each paper." }, { "end": 48, "start": 38, "text": " However, they'll appear only in the camera ready versions of the papers, so they do not compromise the double blind nature of the reviewing process." }, { "end": 56, "start": 48, "text": " But then goes on to say, reviewers and area chairs assessment will be done on the basis of technical contributions only." }, { "end": 67, "start": 56, "text": " However, if a paper is flagged for potential ethical concerns, then the paper will be sent to another set of reviewers with expertise in ethics and machine learning." }, { "end": 76, "start": 67, "text": " The final acceptance of these papers is contingent on the positive assessment by these second set of reviewers as well." }, { "end": 85, "start": 76, "text": " So this seems a bit odd. For one, the broader impact statement is only published after the double blind reviewing process is over." }, { "end": 89, "start": 85, "text": " But the papers will be assessed based on their ethical and societal impact." }, { "end": 97, "start": 89, "text": " So maybe the assessment will have nothing to do with this statement. Let's dive in a bit deeper." }, { "end": 103, "start": 97, "text": " In the NURIPS 2020 FAQ, do I have to complete the broader impact section?" }, { "end": 107, "start": 103, "text": " The answer is yes. Please include the section." }, { "end": 120, "start": 107, "text": " But they say, however, if your work is very theoretical or is general enough that there is no particular application foreseen, then you are free to write a broader impact discussion is not applicable." }, { "end": 127, "start": 120, "text": " So until now, I genuinely feel that the conference organizers view this as some sort of experiment." }, { "end": 137, "start": 127, "text": " And it is reasonable if it doesn't apply to you, which is probably the case for most people, then you can simply write, this does not apply to our research." }, { "end": 144, "start": 137, "text": " Can my submission be rejected solely on the basis of the broader impact section? Answer, no." }, { "end": 151, "start": 144, "text": " Reviewers will be asked to rate a submission based on the evaluation criteria, not your broader impact section." }, { "end": 157, "start": 151, "text": " They will also be asked to check whether the broader impact section is adequately addressed." }, { "end": 167, "start": 157, "text": " So reviewers will be able to check the broader impact section, which isn't there or is it there during the double blind reviewing process." }, { "end": 176, "start": 167, "text": " But they only have to say whether it's adequately addressed and they will not be able to reject a paper on that basis." }, { "end": 186, "start": 176, "text": " Again, they repeat. The authors can simply state this work does not present any foreseeable societal consequences if the authors feel that this is the case." 
}, { "end": 198, "start": 186, "text": " If this is not the case, the conference asks of the authors to discuss along the lines of positive potential impacts and negative potential impacts of the submission." }, { "end": 210, "start": 198, "text": " So far, so good. Let's actually look at these evaluation criteria that they ask reviewers to grade the paper by, which, as they say, has nothing to do with the broader impact section." }, { "end": 217, "start": 210, "text": " Papers that violate the style have already been published or have fatal flaws may be rejected on that basis." }, { "end": 226, "start": 217, "text": " Other submissions will be judged on the basis of their technical quality, novelty, potential impact and clarity." }, { "end": 231, "start": 226, "text": " But one could still think that the potential impact here is a potential technical impact." }, { "end": 239, "start": 231, "text": " It has nothing to do with this broader impact section. That has nothing to do with you being accepted or rejected." }, { "end": 247, "start": 239, "text": " They go on to say submissions will also be considered on ethical grounds, regardless of scientific quality or contribution." }, { "end": 253, "start": 247, "text": " And they say a submission may be rejected for ethical considerations." }, { "end": 262, "start": 253, "text": " Now, again, one could say that they don't look at your broader impact statement if they feel that there is an ethical violation, they reject it." }, { "end": 274, "start": 262, "text": " But before we've already heard that if the reviewers feel that there is an ethical consideration that may include the broader impact section, they can flag the paper and that will go to a set of second reviewers." }, { "end": 281, "start": 274, "text": " And these reviewers can actually reject your paper. So it seems like there is a bit of a mixed message here." }, { "end": 286, "start": 281, "text": " The entire question sort of hinges on who makes the decision and based on what." }, { "end": 291, "start": 286, "text": " And one of the questions is what kind of decisions do the reviewers make?" }, { "end": 295, "start": 291, "text": " So where else better to go than the reviewer guidelines?" }, { "end": 297, "start": 295, "text": " Question 11 to the reviewers." }, { "end": 307, "start": 297, "text": " Have the authors adequately address the broader impact of their work, including potential negative ethical and societal implications of their work?" }, { "end": 311, "start": 307, "text": " And indicate whether you believe the broader impact section was adequate." }, { "end": 322, "start": 311, "text": " So it feels like that the reviewers are simply to evaluate whether this has done with enough work and not necessarily whether they agree with the broader impact section or not." }, { "end": 337, "start": 322, "text": " The question here is if the reviewers think that this has not been done with enough adequacy, but don't necessarily see an ethical problem or actually do, can it be rejected on the basis that it has not been done adequately?" }, { "end": 340, "start": 337, "text": " The entire writing here seems like it should." }, { "end": 346, "start": 340, "text": " But then also it seems like the reviewers assessment should have nothing to do with the broader impact section." }, { "end": 353, "start": 346, "text": " Question 12 of the reviewer guidelines says, does the submission raise potential ethical concerns?" 
}, { "end": 360, "start": 353, "text": " Note that this is a different question from question 11, where you're simply asked to judge adequacy." }, { "end": 365, "start": 360, "text": " The reviewer guidelines say, note that your rating should be independent of this." }, { "end": 376, "start": 365, "text": " If the AC also shares this concern, dedicated reviewers with expertise at the intersection of ethics and machine learning will further review the submission." }, { "end": 379, "start": 376, "text": " And your duty is to flag the papers." }, { "end": 394, "start": 379, "text": " This now seems that reviewers are to consider the adequacy of the statement, but not its content and forward its content to another section, which contradicts that the reviewers don't see the statement or that the statement can't influence the review." }, { "end": 400, "start": 394, "text": " And it also contradicts the statement that your paper cannot be rejected based on the broader impact section." }, { "end": 411, "start": 400, "text": " Namely, if the second set of reviewers read your broader impact section, find it doesn't address their concerns, they can in fact reject your paper based on that." }, { "end": 419, "start": 411, "text": " I guess someone arguing against that would say that these people could also reject it just because they think it's ethically problematic." }, { "end": 429, "start": 419, "text": " But if the paper has a broader impact section, I think they are going to look at that with some sort of an open mind and at least be influenced by that." }, { "end": 446, "start": 429, "text": " In a note for submitting authors, the conference organizers again released a statement saying that the broader impact statement should include a statement about the foreseeable positive impact as well as potential risks associated mitigations of the proposed research." }, { "end": 454, "start": 446, "text": " Authors can also declare that a broader impact statement is not applicable to their work if they believe this to be the case." }, { "end": 460, "start": 454, "text": " And they again repeat reviewers will also confirm whether the broader impact statement is inadequate." }, { "end": 464, "start": 460, "text": " But this assessment will not affect the overall rating." }, { "end": 472, "start": 464, "text": " However, reviewers have the option to flag a paper for ethical concerns, which may relate to the content of the broader impact section." }, { "end": 485, "start": 472, "text": " The paper will be sent for additional review to a pool of emergency reviewers with expertise in machine learning and ethics who will provide an assessment solely on the basis of ethical considerations." }, { "end": 489, "start": 485, "text": " We expect very few, if any, papers to need such further assessment." }, { "end": 497, "start": 489, "text": " So the official communication makes a divide between on one hand, adequacy and on the other hand, real ethical concerns." }, { "end": 508, "start": 497, "text": " And their message is basically reviewers will judge the adequacy, flag the ethical concerns, and then special reviewers will be able to reject based on ethical concerns." }, { "end": 515, "start": 508, "text": " Now, what's not really clear is the messaging that reviewers should not base their judgment on the broader impact section." }, { "end": 522, "start": 515, "text": " But then where does this adequacy rating go into the process of rejecting or accepting a paper?" 
}, { "end": 530, "start": 522, "text": " And with them saying there will only be a few, they expect only a few, it seems like it's sort of an experiment that they do this year." }, { "end": 533, "start": 530, "text": " Oh, hey, NURIBS organizer. Well, hello." }, { "end": 540, "start": 533, "text": " So you've decided to make everyone include a broader impact statement, but the broader impact statement will only be visible after the reviewing process." }, { "end": 545, "start": 540, "text": " Correct. But the reviewers should check its adequacy during the review process." }, { "end": 548, "start": 545, "text": " Yes, while they can't see it." }, { "end": 553, "start": 548, "text": " Correct. But it should in no way influence their judgment and you can't be rejected because of that." }, { "end": 559, "start": 553, "text": " That is correct. But if it is found to be inadequate or problematic, it is sent to a second set of reviewers," }, { "end": 568, "start": 559, "text": " which on the basis of the paper and the broader impact statement will decide if the paper is of ethical concern." }, { "end": 575, "start": 568, "text": " Yes. And if the paper is of enough ethical concern and the broader impact statement doesn't convince the special reviewers otherwise," }, { "end": 580, "start": 575, "text": " they will be able to reject that paper. That is indeed correct." }, { "end": 587, "start": 580, "text": " So how are you saying that the broader impact statement has no influence on your score and you can't be rejected because of it?" }, { "end": 592, "start": 587, "text": " Well, as we said, no one's able to see it until the paper is released. Obviously." }, { "end": 599, "start": 592, "text": " Let's talk about these special reviewers. And for that, we broadly have to talk about incentives." }, { "end": 608, "start": 599, "text": " Just imagine for a second that this expectation comes to fruit, that no paper is actually flagged and or any paper that is flagged to this committee" }, { "end": 616, "start": 608, "text": " will come back with a clear, no, this is not really an ethical concern or a reason to discard this paper as a scientific contribution." }, { "end": 623, "start": 616, "text": " One might almost think that then this program will be abolished in the next year because it's useless." }, { "end": 630, "start": 623, "text": " So the more problems the special reviewers find, the more justified their position is." }, { "end": 641, "start": 630, "text": " I wonder where that leads. I guess we are very dependent on pretty much every single person in these special reviewers being some sort of super honest person" }, { "end": 652, "start": 641, "text": " that has no incentives and also no strong opinions on these things and and generally has gone into this ML ethics just out of interest and not to actually make an impact." }, { "end": 655, "start": 652, "text": " I'm sure that will work out just fine." }, { "end": 665, "start": 655, "text": " Now, the official NURiBS website actually links to a blog post of Brent Hecht, suggestions for writing the NURiBS 2020 broader impact statements." }, { "end": 671, "start": 665, "text": " So we can reasonably assume that this is at least in agreement with the organizers of the conference." }, { "end": 677, "start": 671, "text": " Brent here says understanding the societal impacts of your work is going to be hard." }, { "end": 685, "start": 677, "text": " It is going to take lots of effort to write NURiBS broader impact statements. 
Tons of work has already been done for you." }, { "end": 691, "start": 685, "text": " Check out the literature from communities that have studied societal impacts of AI for a long while." }, { "end": 698, "start": 691, "text": " Even better, bring a social scientist onto your research team. Remember, though, they don't work for free." }, { "end": 706, "start": 698, "text": " Hire them into your company, give them subawards, recruit them as PhD students through interdisciplinary programs." }, { "end": 714, "start": 706, "text": " So are you saying the more problems these people find, the more of them will get hired?" }, { "end": 720, "start": 714, "text": " And again, look at these statements in general. It seems to really be about how much work you are able to put into this." }, { "end": 723, "start": 720, "text": " So here's your average PhD student." }, { "end": 731, "start": 723, "text": " Now, they pretty much already have to write their papers by themselves or in very, very small teams because they need first authorship." }, { "end": 735, "start": 731, "text": " And they have to do all their experiments and they don't have enough resources." }, { "end": 741, "start": 735, "text": " And now they're also asked to spend a considerable amount of time not only writing this very hard statement," }, { "end": 746, "start": 741, "text": " but also reading up on all the literature that there is to read up on." }, { "end": 756, "start": 746, "text": " Or alternatively, if you don't want to do that, well, just hire someone, of course, because budgets and salaries in universities and for PhD students are notoriously loose." }, { "end": 758, "start": 756, "text": " We can just hire someone that does that." }, { "end": 772, "start": 758, "text": " I don't really think it's possible for single PhD students or research labs at universities to just hire someone or put someone full time on this additional required work that is put onto them." }, { "end": 787, "start": 772, "text": " I wonder who will actually be able to put additional people on this, such that they surely end up with beautiful, well-researched broader impact statements to justify even the most ethically concerning research." }, { "end": 797, "start": 787, "text": " I'm wondering... it's just on the tip of my tongue, but I'm going to have to leave that for another time." }, { "end": 802, "start": 797, "text": " For people who do more theoretical work, it is going to be more difficult." }, { "end": 809, "start": 802, "text": " Wait a minute. I thought the official communication was you're very free to leave it away if you don't think this applies to you." }, { "end": 813, "start": 809, "text": " But here it's basically saying it's going to be more work for you." }, { "end": 817, "start": 813, "text": " Find something that is both rigorous and practical for your research." }, { "end": 827, "start": 817, "text": " And the argument is: to get funding for any theoretical work, someone had to make an argument about positive societal impacts at some point." }, { "end": 832, "start": 827, "text": " Not true. Some universities simply get money and academic freedom." }, { "end": 840, "start": 832, "text": " If that argument is possible, it is probably also possible to make a rigorous statement about some negative societal impacts." }, { "end": 844, "start": 840, "text": " You might be tempted to write boilerplate, low-information statements." }, { "end": 850, "start": 844, "text": " Don't do this. 
It will undermine the rigor in the rest of your paper." }, { "end": 856, "start": 850, "text": " The public will roll its eyes and reviewers may, and often should, call you out." }, { "end": 867, "start": 856, "text": " Now, wait a minute. I thought the reviewers were only supposed to judge the adequacy, and that it should absolutely have no influence on their judgment of the paper." }, { "end": 883, "start": 867, "text": " But the underpinning of this text here is basically that if you as a reviewer feel that this hasn't been done adequately, you sort of should let this spill over into your assessment of the rigor and adequacy of the technical contributions in the paper." }, { "end": 885, "start": 883, "text": " Because they're kind of the same, right?" }, { "end": 898, "start": 885, "text": " And they also say that this might spark a conversation. Specifically, the author response period will be a decent opportunity to have a bit of a dialogue between author and reviewer on the impact statement." }, { "end": 909, "start": 898, "text": " So this basically means that I as a reviewer now have to write in my review something about the broader impact statement, and not only judge its adequacy in a special field for it." }, { "end": 921, "start": 909, "text": " And then the author is forced to spend a bunch of their very, very, very valuable author response on rebutting the reviewers' assessment of their broader impact statement." }, { "end": 928, "start": 921, "text": " Our proposal's view is that it's not your job as a reviewer to judge submissions for their impact." }, { "end": 933, "start": 928, "text": " Rather, you should evaluate the rigor with which they disclose their impacts." }, { "end": 936, "start": 933, "text": " So again, it's about putting work into it." }, { "end": 942, "start": 936, "text": " If you go to the full proposal behind this blog post, you'll find the following snippets." }, { "end": 950, "start": 942, "text": " So there is a list of expected outcomes of introducing such mandatory broader impact statements." }, { "end": 953, "start": 950, "text": " Now, have a look at these expected outcomes." }, { "end": 960, "start": 953, "text": " We expect that action on the above recommendations will lead to a number of desirable outcomes." }, { "end": 963, "start": 960, "text": " And they're all positive outcomes." }, { "end": 974, "start": 963, "text": " Now, haven't we been discussing for the last few minutes that it's always important to assess the positive and the negative outcomes of your actions and of your releases?" }, { "end": 990, "start": 974, "text": " How ironic that none of the organizers of the conference, nor any of these people communicating, were forced to release a broader impact statement discussing the negative consequences that their actions would have on the community and the greater society." }, { "end": 997, "start": 990, "text": " They go on to give a list of examples of how you could discuss such positive and negative aspects of technology." }, { "end": 1000, "start": 997, "text": " One of them is social media." }, { "end": 1005, "start": 1000, "text": " And I would agree that there are ethical considerations if you invent social media." }, { "end": 1014, "start": 1005, "text": " We all know that social media can be some sort of a dopamine feedback loop addiction and have negative consequences in society that are not readily visible." }, { "end": 1024, "start": 1014, "text": " But it goes on. 
Crowdwork, a researcher who invents a new crowdwork framework, likely motivates her work by highlighting the problem the framework solves." }, { "end": 1031, "start": 1024, "text": " But they go on to say that crowdwork also has negative externalities, such as incentivizing very low pay." }, { "end": 1044, "start": 1031, "text": " And the researcher should find ways to engineer her crowdwork framework such that these externalities are structurally mitigated, and/or she might advocate for minimum wage laws to be adapted." }, { "end": 1059, "start": 1044, "text": " So this researcher working on a problem in crowdworking now has to basically solve millennia-old problems in structural economics that have thousands of moving parts and no clear consensus on how to solve them." }, { "end": 1072, "start": 1059, "text": " But the best example, and this is the example they actually tell you to look at if your work is more theoretical or you don't really think it has an impact, is the following: storage and computation." }, { "end": 1083, "start": 1072, "text": " Recent advances in storage systems and graphical processing unit processing afford the easy storage of massive amounts of data and real-time computation on these data." }, { "end": 1094, "start": 1083, "text": " This has incentivized corporations to collect every possible data point about their users, save this data indefinitely and strive to monetize this data in new ways." }, { "end": 1101, "start": 1094, "text": " While allowing for impressive new capabilities, this trend also presents tremendous risks to privacy." }, { "end": 1109, "start": 1101, "text": " Researchers working in storage and GPU processing should consider these and other foreseeable potential risks in their papers." }, { "end": 1123, "start": 1109, "text": " They should also enumerate technological and policy means by which these risks might be mitigated, e.g. technologies to automate General Data Protection Regulation-required capabilities and improvements to GDPR-like policies." }, { "end": 1136, "start": 1123, "text": " That is absolutely mad. So here you are making a GPU chip more powerful and you're asked to think ahead about the fact that this can be used to mine data." }, { "end": 1142, "start": 1136, "text": " And not only that, now you're also required to propose improvements to GDPR-like policies." }, { "end": 1157, "start": 1142, "text": " The GDPR, only an 88-page, very fine-print legal document that, in addition to all the literature about AI governance, our poor PhD student is now also required to read, understand and be able to improve." }, { "end": 1163, "start": 1157, "text": " How long does this chain of causality go? How far ahead do you have to think? This gets ridiculous." }, { "end": 1170, "start": 1163, "text": " It's 200,000 BC and Nuno in his cave just invented fire." }, { "end": 1187, "start": 1170, "text": " Well, fire can be used to cook food, can be used to have less disease, can be used to settle down, expand civilization, build educational facilities, build up a culture, a scientific method, enable massive progress, industrialization," }, { "end": 1203, "start": 1187, "text": " general improvement in health, wealth, education and happiness of society, which ultimately leads to some people building GPUs and saving your data and analyzing all your things to serve you ads for better kitty stickers." }, { "end": 1215, "start": 1203, "text": " How could Nuno in his cave do this to us? 
Where is his broader impact statement about the invention of fire for the future data collection algorithms on GPUs?" }, { "end": 1222, "start": 1215, "text": " Look, I'm not saying that you should not consider the downstream effects of your inventions. Of course you should." }, { "end": 1227, "start": 1222, "text": " But at some point it gets ridiculous for most of the work handed in to a conference like NeurIPS." }, { "end": 1238, "start": 1227, "text": " Either the downstream effects are so far away that it is almost impossible to foresee them, or, as with any technology, you can use it for good and for bad." }, { "end": 1246, "start": 1238, "text": " And it is going to be with the application of this technology, and not its invention, where the good and the bad come in." }, { "end": 1253, "start": 1246, "text": " And what most people are going to do is simply come up with things that mean absolutely nothing and generally make not a lot of difference." }, { "end": 1263, "start": 1253, "text": " While it gives a big advantage to big institutions that can spend a lot of time and effort on crafting very rigorous, adequate statements." }, { "end": 1275, "start": 1263, "text": " Another release, called A Guide to Writing the NeurIPS Impact Statement, is not linked by NeurIPS, but as they say was written in communication with some of the organizers of the conference." }, { "end": 1279, "start": 1275, "text": " So it's reasonable to assume they also largely agree with these positions here." }, { "end": 1292, "start": 1279, "text": " It says you should discuss, read and reflect. Time permitting, impact assessment will benefit from broad intellectual reflection: discuss potential impacts, follow public discussion, read case studies and read the scholarly literature." }, { "end": 1295, "start": 1292, "text": " On tech governance, of course, time permitting." }, { "end": 1303, "start": 1295, "text": " But then again, if it's not rigorous enough, a reviewer might get the idea that the rest of your paper isn't rigorous enough." }, { "end": 1306, "start": 1303, "text": " So maybe time must permit for this one." }, { "end": 1310, "start": 1306, "text": " And they again say, think about impacts even for theoretical work." }, { "end": 1316, "start": 1310, "text": " So the official communication always says that if you don't feel this applies to you, you're very free to write:" }, { "end": 1317, "start": 1316, "text": " this doesn't apply to me." }, { "end": 1322, "start": 1317, "text": " But the unofficial communication says if this doesn't apply, you're doing something wrong." }, { "end": 1328, "start": 1322, "text": " And by the way, we'll evaluate the rest of your paper based on the amount of work you put into that statement." }, { "end": 1337, "start": 1328, "text": " Ultimately, these statements are just going to boil down to: you can do good and bad things with any technology, as is visible in this example they give here." }, { "end": 1353, "start": 1337, "text": " Pluribus, a superhuman AI for multiplayer poker: they say they intentionally choose to broaden the focus of their broader impact assessment. Depending on who can use this scientific advance, such as criminals or well-motivated citizens," }, { "end": 1357, "start": 1353, "text": " this technology may be socially harmful or beneficial." }, { "end": 1363, "start": 1357, "text": " If access to this capability is mostly available to the wealthy, it could plausibly promote concentration of wealth." 
}, { "end": 1369, "start": 1363, "text": " And further on the other side, increased skill could increase total welfare." }, { "end": 1374, "start": 1369, "text": " Gee, if that doesn't apply to every single technology ever, I don't know." }, { "end": 1381, "start": 1374, "text": " Again, my general assessment of this is not that it is absolutely wrong to do this or very useless." }, { "end": 1389, "start": 1381, "text": " It is just shifting the balance a bit more on to large institutions who can actually afford to spend a lot of time and work into crafting beautiful statements." }, { "end": 1398, "start": 1389, "text": " And in general, I don't think it's that big of a deal, but I also don't think it's going to help very much to just force everyone to do this." }, { "end": 1400, "start": 1398, "text": " I guess we'll see how it turns out." }, { "end": 1409, "start": 1400, "text": " In this VentureBeat article, they link someone named Joe Redmond saying, I stopped doing CV research because I saw the impact my work was having." }, { "end": 1416, "start": 1409, "text": " I love the work, but the military applications and privacy concerns eventually became impossible to ignore, which I respect a lot." }, { "end": 1424, "start": 1416, "text": " But I would ask, did Joe Redmond realize this after being forced to write a broader impact statement or at some other point?" }, { "end": 1426, "start": 1424, "text": " That was my two cents." }, { "end": 1448, "start": 1426, "text": " If you like videos like this and paper analysis and other things, then subscribe, like wherever these buttons are, share it with your friends and see you next time." } ]
IIebBjbBevs
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
When BERT Plays the Lottery, All Tickets Are Winning (Paper Explained)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "bert", "nlp", "lottery ticket", "good", "bad", "winning", "pruning", "weights", "attention", "transformer", "heads", "multi-head", "fine-tuning", "glue", "benchmark" ]
BERT is a giant model. Turns out you can prune away many of its components and it still works. This paper analyzes BERT pruning in light of the Lottery Ticket Hypothesis and finds that even the "bad" lottery tickets can be fine-tuned to good accuracy. OUTLINE: 0:00 - Overview 1:20 - BERT 3:20 - Lottery Ticket Hypothesis 13:00 - Paper Abstract 18:00 - Pruning BERT 23:00 - Experiments 50:00 - Conclusion https://arxiv.org/abs/2005.00561 ML Street Talk Channel: https://www.youtube.com/channel/UCMLtBahI5DMrt0NPvDSoIRQ Abstract: Much of the recent success in NLP is due to the large Transformer-based models such as BERT (Devlin et al, 2019). However, these models have been shown to be reducible to a smaller number of self-attention heads and layers. We consider this phenomenon from the perspective of the lottery ticket hypothesis. For fine-tuned BERT, we show that (a) it is possible to find a subnetwork of elements that achieves performance comparable with that of the full model, and (b) similarly-sized subnetworks sampled from the rest of the model perform worse. However, the "bad" subnetworks can be fine-tuned separately to achieve only slightly worse performance than the "good" ones, indicating that most weights in the pre-trained BERT are potentially useful. We also show that the "good" subnetworks vary considerably across GLUE tasks, opening up the possibilities to learn what knowledge BERT actually uses at inference time. Authors: Sai Prasanna, Anna Rogers, Anna Rumshisky Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Hi there. Today we're looking at When BERT Plays the Lottery, All Tickets Are Winning by Sai Prasanna, Anna Rogers and Anna Rumshisky. So a high level overview of this paper is the following. The paper basically looks at BERT in terms of the lottery ticket hypothesis and it says that if you fine tune BERT on different downstream tasks, then the lottery ticket winners you're going to find are different between the tasks. And the claim that all tickets are winning refers to the fact that if you remove the winning tickets, then you can still train the rest to relatively good performance. Therefore, all tickets are winning, not just the sub network. So that was the high level overview, for those of you who just want the gist, so you can decide whether you want to continue watching this video. If you do like videos like this, consider sharing, liking, subscribing, telling your mother, father, brother and friends about it. Alright, let's dive in. So BERT, it is a language model. Basically, if you don't know what BERT is, I've done a video on BERT, but really quickly, what you can do with BERT is you can take a sentence, something like hello there, and you can put it through this multi-layer neural network. And what you'll get out is basically an embedding of that sentence, so a vector embedding of it, to make it really easy. And what is usually done is this is pre-trained on a task called masked language modeling. This is unsupervised training. And then you take this embedding and you fine tune. Basically, you put on a classifier head. Say, let's take sentiment classification. So you have two output classes, and you want to say, is this sentence I put in of positive or negative sentiment? So you would train this classifier by basically taking this part from the pre-trained masked language modeling and then training the part here that does the sentiment classification. You would sort of add that on top and then fine tune the entire network to solve this task. So that is basically BERT fine tuning on different tasks. And there is this benchmark called GLUE, which has a number of tasks. In this case, I think they look at nine tasks of GLUE. One of them, an example, is the sentiment classification. And BERT basically gets a score for each one, and thereby you can sort of estimate how good your language model is by how well it is performing on each of those individual tasks. But the notable difference here to, let's say, a computer vision model like an ImageNet classifier is the fact that, first, it is pre-trained, right? This part here is pre-trained on a large corpus. And second, there are different downstream tasks that you fine tune on. So the second part that is important is the lottery ticket hypothesis. I've also done a video on the lottery ticket hypothesis, but very quickly, the lottery ticket hypothesis is the following. Let's say you have an image classifier, and I have a bunch of layers in my neural network. I'm going to draw them like this. And at the end, right, I can classify it into like 10 different or a thousand different classes, whatever. And the input here is an image. So my neural network is going to have weights. So every one of these neurons is connected to each other. Now this can be a convolutional network or an MLP. So all is connected to all, and as well here, right, everything is connected to pretty much everything. So we know, first of all, we know that we can train these big networks to relatively good accuracy.
And then second of all — so first, we can train them. Second, we know we can prune them after training. What pruning means is the fact that after I have trained such a thing, I can then go and figure out which of these connections that I have learned are the important ones. And maybe I'll say, ah, these here, actually these five, I don't need more than those. Let's actually connect them to the end. These seven or so — I don't need all of the other ones, I just need those, and I can pretty much get the same accuracy as the full network. Now, the important part here is that you can only do pruning after you've trained a network. If you try to prune at the beginning, it doesn't work. So what the lottery ticket hypothesis says is basically the following. How does training work? First you have your parameters. Let's lay them out as a list. So each of these weights is an entry in the list here. First, you initialize these randomly, then through training you get to your trained state, right? Each of them gets to its trained value. And in the trained state, you can select the ones you think are important. The lottery ticket hypothesis says: if I take those that are important and basically go back to the beginning, like here, here, and here, and I roll them back to the state that they were in when they were initialized — so I put the same random number there that I got at initialization — I can then make a network where I only have those, I can train that network, and I can get a good accuracy. This wasn't possible in the pruning framework, because we said we can only prune after training, because only then do we know which ones are the important ones. In fact, the lottery ticket paper shows that you can train the smaller neural network from the beginning, but the catch of course is you have to know which ones those are, and you have to know what value to set them to at the beginning. And you only know that after you've trained the full network. But still, it tells you that you don't need all these connections for training. You basically only need so many connections such that somewhere in there, there are going to be the good ones, right? And if you knew the good ones from the beginning, you could just train those, and only train the smaller sub network. So that's the lottery ticket hypothesis. Naturally these connections here, this small network, is called a winning ticket, because if you knew what it was, you could basically train a much smaller network and reach the same accuracy.
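To make that concrete, here is a minimal sketch of the lottery ticket procedure in plain NumPy. Everything concrete in it — the layer size, the 90% sparsity, and the stand-in for training — is invented for illustration; the real procedure trains a full network with SGD and typically prunes iteratively.

```python
import numpy as np

rng = np.random.default_rng(0)

# 1) Random initialization -- remember it, we'll rewind to it later.
w_init = rng.normal(size=(256, 256))

# 2) Stand-in for full training (in reality: many SGD steps).
w_trained = w_init + rng.normal(scale=0.5, size=w_init.shape)

# 3) Magnitude pruning: keep the top-k weights of the *trained* network.
sparsity = 0.9                                   # prune 90% of the weights
k = int(w_trained.size * (1 - sparsity))
threshold = np.sort(np.abs(w_trained).ravel())[-k]
mask = (np.abs(w_trained) >= threshold).astype(np.float32)

# 4) The winning ticket: surviving connections rewound to their original
#    random values. The hypothesis says training only these from scratch
#    should still reach good accuracy.
w_ticket = w_init * mask
print(f"kept {mask.mean():.1%} of the weights")
```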
So this paper looks at BERT in terms of this lottery ticket hypothesis. Now it's a bit more complicated than in these feed forward networks, because BERT is not a feed forward network. BERT is a transformer. So what does that mean? A transformer consists of many layers, and each layer — let's expand the layer here, let's go over there, I need some space. So again, we have our layers of BERT, and the signal goes like this. Each layer consists of two things. First of all, of many attention heads, as they're called. I'm going to draw these as blocks right here — let's call them four attention heads. Individual attention heads are all parameterized by individual matrices. And then on top of that, there is an MLP, a multi-layer perceptron. This is basically a feed forward layer, a residual one actually, so there is a skip connection right here. And these are the attention heads. Okay. And then the next layer would again have the same structure, four of these and then one of these, and so on, with the skip connection. All right. So the pruning in BERT is different than pruning in the feed forward or convolutional layer that we looked at. Pruning in BERT, at least what this paper looks at, is pruning either an entire attention head, like this — so leaving the entire head away, which is an entire matrix. This is not a single weight, right? This is many, many weights. Or, even more drastically, leaving away an entire MLP and basically only relying on the skip connection. All right. So you have these two things you can do: you can leave away heads, or you can leave away entire MLPs, or you can combine these things in some way. Right. So the notable difference here to the lottery ticket hypothesis pruning is the fact that up here, what we prune are individual connections, and here we prune entire modules. Now, in my opinion, this is a very large qualitative difference. Why would you do this? This paper basically doesn't invent this kind of pruning; they go after already existing literature. So what's the advantage in pruning modules? Well, consider what's the advantage in pruning per se. In pruning, what you're trying to do is obtain a smaller network that gives you the same accuracy, but that you can run faster, right? That uses less memory, and you can run it faster. And if you prune like we did in the lottery ticket hypothesis paper, you don't really gain anything, because if you have a matrix-matrix multiply, right — I have two matrices and I multiply them right here — if I cut out one weight here or one weight here, it doesn't help me, because I have GPUs and those will parallelize these matrix procedures. And it doesn't really help me because we don't really have good hardware for sparse matrix-matrix multiplication, or matrix multiplication with holes, or things like this. So it gains me almost nothing. The lottery ticket hypothesis paper is very much more of a scientific curiosity paper. And once we have sparse matrix multiply hardware, which I think already exists but is not super widely distributed, we will be able to make use of this. Whereas the people that prune BERT, these are more, let's say, industry people. If you prune an entire module, well, that's an entire matrix that falls away. So I can basically save an entire matrix multiply in the forward pass here, and in the backward pass — well, okay, I don't prune during training, but I can basically save an entire matrix multiply by pruning an entire module. So, if I were an author and I said I want to look at BERT in terms of the lottery ticket hypothesis, I would go away from this literature and find a way to also just prune individual weights. It's not going to be faster, but the lottery ticket investigations aren't supposed to be faster; they're supposed to tell you something about the nature of the things you're investigating. And of course, how you do this is simply by masking, right? You simply force these entries to be zero, and therefore you don't have forward signal and you don't have gradient.
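As a small illustration of why masking a weight kills both its forward signal and its gradient, here is a hypothetical PyTorch snippet; the shapes and the random mask are mine, not the paper's.

```python
import torch

w = torch.randn(4, 4, requires_grad=True)
mask = (torch.rand(4, 4) > 0.5).float()   # fixed 0/1 mask, not trained

x = torch.randn(8, 4)
y = x @ (w * mask).T                      # masked entries carry no signal
loss = y.pow(2).mean()
loss.backward()

# Because d(w * mask)/dw = mask, pruned weights also receive zero gradient.
assert torch.all(w.grad[mask == 0] == 0)
print("gradients are non-zero only where mask == 1")
```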
Interestingly, they actually do the masking, but they do it on the whole module level. Okay, so this was BERT and the lottery ticket hypothesis, and the claim that all tickets are winning we're going to investigate later. Let's see what they say in the abstract. They say much of the recent success in NLP is due to the large transformer based models such as BERT. Okay. They say, however, these models have been shown to be reducible to a smaller number of self-attention heads and layers. So this would be pruning. We consider this phenomenon from the perspective of the lottery ticket hypothesis. For fine-tuned BERT we show that — here are the contributions — (a) it is possible to find a sub network of elements that achieves performance comparable with that of the full model. So basically this is the pruning objective, right? You want to prune such that the performance holds, and in terms of the lottery ticket hypothesis you want to prune, reset to the beginning, and then train again. Actually, in the lottery ticket hypothesis you can even gain performance if you prune by a certain amount; in this case here they always lose performance, but yeah. So second of all, similarly sized sub networks sampled from the rest of the model, so the non-winning tickets, perform worse. So if you just take what remains after removing the good parts, then those bad parts perform worse, of course. However, the bad sub networks can be fine-tuned separately to achieve only slightly worse performance than the good ones, indicating that most weights in the pre-trained BERT are potentially useful. So this is interesting. If they can be fine-tuned separately — this is exactly what the lottery ticket hypothesis is doing, right? It's basically fine-tuning only a sub part of the network. And here they say even if we take the parts of the network that have low scores for pruning and we retrain those, then we can achieve a good performance. Further they say: we also show that the good sub networks vary considerably across GLUE tasks — this is the benchmark — opening up the possibility to learn what knowledge BERT actually uses at inference time. Alright, so this is the overview of the paper. A last thing to say, which I've already kind of alluded to, is the fact that in the original lottery ticket hypothesis, as I said, you had a graph where here was 100% accuracy and here was how much you prune. Of course you start at 100% if you prune nothing, but then as you prune, the interesting thing is it kind of goes up and then it goes down. So this is the first thing: it goes up by a certain amount as you prune, and in the original lottery ticket hypothesis, somewhere here would be 50% of the network, I think. And then once you go down, let's say here to 90% of performance, you are at something like 5% of the network size, or 3%. So you can prune away most things and still be extremely powerful. Now, what these people do here is essentially: here is 100%, and they simply prune until they reach 90%. So we don't necessarily know what happens in the middle; we just know they start here and somehow they get to 90%, and what they end up with is something like 50% of the network still remaining.
So again, see the qualitative difference here between the 5% of the lottery tickets in the original paper and the 50-ish or so percent, a considerably larger amount, in this paper right here. And I'm pretty sure that is due to the fact that they prune entire modules here, so they don't prune on a fine-grained enough level to investigate this phenomenon, because, as I said, we don't know, but I'm pretty sure this just goes down here and does not go up first. So qualitatively it seems different. Alright, so here they introduce what they do. Again, BERT is made up of these attention heads and MLPs. The MLPs have a skip connection, as you can see here, and the attention layers are each made up of N of these attention heads. What they will do is look at 12 layer networks. Each layer will have 12 of these attention heads and one of the MLPs. So you have in total 144 heads and 12 MLP layers. The way they determine which ones to prune is pretty simple. In front of each attention head and in front of each MLP they put one of these binary variables right here. These variables can take values 0 or 1. If they are 0, the MLP or the head is basically inactive: no propagation. If they are 1, they are active. And they determine what value to set them to by computing importance scores, basically determining how important a head or a layer is for the network. And that's pretty simple — I think they go after this paper right here. You differentiate the loss with respect to these variables, and that gives you the importance scores. Then you can simply prune the components with the lowest importance scores, because that means the gradient with respect to them is the smallest: your loss changes the least if you were to leave them away. So they here determine their pruning strategy. Their constraint, as I said, is 90% of the performance of the full model. So they train the full model, fine tune it on the task, and then they set themselves a budget of 90%, and they simply prune until the model reaches 90%. Once it goes lower, they stop. They have three methods of pruning. One is heads only, where they only cut away these attention heads — as I said, there are 144 of them. They have the pruning strategy of MLPs only, where they only prune the MLPs and leave all the attention heads alone. Then they have this heads and MLPs strategy. They say: we compute head and MLP importance scores in a single backward pass, pruning 10% of heads and one MLP with the smallest scores until the performance on the dev set is within 90%. Then we continue pruning heads alone and then MLPs alone — this, I guess, until again they are no longer within the 90%, so until they reach their budget. So this is a combined strategy. This strategy results in a larger number of total components pruned within the performance threshold. And this is the thing we should focus on, right? Because in pruning, the name of the game is how much you can take away while still staying within your budget. This strategy seems to be the viable strategy here.
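Here is a rough sketch of how such gate-based importance scores can be computed. The tiny stand-in "model" and all the numbers are invented for illustration; in the paper this step is repeated, re-evaluating on the dev set each round, until performance drops below 90% of the full model.

```python
import torch

n_heads = 12 * 12                        # 12 layers x 12 heads = 144
gates = torch.ones(n_heads, requires_grad=True)

# Stand-in for a forward pass of the full model: each "head" contributes
# a fixed feature scaled by its gate. A real model is vastly more complex.
torch.manual_seed(0)
head_features = torch.randn(n_heads)
loss = ((head_features * gates).sum() - 1.0) ** 2
loss.backward()

# Importance of a head ~ |dL/d(gate)|: how much the loss reacts to
# switching that head off. Prune the 10% of heads with the lowest scores.
importance = gates.grad.abs()
to_prune = importance.argsort()[: int(0.10 * n_heads)]
with torch.no_grad():
    gates[to_prune] = 0.0                # switch off least important heads
print("pruned this round:", to_prune.tolist()[:5], "...")
```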
A last thing here is fine tuning. The other difference between this paper and the lottery ticket hypothesis is that, as we said, in the original paper these are randomly initialized weights — like when you train a classifier on ImageNet or something, you start from randomly initialized weights — and the lottery ticket papers all kind of presuppose random initializations. Whereas BERT, when you do the same thing for BERT, these are not random initializations. We said in BERT what you usually do is you train the encoder part here: you pre-train with masked language modeling first, and then second — let's skip the color here — you fine tune the entire thing. So if we talk about initializations in the BERT task, then the initialization would be at this point right here, after the masked language modeling. So the weights are not random. The weights are actually pre-trained on the masked language modeling task, which is also a qualitative difference, and it sort of lets us inspect things. So the authors say that since we pre-trained with masked language modeling, and people sort of claim that masked language modeling learns something about the language, we can now investigate which attention heads, which modules in BERT, are encoding which parts of the language. And this is going to be interesting: once we look at which attention heads and which modules survive in the individual tasks, we can sort of compare tasks against each other by seeing which of the heads they share in their winning tickets. Alright, so they produce these graphs here. These are sort of the central graphs here, and the way to read this is: on the left side you have the layer index, and on the x-axis you simply have the index of the head. There are 144 boxes here; each one corresponds to one of the attention heads. The top number is always the mean number of GLUE tasks that this head survived in. So what they do is they take the pre-trained BERT, they fine tune it on these nine tasks, and for each of the nine tasks they determine the winning tickets. And the number here says in how many of these nine tasks this particular attention head is part of the winning ticket. Now they repeat this for different random seeds — that's why you have floating point numbers — and the lower number is the standard deviation across that. So you can see quite a number of heads make it into a lot of these tasks. This thing right here, red on red, this head survives in seven out of the nine tasks. So it should probably encode something fairly substantial about language that is shared across these seven tasks. You can see some of the heads, like this one here, don't survive in almost any task, which basically means that that one's not really super important for these tasks. It might have been important for the pre-training, but not for these particular tasks. What's interesting is that the mean, or median or so, is like three, four or five. And that means that a lot of the heads are somewhat important for some of the tasks. And you can see the qualitative difference: if this were the original lottery ticket paper, most of these numbers would be at zero, because the lottery ticket size is just so much smaller. Here you can directly see that you are going to retain a large number of things in your network in order to get 90% of the performance. And that's probably because they prune entire modules, again. So they have this for two variants. First for the strategy of masking heads only, and the right one is for masking heads and MLPs. And the same here on the bottom: these are the same numbers, but not for attention heads, but for MLP layers. And you see, again, this is masking MLPs only.
This is masking heads and MLPs. So if you compare the two, you see that, for example, this here and this here are substantially darker, which means more of this stuff survives. Now we can't really... it seems like here, for example, it's darker than here. So on the right side more stuff survives, but you also have more things to prune, right? You can prune the heads and the MLPs. And they claimed before that the masking heads and MLPs strategy results in more things being pruned, which isn't really congruent with generally more things surviving here. But it could be that the sum of the two is still lower than the sum of each individual thing here, though it doesn't really look like it. So I'm a bit confused about this, but I'm just going to assume that the sum of the two is lower. Does that make sense if both are darker? Well, it should be the sum of this plus a completely dark one of these, in terms of masking heads only — or vice versa — versus the sum of those two, right? That should be the measure. It just doesn't seem to quite work out. But okay, that's what they say. So, by the way, if the authors are here: this is cut off. Haha. Yeah, this is annoying. This is like trying to get LaTeX to do things, and it doesn't comply. Alright, another thing you can see, and the authors point this out here, is that if you mask heads and MLPs, you sort of shift more things to the back of the network, to the higher-up layers. And they reason: because you also mask the heads, the heads can't do as much work — so the heads would be masked somewhere here — and all that work is going to shift onto the MLPs, and mostly to the back of the network, because this thing here cannot take over work that this attention head here is now not performing anymore, because it was pruned, because the signal travels this way. So the authors kind of interpret these results right here, and I think the most important thing to see is simply the variance of things. Most heads are actually important for at least two or three tasks, and no head is important for all the tasks consistently. I think that's the take-home message right here. Okay, and they contrast this to previous research. They say this experiment follows up on a study that showed that only a few transformer heads in machine translation tasks did the heavy lifting while the rest could be pruned. And another paper similarly showed that most of BERT's self-attention heads in the MNLI task could be pruned, and that the good heads were mostly shared between MNLI matched and mismatched. And they basically say, yeah, that's correct, but that is only within one task, right? If you go beyond, if you go to several tasks, then the heads that are important differ quite a bit. Okay, so let's continue and go here. They ask how task-independent the good subnetworks are, and they basically look at these kinds of graphs right here, which are pretty interesting. So we've got this: this is heads shared between tasks. What this measures is: these are the different tasks in the GLUE benchmark, and they basically look at each task, look at its winning lottery ticket, and look at which heads survive in the winning ticket. And then they put that here on the diagonal. So if in QNLI a head survives, it gets a 1 here, and if it doesn't survive, it gets a 0. So on average, 85 out of the 144 heads survive. That's, as I said, somewhat over 50% of the network.
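The numbers in such a matrix are just set intersections of surviving heads. A toy sketch, with made-up surviving-head sets (the real sets come out of the pruning procedure, averaged over random seeds):

```python
# Surviving head indices per task (invented for illustration).
survivors = {
    "QNLI": {0, 1, 2, 5, 7, 9},
    "MNLI": {0, 2, 3, 5, 9, 11},
    "RTE":  {1, 2, 4, 5, 8},
}

tasks = list(survivors)
for a in tasks:
    # Diagonal entry: heads surviving in task a; off-diagonal: shared heads.
    print(a, [len(survivors[a] & survivors[b]) for b in tasks])
```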
It's entirely different from the original lottery ticket hypothesis paper. So 85 — not 85 percent — 85 of the 144 heads survive. Now they look at the other tasks. So for QNLI they would look at, say, the MNLI task here, and ask which of the heads that survived in QNLI also survive in MNLI. That gets you the shared heads, and again the lower number is the standard deviation. So 62 heads survive in both QNLI and MNLI, and the authors are sort of arguing that from these sorts of numbers you should be able to see which of the tasks share linguistic knowledge. So different linguistic knowledge could be relevant for different tasks, but if some tasks share a lot of the attention heads that survive in the winning tickets, that basically means the model is using the information that is in those heads for both tasks. This could be good, in that you say, oh yeah, these tasks really do use similar linguistic features — or it could be something that you don't expect, and then you might be able to investigate whether the model is doing something shady here, because it really shouldn't; these tasks don't really have much in common. So they do this for the heads and the MLPs here. Now, if you ask why WNLI here has a bunch of zeros, that's because it's a wonky task, and basically the best thing you can do is predict the most frequent class. So you can prune just about anything away on these MLPs — because they have the skip connections, you don't need them to predict the most frequent class. What I want to go into is the following statement right here. So: note that figure one — the figure before — shows very few heads or MLPs that are universally useless: only seven heads survived in less than two tasks. 86% of heads and 67% of MLPs survived in two to seven tasks, with relatively high standard deviation. They say this means that the good sub networks for different tasks have relatively little in common. So they make this sort of statement again here, that the good sub networks have little in common, and it might seem like that from the figure initially. But if you look at this figure, it actually shows something pretty interesting, I think. So let's look at a number — let's say, for example, this 74 here, and this one. So let's look at the tasks QQP and RTE. Okay, if you look at QQP and RTE, you can see that these are tasks that don't have a lot of heads in common, right? And you might say, well, if what they're saying is true, that the tasks share relatively little, you would expect them to be relatively independent. The 78 here means that 78 out of 144 heads survive, and the 74 here means that 74 out of 144 heads survive. So if I now think that, generally, for different tasks things are different, how many heads would I expect to survive in both, if the tasks are independent? That's these two ratios multiplied, times 144. So I can scratch this here, and 7 times 7 is whatever, 49... let's go 7 times 8, about this, so that's 5, 6... do I need to get out the calculator? I'm going to do this the right way. Okay, I hope you can see that: it's 78 times 74 divided by 144. Did I do it right? I probably did it wrong. 78 times 74 divided by 144.
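For reference, the expected overlap under independence is just the product of the two survival rates times 144:

```python
p_a, p_b = 78 / 144, 74 / 144        # survival rates of the two tasks
expected = p_a * p_b * 144           # = 78 * 74 / 144
print(round(expected, 1))            # ~40.1 heads expected if independent
```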
Alright, so that's about 40 heads, and you see that there's 43 heads shared. And I've actually gone through a bunch of these numbers before — not these ones, but generally the shared number of heads is higher than what one would expect if you assume that the tasks are independent. And I'm sort of missing an analysis of that here, because I find that to be a pretty interesting finding. I mean, I get that they say, based on the graphics up here, that the tasks seem to be relatively independent with respect to the heads that survive — and of course, relatively independent is a relative term — but what's missing is an investigation into why we see considerably more overlap between tasks than independence would suggest. These numbers are consistently above what you would assume under independence. That seems pretty interesting. Alright, so they go into this figure two and this pairwise comparison, and they analyze a couple of the different tasks here and what you would expect. I don't want to go too much into these tasks, because honestly I don't know all of these tasks; I don't know which tasks should share a lot of things and which ones shouldn't. But it is a very smart way to investigate whether the model really uses similar information for similar tasks. Alright, the last thing they do right here is the good and the bad subnetworks in BERT fine tuning. They say: our final experiment puts the above evidence of good subnetworks in BERT fine tuning in the perspective of the lottery ticket hypothesis, which predicts that the lucky subnetworks can be retrained from scratch to match the performance of the full network. To test this hypothesis, we experiment with the following subnetworks. I wasn't really sure when I read it the first time, but now I'm fairly sure that all of the results so far were just pruning, not retraining — so just doing the pruning thing and not doing this lottery ticket retraining. Which shouldn't make a lot of difference, as we're going to see; it seems like pruning versus pruning-and-retraining doesn't do that much for the winning tickets, as you'll see right now. But just for the understanding: now they actually retrain from scratch. So, good networks: the elements selected from the full model by importance scores, as described in the previous section. They're going to evaluate these good networks, first of all pruned, and then retrained in the lottery ticket style. Then they're also going to evaluate bad subnetworks: the elements sampled from those that did not survive the pruning, plus a random sample of elements with high importance scores, so as to match the size of the good subnetworks. Because the good subnetworks are 50% or more of the network, they sample from the things that did not survive, so from the bad ones, plus a random sample of the good ones, just to match the size. We would expect these to perform worse, but maybe we can also train them to achieve good performance. And then they investigate inverted bad subnetworks: a simple inversion of the good subnetworks, so these would be just anything but the good. They are 5 to 18% smaller in size than the sampled bad subnetworks, but they do not contain any elements with high importance scores.
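A hypothetical sketch of how these three kinds of masks over the 144 heads could be assembled from importance scores; the random scores and the 60% survival threshold are invented for illustration, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
importance = rng.random(144)          # stand-in importance score per head

# "Good": the heads that survive pruning (here: top 60% by importance).
good = importance >= np.quantile(importance, 0.4)

# Sampled "bad": all heads that did not survive, topped up with randomly
# chosen good heads so the size matches the good subnetwork.
bad_idx = np.flatnonzero(~good)
top_up = rng.choice(np.flatnonzero(good),
                    size=int(good.sum()) - bad_idx.size, replace=False)
sampled_bad = np.zeros(144, dtype=bool)
sampled_bad[bad_idx] = True
sampled_bad[top_up] = True

# Inverted "bad": simply everything outside the good subnetwork, which is
# a bit smaller and contains no high-importance heads at all.
inverted_bad = ~good
print(good.sum(), sampled_bad.sum(), inverted_bad.sum())
```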
And they say, okay, for all of them they evaluate their performance on all tasks, simply after pruning, and after fine tuning the same subnetwork with the same random seeds and with the rest of the model masked. So this is really what the lottery ticket hypothesis does, except they of course mask entire modules and not individual weights. And here you can see the general results. The general results look something like this — this is a typical example. This here is simply the dumb classifier that always predicts the most frequent class; this is sort of the idiot's baseline. This here is the full model. This here is the good subnetwork pruned, and this here is the good subnetwork after it's retrained again. So you see, by retraining you can basically gain. In the original lottery ticket setting this would sometimes even go up here, depending on how much was pruned, but you can see that there is a slight gain after you retrain the pruned part. And the other thing to note here is that you don't lose much: you only drop a little bit by pruning, and that's what makes it the good part — you only drop a little bit. However, if you have the bad part — which are these, and let's say the good plus bad, these are the bad plus some of the good ones — you see that the performance drops pretty heavily, almost to the baseline of the most frequent class, and also here. I would actually go with this one right here, because that's just the bad ones. You see the performance drops considerably. But then — and this is what the authors claim is pretty interesting — if you retrain that part, the bad part so to say, you can achieve a performance very comparable to what you can achieve with the good parts. And this appears to be true for most of the results right here. There are some outliers, like this one, but there the score is also different — this is the Matthews correlation and not the accuracy, so the score is a bit different there. But you can see here the good-plus-bad also gets a fairly high accuracy. So the authors claim this is pretty surprising, which I guess it is if you look at this. But I have actually asked the author of the lottery ticket hypothesis this question. This is from our Machine Learning Street Talk with Jonathan Frankle — that's another channel that I am a part of — and I would like to show you this right here, where I ask him this question. Another question from Reddit: Imnemo asks, suppose you try to construct a lottery ticket by taking all the weights that were not part of a winning ticket and retraining from those — will that model be unable to learn the task, or might there be another winning ticket hiding among them, or one that was not originally used? "So this is the most common question I get from people who read the original paper, and I hope that by answering it here in a public forum I can answer it once and for all. The challenge in doing this experiment is — let us take the MNIST example. So suppose that we find a winning ticket on MNIST. It is going to be about 3% of the original size of the network. So that means that if you remove it, you still get 97% of the weights left. And so my guess is that if you were to train those 97% of weights, you will get to the same accuracy as you got with 100% of weights, because you have barely pruned the network at all. You could randomly prune by 3% and it would not affect it.
And then you could go and find another lottery ticket that is mutually exclusive with the first. You still have 94% of the weights, and you could probably iterate this for a very long time. You could probably find 10, 15 lottery tickets this way, maybe more, that are all mutually exclusive and still leave you with a remaining residual that is capable of training to full accuracy. So the challenge with this experiment is that the lottery tickets are small, which is great, but it means that whatever is left is large enough that I am sure there is another lottery ticket in there, and another lottery ticket in there, and so on and so on. So it is an interesting idea in principle, but once you kind of look at the sizes of things, you still have so much over-parameterization left that I think you just find more lottery tickets. You can even probably, I am guessing, swap out one weight from a lottery ticket with another weight and it would not matter, or swap out a handful of weights. And so combinatorially the number of lottery tickets is massive, and we are just finding one." All right, so as you saw, this is kind of the most common question that Jonathan gets. And as you can see, the difference here, of course, is that our original tickets are already sort of 50% of the network, so what is left is only 50%. This is substantially different. Now, two things I have to remark here. First of all, pruning modules and not individual weights is the reason that we do get these big winning tickets for the good subnetworks, right? But also, what I think is happening is that because we are pruning these entire modules, we are actually not fine-grained enough. That means every time we eliminate a module, we actually kill some good weights and some bad weights. So in here, I am going to guess, there are some good ones and there are some bad ones. But since we can only kill entire modules, we simply kill the one that on average has the least good ones. But there are still some good weights in there. And if you believe the original lottery ticket hypothesis, that means that these very few weights in those modules can still train to full accuracy. So what these authors claim is surprising is actually, in light of the original lottery ticket hypothesis, predicted: if you look at it from the perspective of the actual hypothesis, which considers individual weights and a very small subset of them, the original hypothesis would pretty much predict that you could train something where you pruned away a bunch of modules entirely, or that you could train these bad modules, because they are still going to contain a small lottery ticket that is going to be responsible for the good performance. So that is kind of the first thing. And the second thing — in general, as you heard Jonathan say, I do not think this is actually even a question of the size of the tickets. Nothing in the original hypothesis forbids the non-winning ticket from also being trainable to good accuracy. It simply says something about the winning ticket; it does not say anything about the non-winning ticket. So those are the two comments. And I think the question and the investigation, even though it is interesting, is maybe not fully thought through, at least from the perspective of what they go for here. The result is very interesting.
All right. So as you saw, this is the most common question that Jonathan gets. And the difference here, of course, is that the tickets in this paper are already around 50% of the network, so what is left is only about 50%. That is substantially different. Now, two things I have to remark here. First, because we are pruning modules and not individual weights, that is the reason we get these big winning tickets in the first place. But also, what I think is happening is that because we prune entire modules, we are not fine-grained enough. Every time we eliminate a module, we actually kill some good weights and some bad weights: since we can only kill entire modules, we simply kill the one that on average has the fewest good weights, but there are still some good weights in there. And if you believe the original lottery ticket hypothesis, those few good weights in the "bad" modules can still train to full accuracy. So what these authors present as surprising is, in light of the original lottery ticket hypothesis, not that surprising. If you look at it from the perspective of the actual hypothesis — which considers individual weights, and a very small subset of them — the original hypothesis would pretty much predict that you could train something where you pruned away a bunch of modules entirely, or that you could train these bad modules, because they are still going to contain a small lottery ticket that is responsible for the good performance. That is the first thing. The second thing: in general — and you heard Jonathan — I do not think this is even a question of the size of the tickets. Nothing in the original hypothesis forbids the non-winning ticket from also being trainable to good accuracy. It simply says something about the winning ticket; it does not say anything about the non-winning one. So those are my two comments. And I think the investigation, even though it is interesting, is maybe not fully thought through, at least with respect to what they go for here. The result is very interesting. But again, they claim the original hypothesis would sort of say these are the bad parts and you could not train them, and then they say it is surprising that you can. I would say that the original hypothesis would in fact predict that you could train those things, because you have pruned away entire modules, which is very coarse-grained, and that still leaves good weights in the bad parts. So they conclude: however, we can see that both good and bad networks can be retrained to comparable performance for many tasks. The inverted bad networks perform worse than the sampled ones, but that could be due to them being smaller in size. The performance of all inverted bad networks on CoLA is almost zero — very little remains when that mask is inverted. That is the task we looked at before, where they claim the surviving part is so small, which makes sense. Then, the discussion: does BERT have bad subnetworks? The key result of this study is that, as far as fine-tuning is concerned, BERT does not seem to have bad subnetworks that cannot be retrained to a relatively good performance level, suggesting that the weights that do not survive pruning are not just inactive. However, it is important to remember that they consider elements of the BERT architecture as atomic units, while the original lottery ticket work relied on magnitude pruning of individual weights. So they are well aware of these differences, and they concede that right here, which is good. On that level, BERT probably does have bad subnetworks, and they note those can be found in the transformer model with global iterative pruning. They leave it to future research to find out to what extent the effective subnetworks overlap with the effective architectural blocks, and what that says about the architecture of BERT and other transformers. So, as you see, they are well aware of everything I said — it's not like I'm criticizing them as wrong. It's just that the way the paper reads, if you don't read until here, you come away thinking something else. Our results suggest that most architecture blocks of BERT are potentially usable in fine-tuning. This should not be interpreted as proof that they all encode potentially relevant linguistic information — that's absolutely true. It is also possible that pre-training simply made them more amenable to optimization, which is another question for future research. And they go into what the different BERT components do for the different tasks. Again, I think this work is actually most relevant for investigating exactly that question — which tasks use which components — and the recognition that none of these modules is useless is, I would say, a pretty cool finding.
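For contrast with the module-level masking used in this paper, here is a hedged sketch of the weight-level alternative they mention: global magnitude pruning across layers, as in the original lottery ticket work. It uses PyTorch's built-in pruning utilities on a toy stand-in model; a real lottery-ticket experiment would prune iteratively, rewinding the surviving weights to their initialization and retraining between rounds.

```python
import torch.nn as nn
import torch.nn.utils.prune as prune

# toy stand-in for a transformer's dense layers
model = nn.Sequential(nn.Linear(768, 3072), nn.ReLU(), nn.Linear(3072, 768))
params = [(m, "weight") for m in model.modules() if isinstance(m, nn.Linear)]

# one global step removing the 90% smallest-magnitude weights across all layers
prune.global_unstructured(params, pruning_method=prune.L1Unstructured, amount=0.9)

total = sum(m.weight_mask.numel() for m, _ in params)
alive = sum(int(m.weight_mask.sum().item()) for m, _ in params)
print(f"surviving weights: {100 * alive / total:.1f}%")  # ~10%
```

Because the mask here is per-weight rather than per-module, the surviving 10% can concentrate wherever the large-magnitude weights live — which is exactly the granularity the module-level masks in this paper cannot express.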
Okay, so in conclusion they say: prior work showed that it was possible to prune most self-attention heads; we extend this to the fully connected layers. We show that fine-tuned BERT has good and bad subnetworks, where the good heads and MLPs alone reach performance comparable with the full network, and the bad ones alone do not perform well. However, this pattern does not quite conform to the lottery ticket hypothesis: both good and bad networks can be fine-tuned separately to reach comparable performance. We also show that 86% of heads and 57% of MLPs in good subnetworks are not universally useful across GLUE tasks, and that overlap between good subnetworks does not necessarily correspond to task types — that part we didn't go into. This raises questions about the degree to which fine-tuned BERT relies on task-specific or general linguistic knowledge, and it opens up the possibility of studying the good subnetworks to see what types of knowledge BERT actually relies on at inference time. So that is a future research direction. And with that, I think we've gone through the paper. I hope you got something useful out of this. I think it's a pretty cool paper with a pretty cool methodology, and I think a lot of work can build on this to do interesting analyses of these language models. Again, if you like this video, consider sharing it, subscribing, liking, and bye bye.
[ { "end": 5.5600000000000005, "start": 0, "text": " Hi there. Today we're looking at when BERT plays the lottery. All tickets are winning" }, { "end": 12.68, "start": 5.5600000000000005, "text": " by Sy, Prasanna, Anna Rogers and Anna Rumschiski. So a high level overview of this paper is" }, { "end": 18.92, "start": 12.68, "text": " the following. The paper basically looks at BERT in terms of the lottery ticket hypothesis" }, { "end": 26.68, "start": 18.92, "text": " and it says that if you fine tune BERT on different downstream tasks, then the lottery" }, { "end": 34.56, "start": 26.68, "text": " ticket winners you're going to find are different between the tasks. And also the claim all" }, { "end": 40.6, "start": 34.56, "text": " tickets are winning refers to the fact that if you remove the winning tickets, then you" }, { "end": 47.32, "start": 40.6, "text": " can still train the rest to relatively good performance. Therefore, all tickets are winning," }, { "end": 54.4, "start": 47.32, "text": " not just the sub network. So that was the high level overview for those of you who just" }, { "end": 59.28, "start": 54.4, "text": " want to be interested, if you want to continue watching this video. If you do like videos" }, { "end": 66.68, "start": 59.28, "text": " like this, consider sharing, liking, subscribing, telling your mother, father, brother and friends" }, { "end": 74.36, "start": 66.68, "text": " about it. Alright, let's dive in. So BERT, it is a language model. Basically, if you" }, { "end": 78.64, "start": 74.36, "text": " don't know what BERT is, I've done a video on BERT, but really quickly, what you can" }, { "end": 85, "start": 78.64, "text": " do with BERT is you can take a sentence, something like hello there, and you can put it through" }, { "end": 91.32, "start": 85, "text": " this multi-layer neural network. And what you'll get out is basically an embedding of" }, { "end": 101.84, "start": 91.32, "text": " that sentence. So a vector embedding of it. We'll make it really easy. And what is usually" }, { "end": 107.24000000000001, "start": 101.84, "text": " done is this is pre-trained on a task called masked language modeling. This is unsupervised" }, { "end": 112.67999999999999, "start": 107.24, "text": " training. And then you take this embedding and you fine tune. Basically, you put on a" }, { "end": 118.67999999999999, "start": 112.67999999999999, "text": " classifier head. Basically, say, let's take sentiment classification. So you have two" }, { "end": 126.25999999999999, "start": 118.67999999999999, "text": " output classes. And you want to say, is this sentence I put in positive or negative sentiment?" }, { "end": 133.07999999999998, "start": 126.25999999999999, "text": " So you would train this classifier by basically taking this part from the pre-trained masked" }, { "end": 139.56, "start": 133.08, "text": " language modeling and then training the part here that does the sentiment classification." }, { "end": 146.42000000000002, "start": 139.56, "text": " You would sort of add that on top and then fine tune the entire network to solve this" }, { "end": 154, "start": 146.42000000000002, "text": " task. So that is basically BERT fine tuning on different tasks. And there is this benchmark" }, { "end": 161, "start": 154, "text": " called GLUE, where it has a number of tasks. In this case, I think they look at nine tasks" }, { "end": 168.24, "start": 161, "text": " of GLUE. It has nine of these tasks. One is the, an example is the sentiment classification." 
}, { "end": 174.4, "start": 168.24, "text": " And it basically gets a score for each one. And thereby, you can sort of estimate how" }, { "end": 179.92000000000002, "start": 174.4, "text": " good your language model is by how well it is performing on each of those individual" }, { "end": 186.28, "start": 179.92000000000002, "text": " tasks. But the notable difference here, too, let's say a computer vision, like an ImageNet" }, { "end": 192.16, "start": 186.28, "text": " classifier is the fact that first it is pre-trained, right? This part here is pre-trained on a" }, { "end": 202.04, "start": 192.16, "text": " large corpus. And second, there are different downstream tasks that you fine tune on. So" }, { "end": 208.44, "start": 202.04, "text": " the second part that is important is the lottery ticket hypothesis. So I've also done a video" }, { "end": 215.4, "start": 208.44, "text": " on the lottery ticket hypothesis. And if you very quickly what the lottery ticket hypothesis" }, { "end": 221.8, "start": 215.4, "text": " is the following. So let's say you have an image classifier. And I have a bunch of layers" }, { "end": 228.22, "start": 221.8, "text": " in my neural network. I'm going to draw them like this. And at the end, right, I can classify" }, { "end": 234.72, "start": 228.22, "text": " it into like 10 different or a thousand different classes, whatever. And the input here is an" }, { "end": 240.64000000000001, "start": 234.72, "text": " image. So my neural network is going to have weights. So every one of these neurons is" }, { "end": 246.73999999999998, "start": 240.64, "text": " connected to each other. Now this can be a convolutional network or a MLP. So all is" }, { "end": 253.11999999999998, "start": 246.73999999999998, "text": " connected to all and as well here, right, everything is connected to pretty much everything." }, { "end": 260.64, "start": 253.11999999999998, "text": " So we know, first of all, we know that we can train these big networks to relatively" }, { "end": 268.2, "start": 260.64, "text": " good accuracy. And then second of all, so first, we can train them. Second, we know" }, { "end": 274.96, "start": 268.2, "text": " we can prune them after training. What pruning means is the fact that after I have trained" }, { "end": 279.84, "start": 274.96, "text": " such a thing, I can then go and I can figure out which one which of these connections that" }, { "end": 285.71999999999997, "start": 279.84, "text": " I have learned are the important ones. And maybe I'll say, ah, these these here, actually" }, { "end": 291.96, "start": 285.71999999999997, "text": " these these five, I don't need more than those. Let's actually connect them to the end. These" }, { "end": 296.8, "start": 291.96, "text": " seven or so. I don't need all of the other ones. I just need those. And I can pretty" }, { "end": 302.68, "start": 296.8, "text": " much get the same accuracy as the full network. Now, the important part here is that you can" }, { "end": 308.44, "start": 302.68, "text": " only do pruning after you've trained a network. If you try to prune at the beginning, it doesn't" }, { "end": 314.76, "start": 308.44, "text": " work. So what the lottery ticket hypothesis says is basically, so how does training work?" }, { "end": 318.88, "start": 314.76, "text": " First you have your parameters. Let's let's tell them as a list. So each of these weights" }, { "end": 328.96, "start": 318.88, "text": " is an entry in the list here. 
First, you initialize these randomly, then through training, training," }, { "end": 337.2, "start": 328.96, "text": " you get to your train state, right? You get each of the ones into your these are now trained." }, { "end": 343.24, "start": 337.2, "text": " And in the train state, you can select the ones you think are important. The lottery" }, { "end": 349.40000000000003, "start": 343.24, "text": " ticket hypothesis says if I take those that are important and basically go back to the" }, { "end": 357.64, "start": 349.40000000000003, "text": " beginning, like here, here, and here, and I basically roll them back to that state that" }, { "end": 362.76, "start": 357.64, "text": " they were in when they were initialized. So I put the same random number there that I" }, { "end": 371.24, "start": 362.76, "text": " got at initialization. I can then make a network where I only have those. I can train that" }, { "end": 379.72, "start": 371.24, "text": " network and I can get a good accuracy. So this basically wasn't possible in the pruning" }, { "end": 385.68, "start": 379.72, "text": " framework because we said we can't prune, we can only prune after training because only" }, { "end": 390.72, "start": 385.68, "text": " then do we know which ones are the important ones. In fact, the lottery ticket hypothesis" }, { "end": 396.36, "start": 390.72, "text": " or the paper shows that you can train the smaller neural network from the beginning," }, { "end": 401.64, "start": 396.36, "text": " but the catch of course is you have to know which ones those are and you have to know" }, { "end": 406.44, "start": 401.64, "text": " what value to set them on at the beginning. And you only know that after you've trained" }, { "end": 413.40000000000003, "start": 406.44, "text": " the full network. But still it kind of gives the, it tells you that you don't need all" }, { "end": 419.68, "start": 413.40000000000003, "text": " these connections for training. You basically only need so many connections such that somewhere" }, { "end": 423.52000000000004, "start": 419.68, "text": " in there, there's going to be the good ones, right? And if you knew the good ones from" }, { "end": 429.96, "start": 423.52, "text": " the beginning, you could just train those and then only train the smaller sub network." }, { "end": 435.28, "start": 429.96, "text": " So that's the lottery ticket hypothesis. Naturally these connections here, this small network" }, { "end": 442.4, "start": 435.28, "text": " is called the winning, a winning ticket because if you knew what it was, you could basically" }, { "end": 448.64, "start": 442.4, "text": " train a much smaller network and reach the same accuracy. So this paper looks at BERT" }, { "end": 454.12, "start": 448.64, "text": " in terms of this lottery ticket hypothesis. Now it's a bit more complicated than just" }, { "end": 459.74, "start": 454.12, "text": " in these feed forward networks because BERT is not a feed forward network. BERT is a transformer." }, { "end": 466.08, "start": 459.74, "text": " So what does that mean? A transformer consists of many layers and each layer, let's expand" }, { "end": 474.96, "start": 466.08, "text": " the layer here. So each layer consists of, let's go over there, need some space. So again," }, { "end": 481.28, "start": 474.96, "text": " we have our layers of BERT and it goes, the signal goes like this. So each layer consists" }, { "end": 487.84, "start": 481.28, "text": " of two things. First of all, of many attention heads, that's called. 
Now I'm going to draw" }, { "end": 493.12, "start": 487.84, "text": " these as blocks right here. So four, let's call them four attention heads. Individual" }, { "end": 499.08, "start": 493.12, "text": " attention heads are all parameterized by individual matrices. And then on top of that, there is" }, { "end": 505.68, "start": 499.08, "text": " an MLP. So this is the multi-layer perceptron. This is one, basically a feed forward layer," }, { "end": 512.8, "start": 505.68, "text": " residual actually. And so there is a skip connection right here. And these are the attention" }, { "end": 520.36, "start": 512.8, "text": " heads. Okay. And then the next layer would again have the same structure, four of these" }, { "end": 528.76, "start": 520.36, "text": " and then one of these and so on with the skip connection. All right. So the pruning in BERT" }, { "end": 533.96, "start": 528.76, "text": " is different than pruning in the feed forward or convolutional layer that we looked at." }, { "end": 540.2, "start": 533.96, "text": " Pruning in BERT to what this paper looks at is pruning either an entire attention head" }, { "end": 546.68, "start": 540.2, "text": " like this. So kind of leaving out in the entire head away, which is, this is an entire matrix." }, { "end": 552.3199999999999, "start": 546.68, "text": " This is not a single weight, right? This is many, many weights or even more drastically" }, { "end": 558.48, "start": 552.3199999999999, "text": " leaving away an entire MLP and basically only relying on the skip connection. All right." }, { "end": 565.12, "start": 558.48, "text": " So this, you have these two things you can do. You can leave away heads or you can leave" }, { "end": 571.66, "start": 565.12, "text": " away entire MLPs or you can combine these things in some way. Right. So the notable" }, { "end": 579.5600000000001, "start": 571.66, "text": " difference here to the lottery ticket hypothesis pruning is the fact that here over up here," }, { "end": 587.9200000000001, "start": 579.5600000000001, "text": " what we prune are connections. So prune connections, individual connections, individual connections." }, { "end": 597.64, "start": 587.92, "text": " And here we prune entire modules. Now this is a, in my opinion, this is a qualitative" }, { "end": 604.5999999999999, "start": 597.64, "text": " difference, a very large qualitative difference actually. Why would you do this? So this paper" }, { "end": 610.92, "start": 604.5999999999999, "text": " basically doesn't invent this kind of pruning. They go after already existing literature." }, { "end": 615.9399999999999, "start": 610.92, "text": " So what's the advantage in pruning modules? Well, you have to see what's the advantage" }, { "end": 622.8800000000001, "start": 615.94, "text": " in pruning per se. So in pruning, what you're trying to do is obtain a smaller network that" }, { "end": 629.2, "start": 622.8800000000001, "text": " gives you the same accuracy, but that you can run faster, right? That uses less memory" }, { "end": 636.32, "start": 629.2, "text": " and you can run it faster. And if you prune like this, like we did in the lottery ticket" }, { "end": 640.6400000000001, "start": 636.32, "text": " hypothesis paper, you don't really gain anything because if you have a matrix, if you have" }, { "end": 649.72, "start": 640.64, "text": " a matrix, matrix multiply, right? I have two matrices and I multiply them right here. 
If" }, { "end": 655.72, "start": 649.72, "text": " I cut out one weight here or one weight here, it doesn't help me because I have GPUs and" }, { "end": 662.86, "start": 655.72, "text": " those will parallelize these matrix procedures. And it doesn't really help me because we don't" }, { "end": 669.2, "start": 662.86, "text": " really have good hardware for sparse or matrix, matrix multiplication or matrix, matrix multiplication" }, { "end": 674.5200000000001, "start": 669.2, "text": " with holes or things like this. So it almost gains me nothing. The lottery ticket hypothesis" }, { "end": 683.6400000000001, "start": 674.5200000000001, "text": " paper is very much a kind of more of a scientific curiosity paper. And once we have sparse matrix" }, { "end": 690.48, "start": 683.6400000000001, "text": " multiply hardware, which I think already exists, but is not super widely distributed, once" }, { "end": 696, "start": 690.48, "text": " we have that, we will be able to make use of this. Whereas the people that prune BERT," }, { "end": 702.16, "start": 696, "text": " so these are more, let's say, industry people. If you prune an entire module, well, that's" }, { "end": 708.88, "start": 702.16, "text": " an entire matrix that falls away. So I have to, I can basically save an entire matrix," }, { "end": 715.52, "start": 708.88, "text": " matrix multiply in the forward pass here and the backward pass. Well, okay, I don't prune" }, { "end": 721.38, "start": 715.52, "text": " during training, but I can basically save an entire matrix multiply here by pruning" }, { "end": 728.76, "start": 721.38, "text": " an entire module. So I'm not sure if I were an author and I say I want to look at BERT" }, { "end": 734.36, "start": 728.76, "text": " in terms of the lottery ticket hypothesis, I would find a way, I would go away from this" }, { "end": 739.16, "start": 734.36, "text": " literature and find a way to also just prune here individual weights. It's not going to" }, { "end": 745.36, "start": 739.16, "text": " be faster, but the lottery ticket investigations aren't supposed to be faster, they're supposed" }, { "end": 751.36, "start": 745.36, "text": " to tell you something about the nature of the things you're investigating. And of course" }, { "end": 759.48, "start": 751.36, "text": " how you do this is simply by masking, right? You simply force these entries to be zero" }, { "end": 764.6800000000001, "start": 759.48, "text": " and therefore you don't have forward signal, you don't have gradient. Interestingly, they" }, { "end": 770.24, "start": 764.6800000000001, "text": " actually do the masking, but they do it on the whole entire module level. Okay, so this" }, { "end": 775.96, "start": 770.24, "text": " was BERT and the lottery ticket hypothesis and the all tickets are winning, we're going" }, { "end": 782.72, "start": 775.96, "text": " to investigate later. Let's see what they say in the abstract. Say much of the recent" }, { "end": 788.84, "start": 782.72, "text": " success in NLP is due to the large transformer based models such as BERT. Okay, they say" }, { "end": 793.48, "start": 788.84, "text": " however these models have been shown to be reducible to a smaller number of self-attention" }, { "end": 799.24, "start": 793.48, "text": " heads and layers. So this would be pruning. We consider this phenomenon from the perspective" }, { "end": 804, "start": 799.24, "text": " of the lottery ticket hypothesis. 
For fine-tuned BERT we show that, here's the contributions," }, { "end": 810.4, "start": 804, "text": " A, it is possible to find a sub network of elements that achieves performance comparable" }, { "end": 815.32, "start": 810.4, "text": " with that of the full model. So basically this is the pruning objective, right? You" }, { "end": 820.56, "start": 815.32, "text": " want to prune it such that the performance holds and in terms of the lottery ticket hypothesis" }, { "end": 826.76, "start": 820.56, "text": " you want to prune, reset to the beginning and then also and then train again and that" }, { "end": 833.48, "start": 826.76, "text": " will give you, actually in the lottery ticket hypothesis you can gain performance if you" }, { "end": 843.9200000000001, "start": 833.48, "text": " prune by a certain amount. In this case here they always lose performance but yeah. So" }, { "end": 850.08, "start": 843.9200000000001, "text": " second of all similarly sized sub networks sampled from the rest of the model so the" }, { "end": 859.2, "start": 850.08, "text": " non-winning ticket perform worse. So if you just prune away the good parts then the bad" }, { "end": 867.44, "start": 859.2, "text": " parts perform worse of course. However the bad sub networks can be fine-tuned separately" }, { "end": 873.6600000000001, "start": 867.44, "text": " to achieve only slightly worse performance than the good ones indicating that most weights" }, { "end": 880.6800000000001, "start": 873.6600000000001, "text": " in the pre-trained BERT are potentially useful. So this is interesting. If they be fine-tuned" }, { "end": 886.2, "start": 880.6800000000001, "text": " separately this is exactly what the lottery ticket hypothesis is doing, right? It's basically" }, { "end": 893.8000000000001, "start": 886.2, "text": " fine-tuning only a sub part of the network and here they say even if we take the parts" }, { "end": 902.48, "start": 893.8000000000001, "text": " of the network that have low scores for pruning and we retrain those then we can achieve a" }, { "end": 911.7, "start": 902.48, "text": " good performance. So further they say we also show that the good sub networks vary considerably" }, { "end": 917.6800000000001, "start": 911.7, "text": " across glue tasks. This is this benchmark opening up the possibilities to learn what" }, { "end": 926.12, "start": 917.6800000000001, "text": " knowledge BERT actually uses at inference time. Alright so this is the overview of the" }, { "end": 932.88, "start": 926.12, "text": " paper. So a last thing to say which I've already kind of alluded to is the fact that in the" }, { "end": 938.48, "start": 932.88, "text": " original lottery ticket hypothesis as I said you had a graph and you had some sort of here" }, { "end": 945.16, "start": 938.48, "text": " was 100% accuracy and here was how much you prune. Of course you start at 100% if you" }, { "end": 950.88, "start": 945.16, "text": " prune nothing but then as you prune the interesting thing is it kind of goes up and then it goes" }, { "end": 958, "start": 950.88, "text": " down. So this is the first thing here it goes up to a certain amount if you don't prune" }, { "end": 964.5, "start": 958, "text": " and in the original lottery ticket hypothesis here somewhere here would be 50% of the network" }, { "end": 972.56, "start": 964.5, "text": " I think. And then once you go down let's say here to 90% of performance you are at something" }, { "end": 980.8, "start": 972.56, "text": " like 5% of the network size or 3%. 
So you can prune away most things and still be like" }, { "end": 988.2, "start": 980.8, "text": " extremely extremely powerful. Now we're going to see what these essentially what these people" }, { "end": 996.12, "start": 988.2, "text": " do here is here is 100% and they simply prune until they reach 90%. So we don't necessarily" }, { "end": 1002.8000000000001, "start": 996.12, "text": " know what happens in the middle we just know they start here and somehow they get to 90%" }, { "end": 1009.3000000000001, "start": 1002.8000000000001, "text": " and what they end up with is something like 50% of the network still remaining. So again" }, { "end": 1015.4000000000001, "start": 1009.3000000000001, "text": " see the qualitative difference here between the 5% of the lottery tickets in the original" }, { "end": 1022.24, "start": 1015.4, "text": " paper and the 50-ish or so percent or considerable amount more in this paper right here and I'm" }, { "end": 1028.16, "start": 1022.24, "text": " pretty sure that is due to the fact that they prune entire modules here so they don't prune" }, { "end": 1035.16, "start": 1028.16, "text": " on a fine-grained enough level to investigate these phenomenon because as I said we don't" }, { "end": 1041.92, "start": 1035.16, "text": " know but I'm pretty sure this just goes down here and not up first. So qualitatively it" }, { "end": 1052.88, "start": 1041.92, "text": " seems different. Alright so here they introduce what they do again ERT is made up of these" }, { "end": 1059.76, "start": 1052.88, "text": " attention heads and MLPs. The MLPs have a skip connection as you can see here and the" }, { "end": 1066, "start": 1059.76, "text": " attention head attention layers are basically made up each of N of these attention heads." }, { "end": 1072.36, "start": 1066, "text": " What they will do is they will look at 12 layer networks. Each layer will have 12 of" }, { "end": 1080.84, "start": 1072.36, "text": " these attention heads and one of the MLPs. So you have in total 144 heads and 12 MLP" }, { "end": 1086.8, "start": 1080.84, "text": " layers. The way they determine which ones to prune is pretty easy. In front of each" }, { "end": 1093.52, "start": 1086.8, "text": " attention head and in front of each MLP they put one of these binary variables right here." }, { "end": 1101.48, "start": 1093.52, "text": " These variables can take values 0 or 1. If they are 0 the layers or the head is basically" }, { "end": 1108.36, "start": 1101.48, "text": " inactive, no propagation. If they are 1 they are active. And they determine what value" }, { "end": 1114.4, "start": 1108.36, "text": " to set them to by computing important scores. Basically determining how important is a head" }, { "end": 1120.12, "start": 1114.4, "text": " or a layer for the network. And that's pretty simple. You simply take the gradient of the" }, { "end": 1127.36, "start": 1120.12, "text": " loss. I think they go after this paper right here that's supposed to be the following." }, { "end": 1134.52, "start": 1127.36, "text": " You derive the loss by these variables right here and therefore you get these important" }, { "end": 1140.08, "start": 1134.52, "text": " scores. And then you can simply prune the layers with the lowest important scores because" }, { "end": 1145.4799999999998, "start": 1140.08, "text": " that means that the gradient with respect to them is the smallest. That means your loss" }, { "end": 1158.2, "start": 1145.48, "text": " changes the least if you were to leave them away. 
So they here determine their pruning" }, { "end": 1165.68, "start": 1158.2, "text": " strategy. Their constraint here is as I said 90% of the performance of the full model." }, { "end": 1172.76, "start": 1165.68, "text": " So they train the full model, fine tune the full model on this task and then they set" }, { "end": 1179.68, "start": 1172.76, "text": " themselves a budget of 90% and they simply prune until the model reaches 90%. Once it" }, { "end": 1187.92, "start": 1179.68, "text": " goes lower they stop. So they have three methods of pruning. One is heads only where they only" }, { "end": 1195.04, "start": 1187.92, "text": " cut away these attention heads. As I said there are 144 of them. They have the pruning" }, { "end": 1201.16, "start": 1195.04, "text": " strategy of MLPs only where they only prune the MLPs, leave all the attention heads alone." }, { "end": 1208.3600000000001, "start": 1201.16, "text": " Then they have this heads and MLPs. They say we compute head and MLP important scores in" }, { "end": 1215.44, "start": 1208.3600000000001, "text": " a single backward pass, pruning 10% heads and one MLP with the smallest scores until" }, { "end": 1224, "start": 1215.44, "text": " the performance on the dev set is within 90%. Then we continue pruning heads alone and then" }, { "end": 1231.6, "start": 1224, "text": " MLPs alone. This I guess until again they are no longer in the 90% so until they reach" }, { "end": 1238.64, "start": 1231.6, "text": " their budget. So this is a combined strategy. This strategy results in a larger number of" }, { "end": 1245.28, "start": 1238.64, "text": " total components pruned within our performance threshold. So this is the thing we should" }, { "end": 1249.96, "start": 1245.28, "text": " focus on right because in pruning the name of the game is how much can you take away" }, { "end": 1260.28, "start": 1249.96, "text": " and still be within your budget. This strategy seems to be the viable strategy here." }, { "end": 1268.32, "start": 1260.28, "text": " A last thing here is fine tuning. So the other difference between this paper and the lottery" }, { "end": 1275.96, "start": 1268.32, "text": " ticket hypothesis is that we said that in the original paper here these are randomly" }, { "end": 1279.88, "start": 1275.96, "text": " initialized weights. Like you train a class for an ImageNet or something, you start from" }, { "end": 1286.52, "start": 1279.88, "text": " randomly initialized weights and the lottery ticket papers they all kind of presuppose" }, { "end": 1292.1200000000001, "start": 1286.52, "text": " random initializations. Whereas BERT, when you do the same thing for BERT, these are" }, { "end": 1300, "start": 1292.1200000000001, "text": " not random initializations. We said in BERT what you usually do is you train the encoder" }, { "end": 1307.16, "start": 1300, "text": " part here. You pre-train with masked language modeling first and then second you train the" }, { "end": 1314.76, "start": 1307.16, "text": " entire thing. Let's skip the color here. Second you train the entire thing. You fine tune" }, { "end": 1325.52, "start": 1314.76, "text": " the entire thing. So if we talk about initializations in the BERT task then the initialization would" }, { "end": 1331.2, "start": 1325.52, "text": " be at this point right here after the masked language modeling would be the initialization." }, { "end": 1338.2, "start": 1331.2, "text": " So the weights are not random. 
The weights are actually pre-trained on the masked language" }, { "end": 1345.72, "start": 1338.2, "text": " modeling task which is also a qualitative difference and sort of lets us inspect. So" }, { "end": 1352.44, "start": 1345.72, "text": " the authors say that since we trained with masked language modeling and people sort of" }, { "end": 1358.76, "start": 1352.44, "text": " claim that masked language modeling learned something about the language, we can now investigate" }, { "end": 1366.72, "start": 1358.76, "text": " kind of which attention heads, which modules in BERT are encoding which parts of the language." }, { "end": 1371.8, "start": 1366.72, "text": " And this is going to be interesting once we look at which attention heads and which modules" }, { "end": 1377.96, "start": 1371.8, "text": " survive in the individual tasks, we can sort of compare tasks across each other by seeing" }, { "end": 1385.64, "start": 1377.96, "text": " which of the heads they share in their winning tickets. Alright, so they produce these graphs" }, { "end": 1390.2, "start": 1385.64, "text": " here. These are sort of one of the central graphs here and the way to read this is on" }, { "end": 1399.48, "start": 1390.2, "text": " the left side here you have the layer, you have the layer index and on the x-axis you" }, { "end": 1406.88, "start": 1399.48, "text": " simply have the index of the head. There are 144 boxes here. Each one corresponds to one" }, { "end": 1414.2, "start": 1406.88, "text": " of the attention heads. The top number is always the mean number of glue tasks that" }, { "end": 1420.3600000000001, "start": 1414.2, "text": " this head survived in. So what they do is they take the pre-trained BERT, they fine" }, { "end": 1426.1200000000001, "start": 1420.3600000000001, "text": " tune it on these nine tasks and for each of the nine tasks they determine the winning" }, { "end": 1436.8400000000001, "start": 1426.1200000000001, "text": " tickets. And the number here says how many, in how many of these nine tasks is this particular" }, { "end": 1442.08, "start": 1436.84, "text": " attention head a part of the winning ticket. Now they repeat it for different random seats," }, { "end": 1447.6799999999998, "start": 1442.08, "text": " that's why you have floating point numbers and the lower part is the standard deviation" }, { "end": 1455.1999999999998, "start": 1447.6799999999998, "text": " across that. So you can see quite a number of heads make it into a lot of these tasks." }, { "end": 1462.36, "start": 1455.1999999999998, "text": " So you can say this part, this thing right here, red on red, this head right here survives" }, { "end": 1469.6, "start": 1462.36, "text": " in seven out of the nine tasks. So it should be fairly, it should probably encode something" }, { "end": 1476.24, "start": 1469.6, "text": " fairly substantial about language that is shared across these seven tasks. You can see" }, { "end": 1481.6799999999998, "start": 1476.24, "text": " some of the heads like this one here doesn't survive in almost any task which basically" }, { "end": 1488.1599999999999, "start": 1481.6799999999998, "text": " means that it's, you know, that one's not really super important for these tasks. It" }, { "end": 1494.0400000000002, "start": 1488.16, "text": " might have been, you know, important for the pre-training but not for these particular" }, { "end": 1499.64, "start": 1494.0400000000002, "text": " tasks. 
What's interesting, so what you can see is that the mean or median or so is like" }, { "end": 1509.0800000000002, "start": 1499.64, "text": " three, four or five. And that means that a lot of the heads are sort of somewhat important" }, { "end": 1513.68, "start": 1509.0800000000002, "text": " for some of the tasks. And you can see the qualitative difference. If this were the like" }, { "end": 1518.92, "start": 1513.68, "text": " original lottery ticket paper, most of these numbers would be at zero because the lottery" }, { "end": 1525.24, "start": 1518.92, "text": " ticket size is just so much smaller. Here you can directly see that you are going to" }, { "end": 1532.1200000000001, "start": 1525.24, "text": " retain a large number of things in your network in order to get 90% of the performance. And" }, { "end": 1539.92, "start": 1532.1200000000001, "text": " that's probably because you prune entire modules again. So they have this for two variants" }, { "end": 1546.16, "start": 1539.92, "text": " here. First for the strategy of masking heads only. And the right one is for masking heads" }, { "end": 1551.92, "start": 1546.16, "text": " and MLPs. And the same here on the bottom. These are the same numbers but not for attention" }, { "end": 1558.4, "start": 1551.92, "text": " heads but for MLP layers. And you see again this is masking MLPs only. This is masking" }, { "end": 1568.66, "start": 1558.4, "text": " heads and MLPs. So if you compare the two, you see that for example this here and this" }, { "end": 1576.24, "start": 1568.66, "text": " here are substantially darker which means more of this stuff survives. Now we can't" }, { "end": 1583.8000000000002, "start": 1576.24, "text": " really... It seems like here for example, it's darker than here. So on the right side" }, { "end": 1589.24, "start": 1583.8000000000002, "text": " more stuff survives but also you have more things to prune, right? You can prune the" }, { "end": 1597.0800000000002, "start": 1589.24, "text": " heads and the MLPs. And they claim before that the masking heads and MLP strategy results" }, { "end": 1605.1599999999999, "start": 1597.08, "text": " in more things being pruned which isn't really congruent with here generally more things" }, { "end": 1613.04, "start": 1605.1599999999999, "text": " surviving. But it could be because of the fact maybe the sum of the two is still lower" }, { "end": 1620.28, "start": 1613.04, "text": " than the sum of each individual thing here. Though it doesn't really look like it. So" }, { "end": 1627.8799999999999, "start": 1620.28, "text": " I'm a bit confused about this but I'm just going to assume that the sum of the two is" }, { "end": 1636.16, "start": 1627.8799999999999, "text": " lower. Does that make sense if both are darker? Well it shouldn't be the sum... It should" }, { "end": 1642.86, "start": 1636.16, "text": " be the sum of this plus a completely dark this in terms of masking heads only or vice" }, { "end": 1649.86, "start": 1642.86, "text": " versa versus the sum of those two, right? So that should be the measure. But it just seems" }, { "end": 1657.12, "start": 1649.86, "text": " a bit doesn't work out too much. But okay that's what they say. So by the way if the" }, { "end": 1665.36, "start": 1657.12, "text": " authors are here you have... This is cut off. Haha. Yeah this is annoying. This is like trying" }, { "end": 1674.28, "start": 1665.36, "text": " to get LoTeC to do things and it doesn't comply. Alright so what you can... 
Another thing you" }, { "end": 1681.28, "start": 1674.28, "text": " can see in the authors point out here is that if you mask heads and MLPs you sort of shift" }, { "end": 1687.32, "start": 1681.28, "text": " more things to the back of the network to the higher up layers. And they reason now" }, { "end": 1695.84, "start": 1687.32, "text": " because you also mask the heads basically they can't do as much work so the heads would" }, { "end": 1702.8799999999999, "start": 1695.84, "text": " be masked somewhere here. So all that work is going to shift upon the MLPs and mostly" }, { "end": 1709.5200000000002, "start": 1702.88, "text": " to the back of the network because this thing here cannot take over work that this attention" }, { "end": 1714.6000000000001, "start": 1709.5200000000002, "text": " head here is now not performing anymore because it was pruned because the signal travels this" }, { "end": 1721.7600000000002, "start": 1714.6000000000001, "text": " way. So the authors kind of interpret these results right here and I think the most important" }, { "end": 1727.0800000000002, "start": 1721.7600000000002, "text": " thing to see is simply the variance of things. So most heads are actually important for at" }, { "end": 1735.04, "start": 1727.08, "text": " least two or three tasks and no head is important for all the tasks consistently. I think that's" }, { "end": 1744.8799999999999, "start": 1735.04, "text": " the take home message right here. Okay and they contrast this to previous research that" }, { "end": 1750.36, "start": 1744.8799999999999, "text": " has basically said this experiment falls up on a study by this that showed that only a" }, { "end": 1756.6399999999999, "start": 1750.36, "text": " few transformer heads in machine translation tasks did the heavy lifting while the rest" }, { "end": 1762.0800000000002, "start": 1756.64, "text": " could be pruned. And this paper similarly showed that most of BERT self-attention head" }, { "end": 1767.48, "start": 1762.0800000000002, "text": " in MNLI task could be pruned and that the good heads were mostly shared between the" }, { "end": 1774.8000000000002, "start": 1767.48, "text": " MNLI matched and mismatched. And they basically say yeah that's correct but that is only within" }, { "end": 1781.1200000000001, "start": 1774.8000000000002, "text": " one task right if you go beyond if you go to several tasks then the heads that are important" }, { "end": 1797, "start": 1781.12, "text": " differ quite a bit. Okay so let's continue and go here. They ask how task independent" }, { "end": 1805.1399999999999, "start": 1797, "text": " are the good self-nerve works and they basically look at these kinds of graphs right here which" }, { "end": 1813.16, "start": 1805.14, "text": " are pretty interesting. So we've got this. This is heads shared between tasks. So what" }, { "end": 1821.16, "start": 1813.16, "text": " this measures is these are the different tasks in the glue benchmark and they basically look" }, { "end": 1828.3600000000001, "start": 1821.16, "text": " at each task look at its winning lottery ticket and look at which heads survive in the winning" }, { "end": 1837.12, "start": 1828.36, "text": " ticket. And then they put that here on the diagonal. So if in QNLI a head survives it" }, { "end": 1845, "start": 1837.12, "text": " gets a 1 here and if it doesn't survive it gets a 0. So on average 85 out of the 144" }, { "end": 1851.9199999999998, "start": 1845, "text": " heads survive right 85 heads survive. 
That's pretty as I said this is somewhat like over" }, { "end": 1857.56, "start": 1851.9199999999998, "text": " 50% of the network. It's entirely different than the original lottery ticket hypothesis" }, { "end": 1866.6399999999999, "start": 1857.56, "text": " paper. So 85% not percent 85 of the 144 heads survive. Now they look at the other tasks" }, { "end": 1874.1599999999999, "start": 1866.6399999999999, "text": " so for QNLI they would look at maybe MNLI task here and ask which of the heads that" }, { "end": 1880.3799999999999, "start": 1874.1599999999999, "text": " survived in QNLI also survives in MNLI. So that gets you the shared heads and again the" }, { "end": 1890.0800000000002, "start": 1880.38, "text": " lower numbers is standard deviation. So 62 heads survive in QNLI and MNLI and the authors" }, { "end": 1896.0800000000002, "start": 1890.0800000000002, "text": " here are sort of arguing that from these sort of numbers you should be able to see which" }, { "end": 1904.0800000000002, "start": 1896.0800000000002, "text": " of the tasks share different linguistic knowledge. So different linguistic knowledge could be" }, { "end": 1912.72, "start": 1904.08, "text": " relevant for different tasks but if some tasks share a lot of the attention heads that survive" }, { "end": 1918.8799999999999, "start": 1912.72, "text": " in the winning tickets that basically means that the model is using that information that" }, { "end": 1924.32, "start": 1918.8799999999999, "text": " is in that head for both tasks. This could be good in that you say oh yeah these tasks" }, { "end": 1930.4399999999998, "start": 1924.32, "text": " really are used similar linguistic features or it could be something that you don't expect" }, { "end": 1934.8400000000001, "start": 1930.44, "text": " and then you might be able to investigate maybe the model is doing something shady here" }, { "end": 1941.92, "start": 1934.8400000000001, "text": " because it really shouldn't, these tasks don't really have much in common. So they do this" }, { "end": 1949.28, "start": 1941.92, "text": " for the heads and the MLPs here. Now if you ask why the WNLI here has a bunch of zeros" }, { "end": 1954.3600000000001, "start": 1949.28, "text": " that's because it's a wonky task and basically the best thing you can do is predict the most" }, { "end": 1960.2, "start": 1954.3600000000001, "text": " frequent class. So you can prune just about anything away on these MLPs because they have" }, { "end": 1967.8, "start": 1960.2, "text": " the skip connections you don't need them to predict the most frequent class. What I want" }, { "end": 1980.88, "start": 1967.8, "text": " to go into is the following statement right here. So, note that figure one, so the figure" }, { "end": 1987.3600000000001, "start": 1980.88, "text": " before, shows very few heads or MLPs that are universally useless. Only seven heads" }, { "end": 1993.9199999999998, "start": 1987.36, "text": " that survived in less than two tasks. 86% of heads and 67% of MLPs survived in two to" }, { "end": 2000.24, "start": 1993.9199999999998, "text": " seven tasks with relatively high standard deviation. They say this means that the good" }, { "end": 2011.12, "start": 2000.24, "text": " sub networks for different tasks have relatively little in common. 
So they make this sort of" }, { "end": 2018.52, "start": 2011.12, "text": " statement again here that the good sub network have little in common and it might seem like" }, { "end": 2027.04, "start": 2018.52, "text": " that for the figure initially. But if you look at this figure it actually shows something" }, { "end": 2036.76, "start": 2027.04, "text": " pretty interesting I think. So if you look at a number, let's say for example this here," }, { "end": 2046.2, "start": 2036.76, "text": " this 74 and I haven't actually tried. Yeah let's look at the 74 and this here. So let's" }, { "end": 2056.92, "start": 2046.2, "text": " look at these tasks QQP and RTE. Okay, so if you look at QQP and RTE you could see that" }, { "end": 2063.56, "start": 2056.92, "text": " these are tasks that already they don't have a lot of heads in common right and you might" }, { "end": 2071.16, "start": 2063.56, "text": " be able to say well if what they're saying is true that the tasks share relatively little" }, { "end": 2079.2, "start": 2071.16, "text": " you would expect them to be relatively independent. But if I look at this 78 here it means that" }, { "end": 2090.88, "start": 2079.2, "text": " 78 out of 144 heads survive and here it means that 74 out of 144 heads survive. So if I" }, { "end": 2098.44, "start": 2090.88, "text": " now would think that okay generally for different tasks things are different how many heads" }, { "end": 2105.1400000000003, "start": 2098.44, "text": " would I expect there to be surviving in both if the tasks are independent. So that's these" }, { "end": 2114.08, "start": 2105.1400000000003, "text": " two things multiplied right times 144. So I can scratch this here and the 7 times 7 is" }, { "end": 2125.4, "start": 2114.08, "text": " whatever 49. Let's go 7 times 8 about this so that's 5, 6. Do I need to get out the calculator?" }, { "end": 2136.7999999999997, "start": 2125.4, "text": " I want to do this calculator. I'm going to do this the right way. Okay I hope you can" }, { "end": 2148.2400000000002, "start": 2136.8, "text": " see that so that's 78 times 74 divided by 144. Did I do it right? I probably did it" }, { "end": 2161.32, "start": 2148.2400000000002, "text": " wrong. 78 times 74 divided by 144. Alright so that's 40 heads and you see that there's" }, { "end": 2166.6800000000003, "start": 2161.32, "text": " 43 heads and I've actually gone through a bunch of these numbers before not these ones" }, { "end": 2174.92, "start": 2166.6800000000003, "text": " but generally the shared number of heads is higher than what one would expect if you assume" }, { "end": 2182.32, "start": 2174.92, "text": " that the tasks are independent. And I'm sort of missing sort of an analysis of that here" }, { "end": 2190.0800000000004, "start": 2182.32, "text": " because that I find to be a pretty interesting finding of these things and sort of I mean" }, { "end": 2197.2, "start": 2190.08, "text": " I get the fact that they say based on the graphics up here that the tasks are sort of" }, { "end": 2201.7999999999997, "start": 2197.2, "text": " seem to be relatively independent with respect to the heads that survive and of course relatively" }, { "end": 2212.2, "start": 2201.7999999999997, "text": " independent is a relative term but it's sort of an investigation into why we see considerable" }, { "end": 2220.2, "start": 2212.2, "text": " difference between tasks here in terms of that. 
So these numbers are always over what" }, { "end": 2230.24, "start": 2220.2, "text": " you would assume for independence. That seems to be pretty interesting. Alright so they" }, { "end": 2238.7599999999998, "start": 2230.24, "text": " say they here go into this figure two and this pairwise comparison and they analyze" }, { "end": 2246.2000000000003, "start": 2238.76, "text": " a couple of the different tasks here and what you would expect and I don't want to go too" }, { "end": 2250.76, "start": 2246.2000000000003, "text": " much into these tasks because honestly I also don't know all of these tasks. I don't know" }, { "end": 2255.92, "start": 2250.76, "text": " which tasks should share a lot of things, which ones shouldn't but it is a good way." }, { "end": 2261.48, "start": 2255.92, "text": " Like it is a very smart way to investigate if the model really learns similar tasks to" }, { "end": 2267.88, "start": 2261.48, "text": " use similar information. Alright the last thing they do right here is the good and the" }, { "end": 2274.6400000000003, "start": 2267.88, "text": " bad subnetworks in BERT fine tuning. So they say our final experiment puts the above evidence" }, { "end": 2280.6800000000003, "start": 2274.6400000000003, "text": " of good subnetworks in BERT fine tuned from the perspective of lottery ticket hypothesis" }, { "end": 2285.7200000000003, "start": 2280.6800000000003, "text": " which predicts that the lucky subnetworks can be retrained from scratch to match the" }, { "end": 2292.56, "start": 2285.7200000000003, "text": " performance of the full network. To test this hypothesis we experiment with the following" }, { "end": 2298.44, "start": 2292.56, "text": " subnetworks. So that means I wasn't really sure when I read it the first time but now" }, { "end": 2307.56, "start": 2298.44, "text": " I'm fairly sure that all of the results so far were just pruning and maybe not retraining." }, { "end": 2315.68, "start": 2307.56, "text": " So just sort of doing the pruning thing and not doing this lottery ticket retraining which" }, { "end": 2322.48, "start": 2315.68, "text": " shouldn't make a lot of the difference as we're going to see but just for the understanding." }, { "end": 2328.68, "start": 2322.48, "text": " Because it seems like pruning and retraining doesn't do that much for the winning tickets" }, { "end": 2337.92, "start": 2328.68, "text": " as you'll see right now. But yeah so now they actually retrain from scratch. So good networks" }, { "end": 2343.8, "start": 2337.92, "text": " the elements selected from the full model by importance scores as described in the previous" }, { "end": 2349.32, "start": 2343.8, "text": " section. So here they're going to evaluate these good networks. First of all they're" }, { "end": 2354.7200000000003, "start": 2349.32, "text": " going to evaluate them pruned and they're going to evaluate them retrained in the lottery" }, { "end": 2362.84, "start": 2354.7200000000003, "text": " ticket style. Then they're also going to evaluate bad subnetworks. The elements sampled from" }, { "end": 2369.6000000000004, "start": 2362.84, "text": " those that did not survive the pruning plus a random sample of elements with high importance" }, { "end": 2375.2000000000003, "start": 2369.6000000000004, "text": " score so as to match the size of the good subnetworks. 
So because the good subnetworks" }, { "end": 2386.04, "start": 2375.2, "text": " are 50% or more of the network they want to sample from the things that did not survive" }, { "end": 2392, "start": 2386.04, "text": " so from the bad ones and they plus a random sample of the good subnetworks to just match" }, { "end": 2399.9199999999996, "start": 2392, "text": " the size. So we would expect these to perform maybe worse but maybe we can also train them" }, { "end": 2407.4, "start": 2399.92, "text": " to achieve good performance. And then they investigate bad subnetworks. Simple inversion" }, { "end": 2412.36, "start": 2407.4, "text": " of the good subnetworks. So these would be just anything but the good. They are 5 to" }, { "end": 2419.3, "start": 2412.36, "text": " 18% smaller in size than the sampled bad subnetworks but they do not contain any elements with" }, { "end": 2426.1, "start": 2419.3, "text": " high importance scores. And they say okay for all of them they evaluate their performance" }, { "end": 2432.24, "start": 2426.1, "text": " on all tasks simply after pruning and with fine tuning the same subnetwork with the same" }, { "end": 2437.2, "start": 2432.24, "text": " random seeds and with the rest of the model of masks. So this is really what the lottery" }, { "end": 2443.56, "start": 2437.2, "text": " ticket hypothesis does except they of course mask entire modules and not individual weights." }, { "end": 2449.64, "start": 2443.56, "text": " And here you can see the general results. So the general results look like something" }, { "end": 2458.72, "start": 2449.64, "text": " like this. This is a typical example. So this is the, let's go out, oh yeah, this here is" }, { "end": 2465.08, "start": 2458.72, "text": " simply the dumb classifier that always tells the highest probability class. This is the," }, { "end": 2473.7999999999997, "start": 2465.08, "text": " like this is sort of the idiot's baseline. Okay. This here is the full model. This here" }, { "end": 2482.04, "start": 2473.8, "text": " is the good pruned and this here is the good after it's retrained again. Okay. So you see" }, { "end": 2486.7400000000002, "start": 2482.04, "text": " by retraining you can basically gain. Now the original lottery ticket this would sometimes" }, { "end": 2491.8, "start": 2486.7400000000002, "text": " even go up here depending on how much was pruned but you can see that there is a slight" }, { "end": 2499.8, "start": 2491.8, "text": " gain after you retrain the pruned part. Okay. And the other thing to note here is that you" }, { "end": 2507.5600000000004, "start": 2499.8, "text": " don't lose much. Basically you only drop a little bit by pruning which that's what makes" }, { "end": 2514.4, "start": 2507.5600000000004, "text": " it the good part. You only drop a little bit. However, if you have the bad part which are" }, { "end": 2522.0800000000004, "start": 2514.4, "text": " these and let's say the good plus bad. These are the bad plus some of the good ones. You" }, { "end": 2530.24, "start": 2522.08, "text": " see that the performance drops pretty heavily almost to the baseline of the most frequent" }, { "end": 2539.04, "start": 2530.24, "text": " class and also here. So I would actually, I would go with this one right here if, because" }, { "end": 2545.3199999999997, "start": 2539.04, "text": " that's just the bad ones. You see the performance drops considerably but then, and that's what" }, { "end": 2551.08, "start": 2545.3199999999997, "text": " the authors claim is pretty interesting. 
If you retrain that part, the bad part, so to" }, { "end": 2557.96, "start": 2551.08, "text": " say, you can achieve sort of a very comparable performance to what you can achieve with the" }, { "end": 2565.84, "start": 2557.96, "text": " good parts. And this appears to be true for most of the results right here. There are" }, { "end": 2570.68, "start": 2565.84, "text": " some outliers like this one but there the score is also, so this is the Matthews correlation" }, { "end": 2576.68, "start": 2570.68, "text": " and not the accuracy. So the score is a bit different there. But you can see here the" }, { "end": 2584.52, "start": 2576.68, "text": " good plus bad also gets a fairly high accuracy. So the authors claim this is pretty surprising" }, { "end": 2589.6, "start": 2584.52, "text": " which I guess it is if you look at this. But what I want to do is I actually want, I have" }, { "end": 2596.74, "start": 2589.6, "text": " asked the author of the lottery ticket hypothesis this question. So this is from our machine" }, { "end": 2604.96, "start": 2596.74, "text": " learning street talk with Jonathan Franco and this is another channel that I am a part" }, { "end": 2613.4, "start": 2604.96, "text": " of and I would like to show you this right here when I ask them this question." }, { "end": 2619.5, "start": 2613.4, "text": " Another question from Reddit, Imnemo asks, suppose you try to construct a lottery ticket" }, { "end": 2626.28, "start": 2619.5, "text": " by taking all the weights that were not part of a winning ticket and retraining from those," }, { "end": 2631.32, "start": 2626.28, "text": " will that model be unable to learn the task or might there be another winning ticket hiding" }, { "end": 2636.28, "start": 2631.32, "text": " among them or one that was not originally used?" }, { "end": 2642.44, "start": 2636.28, "text": " So this is the most common question I get by people who read the original paper and" }, { "end": 2647.8, "start": 2642.44, "text": " I hope that by answering it here in a public forum I can answer it once and for all. The" }, { "end": 2652.0800000000004, "start": 2647.8, "text": " challenge in doing this experiment is let us take the MNIST example. So suppose that" }, { "end": 2656.6000000000004, "start": 2652.0800000000004, "text": " we find a winning ticket on MNIST. It is going to be about 3% of the original size of the" }, { "end": 2661.72, "start": 2656.6, "text": " network. So that means that if you remove it you still get 97% of the weights left." }, { "end": 2665.56, "start": 2661.72, "text": " And so my guess is that if you were to train those 97% of weights you will get to the same" }, { "end": 2669.04, "start": 2665.56, "text": " accuracy as you got with 100% of weights because you have barely pruned the network at all." }, { "end": 2672.3199999999997, "start": 2669.04, "text": " You could randomly prune by 3% and it would not affect it. And then you could go and find" }, { "end": 2676.6, "start": 2672.3199999999997, "text": " another lottery ticket that is mutually exclusive with the first. You still have 94% of the" }, { "end": 2681.24, "start": 2676.6, "text": " weights and you could probably iterate this for a very long time. You could probably this" }, { "end": 2689.2, "start": 2681.24, "text": " way find 10, 15 lottery tickets like this, maybe more, that are all mutually exclusive" }, { "end": 2695.04, "start": 2689.2, "text": " and still leave you with a remaining residual that is capable of training to full accuracy." 
}, { "end": 2699.16, "start": 2695.04, "text": " So the challenge with this experiment is that the lottery tickets are small, which is great," }, { "end": 2702.7999999999997, "start": 2699.16, "text": " but it means that whatever is left is large enough that I am sure there is another lottery" }, { "end": 2706.56, "start": 2702.7999999999997, "text": " ticket in there and another lottery ticket in there and so on and so on and so on. So" }, { "end": 2714.12, "start": 2706.56, "text": " it is an interesting idea in principle, but once you kind of look at the sizes of things" }, { "end": 2718.6, "start": 2714.12, "text": " you still got so much over-parameterization left that I think you just find more lottery" }, { "end": 2722.16, "start": 2718.6, "text": " tickets. You can even probably, I am guessing, swap out one weight from a lottery ticket" }, { "end": 2725.88, "start": 2722.16, "text": " with another weight and it would not matter or swap out a handful of weights. And so combinatorially" }, { "end": 2730.88, "start": 2725.88, "text": " the number of lottery tickets is massive and we are just finding one." }, { "end": 2740.08, "start": 2730.88, "text": " All right, so as you saw this is kind of the most common question that Jonathan gets here." }, { "end": 2746.2000000000003, "start": 2740.08, "text": " And as you can see the difference here of course is that our original tickets are already" }, { "end": 2753.7200000000003, "start": 2746.2000000000003, "text": " sort of 50% of the network, so what is left is only 50%. So this is substantially different." }, { "end": 2763.8399999999997, "start": 2753.72, "text": " Now two things I have to remark here. First of all, because we are pruning modules and" }, { "end": 2771.2, "start": 2763.8399999999997, "text": " not individual weights for the good one, it is the reason that we do get these big winning" }, { "end": 2778.04, "start": 2771.2, "text": " tickets, right? But also what I think is happening is that because we are pruning these entire" }, { "end": 2785.64, "start": 2778.04, "text": " modules we are actually not fine-grained enough. So that means every time we eliminate a module" }, { "end": 2792.84, "start": 2785.64, "text": " we actually kill some good ones and some bad ones. So in here I am going to guess there" }, { "end": 2800.68, "start": 2792.84, "text": " are some good ones and there are some bad ones. But since we can only kill entire modules," }, { "end": 2808, "start": 2800.68, "text": " we sort of, we simply kill the one that on average has the most good ones. But I am guessing" }, { "end": 2817.28, "start": 2808, "text": " that in the thing we kill there are simply, sorry, we kill the one that has on average" }, { "end": 2822.28, "start": 2817.28, "text": " the least good ones. But there are still some good weights in there. And if you believe" }, { "end": 2829.48, "start": 2822.28, "text": " the original lottery ticket hypothesis that means that these, actually these very few" }, { "end": 2839.16, "start": 2829.48, "text": " weights in those modules can still train to full accuracy. So actually what these authors" }, { "end": 2843.88, "start": 2839.16, "text": " claim is surprising in light of the original lottery ticket hypothesis. 
I think if you" }, { "end": 2849.48, "start": 2843.88, "text": " look at it from the perspective of the actual hypothesis which considers individual weight" }, { "end": 2857.72, "start": 2849.48, "text": " and a very small subset of them, the original hypothesis would pretty much predict that" }, { "end": 2865.12, "start": 2857.72, "text": " you could train something where you pruned away a bunch of modules entirely. Or you could" }, { "end": 2872.8399999999997, "start": 2865.12, "text": " train these bad modules because they are still going to contain a small-sized lottery ticket" }, { "end": 2878.4399999999996, "start": 2872.8399999999997, "text": " that is going to be responsible for the good performance. So that is kind of the first" }, { "end": 2883.48, "start": 2878.4399999999996, "text": " thing. And the second thing, in general, you heard Jonathan, I do not think that is actually" }, { "end": 2890.6, "start": 2883.48, "text": " even a question of the size of the tickets. Nothing in the original hypothesis forbids" }, { "end": 2896.2400000000002, "start": 2890.6, "text": " the non-winning ticket from also being trained to good accuracy. It simply says something" }, { "end": 2902.96, "start": 2896.2400000000002, "text": " about the winning ticket. It does not say anything about the non-winning ticket. So" }, { "end": 2908.76, "start": 2902.96, "text": " those are the two comments. And I think the question and the investigation, even though" }, { "end": 2917.88, "start": 2908.76, "text": " it is interesting, I think it is sort of maybe not thought through, at least in the perspective" }, { "end": 2925.2400000000002, "start": 2917.88, "text": " of what they go for here. The result is very interesting. But again, I think they claim" }, { "end": 2929.8, "start": 2925.2400000000002, "text": " the original hypothesis would sort of say these are the bad parts and you could not" }, { "end": 2936.5200000000004, "start": 2929.8, "text": " train them. And then they say it is surprising that you can. But I would say that the original" }, { "end": 2941.96, "start": 2936.52, "text": " hypothesis would in fact predict that you could train those things because you have" }, { "end": 2948.28, "start": 2941.96, "text": " pruned away these entire modules, which is very coarse-grained and that leaves still" }, { "end": 2954.84, "start": 2948.28, "text": " good weights in the bad parts. Okay. So they conclude. However, we can see that both good" }, { "end": 2961, "start": 2954.84, "text": " and bad networks can be retrained with comparable performance for many tasks. The inverted bad" }, { "end": 2965.16, "start": 2961, "text": " networks perform worse than the sampled ones, but that could be due to them being smaller" }, { "end": 2974.52, "start": 2965.16, "text": " in size. Performance of all inverted bad networks on call is almost zero. Okay. Yeah, okay. Very" }, { "end": 2980.44, "start": 2974.52, "text": " little remains when that mask is inverted. That is the task we looked at because they" }, { "end": 2987.56, "start": 2980.44, "text": " claim that is so small, which makes sense, right? So discussion. Say, does BERT have" }, { "end": 2992.8399999999997, "start": 2987.56, "text": " bad subnetworks? 
The key result of this study is that as far as fine-tuning is concerned," }, { "end": 2997.96, "start": 2992.84, "text": " BERT does not seem to have bad subnetworks that cannot be retrained to relatively good" }, { "end": 3003.4, "start": 2997.96, "text": " performance level, suggesting that the weights that do not survive pruning are not just inactive." }, { "end": 3007.2400000000002, "start": 3003.4, "text": " However, it is important to remember that we consider elements of BERT architecture" }, { "end": 3011.6400000000003, "start": 3007.2400000000002, "text": " as atomic units, while the original lottery ticket work relied on magnitude pruning of" }, { "end": 3017.48, "start": 3011.6400000000003, "text": " individual weights. So they're well aware here of these differences, and they speak" }, { "end": 3023.72, "start": 3017.48, "text": " to that right here. So that's good. On that level, BERT probably" }, { "end": 3029.48, "start": 3023.72, "text": " does have bad subnetworks, and they show that they can be found in the transformer model with global" }, { "end": 3034.2, "start": 3029.48, "text": " iterative pruning. We'll leave it to future research to find out to what extent the effective" }, { "end": 3039.48, "start": 3034.2, "text": " subnetworks overlap with the effective architectural blocks, and what that says about the architecture" }, { "end": 3047.2400000000002, "start": 3039.48, "text": " of BERT and the other transformers. So as you see, they're well aware that all of what I" }, { "end": 3056.04, "start": 3047.24, "text": " said is the case. So it's not like I'm criticizing and saying they're wrong. It's just that if you" }, { "end": 3066.52, "start": 3056.04, "text": " read it, you sort of get the impression that this is what they're saying. And I think the light in" }, { "end": 3073.64, "start": 3066.52, "text": " which a reader goes through it is just a bit such that you come off, if you don't read until here," }, { "end": 3080.92, "start": 3073.64, "text": " you come off thinking something else. Our results suggest that most architecture blocks of BERT are" }, { "end": 3085.24, "start": 3080.92, "text": " potentially usable in fine tuning. This should not be interpreted as proof that they all encode" }, { "end": 3092.68, "start": 3085.24, "text": " potentially relevant linguistic information. That's absolutely true. It is also possible that" }, { "end": 3097.8799999999997, "start": 3092.68, "text": " pre-training somehow simply made them more amenable to optimization, which is another question for" }, { "end": 3105, "start": 3097.88, "text": " future research. And they go into what do different BERT components do in the different things. So" }, { "end": 3111.1600000000003, "start": 3105, "text": " again, I think this work here is actually most relevant for investigating this question, what do" }, { "end": 3116.76, "start": 3111.1600000000003, "text": " BERT components, the different BERT components do for the different tasks, to look which tasks use" }, { "end": 3126.84, "start": 3116.76, "text": " which things. And the actual recognition that none of these modules is useless, I would consider a" }, { "end": 3133.2400000000002, "start": 3126.84, "text": " pretty cool finding. Okay, so in conclusion, they say prior work shows that it was possible to prune" }, { "end": 3137.48, "start": 3133.2400000000002, "text": " most self-attention heads. We extend this to the fully connected layers. 
We show fine-tuned BERT has" }, { "end": 3141.48, "start": 3137.48, "text": " good and bad subnetworks, where the good heads and MLPs alone reach performance comparable with" }, { "end": 3146.2000000000003, "start": 3141.48, "text": " the full network, and the bad ones do not perform well. However, this pattern does not quite conform" }, { "end": 3151.1600000000003, "start": 3146.2000000000003, "text": " to the lottery ticket hypothesis. Both good and bad networks can be fine tuned separately to reach" }, { "end": 3159.64, "start": 3151.16, "text": " comparable performance. We also show that 86% of heads and 57% of MLPs in good subnetworks are" }, { "end": 3164.52, "start": 3159.64, "text": " not universally useful across GLUE tasks, and overlap between good subnetworks does not" }, { "end": 3171.64, "start": 3164.52, "text": " necessarily correspond to task types. So that's something we didn't go into. This raises questions" }, { "end": 3176.68, "start": 3171.64, "text": " about the degree to which fine-tuned BERT relies on task specific or general linguistic knowledge" }, { "end": 3181.24, "start": 3176.68, "text": " and opens up the possibility of studying the good subnetworks to see what types of knowledge BERT" }, { "end": 3187.56, "start": 3181.24, "text": " actually relies on at inference time. So this is sort of a future research direction. And with that," }, { "end": 3193.8799999999997, "start": 3187.56, "text": " I think we've gone through the paper. I hope you got something useful out of this. I think it's a" }, { "end": 3199.7999999999997, "start": 3193.8799999999997, "text": " pretty cool paper. It's a pretty cool methodology, and I think a lot of work can build upon this to" }, { "end": 3206.6000000000004, "start": 3199.8, "text": " do interesting analysis of these language models. Again, if you like this video, consider sharing it," }, { "end": 3231.4, "start": 3206.6, "text": " subscribing, liking, and bye bye." } ]
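To make the module-masking setup discussed in the transcript concrete, here is a minimal sketch in PyTorch. This is not the paper's code: the importance scores, module count, and mask convention are placeholder assumptions, while the paper computes importance from the fine-tuned model and masks BERT's attention heads and MLPs.

import torch

def split_subnetworks(importance: torch.Tensor):
    # "Good" subnetwork: modules (heads / MLPs) at or above the median
    # importance, which is why it ends up being ~50% of the network.
    good = (importance >= importance.median()).float()
    bad = 1.0 - good  # simple inversion of the good mask
    return good, bad

def masked_forward(module_outputs: torch.Tensor, mask: torch.Tensor):
    # Zero out entire modules rather than individual weights -- the
    # coarse-grained pruning discussed above, so some useful weights
    # inevitably survive inside the "bad" modules.
    return module_outputs * mask.unsqueeze(-1)

torch.manual_seed(0)          # reuse the same seed when retraining each variant
scores = torch.rand(12)       # placeholder: one importance score per attention head
good_mask, bad_mask = split_subnetworks(scores)
outputs = torch.randn(12, 8)  # placeholder per-head outputs
print(masked_forward(outputs, good_mask).shape)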
utuz7wBGjKM
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
[News] OpenAI Model Generates Python Code
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "microsoft", "openai", "msbuild", "build", "code", "gpt2", "language model", "completion", "intellisense", "intellicode", "vscode", "github", "python", "code completion", "smart", "generate", "function body", "docstring", "name", "arguments", "programmer", "stackoverflow", "dataset", "interpolate" ]
This code completion engine can write an entire function from just the name! OpenAI demonstrates what happens when you learn a language model on thousands of GitHub Python repositories. Source Clip: https://youtu.be/fZSFNUT6iY8 Full Video: https://www.pscp.tv/Microsoft/1OyKAYWPRrWKb Kite: https://kite.com/ TabNine: https://www.tabnine.com/ Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Hi there. So I saw this and probably many of you have seen this. OpenAI was demonstrating at MSBuild basically a GPT-2 language model but trained not on language but on code, on Python code, open source code from GitHub. And so the idea is that the model learns to produce code. And we'll just have a short look at the clip they have here. I'll link the entire clip down. So this is what the human types. Def is palindrome, so the function name, the argument, and the doc string. And now the model is asked to produce the rest of the function and check out. So it's pretty good, right? This is actually to check whether, this is a basic check, whether a string is a palindrome, as long as you can ensure that s is a string and so on. So the model learned this. You can still say maybe that's just interpolating from, you know, something like this is surely in a GitHub repo somewhere. So they go further and they try to say, okay, please give me a function where the palindromes, so return a list indices for elements that are palindromes and at least seven characters. And I personally have searched for this function on GitHub and it does not exist. So what does the model produce? Pretty cool. So this is first of all a list comprehension in Python, which is reasonably complicated, right? And you can see there is this length filter that is greater or equal to seven. And it refers actually back to the is palindrome function that it wrote before. That's pretty cool. Now, this is not like a language model producing, as far as I understand, producing this basically letter by letter or word by word. This goes over the constraints of abstract syntax trees. So it is, I think that's what's happening. They don't have a paper to go along, though I will look into more papers of that sort. They do kind of constrain the model to actually produce valid code. But of course, which variables go where and so on that that's that is completely up to the model. And you see here it understands completely what the user wants. Now, of course, these examples might be cherry picked, right? But it's even for cherry picked examples, still pretty impressive. As I said, I could not find this particular function. So they post two classes here data classes, item and order. And now the model is asked to compute the total order price, which is a method of the of the of the order class. And they stop here. So this is what the human enters. The human enters that just the name of the function, not even the doc string. And the model comes up with the following, compute the total price of the order, including the palindrome. So it does all of that by itself, including the doc string, just from the method. Now you can see pretty much what's happening here. It's kind of like the GPT-2 language model. So what it does is probably it from the method name, it derives this doc string compute total price, right? That's a lot of programmers do this of the order. Order is, of course, the name of the class of self. And here it says including the palindrome discount. And that is probably somewhat pattern match to other functions that have some sort of discount or something like this or one argument that is a number. But the fact that it is also able to see, it adds up the total price per item. And it basically discounts every item. Now it cannot work out that palindrome discount should mean that every item should be a palindrome. That's the only thing it can't work out. It just applies a discount to every single item. 
Now they go ahead and kind of change that and write the doc strings themselves such that it is clear: apply the discount to items whose names are palindromes. Now the model is again asked to complete this and absolutely crazy. If it's a palindrome, then multiply it with the discount. And if it's not a palindrome, then just add the price. And this final touch here, the one minus the palindrome discount, is added by the programmer. So you can see that this goes towards kind of a symbiosis of human and machine in this way. I don't think the AI will replace programmers, but it certainly is going to be very helpful to automate some of the things or give you suggestions for things that you have to do over and over again. Now I think a lot of these rely on the fact that a lot of programming is redundant still. A lot of people name the function and then in the doc string, they basically repeat the function because they've already intelligently named the function. So technically there doesn't need to be a doc string, but then whatever your style guide comes in, it says there needs to be a doc string. Every argument must be described. Every argument must have a description and a help string and a type, even though it is completely obvious from the names what they do. So if it is completely obvious, I would argue you don't need a doc string. And this is kind of additional information that this model is able to actually sort of make use of. So the fact that a lot of these functions, you can already, the doc string is sort of already the implementation of the function almost. So the distance there is not, it's not like you can say whatever you want. And yeah, so you can see here when it's asked to print the receipt, it just works out. So it's printing, it's doing format strings and whatnot. So it's just learned to do that. I would argue, again, this works. You couldn't just put anything: doc string language is a very specific type of language that programmers use where they basically already sort of implement the method in the doc string. And then the body of the method is just the then really specific code. But as of that, yeah, there's a lot of information already in the doc string and in the naming of the function. And it's still pretty impressive, right? So yeah, I just wanted to show you that this already is available, even though not in as big of a form, it can't write giant functions for you, it can't write function bodies, but there are some machine learning based completions already available. So kite is one of them. And tab nine is the other one that I use for now. Both are closed source, as I understand it. So that's a bit of a downer there. But these are exactly kind of GPT language models learned on a lot of code. So they can kind of guess what you want and interpolate with your variables in there and so on. I also found this when I searched for this comparison here, kite versus tab nine. And you see it starts off, yeah, but these are, I think, kind of auto-generated. So when you look at the video, you get tab nine is correct. But that kite, it's an actual video review of a kite. Yeah, so, you know, who knows. But what I wanted to do is basically show you a bit of the power of tab nine. Let me get this out of the way. So a while back, I live coded this session right here where I implemented a sentiment classifier from scratch using hugging face libraries. And I thought we would just play around in here a bit to see what the tab nine could do for us.
Alright, so I have tab nine in here together with a bunch of other stuff, I have to admit, so I'm not sure how this is going to turn out. So let's say we wanted to compute the loss here. Let's say we wanted to compute square loss, square loss, you see that tab nine immediately kind of turns up. I've not tried this, I'm impressed. So it says it estimates loss here. And no, that's maybe not what I want. So I'll go with square. Now this is a language server suggestion. So and you can see right here, even though I don't have these variables, kind of tab nine will suggest train loss and validation loss. So let me start a new file right here. Just to see what we can get this thing to do without doing anything. So and so let's say we'll import OS and we'll say if name. So tab nine auto suggests that. And it auto suggests that we should write main here, right. And it knows that a lot of people then call a function called main. So we should probably do that def main, right. Sorry. Okay, let's go with the following, we'll say, we'll try the same thing they did, right. So we'll say we have a data class order, it has, let's say price float and a name string. And order one is an order with the price of five and the name of hello. And we can print CC tab nine, tab nine, if you can see that, it's closely suggested to print order dot price order one dot price. So it can it can see that we kind of want that. How do I select that? Right here. See, order two equals so we'll get another order right here. Seven. Hello. Let's get it with order two. Let's do that. Orders equals order one, order two. So total price, total price equals zero for order in. Wow, did you see that? Price, total price equals zero for order in. Wow, did you see that? In, I can't get it anymore. In orders, that's what tab nine says. Total price plus equals order dot price. So it is already pretty smart, I would argue. Print. And there you saw that total price was suggested. How can I, I don't know how to select this. But I'll figure it out. I'm not that advanced yet. So you can see this already sort of works. And I think it's pretty cool already. And I'm very excited to see what kind of how far people can push this because I think this code generation kind of inferring what you want is only at the beginning right now. And it's for sure going to be a very, very, very, very, very, very, very interesting, very, very, very interesting thing to come more. And yeah, with that, bye bye.
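For reference, the functions narrated in the clip come out to roughly the following Python. This is a reconstruction from the narration, not OpenAI's verbatim output; the exact names and the 7-character threshold are as described in the video.

def is_palindrome(s):
    """Basic check whether a string is a palindrome."""
    return s == s[::-1]

def long_palindrome_indices(strings):
    """Return a list of indices for elements that are palindromes
    and at least 7 characters long."""
    # A list comprehension with a length filter (>= 7) that refers
    # back to is_palindrome, as shown in the demo.
    return [i for i, s in enumerate(strings)
            if len(s) >= 7 and is_palindrome(s)]

print(long_palindrome_indices(["racecar", "banana", "levellevel"]))  # [0, 2]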
[ { "end": 8, "start": 0, "text": " Hi there. So I saw this and probably many of you have seen this. OpenAI was demonstrating at MSBuild" }, { "end": 14.88, "start": 8, "text": " basically a GPT-2 language model but trained not on language but on code, on Python code," }, { "end": 21.28, "start": 14.88, "text": " open source code from GitHub. And so the idea is that the model learns to produce code. And" }, { "end": 27.04, "start": 21.28, "text": " we'll just have a short look at the clip they have here. I'll link the entire clip down. So this is" }, { "end": 32.32, "start": 27.04, "text": " what the human types. Def is palindrome, so the function name, the argument, and the doc string." }, { "end": 36.4, "start": 32.32, "text": " And now the model is asked to produce the rest of the function and check out." }, { "end": 43.44, "start": 38.64, "text": " So it's pretty good, right? This is actually to check whether, this is a basic check, whether a" }, { "end": 49.2, "start": 43.44, "text": " string is a palindrome, as long as you can ensure that s is a string and so on. So the model learned" }, { "end": 53.68, "start": 49.2, "text": " this. You can still say maybe that's just interpolating from, you know, something like" }, { "end": 59.44, "start": 53.68, "text": " this is surely in a GitHub repo somewhere. So they go further and they try to say, okay, please give" }, { "end": 67.36, "start": 59.44, "text": " me a function where the palindromes, so return a list indices for elements that are palindromes and" }, { "end": 71.84, "start": 67.36, "text": " at least seven characters. And I personally have searched for this function on GitHub and it does" }, { "end": 78.48, "start": 71.84, "text": " not exist. So what does the model produce? Pretty cool. So this is first of all a list comprehension" }, { "end": 84, "start": 78.48, "text": " in Python, which is reasonably complicated, right? And you can see there is this length filter" }, { "end": 90.24000000000001, "start": 84, "text": " that is greater or equal to seven. And it refers actually back to the is palindrome function that" }, { "end": 96.08000000000001, "start": 90.24000000000001, "text": " it wrote before. That's pretty cool. Now, this is not like a language model producing, as far as I" }, { "end": 101.28, "start": 96.08000000000001, "text": " understand, producing this basically letter by letter or word by word. This goes over the" }, { "end": 107.52000000000001, "start": 101.28, "text": " constraints of abstract syntax trees. So it is, I think that's what's happening. They don't have a" }, { "end": 113.52, "start": 107.52, "text": " paper to go along, though I will look into more papers of that sort. They do kind of constrain" }, { "end": 118, "start": 113.52, "text": " the model to actually produce valid code. But of course, which variables go where and so on that" }, { "end": 125.36, "start": 118, "text": " that's that is completely up to the model. And you see here it understands completely what the user" }, { "end": 129.6, "start": 125.36, "text": " wants. Now, of course, these examples might be cherry picked, right? But it's even for cherry" }, { "end": 135.04, "start": 129.6, "text": " picked examples, still pretty impressive. As I said, I could not find this particular function." }, { "end": 142.16, "start": 135.04, "text": " So they post two classes here data classes, item and order. 
And now the model is asked to compute" }, { "end": 151.04, "start": 142.16, "text": " the total order price, which is a method of the of the of the order class. And they stop here. So" }, { "end": 157.68, "start": 151.04, "text": " this is what the human enters. The human enters that just the name of the function, not even the" }, { "end": 163.04, "start": 157.68, "text": " doc string. And the model comes up with the following, compute the total price of the order," }, { "end": 169.76, "start": 163.04, "text": " including the palindrome. So it does all of that by itself, including the doc string, just from" }, { "end": 173.76, "start": 169.76, "text": " the method. Now you can see pretty much what's happening here. It's kind of like the GPT-2" }, { "end": 178.64, "start": 173.76, "text": " language model. So what it does is probably it from the method name, it derives this doc string" }, { "end": 184.23999999999998, "start": 178.64, "text": " compute total price, right? That's a lot of programmers do this of the order. Order is," }, { "end": 189.68, "start": 184.23999999999998, "text": " of course, the name of the class of self. And here it says including the palindrome discount." }, { "end": 195.44, "start": 189.68, "text": " And that is probably somewhat pattern match to other functions that have some sort of discount" }, { "end": 202.8, "start": 195.44, "text": " or something like this or one argument that is a number. But the fact that it is also able to see," }, { "end": 212.4, "start": 202.8, "text": " it adds up the total price per item. And it basically discounts every item. Now it cannot" }, { "end": 217.44, "start": 212.4, "text": " work out that palindrome discount should mean that every item should be a palindrome. That's the only" }, { "end": 222.96, "start": 217.44, "text": " thing it can't work out. It just applies a discount to every single item. Now they go ahead and kind" }, { "end": 228.32, "start": 222.96, "text": " of change that and write the doc strings themselves such that it is clear that apply the discount to" }, { "end": 235.92, "start": 228.32, "text": " items whose name are palindromes. Now the model is again asked to complete this and absolutely crazy." }, { "end": 240.07999999999998, "start": 235.92, "text": " If it's a palindrome, then multiply it with the discount. And if it's not a palindrome," }, { "end": 244.96, "start": 240.07999999999998, "text": " then just add the price. And this final touch here, that's one minus the palindrome discount" }, { "end": 251.84, "start": 244.96, "text": " is added by the programmer. So you can see that this goes towards kind of a symbiosis of human and" }, { "end": 258.96000000000004, "start": 251.84, "text": " machine in this way. I don't think the AI will replace programmers, but it certainly is going" }, { "end": 264.8, "start": 258.96000000000004, "text": " to be very helpful to automate some of the things or give you suggestions for things that you have" }, { "end": 270.64, "start": 264.8, "text": " to do over and over again. Now I think a lot of these rely on the fact that a lot of programming" }, { "end": 276.56, "start": 270.64, "text": " is redundant still. A lot of people name the function and then in the doc string, they basically" }, { "end": 280.4, "start": 276.56, "text": " repeat the function because they've already intelligently named the function. 
So technically" }, { "end": 284.32, "start": 280.4, "text": " there doesn't need to be a doc string, but then whatever your style guide comes in, it says there" }, { "end": 289.28, "start": 284.32, "text": " needs to be a doc string. Every argument must be described. Every argument must have a description" }, { "end": 296.47999999999996, "start": 289.28, "text": " and a help string and a type, even though it is completely obvious from the names what they do." }, { "end": 302.32, "start": 296.48, "text": " So if it is completely obvious, I would argue you don't need a doc string. And this is kind" }, { "end": 310.64000000000004, "start": 302.32, "text": " of additional information that this model is able to actually sort of make use of. So the fact that" }, { "end": 314.96000000000004, "start": 310.64000000000004, "text": " a lot of these functions, you can already, the doc string is sort of already the implementation" }, { "end": 320.64000000000004, "start": 314.96000000000004, "text": " of the function almost. So the distance there is not, it's not like you can say whatever you want." }, { "end": 328.08, "start": 320.64, "text": " And yeah, so you can see here when it's asked to print the receipt, it just works out. So it's" }, { "end": 332.88, "start": 328.08, "text": " printing, it's doing format strings and whatnot. So it's just learned to do that. I would argue," }, { "end": 337.68, "start": 332.88, "text": " again, this works. You couldn't just put anything like doc string language is a very specific type" }, { "end": 341.91999999999996, "start": 337.68, "text": " of language that programmers use where they basically already sort of implement the method" }, { "end": 348.71999999999997, "start": 341.91999999999996, "text": " in the doc string. And then the body of the method is just the then really specific code." }, { "end": 354.24, "start": 348.72, "text": " But as of that, yeah, there's a lot of information already in the doc string in the naming of the" }, { "end": 362, "start": 354.24, "text": " function. And it's still pretty impressive, right? So yeah, I just wanted to show you that this" }, { "end": 369.6, "start": 362, "text": " already is available, even though not in as big of a form, it can't write giant functions for you," }, { "end": 374.08000000000004, "start": 369.6, "text": " you can't write function bodies, but there are some machine learning based completions" }, { "end": 380.88, "start": 374.08, "text": " already available. So kite is one of them. And tab nine is the other one that I use for now." }, { "end": 386.96, "start": 381.68, "text": " Both are close sources, I understand it. So that's a bit of a downer there. But these are" }, { "end": 392.79999999999995, "start": 386.96, "text": " exactly kind of GPT language models learned on a lot of code. So they can kind of guess what you" }, { "end": 397.76, "start": 392.79999999999995, "text": " want and interpolate with your variables in there and so on. I also found this when I searched this" }, { "end": 404, "start": 397.76, "text": " comparison here kite versus tab nine. And you see it starts off Yeah, but these are I think these" }, { "end": 408.88, "start": 404, "text": " are kind of auto generated. So when you look at the video, you get tab nine is correct. But that" }, { "end": 417.84, "start": 408.88, "text": " kite, it's an actual video review of a kite. Yeah, so, you know, who knows. 
But what I wanted to do" }, { "end": 425.03999999999996, "start": 417.84, "text": " is basically show you a bit of the power of tab nine. Let me get this out of the way. So" }, { "end": 432, "start": 425.04, "text": " a while back ago, I live coded this session right here where I where I implemented a sentiment" }, { "end": 437.28000000000003, "start": 432, "text": " classifier from scratch using hugging face libraries. And I thought we would just play around in here a" }, { "end": 443.76, "start": 437.28000000000003, "text": " bit to see what the tab nine could do for us. Alright, so I have tab nine in here together with" }, { "end": 449.20000000000005, "start": 443.76, "text": " a bunch of other stuff, I have to admit, so I'm not sure how this is going to turn out. So let's say" }, { "end": 461.44, "start": 449.2, "text": " we wanted to compute the loss here. Let's say we wanted to compute square loss, square loss, you see" }, { "end": 470.08, "start": 461.44, "text": " that tab nine immediately kind of turns up. I've not tried this, I'm impressed. So it says it" }, { "end": 479.76, "start": 470.08, "text": " estimates loss here. And no, that's maybe not what I want. So I'll go with square. Now this is a" }, { "end": 486.96, "start": 479.76, "text": " language server suggestion. So and you can see right here, even though I don't have these" }, { "end": 492.47999999999996, "start": 486.96, "text": " variables, kind of tab nine will suggest train loss and validation loss. So let me start a new" }, { "end": 502.56, "start": 492.48, "text": " file right here. Just to see what we can get this thing to do without doing anything. So and so" }, { "end": 513.9200000000001, "start": 502.56, "text": " let's say we'll import OS and we'll say if name. So tab nine auto suggests that. And it auto" }, { "end": 521.36, "start": 513.92, "text": " suggests that we should write main here, right. And it knows that a lot of people then call a" }, { "end": 529.5999999999999, "start": 521.36, "text": " function called main. So we should probably do that def main, right. Sorry. Okay, let's go with" }, { "end": 536.88, "start": 529.5999999999999, "text": " the following, we'll say, we'll try the same thing they did, right. So we'll say we have a data class" }, { "end": 548.4, "start": 536.88, "text": " order, it has, let's say price float and a name string. And order one is an order with the price" }, { "end": 558.48, "start": 548.4, "text": " of five and the name of hello. And we can print CC tab nine, tab nine, if you can see that," }, { "end": 564.8000000000001, "start": 558.48, "text": " it's closely suggested to print order dot price order one dot price. So it can it can see that" }, { "end": 577.12, "start": 564.8000000000001, "text": " we kind of want that. How do I select that? Right here. See, order two equals so we'll get another" }, { "end": 587.28, "start": 577.12, "text": " order right here. Seven. Hello. Let's get it with order two. Let's do that. Orders equals order one," }, { "end": 600.64, "start": 587.28, "text": " order two. So total price, total price equals zero for order in. Wow, did you see that?" }, { "end": 607.28, "start": 600.64, "text": " Price, total price equals zero for order in. Wow, did you see that?" }, { "end": 617.84, "start": 608.56, "text": " In, I can't get it anymore. In orders, that's what tab nine says. Total price plus equals" }, { "end": 624.3199999999999, "start": 618.96, "text": " order dot price. So it is already pretty smart, I would argue." 
}, { "end": 631.6800000000001, "start": 624.32, "text": " Print. And there you saw that total price was suggested. How can I, I don't know how to select" }, { "end": 638.4000000000001, "start": 631.6800000000001, "text": " this. But I'll figure it out. I'm not that advanced yet. So you can see this already sort of works." }, { "end": 644, "start": 638.4000000000001, "text": " And I think it's pretty cool already. And I'm very excited to see what kind of how far people" }, { "end": 648.4000000000001, "start": 644, "text": " can push this because I think this code generation kind of inferring what you want is only at the" }, { "end": 654.08, "start": 648.4000000000001, "text": " beginning right now. And it's for sure going to be a very, very, very, very, very, very, very" }, { "end": 664.1600000000001, "start": 654.08, "text": " interesting, very, very, very interesting thing to come more. And yeah, with that, bye bye." } ]
Nfry2b4RFI4
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Investigating Human Priors for Playing Video Games (Paper & Demo)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "rl", "reinforcement learning", "deep rl", "human", "prior", "objects", "game", "video game", "key", "visuals", "enemy", "ladder", "gravity", "ablation" ]
Why are humans so good at video games? Maybe it's because a lot of games are designed with humans in mind. What happens if we change that? This paper removes the influence of human priors from a game and ends up with a pretty fun experience. Paper: https://arxiv.org/abs/1802.10217 Website: https://rach0012.github.io/humanRL_website/ Code: https://github.com/rach0012/humanRL_prior_games Abstract: What makes humans so good at solving seemingly complex video games? Unlike computers, humans bring in a great deal of prior knowledge about the world, enabling efficient decision making. This paper investigates the role of human priors for solving video games. Given a sample game, we conduct a series of ablation studies to quantify the importance of various priors on human performance. We do this by modifying the video game environment to systematically mask different types of visual information that could be used by humans as priors. We find that removal of some prior knowledge causes a drastic degradation in the speed with which human players solve the game, e.g. from 2 minutes to over 20 minutes. Furthermore, our results indicate that general priors, such as the importance of objects and visual consistency, are critical for efficient game-play. Videos and the game manipulations are available at this https URL Authors: Rachit Dubey, Pulkit Agrawal, Deepak Pathak, Thomas L. Griffiths, Alexei A. Efros Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Hey there, what's going on today? We're looking at investigating human priors for playing video games by Rachit Dubey, Pulkit Agrawal, Deepak Pathak, Tom Griffiths and Alexei Efros. So there is a paper to go with this, but I actually don't want to get into the paper too much in order to not reveal too much of what's coming. But basically they're trying to investigate what makes video games work for humans. So what do humans pay attention to? What priors bring humans into a video game? And the fun thing is they've created these games where they ablate these individual priors. And we are going to play them. So the original game right here, as you can see, is kind of this Montezuma's Revenge type of game. So you only need the arrow keys. If you go to a bad blob like this, then you die. And you can jump on them and you can use the ladders and the spikes. They'll hurt you if you jump on them. And also if you fall down between the platforms. So what you've got to do is basically get the key, then go to the door over here and bada boom. Cool. So let's try it out. So they basically ablate different things here. Mask semantics means that you don't know what the objects are anymore. So you might go over here and you might be like, what's this green thing? Can I jump on it? Oh. So we're probably a bit biased because we've seen the game before. So we know that the pink ones are the bad ones. And that's the key. So we should probably get it. But you can imagine that it is a bit harder, but you could still solve it, right. Reverse semantics is very interesting if you play it for the first time, because all of a sudden now there's the coins. Oh, and the fire. But I think humans could probably still figure it out with like some minimal trial and error. I do this ice cream cone. You realize, OK, now it gets interesting because right now we've always had sort of we know that there's an object and there's no object on the platforms. But now these are masked. So basically you don't know what's like a relevant object and what isn't. So I know that there is like a bad thing here and a bad thing down to the left. So I'm going to guess these light. These light pink things are the bad things. Yeah. Yeah. These are the ladders. Cool. Bad thing right here. We are rocking this. OK. Key. Where's that? That's the key and the door. So it gets harder because you kind of have to remember the colors, right. I know that the light pink ones are the bad squares. Still, still solvable. So let's jump over these on the left because these get really actually it's going to here. So masked affordances. What they're saying is that, OK, you can kind of from the way something looks, you can tell what you can do with it. For example, the platforms you can jump on them and the background is sort of empty space. So you know that there's nothing much happening there. So they try to take that away by simply retexturing all the objects here such that you don't know how you can interact with them. And it does get significantly harder because OK, so these green ones are the platforms right here. So I can still see that that must be the ladder. Right. You can imagine if you were playing this again for the first time that this is significantly more difficult, but you still see the key and the green ones being the platforms. We got this. Now it gets harder. Masked visual similarity. 
So this is where they say maybe as we did so far, maybe you as a human can kind of make out that things that are visually similar to each other can you can do the same things with. Like we said, the green ones are probably the platforms. So they took it away. Gee. OK, so that must be OK. Can't go here. Fell down. Let's try again. This is a platform. Is this one here? Yes. These are platforms. Ah, that was easy. Too easy. Too easy running into that bad blob there. OK, the ladders are still like this, but then OK. Yeah, this gets harder as as you can. Gee. OK, I'm too dumb to remember from before. I'm like the ideal subject because I don't remember. How did this work? I'm going to solve this just so you know, even if this video gets to 50 minutes, I'm going to make it through this. OK, here we can. OK, see my short term memory is so bad. OK, we got the key. Now just get over to the door. Doors over there. Yeah. OK, now let's wipe the short term memory again. Here changed ladder interaction where they basically say, OK, one of the things that you could know from the real world is how these objects work in the real world. So there's not really any pink blobs with evil faces. There might be spikes. Yes. But ladder is something, you know, that works. So if you want to go up here, that doesn't work. So you kind of have to figure. OK. So you have to go kind of left and right to go up the ladders. And so that one I actually tried before and I figured that out pretty quickly. I think humans are able to figure that out fairly quickly because you kind of on the ladder, right? You can actually go down easily. You're on the ladder and then it kind of doesn't work. And then you kind of try to wiggle because there's two of them. I don't think that's that's necessarily super hard. And now it feels a bit like, you know, the Super Mario maker thing where where people just try to make levels as hard as possible and trick you with trick blocks and invisible stuff. This is hard. So the direction of gravity. So now the left key jumps right here, this key. So this is like this is extremely, extremely hard because I have to like think about every move I make before I do it. And OK, no, no, no, this is so unintuitive for real. Yeah, got it. Got it. OK. Really? Try this out. This is crazy. Yeah. Yeah. OK, so the last thing is we combine all of it, I guess, except the changed gravity and changed interaction. So now all the priors, all the visual object priors removed. This is this is King's Discipline right here. OK, so we figured out where the blue. Cool. So where's the next? This is the next platform. Where's the OK. There must be like a yeah, take that. OK, but we know the next. So we can't really generalize from this because we know the next bad blob isn't going to be the same color. Right. OK. The white done this. I know there's a bad thing here, but we'd have to figure this out. So basically kind of the point of the paper, I think, is to say that this is what you're doing to our. No spikes. This is what you're doing to our algorithms. If you're in the most case, so they simply have to go and to basically try every single thing and remember what worked and what didn't work. Now, of course, the algorithms can also exhibit like can also use the visual similarity. That was the key. Yeah. Yeah. Let's go to the door door door. No, there is a there's like a bad thing here. Right. No spikes. OK. 
So either we build these priors into the algorithms if we want to get them to human level or we have some sort of learning these priors before we let the people go onto a paper or I don't know. Or we just take it that algorithms have to figure all of this out by themselves. So they ablate these things right here. You can see the masked object identity makes kind of the biggest difference in terms of time, number of deaths, the number of states explored. Reverse semantics. I believe these are humans that are trying it for the first time and they're just like, oh, an ice cream. So it can also hurt. Right. The algorithm wouldn't be super impressed by it looking like an ice cream. But the human is very much and the crazy thing here, you can see exploration, the original game and then exploration in the no object prior game, especially if you play this for the first time. This is just mad. Like no freaking way. I would actually like love to see video games like this coming out. This would be the worst selling video game of all times where dynamically it just removes these kind of priors. But it's a I think it's a really fun way to investigate what humans learn and what they already bring into the game. So here they have another game and they do this same thing on an RL agent. And you see here the RL agent just don't care about any of these things except visual similarity. So visual similarity helps the RL agent to generalize across the game. So if you see a bad blob, the next bad blob will look similar. And that's sort of kind of an invariance that we know they can exploit since they're using convolutional neural networks and so on. But I think it is really drawing attention to the importance of priors prior knowledge in reinforcement learning and human knowledge. So in this game right here, where you have these hidden rewards that the human doesn't see, right? But if they kind of touch it, they're kind of coins and the human performs way worse than the RL agent because the RL agent will actually try those things out. And the human having the prior that the black thing is like they don't see the yellow boxes that the black thing is just empty space. They won't even explore that. So maybe, you know, that is something to think about with respect to building RL agents. All right. I don't want to go into the paper too much. It's a very cool paper, but we're here to play games and I invite you to read the paper. Check out the website. Try these games for yourself. They're a lot of fun, especially if you try them first time. And bye bye.
[ { "end": 11, "start": 0, "text": " Hey there, what's going on today? We're looking at investigating human priors for playing video games by Rachid Duby, Pulkit Agrawal, Deepak Patak, Tom Griffiths and Alexei Aeferos." }, { "end": 19, "start": 11, "text": " So there is a paper to go with this, but I actually don't want to get into the paper too much in order to not reveal too much of what's coming." }, { "end": 26, "start": 19, "text": " But basically they're trying to investigate what makes video games work for humans. So what do humans pay attention to?" }, { "end": 34, "start": 26, "text": " What priors bring humans into a video game? And the fun thing is they've created these games where they ablate these individual priors." }, { "end": 42, "start": 34, "text": " And we are going to play them. So the original game right here, as you can see, is kind of this Montezuma's Revenge type of game." }, { "end": 48, "start": 42, "text": " So you only need the arrow keys. If you go to a bad blob like this, then you die." }, { "end": 53, "start": 48, "text": " And you can jump on them and you can use the ladders and the spikes. They'll hurt you if you jump on them." }, { "end": 62, "start": 53, "text": " And also if you fall down between the platforms. So what you've got to do is basically get the key, then go to the door over here and bada boom." }, { "end": 69, "start": 62, "text": " Cool. So let's try out. So they basically ablate different things here." }, { "end": 74, "start": 69, "text": " Mask semantics means that you don't know what the objects are anymore." }, { "end": 78, "start": 74, "text": " So you might go over here and you might be like, what's this green thing? Can I jump on it? Oh." }, { "end": 83, "start": 78, "text": " So we're probably a bit biased because we've seen the game before." }, { "end": 90, "start": 83, "text": " So we know that these are the pink ones are the bad ones. And that's the key. So we should probably get it." }, { "end": 95, "start": 90, "text": " But you can imagine that it is a bit harder, but you could still solve it right." }, { "end": 104, "start": 95, "text": " Reverse semantics is very interesting if you play it for the first time, because all of a sudden now there's the coins. Oh, and the fire." }, { "end": 109, "start": 104, "text": " But I think humans could probably still figure it out with like some minimal trial and error." }, { "end": 121, "start": 109, "text": " I do this ice cream cone. You realize, OK, now it gets interesting because right now we've always had sort of we know that there's an object and there's no object on the platforms." }, { "end": 126, "start": 121, "text": " But now these are masked. So basically you don't know what's like a relevant object and what isn't." }, { "end": 135, "start": 126, "text": " So I know that there is like a bad thing here and a bad thing down to the left. So I'm going to guess these light." }, { "end": 143, "start": 135, "text": " These light pink things are the bad things. Yeah. Yeah. These are the ladders. Cool. Bad thing right here." }, { "end": 149, "start": 143, "text": " We are rocking this. OK. Key. Where's that? That's the key and the door." }, { "end": 156, "start": 149, "text": " So it gets harder because you kind of have to remember the colors right. I know that the light pink ones are the bad squares." }, { "end": 163, "start": 156, "text": " Still, still solvable. So let's jump over these on the left because these get really actually it's going to here." 
}, { "end": 171, "start": 163, "text": " So masked affordances. What they're saying is that, OK, you can kind of from the way something looks, you can tell what you can do with it." }, { "end": 179, "start": 171, "text": " For example, the platforms you can jump on them and the background is sort of empty space. So you know that there's nothing much happening there." }, { "end": 187, "start": 179, "text": " So they trying to take that away by simply retexturing all the objects here such that you don't know how you can interact with them." }, { "end": 197, "start": 187, "text": " And it does get significantly harder because OK, so these these green ones are these green ones are the platforms right here." }, { "end": 202, "start": 197, "text": " So I can still see that that must be the latter. Right." }, { "end": 211, "start": 202, "text": " You can imagine if you were playing this again for the first time that this is significantly more difficult, but you still see the key and the green ones being the platforms." }, { "end": 216, "start": 211, "text": " We got this. Now it gets harder. Masked visual similarity." }, { "end": 228, "start": 216, "text": " So this is where they say maybe as we did so far, maybe you as a human can kind of make out that things that are visually similar to each other can you can do the same things with." }, { "end": 233, "start": 228, "text": " Like we said, the green ones are probably the platforms. So they took it away." }, { "end": 237, "start": 233, "text": " Gee. OK, so that must be OK. Can't go here." }, { "end": 246, "start": 237, "text": " Fell down. Let's try again. This is a platform. Is this one here? Yes. These are platforms." }, { "end": 252, "start": 246, "text": " Ah, that was easy. Too easy. Too easy running into that bad blob there." }, { "end": 258, "start": 252, "text": " OK, the ladders are still like this, but then OK." }, { "end": 265, "start": 258, "text": " Yeah, this gets harder as as you can. Gee. OK, I'm too dumb to remember from before." }, { "end": 269, "start": 265, "text": " I'm like the ideal subject because I don't remember." }, { "end": 273, "start": 269, "text": " How did this work?" }, { "end": 280, "start": 273, "text": " I'm going to solve this just so you know, even if this video gets to 50 minutes, I'm going to make it through this." }, { "end": 286, "start": 280, "text": " OK, here we can. OK, see my short term memory is so bad." }, { "end": 292, "start": 286, "text": " OK, we got the key. Now just get over to the door. Doors over there. Yeah." }, { "end": 296, "start": 292, "text": " OK, now let's wipe the short term memory again." }, { "end": 306, "start": 296, "text": " Here changed ladder interaction where they basically say, OK, one of the things that you could know from the real world is how these objects work in the real world." }, { "end": 313, "start": 306, "text": " So there's not really any pink blobs with evil faces. There might be spikes. Yes. But ladder is something, you know, that works." }, { "end": 318, "start": 313, "text": " So if you want to go up here, that doesn't work. So you kind of have to figure. OK." }, { "end": 326, "start": 318, "text": " So you have to go kind of left and right to go up the ladders. And so that one I actually tried before and I figured that out pretty quickly." }, { "end": 330, "start": 326, "text": " I think humans are able to figure that out fairly quickly because you kind of on the ladder, right?" }, { "end": 335, "start": 330, "text": " You can actually go down easily. 
You're on the ladder and then it kind of doesn't work." }, { "end": 343, "start": 335, "text": " And then you kind of try to wiggle because there's two of them. I don't think that's that's necessarily super hard." }, { "end": 355, "start": 343, "text": " And now it feels a bit like, you know, the Super Mario maker thing where where people just try to make levels as hard as possible and trick you with trick blocks and invisible stuff." }, { "end": 362, "start": 355, "text": " This is hard. So the direction of gravity. So now the left key jumps right here, this key." }, { "end": 373, "start": 362, "text": " So this is like this is extremely, extremely hard because I have to like think about every move I make before I do it." }, { "end": 381, "start": 373, "text": " And OK, no, no, no, this is so unintuitive for real." }, { "end": 391, "start": 381, "text": " Yeah, got it. Got it. OK. Really? Try this out. This is crazy. Yeah. Yeah." }, { "end": 398, "start": 391, "text": " OK, so the last thing is we combine all of it, I guess, except the changed gravity and changed interaction." }, { "end": 403, "start": 398, "text": " So now all the priors, all the visual object priors removed." }, { "end": 411, "start": 403, "text": " This is this is King's Discipline right here. OK, so we figured out where the blue. Cool." }, { "end": 416, "start": 411, "text": " So where's the next? This is the next platform. Where's the OK." }, { "end": 422, "start": 416, "text": " There must be like a yeah, take that. OK, but we know the next." }, { "end": 428, "start": 422, "text": " So we can't really generalize from this because we know the next bad blob isn't going to be the same color." }, { "end": 434, "start": 428, "text": " Right. OK. The white done this." }, { "end": 438, "start": 434, "text": " I know there's a bad thing here, but we'd have to figure this out." }, { "end": 444, "start": 438, "text": " So basically kind of the point of the paper, I think, is to say that this is what you're doing to our." }, { "end": 449, "start": 444, "text": " No spikes. This is what you're doing to our algorithms." }, { "end": 460, "start": 449, "text": " If you're in the most case, so they simply have to go and to basically try every single thing and remember what worked and what didn't work." }, { "end": 466, "start": 460, "text": " Now, of course, the algorithms can also exhibit like can also use the visual similarity." }, { "end": 472, "start": 466, "text": " That was the key. Yeah. Yeah. Let's go to the door door door. No, there is a there's like a bad thing here." }, { "end": 477, "start": 472, "text": " Right. No spikes." }, { "end": 499, "start": 477, "text": " OK." }, { "end": 515, "start": 499, "text": " So either we build these priors into the algorithms if we want to get them to human level or we have some sort of learning these priors before we let the people go onto a paper or I don't know." }, { "end": 521, "start": 515, "text": " Or we just take it that algorithms have to figure all of this out by themselves." }, { "end": 532, "start": 521, "text": " So they ablate these things right here. You can see the masked object identity makes kind of the biggest difference in terms of time, number of deaths, the number of states explored." }, { "end": 538, "start": 532, "text": " Reverse semantics. I believe these are humans that are trying it for the first time and they're just like, oh, an ice cream." }, { "end": 544, "start": 538, "text": " So it can also hurt. Right. 
The algorithm wouldn't be super impressed by it looking like an ice cream." }, { "end": 557, "start": 544, "text": " But the human is very much and the crazy thing here, you can see exploration, the original game and then exploration in the no object prior game, especially if you play this for the first time." }, { "end": 565, "start": 557, "text": " This is just mad. Like no freaking way. I would actually like love to see video games like this coming out." }, { "end": 578, "start": 565, "text": " This would be the worst selling video game of all times where dynamically it just removes these kind of priors. But it's a I think it's a really fun way to investigate what humans learn and what they already bring into the game." }, { "end": 587, "start": 578, "text": " So here they have another game and they do this same thing on an RL agent. And you see here the RL agent just don't care about any of these things except visual similarity." }, { "end": 596, "start": 587, "text": " So visual similarity helps the RL agent to generalize across the game. So if you see a bad blob, the next bad blob will look similar." }, { "end": 603, "start": 596, "text": " And that's sort of kind of an invariance that we know they can exploit since they're using convolutional neural networks and so on." }, { "end": 610, "start": 603, "text": " But I think it is really drawing attention to the importance of priors prior knowledge in reinforcement learning and human knowledge." }, { "end": 623, "start": 610, "text": " So in this game right here, where you have these hidden rewards that the human doesn't see, right? But if they kind of touch it, they're kind of coins and the human performs way worse than the RL agent because the RL agent will actually try those things out." }, { "end": 632, "start": 623, "text": " And the human having the prior that the black thing is like they don't see the yellow boxes that the black thing is just empty space." }, { "end": 640, "start": 632, "text": " They won't even explore that. So maybe, you know, that is something to think about with respect to building RL agents." }, { "end": 646, "start": 640, "text": " All right. I don't want to go into the paper too much. It's a very cool paper, but we're here to play games and I invite you to read the paper." }, { "end": 652, "start": 646, "text": " Check out the website. Try these games for yourself. They're a lot of fun, especially if you try them first time." }, { "end": 662, "start": 652, "text": " And bye bye." } ]
u5BkO8XMS2I
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
iMAML: Meta-Learning with Implicit Gradients (Paper Explained)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper" ]
Gradient-based Meta-Learning requires full backpropagation through the inner optimization procedure, which is a computational nightmare. This paper is able to circumvent this and implicitly compute meta-gradients by the clever introduction of a quadratic regularizer. OUTLINE: 0:00 - Intro 0:15 - What is Meta-Learning? 9:05 - MAML vs iMAML 16:35 - Problem Formulation 19:15 - Proximal Regularization 26:10 - Derivation of the Implicit Gradient 40:55 - Intuition why this works 43:20 - Full Algorithm 47:40 - Experiments Paper: https://arxiv.org/abs/1909.04630 Blog Post: https://www.inference.vc/notes-on-imaml-meta-learning-without-differentiating-through/ Abstract: A core capability of intelligent systems is the ability to quickly learn new tasks by drawing on prior experience. Gradient (or optimization) based meta-learning has recently emerged as an effective approach for few-shot learning. In this formulation, meta-parameters are learned in the outer loop, while task-specific models are learned in the inner-loop, by using only a small amount of data from the current task. A key challenge in scaling these approaches is the need to differentiate through the inner loop learning process, which can impose considerable computational and memory burdens. By drawing upon implicit differentiation, we develop the implicit MAML algorithm, which depends only on the solution to the inner level optimization and not the path taken by the inner loop optimizer. This effectively decouples the meta-gradient computation from the choice of inner loop optimizer. As a result, our approach is agnostic to the choice of inner loop optimizer and can gracefully handle many gradient steps without vanishing gradients or memory constraints. Theoretically, we prove that implicit MAML can compute accurate meta-gradients with a memory footprint that is, up to small constant factors, no more than that which is required to compute a single inner loop gradient and at no overall increase in the total computational cost. Experimentally, we show that these benefits of implicit MAML translate into empirical gains on few-shot image recognition benchmarks. Authors: Aravind Rajeswaran, Chelsea Finn, Sham Kakade, Sergey Levine Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Hi there! Today we're looking at Meta-Learning with Implicit Gradients by Aravind Rajeswaran, Chelsea Finn, Sham Kakade and Sergey Levine. This paper deals with the task of meta-learning. If you don't know what meta-learning is, let me quickly introduce the term. In meta-learning you assume you have some sort of distribution of tasks ahead of you. Let's make some examples. Task one could be: you have a small dataset of labeled images and you want to classify them into cats or dogs; you can train/test split that dataset, and that's one task. Task two is again a small dataset of images (let's make all the examples image tasks), but now you want to locate the pedestrian, the human, in the image: where is the human? Task three could again be a small database of images on which you want to do visual question answering, say yes/no questions about the scene (there is ground, there is a tree, and there's a question about them), or let's say you have to segment the image: down here is ground, and you have to segment the ground. So these are all perfectly fine, independent image tasks, and for each one you have a small dataset. Sometimes these datasets are so small that you cannot really train a state-of-the-art model on them. For example, with medical images the labels are often very hard to get: there are privacy concerns, doctors have to look at the images to produce the labels, it costs money, and so on. It's not as if you had lots of images and could simply profit from more data. So one method people came up with is called transfer learning. In transfer learning you say: I have this giant database of labeled images, let's say ImageNet. I can use it to train a neural network, a bunch of layers, on this big database and get parameters theta, the parameters of the neural network. Then I adapt these parameters to each task individually. For task one, I take theta as the initialization of the network and use the task's training set to fine-tune it (this is what's called fine-tuning) to obtain the task-specific parameters phi one; phi one because it's task one. For task two I also take theta as the starting point and fine-tune the network on the bounding-box task to obtain the parameters for task two. So there is a pre-training stage to obtain good initial parameters, and then we adapt those initial parameters to each task separately in a fine-tuning stage. That's one way to do it.
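To make the pre-train/fine-tune recipe concrete, here is a tiny self-contained sketch on a toy linear model. This is my own illustration, not code from the paper; the datasets and model are stand-ins:

```python
import numpy as np

def loss_grad(w, X, y):
    # mean squared error of a toy linear model y ~ X @ w, and its gradient
    err = X @ w - y
    return err @ err / len(y), 2 * X.T @ err / len(y)

def sgd(w, X, y, steps, lr=0.1):
    # plain (full-batch) gradient descent, standing in for any optimizer
    for _ in range(steps):
        w = w - lr * loss_grad(w, X, y)[1]
    return w

rng = np.random.default_rng(0)
X_big, y_big = rng.normal(size=(1000, 5)), rng.normal(size=1000)  # the "ImageNet" stand-in
X_t1, y_t1 = rng.normal(size=(20, 5)), rng.normal(size=20)        # one small task

theta = sgd(np.zeros(5), X_big, y_big, steps=200)  # pre-training stage -> theta
phi_1 = sgd(theta, X_t1, y_t1, steps=20)           # fine-tuning stage -> phi_1
```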
Another way is called multitask learning. In multitask learning we say: well, a neural network that can segment the ground is probably also pretty good at drawing bounding boxes; it will use some of the same features. So can't we just pool these datasets into one bigger dataset and train jointly? If a sample comes from task one, we train on the loss of task one; if it comes from task two, we train on the loss of task two, and so on, but we use the same neural network as the basis and just put different heads on top of it. This is multitask learning: one shared neural network with different outputs for the different tasks, counting on the fact that what you learn from one task is useful for the others. This is a good way to combine tasks and share data information, but it also limits you, because you now have to trade off between the tasks: the shared joint encoder can never fully gear itself towards one task, since it also has to perform on the other tasks, so you limit your attainable accuracy (though the regularization effect may be good). So these are two methods, transfer learning on the one hand and multitask learning on the other. Meta-learning goes in a different direction. Meta-learning is like transfer learning, but it asks: what if we don't have this giant pre-training dataset? What if we instead find a way to learn the initial parameters themselves? We start with a guess of good initial parameters; call it theta zero. All three tasks take theta zero and run their fine-tuning starting from it: phi one is started from theta zero, and likewise for tasks two and three. Each task trains on its own training set, evaluates on its own validation set, and reports back a number, its generalization error. Once we collect this from all the tasks, we know how good these initial parameters were: we get a measure of how easy it is for the tasks to adapt them to their own datasets. Then we need to figure out: these parameters were, say, 81% good on average; can we come up with a better set of initial parameters, theta one, such that it is easier for the tasks to adapt them? Even more importantly, there could be a task four that we never see during this training phase: the tasks up here are our training tasks, and the one down here is our validation task. So we are trying to come up with initial parameters such that, when a new task comes along and takes them as its starting point, it can adapt them very quickly to its own dataset, and, most importantly, adapting them results in a much better model than if the task had just trained on its own small dataset from scratch. Our job in meta-learning is therefore to come up with a learning procedure that iteratively generates better and better initial parameters, and what better way to do this than gradient descent? Meta-learning using gradient descent is the core of this paper. Now, what do you need for gradient descent?
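The evaluation loop just described can be written down directly. A minimal sketch of my own, reusing the toy `sgd` and `loss_grad` helpers from above:

```python
def evaluate_init(theta, tasks, inner_steps=20):
    """Score an initialization: every task adapts from theta on its own
    training split, then reports its validation loss; we average those."""
    scores = []
    for X_tr, y_tr, X_val, y_val in tasks:
        phi = sgd(theta, X_tr, y_tr, steps=inner_steps)  # task-specific fine-tune
        scores.append(loss_grad(phi, X_val, y_val)[0])   # generalization error
    return sum(scores) / len(scores)

# Meta-learning is the search for the theta that makes this score small;
# doing that search by gradient descent needs d(score)/d(theta), the
# meta-gradient discussed next.
```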
If you want to go from one set of meta-parameters to the next using SGD, or even plain gradient descent, you need a gradient. Why is that a problem here? This kind of gradient-based meta-learning was done in the technique called MAML. Essentially, you have your current best guess of good initial parameters and you want to come up with a gradient, indicated by this orange arrow, telling you how to get an even better set. To do that, you compute the loss function and differentiate it with respect to your parameters, as the description of the orange arrow says. The loss of your meta-parameters is the average, or the sum, of the loss functions across all the individual tasks, so its gradient is the sum of the gradients of those per-task losses with respect to the original parameters. Now here is the difference to the usual setting: normally we differentiate with respect to the parameters that we input into the loss function, but not here. What we input into the loss function is whatever comes out at the end of the task's adaptation. The thing we input is a function of our initial parameters, but it is not the initial parameters themselves. We take the initial parameters, give them to a task, the task runs SGD for k steps and comes up with the version adapted to its own problem, and that goes into the loss function. So the neural network that finally determines the loss is parameterized by the adapted weights. If we want to backpropagate, we can backpropagate the loss through that network, but then we have to backpropagate further, through the optimization procedure that was used to derive those adapted weights. That's the problem. You can see it in the figure: you start with the initial parameters and give them to task one; task one takes them as its initialization and runs SGD, perturbing them until it arrives at phi one, the parameters adapted for task one. At the end of task one you can calculate a gradient: how would these end parameters need to change in order to make the loss go down? Maybe the computation says: you need to go up a bit, in this direction, with respect to the end parameters.
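Written out in symbols (my notation for what is being described verbally), the quantity MAML needs is:

```latex
F(\theta) = \frac{1}{M}\sum_{i=1}^{M} \mathcal{L}_i\big(\phi_i(\theta)\big),
\qquad
\phi_i(\theta) = \mathrm{SGD}_k\big(\theta;\, \mathcal{D}^{\mathrm{tr}}_i\big),

\nabla_\theta F(\theta) = \frac{1}{M}\sum_{i=1}^{M}
\left(\frac{d\phi_i}{d\theta}\right)^{\!\top}
\nabla_\phi\, \mathcal{L}_i\big(\phi_i(\theta)\big).
```

The right-hand factor is one ordinary backward pass, the end gradient; the Jacobian d phi_i / d theta is the expensive part, because computing it means backpropagating through all k SGD steps.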
But now the question is: how do you have to adjust your initial parameters such that your final parameters go into that direction? And that's not at all clear. You could make a guess and say: well, if my initial parameters just go up a bit, maybe the optimization procedure will look roughly the same but shifted up, and then I'll end up in the right place. But that's not guaranteed: SGD is a highly nonlinear, iterative, recursive procedure, so it accumulates its own nonlinear errors. What you actually have to do is forward-propagate through SGD and then backpropagate this end gradient through the entire SGD optimization procedure, and that is computationally very expensive. If computing the loss of a neural network costs one forward pass, the backward pass costs about as much, maybe twice as much, some constant factor; but if you ran k steps of SGD, you have to trace back all k steps by backpropagating through each one, so the meta-gradient costs roughly k backpropagation steps, and you also have to store every intermediate step. Computationally, that's just not feasible for more than very few steps, so you can only ever do a few; and since each gradient step is only a local, linear approximation, you accumulate error and end up with some estimate of the gradient. If you do this for all your tasks, you can finally decide: maybe for task one, the result is that to make the end loss go down, you need to shift the initialization up and to the right, because then gradient descent will end up in the right region. You do this for all tasks, average the gradients, and come up with a final gradient for your outer model, the initial parameters. So MAML does a big load of computation here. Now, there is a naive approximation, and it is exactly the guess from the beginning: first-order MAML says, if we want to go up a bit at the end, why don't we just shift the beginning up a bit? It simply takes the gradients at the end, aggregates them, and uses that as the meta-gradient. But this is inaccurate and generally doesn't work well, because you do have to understand how your end gradient is connected to your initial gradient; the relation is very nonlinear, and you can't just transfer one over to the other. Implicit MAML, the method of this paper, circumvents the need to explicitly backpropagate the gradient along the forward optimization path, but it is still able to come up with an expression for how the final gradient relates to the initial gradient.
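First-order MAML, in this notation, is simply the approximation of replacing the Jacobian by the identity:

```latex
\nabla_\theta F(\theta) \;\approx\; \frac{1}{M}\sum_{i=1}^{M}
\nabla_\phi\, \mathcal{L}_i\big(\phi_i(\theta)\big),
\qquad \text{i.e.}\quad \frac{d\phi_i}{d\theta} \approx I .
```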
Implicit MAML can therefore compute the initial gradient in a closed-form setting, at least in theory, and that is quite cool. In this video I'd like to explore how this comes about and why. We won't go through all the theory and the proofs, but I would like you to understand that it comes about because they impose a quadratic regularizer, and this quadratic regularizer gives rise to a very strong connection between the final gradient and the initial gradient, so they can basically transform one into the other. Alright, let's go into the problem formulation as they state it. You want to find the best meta-learning parameters, and they call this the outer level: on the outer level you run gradient descent on the meta-parameters to minimize this function F. Now, what is F? F is the average of the validation losses: the loss on the test set of each individual task, where the neural network we evaluate on the test set is the one that the training algorithm ALG produced by training on that task's training set, starting from the meta-parameters theta. Notice there is no task index i on the meta-parameters; that's the crucial point: all tasks start from the same initial parameters, then optimize on their own training set, and are evaluated on their own test set, which gives the loss for that particular task. The goal is to find meta-parameters such that the average loss resulting from this procedure is minimal. The inner level, as they say, is the algorithm component: the algorithm starts from the meta-parameters and runs gradient steps on the training-set loss. Of course, only the first step is taken at the meta-parameters; in subsequent steps these are replaced by the phi_i resulting from the previous step. The important thing is that with their method this doesn't even need to be gradient descent: since they don't backpropagate through the optimization, you can use any inner optimization procedure you want, even a black-box solver. It would be interesting to see how this affects something like reinforcement learning; that might have already happened, I haven't looked it up.
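As a bilevel problem, this formulation reads (writing the train and test splits of task i as D^tr_i and D^test_i; my notation):

```latex
\theta^{*} = \arg\min_{\theta} F(\theta),
\qquad
F(\theta) = \frac{1}{M}\sum_{i=1}^{M}
\mathcal{L}\big(\mathrm{ALG}_i(\theta),\, \mathcal{D}^{\mathrm{test}}_i\big),
\qquad
\mathrm{ALG}_i:\;\;
\phi \leftarrow \phi - \alpha\, \nabla_\phi\, \mathcal{L}\big(\phi,\, \mathcal{D}^{\mathrm{tr}}_i\big),
\;\; \phi_0 = \theta .
```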
Now, the crucial part of the paper, I think, is right here, in section 2.2, and we'll go through it in a bit of detail, because this is why the method works, why it is able to build the implicit gradient. The section is called proximal regularization in the inner level. To have sufficient learning in the inner level while also avoiding overfitting, ALG, the inner optimization procedure, needs to incorporate some form of regularization; their point is that especially when the individual tasks have small training sets, you need some protection against overfitting. They note that since MAML uses a small number of gradient steps, this corresponds to early stopping and can be interpreted as a form of regularization and a Bayesian prior. Remember that MAML backpropagates through the optimization procedure, so it is computationally limited to very few forward steps: it has to backpropagate through, and store, each one of them. By necessity it uses only a small number of gradient steps, and that is a kind of early stopping. We know that to prevent overfitting you can stop before your training loss reaches zero, ideally at the point where your validation error is lowest, so this limited number of steps acts as an implicit regularizer. In the new method we don't have this constraint anymore: we can run the inner optimization all the way to convergence, but then we lose that implicit regularizer and have to make up for it. As they say, in cases like ill-conditioned optimization landscapes and medium-shot learning we may want to take many gradient steps, which poses two challenges for MAML. First, we need to store and differentiate through the long optimization path of ALG, which imposes a considerable computation and memory burden; that's what we said. Second, the dependence of the model parameters phi_i on the meta-parameters shrinks and vanishes as the number of gradient steps grows, making meta-learning difficult. What they mean is: if you optimize the inner problem to the very end, its dependence on the initialization shrinks the more steps you take, because you move further and further away from the initialization towards a local optimum, and a local optimum could have been reached from many different initializations. Whether that really happens is still a question, but that's the idea: at the end of the procedure there is very little information left about the beginning, so any meta-gradient you compute is going to be super inaccurate. To overcome these limitations, and they solve both in one, they consider a more explicitly regularized algorithm. We don't just want to minimize the inner objective, i.e. find the minimum of the inner loss function; that's one goal, but the other goal, with a trade-off factor lambda, is to stay close to the initial parameters, and that is what this regularizer does. Closeness is measured in the L2 norm, so it's a quadratic regularizer. You might know this from supervised learning, where you add lambda times the L2 norm of the weights; that is called weight decay or L2 regularization, and it keeps the weights close to the zero point (implicitly there is a minus zero in there). Here, instead, you want to stay close to your initial parameters. So the inner optimization is no longer just minimizing the loss on the training set; it is a balance between that loss and staying close to where you started.
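The regularized inner objective, essentially the paper's equation 3 up to notation, is:

```latex
\mathrm{ALG}^{\star}_i(\theta) = \arg\min_{\phi'}\; G_i(\phi', \theta),
\qquad
G_i(\phi', \theta) = \hat{\mathcal{L}}_i(\phi') + \frac{\lambda}{2}\,\big\lVert \phi' - \theta \big\rVert^{2},
```

where L-hat_i is the training loss of task i and lambda trades off fitting the task against staying close to theta.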
When ALG carries a star, we refer not to the procedure of the algorithm but to the minimum it finds: ALG* means the algorithm has optimized the inner objective to its minimum, which is a balance between the training loss of that task and staying close to the original parameters. And this, I think, is why their method works; as they say, the proximal regularization term in equation three encourages the phi_i to remain close to theta, thereby retaining a strong dependence throughout. We're going to see in the math soon how exactly this lets them establish the implicit gradient correspondence. So they formulate the entire algorithm as follows: we want to find the best meta-learning parameters by minimizing the function F, where F is the average validation loss, for each task, of the parameters the inner optimization finds when it runs to its optimum. You can see this is already different from the original MAML: the original MAML simply ran the inner loop for a fixed number of steps, whereas now we really run it to the optimum, at least in the ideal case. What does the inner algorithm do? ALG* minimizes this function G. G has two arguments: the local, task-specific parameters and the meta-parameters, and we only optimize the local parameters, taking the meta-parameters as the anchor to fine-tune from. G is defined as the training loss of the local parameters plus the closeness regularizer. Cool. Now the question, of course, is how this leads to gradient descent. Ultimately we want to minimize F, so to run gradient descent we need to calculate dF/dtheta. What is that going to be? Since F is a one-over-M sum, the gradient simply distributes over the sum, so we need the derivative of each of the inner loss terms L(ALG*_i(theta)). You can see that theta is not a direct argument of the loss function; theta is the argument of the inner procedure, so the chain rule applies. The chain rule says: differentiate the outer thing with respect to its input, which gives the gradient of the loss with respect to the network parameters, and the thing that produced those parameters is ALG*. The first factor is easy; we know how to compute it. If you remember the drawing from the beginning, this gradient is the end arrow: one ordinary backward pass, regular supervised-learning backprop, from a loss function to the parameters of a neural network. The hard part is the derivative of the algorithm itself with respect to the meta-parameters.
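So the meta-gradient splits into an easy factor and a hard factor (same notation as before):

```latex
\frac{dF}{d\theta}
= \frac{1}{M}\sum_{i=1}^{M}
\left(\frac{d\,\mathrm{ALG}^{\star}_i(\theta)}{d\theta}\right)^{\!\top}
\nabla_\phi\, \mathcal{L}_i\big(\mathrm{ALG}^{\star}_i(\theta)\big),
```

where the gradient on the right is one backward pass and the Jacobian on the left is the derivative of the whole inner solver, the thing we still need.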
The end gradient, the gradient of the loss with respect to the adapted parameters, is a vector. The derivative of the algorithm is going to be a matrix, and the quantity we want is the product of the two: the matrix relates each dimension of the end gradient to the gradient we actually want, in a linear fashion. It answers the question: how do we need to change the initial parameters in order to change the end parameters in a certain way? We only know the end gradient, but we want the meta-gradient, so the question is how to calculate this Jacobian, how to differentiate the whole algorithmic procedure, this entire gradient-descent optimization. And the paper just throws it in your face, boom, done: this Jacobian has a closed-form expression, the inverse of a matrix built from the identity, the lambda factor we saw before, and the Hessian of the training loss, i.e. the second derivative, the curvature of the loss landscape. But nowhere in this expression does the SGD procedure show up, even though the thing being differentiated is the SGD procedure, and that's pretty impressive. So let's look at how that comes about. Where do we start? First, take this function G and calculate its derivative with respect to the inner parameters phi. G is a sum of two terms. The first term is easy: the gradient of the loss function, a scalar, so one backward pass through the network. The second term we can also do easily: it's a squared L2 norm, we know how to differentiate a square, the two comes down, and it results in lambda times (phi minus theta). Now, this was relatively easy, but imagine what happens when we add one additional piece of information: the star on ALG* denotes that the inner optimization is always run to the minimum of this function. And what do we know about the minimum of a function? Its gradient at that point is zero. This is the important part. So at the optimum the two terms balance, and we can restructure: move the loss gradient to the other side, divide by lambda, bring over the theta, and isolate phi.
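The two facts just derived, in symbols: the gradient of G, and the stationarity condition at the inner optimum phi* = ALG*(theta):

```latex
\nabla_{\phi} G(\phi, \theta) = \nabla \hat{\mathcal{L}}(\phi) + \lambda\,(\phi - \theta),
\qquad
\nabla \hat{\mathcal{L}}(\phi^{*}) + \lambda\,(\phi^{*} - \theta) = 0
\;\;\Longrightarrow\;\;
\phi^{*} = \theta - \frac{1}{\lambda}\,\nabla \hat{\mathcal{L}}(\phi^{*}).
```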
That gives us a closed expression which says: at the optimum, the inner parameters phi are given in terms of their own loss gradient and theta. That's pretty cool, but we also know that these end parameters aren't just free-standing quantities: they depend on the initial parameters, since we used the initial parameters as the anchor, so phi is really a function of theta. So now we can differentiate this expression with respect to theta and ask: how do the end parameters relate to the initial parameters? That has been our basic question all along, but now we have an exact expression for the end parameters, which we didn't have before; before, they just came about by running SGD. Important to note: this relation only holds at the optimum, not anywhere else, and the paper leans on this quite a bit. So what happens when we differentiate? The theta term simply gives us the identity matrix, which is the first part of the Jacobian, and the one-over-lambda factor stays. Now it gets a bit tricky, because the loss gradient is itself a function of phi, which is a function of theta, so we apply the chain rule again: since this is already a first derivative, differentiating gives the second derivative of the loss function, the Hessian, times the inner derivative, and the inner derivative is again d phi by d theta. So, interestingly, the expression we are looking for appears inside the expression itself: the Jacobian shows up on both sides. But we can collect the terms and solve for it: ship the Hessian term to the other side and divide both sides, which makes it the inverse, and we find that the Jacobian we're looking for is the inverse of the identity plus one-over-lambda times the Hessian, exactly the quantity the paper states. Now, why does this work? Again: first, because we optimize the inner problem all the way to the optimum, which is where the equals-zero comes from; and second, because we have this regularizer; you can see that the relation comes directly from it. Without the regularizer we could not form this expression at all.
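For reference, the differentiation step in symbols: differentiating the fixed-point equation with respect to theta, remembering that phi* is a function of theta, gives

```latex
\frac{d\phi^{*}}{d\theta}
= I - \frac{1}{\lambda}\,\nabla^{2}\hat{\mathcal{L}}(\phi^{*})\;\frac{d\phi^{*}}{d\theta}
\;\;\Longrightarrow\;\;
\frac{d\phi^{*}}{d\theta}
= \left(I + \frac{1}{\lambda}\,\nabla^{2}\hat{\mathcal{L}}(\phi^{*})\right)^{-1}.
```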
We could not have gotten phi as a standalone quantity, and therefore the derivation wouldn't have worked. Why is this important? If you look back at the drawing, what you are doing is imposing a quadratic regularizer around the initial point, and that creates the very strong connection between the end gradient and the initial gradient. When you optimize the training loss of the inner task, plain SGD, left to itself, would run right to the innermost point of the loss landscape. But with the regularizer, SGD has to find a trade-off point and will stop somewhere in between, with two forces pulling on it: one towards the minimum of the training loss, and one pulling it back towards theta inside the quadratic. So SGD can no longer stop at just any point on a loss isoline; it will essentially go to the point where the training-loss force balances against the pull of the quadratic. And since it's a quadratic, we have closed-form formulas relating a gradient out here, at the end point, to the gradient back at the center, so we can express this Jacobian in closed form: because it's a quadratic, and because of these two opposing forces. I can recommend Ferenc Huszár's blog post on this; it has some very nice animations of how the regularizer restricts where gradient descent can go. I'll link to it in the description; it's pretty cool to see. So what does that give us? The implicit model-agnostic meta-learning algorithm, iMAML: while not converged, sample a batch of tasks; for each task, compute the meta-gradient g_i; average these gradients to get a gradient for the outer parameters; then do a gradient-descent step on the outer parameters. Pretty easy. And how do you compute this implicit meta-gradient? You initialize the task's parameters with theta; by the way, these don't strictly need to be initializations, they can be any hyperparameters that the inner algorithm takes, any parameterization of it; I just always said initial parameters to keep it simple, but it can be any hyperparameters of the inner task. Then you obtain the task parameters using an iterative optimization solver, such that they are close to the optimum of the inner objective. They actually extend the theory so that you don't have to optimize the inner objective exactly to the optimum; being delta-close to it is enough, which is pretty useful. That's in a part of the paper we won't go over, because this video would get super long, but I invite you to read it if you're interested. Then you compute the partial outer-level gradient v: the gradient at the end of the optimization procedure, taken with respect to the validation dataset. That is one backprop. Now we need to relate that end gradient back to the beginning.
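To make the whole loop concrete, here is a toy numpy sketch of iMAML for the linear model from before. This is my own illustration, not the authors' code: the quadratic loss is chosen precisely because the proximal inner solve and the Hessian are available in closed form; for a real network you would only solve the inner problem approximately, delta-close, and replace the explicit matrix solves with conjugate gradient, as described next:

```python
import numpy as np

d, lam, meta_lr = 5, 1.0, 0.05
rng = np.random.default_rng(1)

def make_task(n_train=20, n_val=10):
    w = rng.normal(size=d)                      # task-specific ground truth
    X = rng.normal(size=(n_train + n_val, d))
    y = X @ w
    return X[:n_train], y[:n_train], X[n_train:], y[n_train:]

theta = np.zeros(d)
for step in range(100):                          # outer loop over meta-iterations
    meta_grads = []
    for X_tr, y_tr, X_val, y_val in (make_task() for _ in range(4)):
        H = 2 * X_tr.T @ X_tr / len(y_tr)        # Hessian of the inner train loss
        # inner solve of grad L(phi) + lam*(phi - theta) = 0 (closed form here)
        phi = np.linalg.solve(H + lam * np.eye(d),
                              2 * X_tr.T @ y_tr / len(y_tr) + lam * theta)
        v = 2 * X_val.T @ (X_val @ phi - y_val) / len(y_val)  # end gradient, one "backprop"
        # implicit meta-gradient: (I + H/lam)^{-1} v
        meta_grads.append(np.linalg.solve(np.eye(d) + H / lam, v))
    theta -= meta_lr * np.mean(meta_grads, axis=0)
```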
We relate the end gradient to the beginning by multiplying it with that matrix, inverted. Now, obtaining the entire matrix and inverting it is very memory- and computation-intensive: the Hessian of a network with d parameters is a d-by-d matrix, so if you have five million parameters, that's a matrix with 25 million million entries, which is just not possible. That's why the paper extends the method with a second level of approximation: you don't have to compute the exact inverse; you just have to compute something that is very close to the inverse times the final gradient. A good method for this is the conjugate gradient method, which exploits the fact that you can compute Hessian-vector products without ever forming the Hessian as a matrix; this can be done with a sort of modified backpropagation, which I also won't go into here. So, as they put it, you use an iterative solver, for example conjugate gradient, along with reverse-mode differentiation to compute Hessian-vector products, in order to compute g_i, where g_i is the final gradient pulled back through this matrix to give you the beginning gradient, the meta-gradient. So there are two approximations: first, you don't actually solve the inner problem to the very end, only delta-close; second, you don't exactly compute the product of the final gradient with the matrix inverse, you find something delta-prime-close to it, and they have a bunch of theory showing that this still works. They compare this, of course, to the other algorithms, and observe that their algorithm uses substantially less memory, and substantially less compute time once you go up in the number of inner gradient steps, and that it works better than first-order MAML. First-order MAML was our naive initial guess of how this could be done, and it tends to perform very poorly; you can't quite see it in this plot, but their method is both better and faster, in part because the conjugate gradient work happens in the outer optimizer. Then there is the error plot: how well are these methods able to approximate the true meta-gradient, the one you would get if you did what MAML does but optimized the inner problem to the end? Of course, the worry with this new method is that its approximations could hurt you; the worry with MAML is that you're backpropagating through the optimization procedure, so the nonlinear errors can sort of accumulate. And as you can see, even though both might eventually get to near-zero error if you give them enough inner gradient steps, in the low-inner-gradient-step regime implicit MAML is much better than MAML. Now, I've just said the errors accumulate, but the effect here is probably rather that with few inner steps MAML doesn't reach a good enough optimum of the inner tasks, so the tasks' gradients, taken while they are still far from optimized, are a very bad estimate of the outer gradient.
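The Hessian-vector-product trick plus conjugate gradient can be sketched in a few lines. This is my own sketch in PyTorch, not the authors' code; it assumes the model's weights live in a single flat tensor `phi` with `requires_grad=True`, `loss` is the inner training loss computed from `phi`, and lambda is large enough that the operator I + H/lambda is positive definite, which conjugate gradient requires:

```python
import torch

def hvp(loss, phi, vec):
    # Hessian-vector product via double backprop; the Hessian is never formed.
    (g,) = torch.autograd.grad(loss, phi, create_graph=True)
    (hv,) = torch.autograd.grad(g, phi, grad_outputs=vec, retain_graph=True)
    return hv

def implicit_meta_grad(loss, phi, v, lam, cg_steps=10):
    """Approximately solve (I + H/lam) g = v for g with conjugate gradient,
    where H is the Hessian of the inner training loss at phi and v is the
    end gradient of the validation loss. Only matrix-vector products are used."""
    A = lambda x: x + hvp(loss, phi, x) / lam   # the operator, applied matrix-free
    g = torch.zeros_like(v)
    r = v.clone()            # residual of A g = v at g = 0
    p = r.clone()            # search direction
    for _ in range(cg_steps):
        Ap = A(p)
        alpha = (r @ r) / (p @ Ap)
        g = g + alpha * p
        r_new = r - alpha * Ap
        beta = (r_new @ r_new) / (r @ r)
        p, r = r_new + beta * p, r_new
    return g
```

Here `phi`, `loss`, `v`, and `lam` are placeholders for the approximately solved inner parameters, their training loss, the validation gradient, and the regularization strength; a handful to ten CG steps is the regime discussed next.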
And then, surprisingly to me, doing more plain gradient steps there can actually hurt you more. Then at the end you see the effect of the conjugate-gradient steps, that is, of approximating the matrix-inverse product. If you only do two CG steps, at some point that approximation error dominates; if you do more steps, you can reach a much lower error, and ten steps isn't that much for an algorithm like this. As you can see, with ten CG steps your computation time, in the regime of many inner gradient steps, will still be lower than the original MAML's. And then they actually test the method, and of course they're the best at pretty much everything. I don't want to go into the exact details here; I invite you to check out the paper for that, especially if you're interested in the proofs and the approximation guarantees. And with that, bye bye.
[ { "end": 4.94, "start": 0, "text": " Hi there! Today we're looking at meta-learning with implicit gradients by" }, { "end": 10.84, "start": 4.94, "text": " Arwind Rajeshwaran, Chelsea Finn, Shamka Khod and Sergei Levine." }, { "end": 16.080000000000002, "start": 10.84, "text": " So this paper deals with the task of meta-learning. Now if you don't know what" }, { "end": 20.76, "start": 16.080000000000002, "text": " meta-learning is, let me quickly introduce the term. So in meta-learning" }, { "end": 25.64, "start": 20.76, "text": " you assume you have some sort of a distribution of tasks ahead. So let's" }, { "end": 31.28, "start": 25.64, "text": " make some examples. For example, task one could be you get an image, you have a" }, { "end": 39.68, "start": 31.28, "text": " data set of images and you want to classify them into cats or dogs. And you" }, { "end": 43.08, "start": 39.68, "text": " know you have a little data set with labeled images and you can train, test," }, { "end": 49.64, "start": 43.08, "text": " split that and that's one task. Now task two is going to be, again you have a" }, { "end": 54.96, "start": 49.64, "text": " small data set of different images, but let's just all make image examples here." }, { "end": 60.84, "start": 54.96, "text": " But you want to locate the pedestrian, so you want to locate the human in the" }, { "end": 70.84, "start": 60.84, "text": " image. So where is the human? And the task three could be again a small" }, { "end": 78.6, "start": 70.84, "text": " database of tasks, sorry of images and in each of the image you want to visually" }, { "end": 86.28, "start": 78.6, "text": " question answer. Or let's say you want to point out, there is a" }, { "end": 90.88, "start": 86.28, "text": " ground, there is a tree and there is a question about it. Yeah let's say visual" }, { "end": 95.56, "start": 90.88, "text": " question answering, which gives you yes or no questions, something" }, { "end": 102.39999999999999, "start": 95.56, "text": " like this. Now let's just say you have to segment, you have to segment that image." }, { "end": 107, "start": 102.39999999999999, "text": " So down here would be ground, you have to segment the ground. Okay so these are" }, { "end": 113.48, "start": 107, "text": " all image tasks. These are all perfectly fine independent tasks. For" }, { "end": 119.08, "start": 113.48, "text": " each one you have a small data set. Now sometimes these data sets are very very" }, { "end": 124.24000000000001, "start": 119.08, "text": " small, such that you cannot really train a state-of-the-art model on them. For" }, { "end": 129.6, "start": 124.24000000000001, "text": " example if you have medical images, oftentimes the labels of these are very" }, { "end": 133.96, "start": 129.6, "text": " hard to get. I mean there's privacy concerns and then you know doctors have" }, { "end": 139.12, "start": 133.96, "text": " to look at it to produce the images, costs money and so on. So it's not like" }, { "end": 145.20000000000002, "start": 139.12, "text": " you have a lot of images and you could profit from more images. So one method" }, { "end": 149.52, "start": 145.20000000000002, "text": " that people come up with is called transfer learning. So in transfer" }, { "end": 154.24, "start": 149.52, "text": " learning you say I have this giant database of images. Let's say" }, { "end": 159.16, "start": 154.24, "text": " this is ImageNet. I have this giant database ImageNet with labeled" }, { "end": 165.88, "start": 159.16, "text": " images. 
What I can do is I can use this to train a neural network." }, { "end": 170.04, "start": 165.88, "text": " These are the bunch of layers of neural network and I can train the neural" }, { "end": 176.51999999999998, "start": 170.04, "text": " network on this big database and get parameters theta. These are the" }, { "end": 180.56, "start": 176.51999999999998, "text": " parameters of the neural network and then I will basically adapt these" }, { "end": 186.28, "start": 180.56, "text": " parameters to each task individually. So in task one, sorry about that, I would" }, { "end": 191.16, "start": 186.28, "text": " then take these parameters as an input to the neural network. I would initialize" }, { "end": 197.4, "start": 191.16, "text": " the neural network with these parameters and then I would use this training set" }, { "end": 204.16, "start": 197.4, "text": " in order to fine-tune, this is what's called fine-tuning, to the task specific" }, { "end": 210.64, "start": 204.16, "text": " parameters here phi. So phi one because it's task one. For task two I would" }, { "end": 217.27999999999997, "start": 210.64, "text": " also take these as a starting point in order to train its neural network to" }, { "end": 223.2, "start": 217.27999999999997, "text": " fine-tune it on this bounding box task in order to obtain the parameters for" }, { "end": 228.48, "start": 223.2, "text": " task two. So you can see that there's a pre-training stage here to" }, { "end": 234.45999999999998, "start": 228.48, "text": " obtain good initial parameters and then we adapt these initial parameters for" }, { "end": 240.16, "start": 234.45999999999998, "text": " each task separately in a fine-tuning stage. So this is one way we can do it." }, { "end": 245.35999999999999, "start": 240.16, "text": " Another way we can do it is called multitask learning. What we do in" }, { "end": 252.07999999999998, "start": 245.35999999999999, "text": " multitask learning is we say, well see probably a neural network that" }, { "end": 256.84, "start": 252.07999999999998, "text": " can segment the grounds is also pretty good at doing bounding boxes. It will" }, { "end": 261.52, "start": 256.84, "text": " use some of the same features. So can't we just kind of pull together these" }, { "end": 269.08, "start": 261.52, "text": " datasets into one bigger dataset and then train on, like if it's an" }, { "end": 273.12, "start": 269.08, "text": " image from task one we'll train on the loss of task one and if it's an image" }, { "end": 277.44, "start": 273.12, "text": " from task two we'll train on the loss of task two and so on. But we'll sort of use" }, { "end": 282.47999999999996, "start": 277.44, "text": " the same neural network basis. We just have kind of different heads on top of" }, { "end": 286.59999999999997, "start": 282.47999999999996, "text": " them. So this is called multitask learning. We have one" }, { "end": 292.52, "start": 286.59999999999997, "text": " shared neural network with different outputs for the different tasks and" }, { "end": 298.52, "start": 292.52, "text": " basically counting on the fact that you can sort of learn from one task what's" }, { "end": 303.88, "start": 298.52, "text": " useful in the other. Now this is a good method to combine the tasks and to" }, { "end": 309.88, "start": 303.88, "text": " basically share data information but it will also limit you because you now have" }, { "end": 314.96, "start": 309.88, "text": " to trade off between the tasks. 
Like this neural network right here, this joint" }, { "end": 322.68, "start": 314.96, "text": " encoder, will never be able to fully gear to one task because it" }, { "end": 329.12, "start": 322.68, "text": " also has to perform for the other tasks as well. So you kind of" }, { "end": 334.44, "start": 329.12, "text": " limit yourself in your top-out accuracy. Now maybe the regularization effect is" }, { "end": 339.64, "start": 334.44, "text": " good. So these are two methods. The first is called transfer learning here on the" }, { "end": 346.8, "start": 339.64, "text": " left and the other is called multitask learning. Now meta learning goes a" }, { "end": 351.88, "start": 346.8, "text": " different direction. Meta learning is like transfer learning but it says well" }, { "end": 359.15999999999997, "start": 351.88, "text": " what if we don't have this giant data set right here? What if we" }, { "end": 365.92, "start": 359.15999999999997, "text": " find a way to learn these initial parameters? So what we'll do is we'll" }, { "end": 371.32, "start": 365.92, "text": " start out with a guess. A guess of good initial parameters. Let's call that theta" }, { "end": 379.24, "start": 371.32, "text": " zero. And now we have all of these three tasks take theta zero and run their" }, { "end": 386.52, "start": 379.24, "text": " fine-tuning to come up with their own parameters starting at theta 0." }, { "end": 392.92, "start": 386.52, "text": " So this is phi 1 started from theta 0 and we'll also give it to task 2 and to" }, { "end": 400.6, "start": 392.92, "text": " task 3. And each of these tasks is going to train on its own training data" }, { "end": 406.96000000000004, "start": 400.6, "text": " set and then evaluate on its own validation data set and then report back" }, { "end": 412.28, "start": 406.96, "text": " a number. So we do this for every task and every task basically trains this," }, { "end": 417, "start": 412.28, "text": " runs through the validation data set, reports back a generalization error and" }, { "end": 421.35999999999996, "start": 417, "text": " then we know once we get all the information from all the tasks we know" }, { "end": 427.64, "start": 421.35999999999996, "text": " how good were these initial parameters. We get a measure of how easy is it" }, { "end": 434.64, "start": 427.64, "text": " for the tasks to adapt these initial parameters to their own data set. And" }, { "end": 441.15999999999997, "start": 434.64, "text": " then we somehow need to figure out a way. Okay these parameters were on average" }, { "end": 448.15999999999997, "start": 441.15999999999997, "text": " 81% good. Can we come up with a better set of initial parameters theta 1? In" }, { "end": 453.64, "start": 448.15999999999997, "text": " some way can we somehow find a better set of initial parameters such that it" }, { "end": 460.08, "start": 453.64, "text": " is easier for the tasks to adapt these initial parameters? And even more so" }, { "end": 466.59999999999997, "start": 460.08, "text": " there could be task 4 which we are not seeing during this training phase right?" }, { "end": 471.71999999999997, "start": 466.59999999999997, "text": " This is kind of our... so these up here could be our training tasks and" }, { "end": 476.15999999999997, "start": 471.71999999999997, "text": " this down here could be our validation task. 
So basically we're trying to come" }, { "end": 481.71999999999997, "start": 476.15999999999997, "text": " up with a set of initial parameters that if a new task comes along and it takes" }, { "end": 489.2, "start": 481.71999999999997, "text": " this thing as its initial parameters it will be able to adapt very quickly these" }, { "end": 495.4, "start": 489.2, "text": " initial parameters to its own data set. And most importantly it can do that it" }, { "end": 500.12, "start": 495.4, "text": " will result in a much better model than had the task just trained on its own" }, { "end": 507.36, "start": 500.12, "text": " small data set from scratch. So our task is in meta learning is basically to come" }, { "end": 511.4, "start": 507.36, "text": " up with a learning procedure to generate to iteratively generate better and" }, { "end": 517.8, "start": 511.4, "text": " better and better initial parameters. And what better way to do this than using" }, { "end": 524.5999999999999, "start": 517.8, "text": " gradient descent? So this is this is meta learning using gradient descent is" }, { "end": 532.3199999999999, "start": 524.5999999999999, "text": " the core of this paper basically. Now what do you need for gradient descent?" }, { "end": 538.04, "start": 532.3199999999999, "text": " So if you want to try to go from one task to the next using SGD or even GD" }, { "end": 542.24, "start": 538.04, "text": " gradient descent you need to come up with a gradient. Now why is this a" }, { "end": 548.52, "start": 542.24, "text": " problem in this case? So this is in this figure. So this meta learning using" }, { "end": 554, "start": 548.52, "text": " gradients was done in this technique called MAML. And essentially what you can" }, { "end": 559.5600000000001, "start": 554, "text": " see here is that if you have this set of initial parameters this is your" }, { "end": 563.6800000000001, "start": 559.5600000000001, "text": " current best guess of these good initial parameters and you want to come up with" }, { "end": 568.16, "start": 563.6800000000001, "text": " a gradient of how to get an even better set. Now this gradient here is indicated" }, { "end": 573.56, "start": 568.16, "text": " by this arrow. So you don't let's imagine you don't know the gradient yet you want" }, { "end": 578.6, "start": 573.56, "text": " to come up with a gradient. So what you'll need to do is you'll basically" }, { "end": 583.6, "start": 578.6, "text": " have to compute the loss function and you have to differentiate that loss" }, { "end": 587.9599999999999, "start": 583.6, "text": " function with respect to your parameters. That's down here what the description of" }, { "end": 592.04, "start": 587.9599999999999, "text": " the orange arrow. So the your loss function of your meta parameters is" }, { "end": 598.3199999999999, "start": 592.04, "text": " going to be the average or the sum of the loss functions across all of the" }, { "end": 603.36, "start": 598.3199999999999, "text": " different tasks individually. So the gradient is going to be the sum of the" }, { "end": 609.68, "start": 603.36, "text": " gradients of these loss functions with respect to your original parameter. Now" }, { "end": 615.5, "start": 609.68, "text": " this is the difference right? Usually we differentiate with respect to the" }, { "end": 620.5999999999999, "start": 615.5, "text": " parameters that we input into the loss function. But not here. 
What we input" }, { "end": 626.0400000000001, "start": 620.6, "text": " into the loss function is what is at the end of the task adapting to its own" }, { "end": 632.64, "start": 626.0400000000001, "text": " parameters. So this thing we input here is a function of our initial parameters" }, { "end": 637.2, "start": 632.64, "text": " but it's not our initial parameters. So what we have is initial parameters we" }, { "end": 644.76, "start": 637.2, "text": " give them to a task. This task runs SGD for k steps it runs it for a" }, { "end": 652.64, "start": 644.76, "text": " number of steps comes up with the adapted version to its own problem that" }, { "end": 660.96, "start": 652.64, "text": " goes into a loss function. So this thing here is the" }, { "end": 664.98, "start": 660.96, "text": " neural network that finally determines the loss function. So if we want to" }, { "end": 669.04, "start": 664.98, "text": " back propagate we can back propagate this loss function through the" }, { "end": 673.04, "start": 669.04, "text": " neural network the f the neural network is right here is parameterized by these" }, { "end": 676.9599999999999, "start": 673.04, "text": " things we can back propagate through that but then we'll have to back" }, { "end": 681.56, "start": 676.9599999999999, "text": " propagate through the optimization procedure that was used to derive these" }, { "end": 688.1999999999999, "start": 681.56, "text": " things. So that's the the problem right here. You can see this here you start" }, { "end": 692.52, "start": 688.1999999999999, "text": " out with the initial parameters and let's say you give them to task one. Task" }, { "end": 698.4, "start": 692.52, "text": " one is going to take these as initialization and then run SGD so maybe" }, { "end": 705.88, "start": 698.4, "text": " it will perturb these parameters to come up with here phi one these parameters" }, { "end": 712.16, "start": 705.88, "text": " these parameters are the adapted version for task one and then at the end of task" }, { "end": 716.8, "start": 712.16, "text": " one you use these characterizing neural network and you can calculate a" }, { "end": 722.9399999999999, "start": 716.8, "text": " gradient. So how would you need to update these parameters in order to make the" }, { "end": 727.84, "start": 722.9399999999999, "text": " loss go down and the neural network or sorry the computation will maybe result" }, { "end": 734.12, "start": 727.84, "text": " well you need to go up a bit. Well this is too strong. 
It will maybe say you need" }, { "end": 740.02, "start": 734.12, "text": " to go into this direction right here with respect to these parameters but now" }, { "end": 745.76, "start": 740.02, "text": " the question is how do you have to adjust your initial parameters such that" }, { "end": 750.48, "start": 745.76, "text": " your your final parameters will go into that direction and that's not really" }, { "end": 753.94, "start": 750.48, "text": " clear you could make a guess right you could make a guess and say well if my" }, { "end": 758.96, "start": 753.94, "text": " initial parameters just go up a bit maybe the optimization procedure will" }, { "end": 764.0400000000001, "start": 758.96, "text": " just you know sort of look the same but shift it up here so something like this" }, { "end": 768.84, "start": 764.0400000000001, "text": " and then I will end up here but that's not guaranteed like this is a super" }, { "end": 773.5600000000001, "start": 768.84, "text": " nonlinear procedure that you're running it through this SGD thing and it will" }, { "end": 779.36, "start": 773.5600000000001, "text": " basically it's an iterative recursive procedure so it will sort of accumulate" }, { "end": 785.96, "start": 779.36, "text": " its own nonlinear errors and that's why what you have to do is basically you" }, { "end": 791.6800000000001, "start": 785.96, "text": " forward propagate through SGD and then you have to back propagate this gradient" }, { "end": 795.32, "start": 791.6800000000001, "text": " right here you have to back propagate this through the entire SGD" }, { "end": 800.5600000000001, "start": 795.32, "text": " optimization procedure and that is computationally very expensive because" }, { "end": 804.02, "start": 800.5600000000001, "text": " if you have to compute the loss once here for a neural network you have to" }, { "end": 809.4399999999999, "start": 804.02, "text": " forward pass once and the back propagation will cost as much as the forward" }, { "end": 814.36, "start": 809.4399999999999, "text": " propagation or maybe twice as much constant number but if you run k steps" }, { "end": 820.28, "start": 814.36, "text": " of SGD you basically have to trace back those k steps via back propagating it" }, { "end": 826.4399999999999, "start": 820.28, "text": " through each step so basically this is k times a back propagation step and then" }, { "end": 832.16, "start": 826.4399999999999, "text": " computationally that's just not feasible for more than very few steps and so you" }, { "end": 835.9399999999999, "start": 832.16, "text": " can only ever do very few steps you basically accumulate your nonlinear" }, { "end": 840.7199999999999, "start": 835.9399999999999, "text": " error because gradient descent is a linear procedure and then you get some" }, { "end": 845.3199999999999, "start": 840.7199999999999, "text": " estimation of the gradient at the end now if you do that for all of your tasks" }, { "end": 850.64, "start": 845.3199999999999, "text": " then finally you can decide so maybe for a task one here the result will be in" }, { "end": 855.9599999999999, "start": 850.64, "text": " order to make this go up a bit you need to shift this a bit to up and the right" }, { "end": 861.4, "start": 855.9599999999999, "text": " right because then the gradient descent will kind of sort of end up here and you" }, { "end": 867.4399999999999, "start": 861.4, "text": " do this for all tasks and you average the gradient like here then you can come" }, { "end": 873.52, "start": 867.4399999999999, 
"text": " up with a final gradient for your inner sorry for your outer model for your" }, { "end": 881.12, "start": 873.52, "text": " initial parameters so this is a big load of computation that mammal does here now" }, { "end": 885.56, "start": 881.12, "text": " there is a naive approximation and this is exactly what we said at the beginning" }, { "end": 890.96, "start": 885.56, "text": " right this first-order mammal is the guess that if we want to go up a bit at" }, { "end": 896.6800000000001, "start": 890.96, "text": " the end here why don't we shift the beginning up a bit right and so the" }, { "end": 901.0400000000001, "start": 896.6800000000001, "text": " first-order mammal would just result in basically looking at the gradients at" }, { "end": 905.88, "start": 901.0400000000001, "text": " the end and sort of aggregating them right here and then coming up with a" }, { "end": 912.12, "start": 905.88, "text": " gradient but this is very inaccurate and generally doesn't work well because you" }, { "end": 919.6800000000001, "start": 912.12, "text": " have to understand how your end gradient is connected to your initial gradient" }, { "end": 926.8, "start": 919.68, "text": " because this is very nonlinear you can't just basically transfer it over now" }, { "end": 932.04, "start": 926.8, "text": " implicit mammal this paper right here circumvents that it circumvents the step" }, { "end": 938.04, "start": 932.04, "text": " to have to explicitly back propagate this gradient along the forward pass but" }, { "end": 944.28, "start": 938.04, "text": " it's still able to come up with an expression for how the final gradient" }, { "end": 952.8399999999999, "start": 944.28, "text": " relates to the initial gradient so this is quite cool and we're in this video I" }, { "end": 958.3199999999999, "start": 952.8399999999999, "text": " would basically like to explore how this comes about and why this comes about we" }, { "end": 961.88, "start": 958.3199999999999, "text": " won't go through all the theory and the proofs but I would like you to" }, { "end": 966.52, "start": 961.88, "text": " understand that this comes about by basically them imposing a quadratic" }, { "end": 972.4, "start": 966.52, "text": " regularizer and therefore this quadratic regularizer makes a very it kind of" }, { "end": 977.04, "start": 972.4, "text": " gives rise to a very strong connection between this final gradient and this" }, { "end": 981.4399999999999, "start": 977.04, "text": " initial gradient so they can basically transform one into the other and" }, { "end": 988.76, "start": 981.4399999999999, "text": " therefore they can compute the initial gradient in a closed form setting or at" }, { "end": 996.04, "start": 988.76, "text": " least in theory all right this was this now let's go into the problem formulation" }, { "end": 1003.28, "start": 996.04, "text": " as they see it the entire problem formulation is you want to find these" }, { "end": 1009.24, "start": 1003.28, "text": " best meta learning parameters and they call this the outer level so on an" }, { "end": 1013.1999999999999, "start": 1009.24, "text": " outer level you want to run gradient descent to find the best meta learning" }, { "end": 1020.68, "start": 1013.1999999999999, "text": " parameters to minimize this function F right here now what is F F is the average" }, { "end": 1025.52, "start": 1020.68, "text": " of the validation loss function so here this is the loss function on the test" }, { "end": 1032.24, "start": 1025.52, "text": " sets of the 
individual tasks and the neural network that we evaluate on the" }, { "end": 1038.36, "start": 1032.24, "text": " test set is the neural network that is trained the algorithm is" }, { "end": 1043.8799999999999, "start": 1038.36, "text": " a training algorithm is trained on the training set of that particular task" }, { "end": 1049.84, "start": 1043.8799999999999, "text": " while starting from these parameters theta now you see there is no" }, { "end": 1055.04, "start": 1049.84, "text": " dependence on the task here there is no I down here for these meta" }, { "end": 1059.12, "start": 1055.04, "text": " parameters because these are always the same right that's the crucial point all" }, { "end": 1065.52, "start": 1059.12, "text": " the tasks start from the same initial parameters then they optimize on their" }, { "end": 1070.52, "start": 1065.52, "text": " own training data set and then they evaluate on their own test data set and" }, { "end": 1075.44, "start": 1070.52, "text": " that will give you the loss for that particular task and your goal is to find" }, { "end": 1080.48, "start": 1075.44, "text": " the meta parameters such that this function here the average loss that" }, { "end": 1093.28, "start": 1080.48, "text": " results from this procedure is minimal okay so where they say right here the" }, { "end": 1098.72, "start": 1093.28, "text": " inner level is this algorithm component so the algorithm starts from these from" }, { "end": 1105.32, "start": 1098.72, "text": " these meta parameters and runs gradient steps on the training data set loss now" }, { "end": 1110.4, "start": 1105.32, "text": " this is just the first step right here of this procedure of course in the next" }, { "end": 1116.96, "start": 1110.4, "text": " step these are going to be replaced by the by the Phi that by the Phi I that" }, { "end": 1121.0800000000002, "start": 1116.96, "text": " results from the previous step so the first step is run on the meta parameters" }, { "end": 1127, "start": 1121.0800000000002, "text": " and then subsequently the task specific parameters are updated the important" }, { "end": 1131.1200000000001, "start": 1127, "text": " thing here is that this doesn't need to be gradient descent actually with their" }, { "end": 1135.02, "start": 1131.1200000000001, "text": " method because their method doesn't need to back propagate through the" }, { "end": 1140.3600000000001, "start": 1135.02, "text": " optimization you can think of any inner optimization procedure that you want it" }, { "end": 1146.8, "start": 1140.36, "text": " can be like a black box solver whatever you want I'm going to be just" }, { "end": 1151.56, "start": 1146.8, "text": " interesting to see how this is going to affect something like reinforcement" }, { "end": 1155, "start": 1151.56, "text": " learning and so on this might have already happened and I have not looked" }, { "end": 1162.52, "start": 1155, "text": " up this so the crucial part of the paper I think is right here and it's sort of" }, { "end": 1168.1999999999998, "start": 1162.52, "text": " like you know section 2.2 but I would I would want to point out that I think" }, { "end": 1173.76, "start": 1168.2, "text": " this is the crucial part this is why the method works ultimately why it's in why" }, { "end": 1177.8, "start": 1173.76, "text": " it's able to build this implicit gradient so they do section is called" }, { "end": 1182.48, "start": 1177.8, "text": " proximal regularization in the inner level and we'll go through this with a" }, { "end": 1187.2, "start": 
1182.48, "text": " bit of detail to have sufficient learning in the inner level while also" }, { "end": 1192.3, "start": 1187.2, "text": " avoiding overfitting ALG that's the inner optimization procedure needs to" }, { "end": 1199.52, "start": 1192.3, "text": " incorporate some form of regularization right so their their sort their goal" }, { "end": 1204.8799999999999, "start": 1199.52, "text": " here or their point here is that if especially if these individual tasks" }, { "end": 1213.08, "start": 1204.8799999999999, "text": " have small training data set you need to have some kind of protection against" }, { "end": 1220.68, "start": 1213.08, "text": " overfitting and that and that and they say since mammal uses a small number of" }, { "end": 1225.4, "start": 1220.68, "text": " gradient steps this corresponds to early stopping and can be interpreted as a" }, { "end": 1231.28, "start": 1225.4, "text": " form of regularization and Bayesian prior so mammal is this this previous" }, { "end": 1236.28, "start": 1231.28, "text": " method this basic method that back propagates through the optimization" }, { "end": 1240.96, "start": 1236.28, "text": " procedure and since it does this since it back propagates through the" }, { "end": 1247.02, "start": 1240.96, "text": " optimization procedure it's computationally limited to only run very" }, { "end": 1252, "start": 1247.02, "text": " few forward optimization steps because it then has to back propagate through" }, { "end": 1257.56, "start": 1252, "text": " each one right needs to store each one so it's computationally limited so by" }, { "end": 1264.28, "start": 1257.56, "text": " necessity it uses only a small number of gradient steps and therefore this is" }, { "end": 1268.68, "start": 1264.28, "text": " kind of early stopping and we know to prevent overfitting one thing you can do" }, { "end": 1275.04, "start": 1268.68, "text": " is to stop before your training accuracy reaches full zero and you can stop" }, { "end": 1279.44, "start": 1275.04, "text": " earlier than that ideally at a point when your validation accuracy reaches" }, { "end": 1285.8799999999999, "start": 1279.44, "text": " the low point but they say basically this this limited number of steps is a" }, { "end": 1293, "start": 1285.8799999999999, "text": " form of regularization now of course in in this new method we we don't have this" }, { "end": 1298.28, "start": 1293, "text": " constraint anymore we can run our inner optimization to super convergence and" }, { "end": 1304.28, "start": 1298.28, "text": " therefore we we don't have this implicit regularizer anymore and we'll have to" }, { "end": 1310, "start": 1304.28, "text": " make up for that they say in cases like ill-conditioned optimization landscapes" }, { "end": 1315.6399999999999, "start": 1310, "text": " and medium-shot learning we may want to take many gradient steps which poses two" }, { "end": 1320.6399999999999, "start": 1315.6399999999999, "text": " challenges for mammal first we need to store and differentiate through the long" }, { "end": 1325.44, "start": 1320.6399999999999, "text": " optimization path of ALG which imposes a considerable computation and memory" }, { "end": 1330.44, "start": 1325.44, "text": " burden right that's what we said second the dependence of the model parameters" }, { "end": 1336.8, "start": 1330.44, "text": " Phi I on the meta parameters shrinks and vanishes as the number of gradient steps" }, { "end": 1342.4, "start": 1336.8, "text": " in our growth making meta learning difficult so what 
they're saying is if" }, { "end": 1349.04, "start": 1342.4, "text": " you optimize your inner optimization algorithm to the very end then it is not" }, { "end": 1354, "start": 1349.04, "text": " very it's its dependence and especially for gradient descent its linear" }, { "end": 1361.08, "start": 1354, "text": " dependence on the initial parameters so on the meta parameters shrinks the more" }, { "end": 1365.8, "start": 1361.08, "text": " optimization steps you do because the more and more you're going to basically" }, { "end": 1369.88, "start": 1365.8, "text": " forget about your initialization move away from that and move to a local" }, { "end": 1374.6, "start": 1369.88, "text": " optimum from which you could reach you know from many many different" }, { "end": 1379.36, "start": 1374.6, "text": " initializations so that's still a question whether that's happening but" }, { "end": 1383.16, "start": 1379.36, "text": " this that's the idea here right if you're at the local optimum you could" }, { "end": 1387.92, "start": 1383.16, "text": " have reached that from any sort of point and there's going to be very little" }, { "end": 1392.0800000000002, "start": 1387.92, "text": " information about the end of the procedure from the beginning and" }, { "end": 1395.52, "start": 1392.0800000000002, "text": " therefore if you want to calculate the gradient that's going to be like super" }, { "end": 1402.4, "start": 1395.52, "text": " inaccurate and they say to overcome these limitations so they they solve" }, { "end": 1408.88, "start": 1402.4, "text": " these two things in one right here we consider a more explicitly regularized" }, { "end": 1416.3200000000002, "start": 1408.88, "text": " algorithm so what they'll say is they'll say we don't just want to optimize this" }, { "end": 1421.6000000000001, "start": 1416.3200000000002, "text": " inner objective so this would be here so the we don't just want to find the" }, { "end": 1426.6000000000001, "start": 1421.6000000000001, "text": " minimum of this inner loss function that's a one goal we have but the other" }, { "end": 1431.7600000000002, "start": 1426.6000000000001, "text": " goal we have is to stay close to the initial parameters and that's where this" }, { "end": 1437.3600000000001, "start": 1431.7600000000002, "text": " regularizer in comes in here so this basically says we we want with our" }, { "end": 1442.04, "start": 1437.36, "text": " parameters here that we are optimizing we want to find them such that they" }, { "end": 1447.08, "start": 1442.04, "text": " minimize this loss function right you know really minimize it find the best" }, { "end": 1455.28, "start": 1447.08, "text": " point but also with a trade-off of lambda we want to stay close to the to" }, { "end": 1460.1599999999999, "start": 1455.28, "text": " the initial parameters that we started from right this is this is the initial" }, { "end": 1464.32, "start": 1460.1599999999999, "text": " parameters and the closeness is measured in the L2 norm so it's a quadratic" }, { "end": 1470.4399999999998, "start": 1464.32, "text": " regularizer on how close you might know from you know initial supervised" }, { "end": 1474.6, "start": 1470.4399999999998, "text": " learning or something that sometimes you do something like plus lambda times the" }, { "end": 1480.52, "start": 1474.6, "text": " L2 norm of the weight so this would be called weight regularization weight" }, { "end": 1485.6799999999998, "start": 1480.52, "text": " decay L2 normalization something like this where you regularize 
your weights" }, { "end": 1491.4399999999998, "start": 1485.6799999999998, "text": " such that you stay close to the zero point right implicitly in this there is" }, { "end": 1500.04, "start": 1491.44, "text": " a minus zero given but here you want to stay close to your initial parameters so" }, { "end": 1505.16, "start": 1500.04, "text": " the inner optimization is no longer just minimizing the loss on the training" }, { "end": 1511.68, "start": 1505.16, "text": " data set the inner optimization is that and with ALG star we denote it so if ALG" }, { "end": 1518.52, "start": 1511.68, "text": " has a star we say that the this is this is referring not to the procedure of the" }, { "end": 1524.76, "start": 1518.52, "text": " algorithm but to the minimum that the algorithm has found right so ALG star" }, { "end": 1530.52, "start": 1524.76, "text": " means the algorithm has optimized this inner procedure to its minimum which is" }, { "end": 1535.6399999999999, "start": 1530.52, "text": " a balance of the training loss of that task and staying close to the original" }, { "end": 1544.44, "start": 1535.6399999999999, "text": " parameters and this I think this they say that here the proximal regularization" }, { "end": 1551.8, "start": 1544.44, "text": " term in equation three encourages the Phi I to remain close to theta thereby" }, { "end": 1558.88, "start": 1551.8, "text": " retaining a strong dependence throughout this is why their method works and we're" }, { "end": 1568.0800000000002, "start": 1558.88, "text": " going to see in the math right soon how exactly this this how exactly they're" }, { "end": 1574.1200000000001, "start": 1568.0800000000002, "text": " able through this to establish this implicit gradient correspondence so they" }, { "end": 1580.8, "start": 1574.12, "text": " formulate their entire algorithm as follows we want to find the best metal" }, { "end": 1589.6, "start": 1580.8, "text": " learning parameters by minimizing the function F where F is the average losses" }, { "end": 1597.84, "start": 1589.6, "text": " and L here now that's the test set loss the average validation loss for each" }, { "end": 1606.32, "start": 1597.84, "text": " task of the parameters that the inner optimization procedure finds when it" }, { "end": 1610.8799999999999, "start": 1606.32, "text": " runs to its optimum that is you can see here this is already different from the" }, { "end": 1615.1999999999998, "start": 1610.8799999999999, "text": " original mammal the original mammal was simply running it for a number of steps" }, { "end": 1621.4399999999998, "start": 1615.1999999999998, "text": " and now we're really running it to the optimum at least in the ideal case what" }, { "end": 1626.76, "start": 1621.4399999999998, "text": " does the inner optimization algorithm do the ALG star here minimizes this" }, { "end": 1634.44, "start": 1626.76, "text": " function G right now G has two arguments G as G has these parameters" }, { "end": 1637.92, "start": 1634.44, "text": " which are the local parameters and these are the meta parameters and we only" }, { "end": 1643.8799999999999, "start": 1637.92, "text": " optimize the local parameters right we take these as initial and then fine-tune" }, { "end": 1650.92, "start": 1643.8799999999999, "text": " them where the function G is defined as the training loss of the local" }, { "end": 1659.8400000000001, "start": 1650.92, "text": " parameters plus this closeness regularizer okay cool now the question" }, { "end": 1665.3600000000001, "start": 1659.8400000000001, 
"text": " of course is how does that lead to gradient descent so ultimately we want" }, { "end": 1671.1200000000001, "start": 1665.3600000000001, "text": " to minimize this function F right here so we have we're going to have to do" }, { "end": 1678.2, "start": 1671.1200000000001, "text": " something like DF by D theta right to in order to run gradient descent we need to" }, { "end": 1683.48, "start": 1678.2, "text": " calculate this gradient because we need to do gradient descent so what's that" }, { "end": 1691.8, "start": 1683.48, "text": " going to be that's going to be of course since F is a is this one over M up here" }, { "end": 1697.72, "start": 1691.8, "text": " right the the gradient simply distributes over that sum now it's the" }, { "end": 1707.24, "start": 1697.72, "text": " gradient or sorry the derivative of each of these inner loss functions let's go" }, { "end": 1716.24, "start": 1707.24, "text": " with this ALG star I theta that's basically what you have right here okay" }, { "end": 1721.56, "start": 1716.24, "text": " so in order to take the gradient of F we need to be able to take the team we" }, { "end": 1725.84, "start": 1721.56, "text": " need to be able to derive these loss functions and you can see right here" }, { "end": 1730.64, "start": 1725.84, "text": " theta is not the argument to the loss function theta is the argument to this" }, { "end": 1736.84, "start": 1730.64, "text": " inner procedure so by the chain rule right now this gives us this so the" }, { "end": 1742.9599999999998, "start": 1736.84, "text": " chain rule says we derive the outer thing with respect to its input that's" }, { "end": 1749.32, "start": 1742.9599999999998, "text": " this part right here the gradient of the loss with respect to the neural network" }, { "end": 1757.36, "start": 1749.32, "text": " and that thing here that's the ALG star so we need to gradient of the loss" }, { "end": 1761.8, "start": 1757.36, "text": " function with respect to the end parameters of the optimization procedure" }, { "end": 1767.52, "start": 1761.8, "text": " now that's easy that's we know how to do that that is the or that is the so if" }, { "end": 1775.12, "start": 1767.52, "text": " you remember the drawing at the beginning this gradient is the end arrow" }, { "end": 1780.8799999999999, "start": 1775.12, "text": " right here this is easy this is one backward propagation this is regular" }, { "end": 1785, "start": 1780.8799999999999, "text": " supervised learning backprop right you have parameters of a neural network a" }, { "end": 1792.12, "start": 1785, "text": " gradient for the loss function cool the hard part is to have the derivative of" }, { "end": 1798.96, "start": 1792.12, "text": " the algorithm itself with respect to the problem to the meta parameters so this" }, { "end": 1806.08, "start": 1798.96, "text": " is going to be this here is going to be a vector it's the gradient with respect" }, { "end": 1811.6, "start": 1806.08, "text": " to these parameters and this is going to be a matrix and the matrix will relate" }, { "end": 1819.6399999999999, "start": 1811.6, "text": " basically one dimension each dimension of this vector sorry this is going to" }, { "end": 1825.12, "start": 1819.6399999999999, "text": " be a product between this thing and this thing and it will result in this thing" }, { "end": 1831.6, "start": 1825.12, "text": " so the left thing is the gradient we want this is the derivative of the" }, { "end": 1838.04, "start": 1831.6, "text": " entire thing that's this now the right 
thing is the gradient at the end of the" }, { "end": 1844.6399999999999, "start": 1838.04, "text": " optimization procedure and this matrix here relates the individual dimensions of" }, { "end": 1852, "start": 1844.6399999999999, "text": " this end gradient to the gradient that we want right this is a matrix that" }, { "end": 1858.24, "start": 1852, "text": " relates the two in a linear fashion this is what we're looking for how do we" }, { "end": 1865.68, "start": 1858.24, "text": " need to change the initial parameters in order to change the end parameters in a" }, { "end": 1869.68, "start": 1865.68, "text": " certain way because we only know this but we want to" }, { "end": 1875.04, "start": 1869.68, "text": " know this so how do we calculate this thing here this Jacobian that's the" }, { "end": 1881.2, "start": 1875.04, "text": " question right how do we derive the algorithmic procedure this thing right" }, { "end": 1890.24, "start": 1881.2, "text": " here and the paper goes on to say well okay yeah so we need to do this this is" }, { "end": 1896.24, "start": 1890.24, "text": " the entire gradient descent optimization procedure so we must compute this thing" }, { "end": 1903.32, "start": 1896.24, "text": " right here and they just throw it in your face it's this boom shagada bomb" }, { "end": 1913.52, "start": 1903.32, "text": " this thing here done let's go on no so you can see it's basically" }, { "end": 1918.56, "start": 1913.52, "text": " putting this just right here but we kind of want to explore where that comes from" }, { "end": 1924.84, "start": 1918.56, "text": " so the fact is you can see here you can derive this gradient as a closed form" }, { "end": 1930, "start": 1924.84, "text": " expression of the inverse of a matrix that contains this is the identity" }, { "end": 1934.9199999999998, "start": 1930, "text": " matrix it contains somehow this lambda factor that we saw before and it" }, { "end": 1942.48, "start": 1934.9199999999998, "text": " contains this Hessian matrix of the training loss right so this is this end" }, { "end": 1947.24, "start": 1942.48, "text": " gradient that we can calculate easily and the second derivative of that is" }, { "end": 1952, "start": 1947.24, "text": " the Hessian which is basically the curvature in the landscape of that loss" }, { "end": 1959, "start": 1952, "text": " but nowhere in this thing is the SGD procedure showing up even though" }, { "end": 1964.36, "start": 1959, "text": " this thing here is the SGD procedure and that's pretty impressive and we're going" }, { "end": 1978.76, "start": 1964.36, "text": " to look at how that comes about so where do we start first let's take" }, { "end": 1987.84, "start": 1978.76, "text": " this G right here this G function right and let's calculate the" }, { "end": 1994.32, "start": 1987.84, "text": " derivative with respect to these parameters right here so let's go for" }, { "end": 2000, "start": 1994.32, "text": " this end gradient what's this end gradient going to be so we'll derive the" }, { "end": 2012.32, "start": 2000, "text": " G with respect to these parameters all right so this is a sum this first" }, { "end": 2018.28, "start": 2012.32, "text": " thing is pretty easy it's going to be the gradient of this loss function the loss" }, { "end": 2027.36, "start": 2018.28, "text": " function is a scalar right so we can count this as simply one" }, { "end": 2032.16, "start": 2027.36, "text": " backward prop through the network the second thing we can also do pretty" }, { "end": 2038.24, "start": 2032.16, "text": " easily this is an L2 norm all right we know how to derive a square so the 2" }, { "end": 2046.44, "start": 2038.24, "text": " comes down and this will simply result in this vector right here so" }, { "end": 2056.68, "start": 2046.44, "text": " it's going to be lambda times Phi minus theta okay now this was relatively easy" }, { "end": 2063.96, "start": 2056.68, "text": " now imagine what happens when in this particular thing we have one additional" }, { "end": 2075, "start": 2063.96, "text": " piece of information namely that F the inside of F we will always optimize to the end" }, { "end": 2081.72, "start": 2075, "text": " we will always optimize this to its minimum right this star denotes that the" }, { "end": 2087.32, "start": 2081.72, "text": " inner optimization procedure will always go to the minimum of that function so" }, { "end": 2091.8, "start": 2087.32, "text": " what do we know about the minimum of a function we know that its gradient at" }, { "end": 2098.36, "start": 2091.8, "text": " that particular point is zero right this is an important part so now" }, { "end": 2103.52, "start": 2098.36, "text": " we can restructure so if we take one to the right I might actually use black" }, { "end": 2110.96, "start": 2103.52, "text": " here because it's kind of burning my eyes we can isolate this part right here so" }, { "end": 2119.4, "start": 2110.96, "text": " we say the Phi is equal to first of all let's take this to the right" }, { "end": 2126, "start": 2119.4, "text": " side so we'll have this gradient right here and I'm just gonna write L Phi" }, { "end": 2133.24, "start": 2126, "text": " let's keep the hat alive we'd have to divide this by lambda right and then" }, { "end": 2140.2, "start": 2133.24, "text": " bring over the theta so we have an expression that says at the" }, { "end": 2145.7599999999998, "start": 2140.2, "text": " optimum the parameters Phi the inner parameters are going to be given by this" }, { "end": 2156.04, "start": 2145.7599999999998, "text": " expression now that's pretty cool but we know also that these parameters" }, { "end": 2163.04, "start": 2156.04, "text": " aren't just you know parameters per se they depend on these parameters right" }, { "end": 2167.72, "start": 2163.04, "text": " the end parameters depend on the initial parameters because we use the" }, { "end": 2171.72, "start": 2167.72, "text": " initial parameters to initialize these end parameters so these are actually a" }, { "end": 2177.12, "start": 2171.72, "text": " function of the initial parameters so what we can do is we can derive this" }, { "end": 2185.16, "start": 2177.12, "text": " using red again let's use blue we can derive this thing by the initial" }, { "end": 2189.44, "start": 2185.16, "text": " parameters right how do the end parameters relate to the initial" }, { "end": 2193.4, "start": 2189.44, "text": " parameters now this is our basic question all along but we now have an" }, { "end": 2198.2000000000003, "start": 2193.4, "text": " exact expression for the end parameters which we didn't have before before we" }, { "end": 2204.08, "start": 2198.2000000000003, "text": " just knew they came about by SGD so important to say this only works at the" }, { "end": 2209.4, "start": 2204.08, "text": " optimum right this is at the optimum that this relation holds not anywhere" }, { "end": 2216.64, "start": 2209.4, 
"text": " and the paper is abusing this quite a bit right here so what does this do if" }, { "end": 2220.8799999999997, "start": 2216.64, "text": " we derive this thing here with respect to theta it's simply giving us the" }, { "end": 2226.4, "start": 2220.8799999999997, "text": " identity matrix right this is now our our Jacobian that appears here it's" }, { "end": 2234.2, "start": 2226.4, "text": " simply giving us this then this one divided by lambda is going to stay and" }, { "end": 2243.68, "start": 2234.2, "text": " now it gets a bit tricky because these things right here of course are also a" }, { "end": 2250.72, "start": 2243.68, "text": " function of theta so essentially this means we this thing right here is a" }, { "end": 2258.16, "start": 2250.72, "text": " gradient of a function of another function of theta so we can apply the" }, { "end": 2262.72, "start": 2258.16, "text": " chain rule again since this is already the first derivative it will give us" }, { "end": 2273.3999999999996, "start": 2262.72, "text": " the second derivative with respect to the loss function right here of with" }, { "end": 2283, "start": 2273.4, "text": " whatever goes into the loss function so that times the inner derivative now the" }, { "end": 2297.32, "start": 2283, "text": " inner derivative is simply how to derive again the Phi by the theta okay now I'm" }, { "end": 2306.1600000000003, "start": 2297.32, "text": " just okay yes so you can see first of all interesting that the expression here" }, { "end": 2312.2400000000002, "start": 2306.1600000000003, "text": " or the expression that we are looking to find appears in the expression itself" }, { "end": 2317.7200000000003, "start": 2312.2400000000002, "text": " right since since these parameters appear over here as a function argument" }, { "end": 2323.6800000000003, "start": 2317.7200000000003, "text": " as well we'll get basically this expression here twice but we can" }, { "end": 2335.08, "start": 2323.68, "text": " reformulate that and find that the the this term this Jacobian is basically" }, { "end": 2341.3199999999997, "start": 2335.08, "text": " this here inverted so the matrix we're looking for is sorry the inverse" }, { "end": 2345.7999999999997, "start": 2341.3199999999997, "text": " Jacobian the matrix we're looking for is given by this quantity right here the" }, { "end": 2354.6800000000003, "start": 2345.8, "text": " identity matrix minus this Hessian term right here okay and this is exactly what" }, { "end": 2362.6800000000003, "start": 2354.6800000000003, "text": " you see appearing here this is exactly that so the derivative we're looking for" }, { "end": 2368.5600000000004, "start": 2362.6800000000003, "text": " sorry this is actually the Jacobian not the inverse that's my bad what you're" }, { "end": 2378.4, "start": 2368.56, "text": " looking for is given by this expression now my eraser got stuck hello" }, { "end": 2386.6, "start": 2378.4, "text": " cool so that's how that appears you see it's the same thing if I had done" }, { "end": 2395.6, "start": 2386.6, "text": " everything correctly and so this this you do by simply shipping this to the" }, { "end": 2400.52, "start": 2395.6, "text": " other side which will make it the the inverse right so you divide you" }, { "end": 2407.48, "start": 2400.52, "text": " basically divide both sides by this and then you get this as an inverse now why" }, { "end": 2414, "start": 2407.48, "text": " does this work again I want to stress why did we get this identity here why" }, { "end": 
2421.52, "start": 2414, "text": " were we able to express get a closed form solution to the for the inner thing" }, { "end": 2427.08, "start": 2421.52, "text": " or sorry for the end parameters in terms of the beginning parameters that doesn't" }, { "end": 2433, "start": 2427.08, "text": " have SGD first reason because we optimized to the end to the optimum" }, { "end": 2438.92, "start": 2433, "text": " that's why we got the equal zero right here second reason because we have this" }, { "end": 2445.36, "start": 2438.92, "text": " regularizer you see this directly comes from from this expression right here if" }, { "end": 2450.44, "start": 2445.36, "text": " we wouldn't have this regularizer then we could not make this expression we" }, { "end": 2455.52, "start": 2450.44, "text": " could not get Phi as a standalone quantity here and therefore this" }, { "end": 2462.7200000000003, "start": 2455.52, "text": " derivation wouldn't work now why is this important because if you look back into" }, { "end": 2468.96, "start": 2462.7200000000003, "text": " your drawing what you're basically doing is you are imposing a quadratic" }, { "end": 2478.04, "start": 2468.96, "text": " regularizer around this initial point right here and that creates this very" }, { "end": 2484.2799999999997, "start": 2478.04, "text": " strong connection between the end gradient and the initial gradient so now" }, { "end": 2488.4, "start": 2484.2799999999997, "text": " when you're optimizing when you have a training loss of the inner task and" }, { "end": 2494.4, "start": 2488.4, "text": " maybe the training loss looks something like it looks something like like this" }, { "end": 2504, "start": 2494.4, "text": " right here so SGD will it would go right to the very inner point right here if" }, { "end": 2508.72, "start": 2504, "text": " you're just let SGD run it would go there but now since you have this" }, { "end": 2513.4, "start": 2508.72, "text": " regularizer SGD needs to find a trade-off point between the two so what" }, { "end": 2517.88, "start": 2513.4, "text": " it will do is it will probably go somewhere and stop somewhere here so it" }, { "end": 2523.24, "start": 2517.88, "text": " will now have two forces pulling on it the first force will be this quantity" }, { "end": 2531.84, "start": 2523.24, "text": " right here and the second force will be pulling it back towards this and you can" }, { "end": 2539.2400000000002, "start": 2531.84, "text": " pretty much count so now SGD cannot just go to any point right here it cannot not" }, { "end": 2544.2000000000003, "start": 2539.2400000000002, "text": " go to any isoline these are not equal anymore maybe mainly it will go to the" }, { "end": 2548.8, "start": 2544.2000000000003, "text": " one point that points into the direction of this quadratic right here so since" }, { "end": 2553.76, "start": 2548.8, "text": " it's a quadratic we have closed form formulas for relating one gradient on" }, { "end": 2559.7200000000003, "start": 2553.76, "text": " the quadratic namely the one out here with the gradient there back here so we" }, { "end": 2564.7599999999998, "start": 2559.72, "text": " can express this Jacobian enclosed form because this is a quadratic and because" }, { "end": 2569.48, "start": 2564.7599999999998, "text": " we have this regularizer because you have these basically two forces pulling" }, { "end": 2574.9199999999996, "start": 2569.48, "text": " on this point in opposite direction one pointing towards the training loss and" }, { "end": 2580.68, "start": 
2574.9199999999996, "text": " one pointing towards the inside of the quadratic so that's why this method" }, { "end": 2589.56, "start": 2580.68, "text": " works okay I can recommend Ferenc Huszár's blog post he has some very nice" }, { "end": 2594.92, "start": 2589.56, "text": " animations of why this basically restricts where gradient descent can go" }, { "end": 2600.44, "start": 2594.92, "text": " I can link to it in the description it's pretty cool to see I" }, { "end": 2607, "start": 2600.44, "text": " don't have it open right now so what does that give us the implicit model" }, { "end": 2613.7999999999997, "start": 2607, "text": " agnostic meta learning iMAML this is what this paper suggests while not" }, { "end": 2619.44, "start": 2613.8, "text": " converged do sample a batch of tasks right for each task" }, { "end": 2625.84, "start": 2619.44, "text": " compute the meta gradient G average these gradients to get a gradient for" }, { "end": 2630.6400000000003, "start": 2625.84, "text": " the outer parameters and then do gradient descent on the outer parameters" }, { "end": 2635.04, "start": 2630.6400000000003, "text": " pretty easy how do you do this implicit meta" }, { "end": 2645.16, "start": 2635.04, "text": " gradient this is this procedure right here so what you are going to do is meta" }, { "end": 2649.72, "start": 2645.16, "text": " parameters theta you initialize your parameters with the theta by the way" }, { "end": 2653.6, "start": 2649.72, "text": " they don't need to be initializations they can be actually any sort of hyper" }, { "end": 2658.48, "start": 2653.6, "text": " parameters that this algorithm takes any parameterization of this algorithm" }, { "end": 2664.32, "start": 2658.48, "text": " will do fine I just always said initial parameters such that it gets easier but" }, { "end": 2673.28, "start": 2664.32, "text": " it can be any sort of hyper parameters of the inner task obtain task parameters" }, { "end": 2681.04, "start": 2673.28, "text": " using an iterative optimization solver such that the inner parameters are close" }, { "end": 2685.6000000000004, "start": 2681.04, "text": " to the optimum of that algorithm so they actually extend this also in theory" }, { "end": 2689.76, "start": 2685.6000000000004, "text": " such that you don't have to optimize the inner objective really to" }, { "end": 2697, "start": 2689.76, "text": " the optimum but you can be like Delta close to it that's pretty useful and" }, { "end": 2701.1600000000003, "start": 2697, "text": " that's in the part of the paper that we won't go over because this video would" }, { "end": 2707.36, "start": 2701.1600000000003, "text": " be like super long but I invite you to read it if you're interested then you" }, { "end": 2715.0800000000004, "start": 2707.36, "text": " compute the partial outer level gradient so this would be your partial" }, { "end": 2720.84, "start": 2715.08, "text": " gradient your V would be this gradient at the end right the gradient at the end" }, { "end": 2726.7599999999998, "start": 2720.84, "text": " of the optimization procedure with respect to your validation datasets this" }, { "end": 2732.6, "start": 2726.7599999999998, "text": " is one back prop now we need to relate that end gradient to the beginning and" }, { "end": 2738.6, "start": 2732.6, "text": " we do that by multiplying it with this matrix inverted right here now" }, { "end": 2746.04, "start": 2738.6, "text": " because obtaining the entire matrix this is the Hessian matrix and inverting it is" }, { "end": 2752.12, "start": 2746.04, "text": " very memory and computation intensive because if you have D parameters in your" }, { "end": 2757.04, "start": 2752.12, "text": " neural network this is going to be a D by D matrix so if you have five million" }, { "end": 2763.24, "start": 2757.04, "text": " parameters this is going to be a 25 million million size matrix which is just not" }, { "end": 2768.72, "start": 2763.24, "text": " possible and that's why this paper extends this method to a second degree" }, { "end": 2773.56, "start": 2768.72, "text": " of approximation namely you don't have to compute the exact inverse you just" }, { "end": 2778.7599999999998, "start": 2773.56, "text": " have to compute something that is very close to the inverse times this" }, { "end": 2786.64, "start": 2778.7599999999998, "text": " final gradient and a good method to do this is the conjugate" }, { "end": 2794.2, "start": 2786.64, "text": " gradient method and that method is able to basically use the fact that you" }, { "end": 2799.68, "start": 2794.2, "text": " can compute Hessian vector products without having to compute the Hessian as" }, { "end": 2805, "start": 2799.68, "text": " a matrix this you can also do with a sort of modified back propagation" }, { "end": 2813.64, "start": 2805, "text": " algorithm I also won't go in here but they say use an iterative solver for example" }, { "end": 2818, "start": 2813.64, "text": " conjugate gradient along with reverse mode differentiation to compute Hessian" }, { "end": 2825.52, "start": 2818, "text": " vector products to compute GI so GI is going to be the final gradient pulled" }, { "end": 2832.7999999999997, "start": 2825.52, "text": " back through this matrix right here to give you the beginning gradient this" }, { "end": 2838.96, "start": 2832.7999999999997, "text": " meta gradient okay so two approximations here first approximation you don't" }, { "end": 2844.44, "start": 2838.96, "text": " actually have to solve to the very end you can solve it Delta close and second" }, { "end": 2848.68, "start": 2844.44, "text": " approximation you don't actually have to compute the inverse of that final gradient" }, { "end": 2852.96, "start": 2848.68, "text": " sorry compute the multiplication of the final gradient with the inverse of this" }, { "end": 2857.7200000000003, "start": 2852.96, "text": " matrix right here you can also find something that's a Delta prime close to" }, { "end": 2865.2, "start": 2857.7200000000003, "text": " that and they have a bunch of theory that this still works they compare this" }, { "end": 2871.48, "start": 2865.2, "text": " of course to the other algorithms they observe that their algorithm uses" }, { "end": 2878.3999999999996, "start": 2871.48, "text": " substantially less memory and" }, { "end": 2886.3599999999997, "start": 2878.3999999999996, "text": " substantially less compute time once you go up to a number of inner gradient" }, { "end": 2891.9199999999996, "start": 2886.3599999999997, "text": " steps and it works better than this first-order MAML so this first-order" }, { "end": 2895.64, "start": 2891.92, "text": " MAML was our kind of initial guess of how we could do this this tends to" }, { "end": 2905, "start": 2895.64, "text": " perform very poorly as you can see there oh you can't actually" }, { "end": 2908.48, "start": 2905, "text": " see that here that their method is better but their method is better and" }, { "end": 2914.88, "start": 2908.48, "text": " uses less time because you have this inner conjugate gradient optimizer" }, { "end": 2922.56, "start": 2914.88, "text": " sorry this is the outer optimizer okay so this is the error plot" }, { "end": 2931.4, "start": 2922.56, "text": " of how well these methods are able to approximate the true gradient so if" }, { "end": 2935.76, "start": 2931.4, "text": " you could compute this true outer gradient you know that we did with" }, { "end": 2943.1600000000003, "start": 2935.76, "text": " MAML but we optimized to the end how close are you getting of course the" }, { "end": 2950.72, "start": 2943.16, "text": " problem with this method right here is that you" }, { "end": 2957.3199999999997, "start": 2950.72, "text": " do these approximations and those could hurt you but the problem with MAML is" }, { "end": 2961.7599999999998, "start": 2957.3199999999997, "text": " that you're back propagating through the optimization procedure and that means" }, { "end": 2969.08, "start": 2961.7599999999998, "text": " the nonlinear errors could sort of accumulate and as you can see here even" }, { "end": 2974.72, "start": 2969.08, "text": " though both might eventually you know get to the zero error if you give" }, { "end": 2980.04, "start": 2974.72, "text": " them enough inner gradient steps especially at the low inner gradient" }, { "end": 2986.3199999999997, "start": 2980.04, "text": " step regime the implicit MAML is much better than MAML now I've just said" }, { "end": 2992.24, "start": 2986.3199999999997, "text": " the errors accumulate but the effect probably here is that with" }, { "end": 2998.84, "start": 2992.24, "text": " MAML you don't actually do enough inner steps to reach a good" }, { "end": 3003.56, "start": 2998.84, "text": " enough optimum of the inner tasks so the inner tasks'" }, { "end": 3009.28, "start": 3003.56, "text": " gradients are still very much not optimized and therefore they are a very" }, { "end": 3013.88, "start": 3009.28, "text": " bad estimate for your outer gradient then when you do more gradient steps so" }, { "end": 3020, "start": 3013.88, "text": " that actually hurts you more which is also a bit surprising to me and then at" }, { "end": 3026.1600000000003, "start": 3020, "text": " the end you see these conjugate gradient steps this is when you approximate this" }, { "end": 3031.16, "start": 3026.16, "text": " matrix inverse if you just do two steps then at some point that error dominates" }, { "end": 3037.04, "start": 3031.16, "text": " but if you do more steps you can reach a much lower error and ten steps isn't" }, { "end": 3044.2799999999997, "start": 3037.04, "text": " that much for an algorithm like this as you can see here with ten steps your" }, { "end": 3051.12, "start": 3044.2799999999997, "text": " computation time will still in the regime of many gradient steps" }, { "end": 3058.2, "start": 3051.12, "text": " be lower than the original MAML and then they actually test this thing" }, { "end": 3064, "start": 3058.2, "text": " and of course they're the best at pretty much everything I don't want to go into" }, { "end": 3069.04, "start": 3064, "text": " the exact details here I invite you to check out the paper for that" }, { "end": 3074.8399999999997, "start": 3069.04, "text": " if you're interested in the proofs and the approximation guarantees and with" }, { 
"end": 3081.84, "start": 3074.84, "text": " that bye bye" } ]
G3pOvrKkFuk
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
[Code] PyTorch sentiment classifier from scratch with Huggingface NLP Library (Full Tutorial)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "code", "pytorch", "bert", "pretrained", "lightning", "live", "tutorial", "pip", "nlp", "transformers", "tokenizers", "sequence", "sentiment", "imdb", "dataset", "full", "github" ]
Huggingface released its newest library called NLP, which gives you easy access to almost any NLP dataset and metric in one convenient interface. We will combine this with a BERT model from Huggingface's Transformers library to build a sentiment classifier for IMDB. OUTLINE: 0:00 - Intro 1:30 - Boilerplate 3:20 - PyTorch Lightning Module 9:50 - Load Dataset 12:15 - Tokenization 20:50 - Torch Tensors 25:50 - Data Loader 28:00 - Create BERT Model 32:00 - Implement Validation and Train Step 47:00 - Run & Recap 50:20 - Epilogue My Code: https://github.com/yk/huggingface-nlp-demo NLP Library: https://github.com/huggingface/nlp Tutorial Colab: https://colab.research.google.com/github/huggingface/nlp/blob/master/notebooks/Overview.ipynb Transformers Library: https://github.com/huggingface/transformers Pytorch Lightning: https://github.com/PyTorchLightning/pytorch-lightning Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
How did it do? So Hugging Face just released this NLP library right here and this is pretty cool because it allows you access to about a hundred NLP data sets and ten evaluation metrics pre-packaged. So knowing Hugging Face this is going to be a breeze to work with. So what I thought we would do is we would try to use this. I have not used this yet and it's been a while since I've used any Hugging Face stuff. So what we're trying to do is use this to load up the IMDB data set and then use a BERT model maybe to build a sentiment classifier on top of that using PyTorch, specifically PyTorch Lightning. So all of that combined from scratch and basically if I can do it then so can you and we're going to make some mistakes and have to look at the documentation a bit and so on but that's the process. So first of all if you like content like this let me know if you're not subscribed. Let me know in the comments if you have any sort of criticism or tips. I'm always happy for Vim tips honestly. So I have a pretty empty git repo here. I have a gitignore but that's about it. So we'll just dive right in, start up Vim and let's make a file. So first some boilerplate code. I'm terrible at talking and coding at the same time but you know. So I like to use this absl library and, as you can see, I'm using the TabNine completion engine with CoC in Neovim. This is absolutely great. We maybe need app, flags, logging. That sounds good. So we'll need Torch probably, and we'll need PyTorch Lightning as pl. We'll need the nlp library of course since we're gonna use that, and we'll need the Transformers library. Now I know Hugging Face has this tokenizers library too but there are some tokenizers in the Transformers library already and we'll just keep it light like this. So maybe NumPy, maybe not. Let's see. So we'll have this flags object here. Maybe we'll do some flags later, and the main function. Let's just say hello. Actually let's log that as info. Alright, run main. So this is our boilerplate and let's just quickly try it out just to see whether it works. So here we are. Hello. That's fine. Alright so where do we go from here? So in PyTorch Lightning what you'll have to do is you have to build this kind of model class. We'll build an IMDB sentiment classifier and that's going to extend this LightningModule of PyTorch Lightning. So you need different things in the PyTorch Lightning module. First of all you need the init and we'll just do like a very basic init. We'll call super on it and that's about it. And you need a forward method since this is a module. So in the forward method you're going to get a batch and you have to do something with it. What we also need is a training step method. Training step gets a batch and a batch index and will have to output some kind of loss or some kind of training procedure. Then we'll need a train data loader. So all of this you can look up in the documentation of PyTorch Lightning. Basically you implement these methods and it will do the rest for you. So it will do the whole training loop and it will do the handling of GPUs and whatnot, the whole looping over epochs. All of that is basically taken care of for you when you use PyTorch Lightning. So the last thing we need is maybe a prepare data. Let's put that more up here. Prepare data. That method is optional but it gets called at the beginning and that's going to be pretty good for us. 
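For reference, the skeleton assembled up to this point looks roughly like the following, reconstructed from the narration rather than copied from the linked repo; anything not named in the video is my guess:

```python
import torch
import pytorch_lightning as pl
import nlp
import transformers
from absl import app, flags, logging

FLAGS = flags.FLAGS

class IMDBSentimentClassifier(pl.LightningModule):
    def __init__(self):
        super().__init__()

    def prepare_data(self):
        # optional hook, called once at the start; dataset and
        # tokenizer loading will go here
        pass

    def forward(self, batch):
        pass

    def training_step(self, batch, batch_idx):
        pass

    def train_dataloader(self):
        pass

def main(_):
    logging.info('hello')

if __name__ == '__main__':
    app.run(main)
```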
I have already downloaded the weights of a BERT model and the data set, so we don't need to do that again. That's about it. Maybe I've forgotten something. Lightning examples, here's what we're going to do: we're going to look at an example of PyTorch Lightning just to see what we'll need. Maybe here, domain examples, ImageNet sounds good. We'll have these methods. This is way more than we need, but down here, basically what you do is you instantiate your model, and we won't have these hyperparameters here, these will be our flags, but then you instantiate this trainer and then you call fit on the model. Let's maybe copy this down here. This is our IMDB sentiment classifier, and the trainer. The root dir here, let's call that logs. GPUs: we'll give it a GPU if CUDA is available, else zero. Then we'll make a flag for the epochs. We don't need the rest of this. Then at the end we'll call fit model. If we had a classifier, this would already run. Now what I like to do is to have this module called sh, which gives you easy shell commands. At the beginning of each run, whenever the file loads, I remove the logs folder and make it again, like this. It just deletes the logs and then recreates the folder. If we run this right now, this is going to give us an error: we don't have an epochs flag. We need to define a flag. Let's call define integer. We'll go for 10 epochs right now. We haven't configured our optimizers. In PyTorch Lightning you need some sort of optimizer configuration. I'll just copy that from an example. I'm going full Siraj here, people. We need to configure optimizers. I like SGD for this, it tends to work well in neural networks. We don't need the scheduler, we don't need any of that. Let's just return the SGD optimizer with the parameters, and we'll make a flag for the learning rate and a flag for the momentum. We don't need any weight decay. We'll make floats for the learning rate, maybe start off with something like this. I never put help strings if the description is rather clear. Only losers need help, don't be kidding yourself. If you put the help string, you need help. That's how it works. I just don't like that this library forces you to put the help string, because it somehow makes me feel bad. It's very opinionated, it basically says you should put something there. So we have this, and now when we run this, we don't have anything to optimize yet. First of all we need the model. Do we need to prepare data first? Let's check. I have this short snippet here that embeds an IPython shell. I just plug this in anywhere so I can see if I reach it. I reach the prepare data. Let's care about the data set first. This nlp library, as you can see right here, there's the usage right here: you can load a data set with the appropriate split, and it will basically just give it back. If you don't have it, it will download it. It's pretty cool. So we'll just load the data set. I've already checked out what they have, and they have the IMDB data set. In this split argument we can say give me the train split, and as a string you can say give me, say, the first 5% of the train split. This is just my laptop here, we won't be able to train a super high grade model, so we'll go for 5% of the train split. This is the train data set. Now if we run until here, if you had not downloaded this, it would download it. Given the train data set, I hope you can see this: it says it's a data set, it has 1250 rows.
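(A hedged sketch of this step: the flags, the log-folder cleanup with sh, the SGD optimizer config, the Trainer, and the split-string dataset load. Trainer argument names such as default_root_dir shift between Lightning versions, so treat them as approximate.)

```python
import sh
import torch as th
import nlp
import pytorch_lightning as pl
from absl import flags

flags.DEFINE_integer('epochs', 10, '')
flags.DEFINE_float('lr', 1e-2, '')
flags.DEFINE_float('momentum', .9, '')
FLAGS = flags.FLAGS

sh.rm('-r', '-f', 'logs')  # start from a clean logs folder on every run
sh.mkdir('logs')


class IMDBSentimentClassifier(pl.LightningModule):
    def prepare_data(self):
        # the split string can slice the dataset directly: first 5% of train
        self.train_ds = nlp.load_dataset('imdb', split='train[:5%]')

    def configure_optimizers(self):
        return th.optim.SGD(self.parameters(), lr=FLAGS.lr,
                            momentum=FLAGS.momentum)


def main(_):
    model = IMDBSentimentClassifier()
    trainer = pl.Trainer(
        default_root_dir='logs',  # name varies across Lightning versions
        gpus=(1 if th.cuda.is_available() else 0),
        max_epochs=FLAGS.epochs,
    )
    trainer.fit(model)
```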
Each entry has a text and a label. You can just index this like a data set. That's the first sample. The label is 1 here, which means this is a good sentiment. It's either 1 or 0, I think, either good sentiment or bad sentiment. Our first task is going to be to get this into a form that BERT can consume. How do we do this with this nlp library? That's the pretty cool part. Right now, you see, this is text. In NLP we need to map this text into token IDs: we need to tokenize and we need to map to IDs. Hugging Face of course has very nice libraries for that, they're called tokenizers. We'll have one of these tokenizers, and we'll use the one from the transformers library. I think this is called BertTokenizer, the one the BERT models can use. Let's check it out. We're at the documentation. BertTokenizer, there we go. There's a BertTokenizerFast. Yes, okay, we'll take the fast one. Maybe not. Yeah, we'll take the fast one. Come on, be risky. BertTokenizerFast. I think we can do this from pre-trained, they have these from_pretrained methods. We'll take this from pre-trained and put the model name here. I want to make this a flag, such that I'm not bound to a particular model. Oops. Cool. This flag is called model. This is our model: bert-base-cased. We have a tokenizer right now, so we can now tokenize these things, every entry in the data set. In a classic setting we'd have to write a loop for that. With this nlp library, it's pretty cool that we can map a tokenizer function across the training data set to tokenize each of the samples. How do we do that? We have this tokenizer. I'm pretty sure it has an encode method or something. There's forward, that's the BERT model. Where's the BERT tokenizer? Right here. It has this encode. Where is the definition of that? Can we click on this? This encode takes text and a bunch of other arguments, I hope you can see this. There we go: whether or not you should add the special tokens, the max length, this is going to be pretty important, and pad to max length, we want everything to be of the same length. If you apply this encode function to the text of one of these samples, let's just take the first sample and its text entry, then what you get is a list of these IDs. This is exactly what we want. The 101 here is the CLS token that BERT takes in, and then it's just the word pieces. You could also, instead of this, say tokenize, I think, and that will just give you the word pieces, not the encodings yet. These are the word pieces right here. This is the tokenized text, and the encode function does this and then maps them to IDs, such that BERT can consume them. Now this nlp library has this convenient function called map on its data sets. What we have to do first is define a tokenize function that takes in a single sample and then runs the tokenizer's encode function across the text entry. We have already seen what we need: add special tokens is true, this is cool. Max length, yes, we'll make a flag, sequence length or something. And pad to max length is true, so every single sample will be of the same size. In this function there are a number of things you can return. One way is to return the original sample and just set a new attribute, I think, set a new attribute on this original sample right here. Let's format this a bit nicer.
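(A sketch of the tokenizer step, assuming the 2020-era transformers API where pad_to_max_length was still the argument name; newer versions use padding='max_length' instead.)

```python
import transformers
from absl import flags

flags.DEFINE_string('model', 'bert-base-cased', '')
flags.DEFINE_integer('seq_length', 32, '')
FLAGS = flags.FLAGS

tokenizer = transformers.BertTokenizerFast.from_pretrained(FLAGS.model)


def tokenize(x):
    # adds [CLS]/[SEP], truncates/pads to seq_length, stores the token ids
    x['input_ids'] = tokenizer.encode(
        x['text'],
        add_special_tokens=True,
        max_length=FLAGS.seq_length,
        pad_to_max_length=True)
    return x
```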
You see, we have this tokenize function: it takes a sample, takes the text, tokenizes it, encodes it, puts the result in the new attribute input_ids and returns the sample again. Now what we can do is map this function across the training data set. This will go over the training data set and do this for each entry. Hopefully after this operation we'll have a data set where each sample not only has a text and a label, but also an input_ids attribute. We don't have this sequence length flag yet, so let's put that here. Let's just go with 32, since this is just my laptop, 32 tokens should be fine. Here it says it can't pickle tokenizer objects. What it tries to do right here is parallelize this thing. If we look at this nlp thing, is there documentation for this? We can just look at the data sets maybe. Naming, splits, builder, arrow data set. Map, right here. This function, I think it will try to multiprocess, and therefore it needs to pickle all of the things that go into the function, which means this tokenizer right here needs to be pickled. Maybe there's a way to get around this. One thing we can try is another tokenizer, maybe that one can be pickled. The dill library is pretty good, but it can't pickle everything. This tokenizer can actually be pickled. I'm not entirely sure what you'd have to do otherwise, honestly, because I don't know the library, but what you could do is make a thread or process local variable of the tokenizer, basically make it a singleton in each process, and then in here call a function that returns the already instantiated object, and so on, if you really want to multiprocess all of this. Anyway, we have this train data set right now, and you see the schema, if you can see this, the schema has been extended: there is now text, there is label, and there is input_ids, which is a list of int64s. That's pretty cool. Now, since this is still a Python list, and I know the tokenizers can already output PyTorch tensors, but that's kind of cheating, we want to use this library right here. The train data set has a method called set format right here, where you say type equals torch. What that does, and I think you need to say which columns you want, so we want columns, maybe we should get all columns, can we output the text? So you can select which of the sample's columns you want. Let's check it out again. For now, as long as we're just debugging here, I like to have a debug flag. This is usually one of the first flags I define: define boolean debug. What this does is, whenever it is active, I try to be as fast as possible. In the PyTorch Lightning trainer there's actually this fast_dev_run argument, which does the same thing, but I can push it a bit harder with this debug flag. So let me say, we'll just load batch size many samples if we are in debug mode. We don't actually have a batch size argument yet, do we? If flags.debug, else 5 percent. So we don't have a batch size yet, we're surely going to need that at some point, so let's go with a batch size of 8, just because we can. Now if we run this in debug we should... ah okay, yes, this needs to be a string. Shag-a-boom! Cool, so it says it's the fast_dev_run, and if we run it in debug it just loads very few data points, so this map function doesn't take this whole while.
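(A sketch of the mapping step, reusing the tokenize function from the sketch above: in debug mode the split string is shrunk to a single batch worth of samples.)

```python
import nlp
from absl import flags

flags.DEFINE_boolean('debug', False, '')
flags.DEFINE_integer('batch_size', 8, '')
FLAGS = flags.FLAGS

size = FLAGS.batch_size if FLAGS.debug else '5%'
train_ds = nlp.load_dataset('imdb', split=f'train[:{size}]')
train_ds = train_ds.map(tokenize)  # adds input_ids next to text and label
```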
Maybe there's a way you can stream it, I don't know. For now this is pretty good. So if we look at the train data set again, you can see that it has the same entries, this is still a list of int64, but if you index it right now, if you go to the zeroth data point... okay, then it crashes, because it tries to convert everything to PyTorch tensors and it can't convert the string. So we'll have to say we just want the columns input_ids and label. Label, can't spell. Okay, let's try it again. So right here you see that what we get out are actually PyTorch tensors and not Python lists anymore. So this is now one-to-one, with duck typing, maybe it's even subclassed, a PyTorch data set, which we can load into a data loader. This is a perfectly fine data set, so we can say self.train data set is this train data set. Now we want to do the same for the test set, but in order to do that we would have to write all of this code again, which I'm not really in the mood for, so we'll just loop it. We'll create a function prepare data set that takes in the split name, and we'll just use the split name here. That should do it, and we just call it data set and return that. So now we can say the train data set and the test data set are prepare data set of train and test. Excellent, so now we have a training data set and a testing data set. Here in the train data loader we need to take the training data set and construct a data loader from it. This is super easy, we'll do it in one line. What does the data loader need? It needs a data set. The prepare data is called at the beginning, so we have this data set right here, and I think we can give it a batch size, and we already have a flag for that, and I think there is a drop last, yes, so drop last we'll set to true, we only want full batches during training, and we'll also shuffle. And the same goes for the validation data loader for our validation set, except false here, false, we don't particularly want to shuffle there. In PyTorch Lightning you have train, validation and test, and test is really only for the final, final test; the test data set we have here would be called the validation data set in PyTorch Lightning. Okay, so we have a training data loader and a validation data loader. Now what do we need? We have the optimizer, very good. Now what do we need? All we need to do is actually pass our data through the BERT model. So this forward method here, we're just going to leave not implemented for now. Maybe we can implement it.
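(A sketch of the refactor just described, assuming the tokenize function and flags from the earlier sketches: one helper that loads, tokenizes and formats a split, plus the two data loaders; drop_last and shuffle only for training.)

```python
import torch as th
import nlp
import pytorch_lightning as pl
from absl import flags

FLAGS = flags.FLAGS


class IMDBSentimentClassifier(pl.LightningModule):
    def prepare_data(self):
        def _prepare_ds(split):
            ds = nlp.load_dataset('imdb', split=f'{split}[:5%]')
            ds = ds.map(tokenize)  # tokenize from the earlier sketch
            # only these columns come back as torch tensors; text stays out
            ds.set_format(type='torch', columns=['input_ids', 'label'])
            return ds
        self.train_ds, self.test_ds = map(_prepare_ds, ('train', 'test'))

    def train_dataloader(self):
        return th.utils.data.DataLoader(
            self.train_ds, batch_size=FLAGS.batch_size,
            drop_last=True, shuffle=True)

    def val_dataloader(self):
        return th.utils.data.DataLoader(
            self.test_ds, batch_size=FLAGS.batch_size,
            drop_last=False, shuffle=False)
```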
Okay, so we do need a model. As you can see right here, this batch, let's say this batch is going to... let's go right here. If you sometimes don't know what to do, you just go to where you should be. Okay, 'optimizer got an empty parameter list', we don't have parameters yet. All right, so what do we do? We go up here and we make a model. We need to actually make the BERT model right here. From transformers we can use the BERT model. Now they have a lot of BERT models, and we'll go back right here to the BERT models, because, as you know, BERT is just an encoder, so we need to build a classifier on top of BERT. But they have already done this, they have a bunch of different BERT configurations, and the one we're looking for here would be this BertForSequenceClassification: a BERT model transformer with a sequence classification or regression head on top. This is exactly what we need, a classifier on top of BERT, and I think we can also load this with from_pretrained and just put in the same name. So we take this BertForSequenceClassification and we'll load up the same model that we had. Okay, so this is our model, easy as that. So what do we do with this BERT, what happens if we put in data? For that we quickly go back again. In the forward method we can input the input ids, which is a batch size by sequence length tensor. We can input the attention mask, which basically tells you where there's padding and where there isn't: masks to avoid performing attention on padding tokens, mask values selected in zero or one, one for tokens that are not masked, zero for tokens that are masked. Then we can input the token type ids, which we don't have here, we just have one sentence, but usually in BERT you have the capability of inputting two different types, like a question and a paragraph, or a first and a second sentence. Position ids are optional, blah blah blah, none of that. We could also input the labels, these are optional, and it would already compute a loss for us, but that's almost cheating, so let's just focus on putting in the input ids, and I think that's going to be enough, since we basically truncate our long text to 32 tokens, we don't need to worry too much about masking right here. Otherwise you would input a mask for... actually, we can do it, we can do it. Okay, so you would input a mask for basically wherever your tokens are not pad tokens, and the pad token in BERT is zero, so basically your mask should just be whatever's non-zero. But maybe your model also learns to ignore the pad tokens, I might be wrong here and it does it automatically. Right, so in your forward pass, what do you do? Actually, let's go to the training step, we'll put something here so you can see it. If you didn't have BERT, it would actually, uh, BERT you up, it would download BERT right here, but since I have it... You can see here, this is the smaller BERT model. PyTorch Lightning, I don't have enough space in my console right here, but it would give you a nice overview over your model: how many parameters it has, what kind of layers it has, and so on. We also need a validation step if we have a validation data loader, a validation step, and we need the validation epoch end function. Usually in training you don't really care about epochs too much, because you just have mini batch after mini batch, but in validation what you want is one single metric across your entire test data set, or rather the validation data set.
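(A sketch of the model and forward pass, assuming the pre-4.x transformers API where the model returns a plain tuple; the mask is 1 wherever the token is not BERT's pad token, id 0.)

```python
import transformers
import pytorch_lightning as pl
from absl import flags

FLAGS = flags.FLAGS


class IMDBSentimentClassifier(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.model = transformers.BertForSequenceClassification.from_pretrained(
            FLAGS.model)

    def forward(self, input_ids):
        mask = (input_ids != 0).float()         # attention mask: 0 on padding
        logits, = self.model(input_ids, mask)   # tuple with a single entry
        return logits
```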
Therefore, in the validation step you just output local things per batch, and then in the epoch end function you aggregate them into one big number. So we'll just put IPython shells into each of these for now. I'm pretty sure we're going to end up in the validation step first, because, especially if we do this debug run, it basically tries to run validation first, at the very start of training. So we can look at a batch right here. What's a batch? The batch seems to be a dictionary; if you look at its keys, we have label and input ids. Okay, that's pretty cool. So if we go for the input ids, that gives us a tensor of shape eight, which is our batch size, by 32, which is our sequence length, and we should be able to pretty much input that into the BERT model that we created. Boom. Okay, and what do we get out? We get out a tuple, and the first entry looks like logits. All right, let's check the shape: this is eight, our batch size, by two, the logits, one for the negative class and one for the positive class. And this we can basically input into a cross entropy loss, given our labels. So we also have our label here, and the label is all ones. Nice. Is this maybe sorted? Is the data set sorted into good and bad things? Because that would be bad. In any case, what do we have to do? In the forward method we get the input ids, let's say we get the input ids, and we run them through our model, and we can actually construct a mask here, and the mask is going to be wherever the input ids are not zero. And what does it need to be? This attention mask is going to be a float tensor, okay, so we'll cast it as a float tensor. Cool, right, like this. So our logits are going to be that, and yeah, a tuple with one entry, so the comma here is important. We're going to return the logits. So this is our forward function. In the validation and the training step, the first thing we have to do is call this forward function with the input ids, and these of course are in our batch, like this. So these are going to be our logits, and then in the validation what we want to do is first of all compute our loss. So we have to construct this up here in the init; we can actually fold this prepare data away. The loss is going to be a cross entropy loss, yes, that exists, with reduction, I like to put reduction none. I think there's a deprecated reduce, and there is a reduction where you can put mean or something. I like to not reduce the loss at first, because then I can use the same thing for validation and training. So in the validation step I just want to compute my loss right here, with self. Loss, loss. And we'll have to cheat a bit, so let's look up the cross entropy loss, come on. Okay, where is the cross entropy loss? CrossEntropyLoss. It takes, yes, its reduction, ta-da. And the input to the function that we construct is going to be first N by C, first the input and then the targets, so first the logits and then the targets. A criterion that combines log softmax and NLL loss over a single class. Nice, nice, nice. Okay, okay, cool. So first logits and then labels. Label. Okay, that's our loss. So if we check out what our loss is going to be, it's probably going to be a vector of size eight, because we have reduction none.
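(A sketch of the validation step as described: reduction='none' keeps a per-sample loss vector, and the per-sample accuracy is argmax-versus-label cast to float.)

```python
import torch as th
import pytorch_lightning as pl


class IMDBSentimentClassifier(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.loss = th.nn.CrossEntropyLoss(reduction='none')

    def validation_step(self, batch, batch_idx):
        logits = self.forward(batch['input_ids'])    # shape (8, 2)
        loss = self.loss(logits, batch['label'])     # shape (8,), per sample
        acc = (logits.argmax(-1) == batch['label']).float()
        return {'loss': loss, 'acc': acc}
```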
Loss. Yes, see, a vector of size eight, very nice. So we can basically return this loss just as is, and then in the validation epoch end, the outputs here is going to be a list, and every entry in the list is the output of one of these validation steps for one batch. So we can aggregate those. Losses: we will concatenate them, since they're going to be chunks of eight, at dimension zero, and then we can calculate the mean. So we can do that, and then, oh no, we need to do more: we also want to know the accuracy. The accuracy is going to be whether or not the argmax of the logits is equal to the label, so the accuracy for each sample is either going to be one or zero, and we want that as a float. So here, let's output a dictionary with loss and accuracy. All right, excellent. So here we can then aggregate, so the loss is going to be, and I like to have a construction here that aggregates this: we take the loss for each o in outputs, so these are now going to be entries, each one a dictionary. So our losses: the concatenation, then the mean. Okay, our accuracy is the same thing, for the accuracy. Nice. So our output here is going to be a dictionary, and I think in PyTorch Lightning, if you output a validation accuracy, like val_acc, it selects the model according to this, but I'm not sure. Also, in PyTorch Lightning I can now output this here, but if you also have a log entry, it will forward that to the logger, which we can actually do, and make a TensorBoard logger out of this. So what have we done? We have first of all set up the validation step. PyTorch Lightning is going to run through the data loader and, for each batch, do this: we forward it through the BERT model to get our logits, then we compute our loss by the cross entropy of the logits and the labels, and we also compute our accuracy by seeing how much the argmax of the logits agrees with the labels, and then we aggregate all of this over the entire epoch and output that. Now let's set up a logger. For the logger, we can put this, I think, in the trainer here: pytorch_lightning.loggers dot, and I think there is a TensorBoard logger. Pretty sure that exists. That's not the newest version, I hate these old docs, so, latest, come on. This was called logging, logger, loggers, TensorBoardLogger, right here, nice. So our save dir is going to be called logs, and then what do we want? We want the name imdb, and there's also this version thing, where if you don't put version zero, it will just make a new folder each time, but I guess we delete the logs folder at the beginning anyway, so we don't have this problem. I generally like to overwrite my logs and not make new runs, but if you like something different, that's fine. All right, so let's run this again. And we're cool, though: this is the BERT configuration that we loaded, and then, no attribute logger. pytorch_lightning.loggers, loggers, loggers. Okay, again loading the weights, very cool, blah blah blah, and we're in the IPython shell. And do we have an IPython shell remaining? Only in the training step. Okay, so we're at the training step right here, and we can actually check whether or not... ah, now we have lightning logs and logs. Okay, so these appear to be our TensorBoard logs.
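(A sketch of the epoch-end aggregation and the logger setup; the 'log' key forwards scalars to TensorBoard, and version=0 overwrites the run folder instead of creating a new one each time.)

```python
import torch as th
import pytorch_lightning as pl


class IMDBSentimentClassifier(pl.LightningModule):
    def validation_epoch_end(self, outputs):
        # outputs: one dict per validation batch; concatenate, then average
        loss = th.cat([o['loss'] for o in outputs], 0).mean()
        acc = th.cat([o['acc'] for o in outputs], 0).mean()
        out = {'val_loss': loss, 'val_acc': acc}
        return {**out, 'log': out}


logger = pl.loggers.TensorBoardLogger(save_dir='logs', name='imdb', version=0)
trainer = pl.Trainer(logger=logger, max_epochs=10)
```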
So we are maybe able to run TensorBoard here later. Let's run it on logs. We don't have TensorBoard, okay. Oh yeah, I've uninstalled it because I was angry at it. Oh come on, what's going on? TensorBoard, I should have TensorBoard somewhere. It's like in local bin or something. No, it's not in local bin. Oh, we'll find it, we'll figure out how to get a TensorBoard. Maybe we need to install TensorFlow. Well, that's going to take a while. Okay, so back to the training step. In the training step we basically need to do the same as in the validation step, so we'll need to forward our batch through the model, but here we don't need to compute an accuracy; we do need to compute an actual batch loss that we can backpropagate on. Now in the training step you can either specify how you backpropagate per se, or what you can do is just output this loss attribute, and then PyTorch Lightning will basically do the backpropagation for you. We have the TensorBoard now, please. All right, there we go, and we can put this into a different thing right here, yes. Okay, so this is running, and if everything goes correctly, 6006, shaboom, we have a TensorBoard. Okay, so we need to forward in our training step, and we need to calculate a loss for the batch. For this loss here we do the same thing, but here we call mean on it, so this is the mean loss from this batch, and we can now return the loss right here. In the training step you can also output a log dictionary, so we'll output the loss again here, so this is going to be our training loss that we output right here, let's call it train loss, and this will also go into the TensorBoard. So if we run this right now, we don't have an IPython shell anymore. Simply by outputting this loss attribute, we already instruct PyTorch Lightning to run backprop on this loss, using the optimizer that we have defined, and by outputting the log we instruct it to put this into the TensorBoard. So now we have a scalar entry, and you can see it only contains the valid... no, it contains everything, very cool, very very cool. So let's remove the debug flag and we'll just see what happens. So to recap... oh, now, you see, epoch one, epoch two, go go go go go. Very cool. What we've done is we've set up this PyTorch Lightning module. It needs a bunch of functions, but in the init we've basically just set up our BERT model from the Hugging Face transformers library, we've loaded in a pre-trained BERT model that we're going to fine-tune. The main things the PyTorch Lightning module needs are a training step function, where you define what it should do with the data, and this data loader function. In the data loader function we've loaded up a data set and basically specified the batch size, this is very easy. Where does the data set come from? We do that here in prepare data. This is a special function in PyTorch Lightning that's basically called after the init but before anything else runs, and here we are loading this data set from the nlp library, and this is kind of the magic part: we specify the split and the size that we want inside of the string, and you can do this in percent or in the number of samples that you want. I'm sort of sure you can do more things, but I haven't explored that. Then we run map on the data set in order to tokenize it, that's right here, and we use a tokenizer, again from Hugging Face, and just run this encode function.
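(A sketch of the training step as recapped here: same forward pass, but the per-sample losses are averaged into one scalar; returning it under 'loss' is what triggers backprop in 0.x Lightning, and 'log' sends it to TensorBoard. self.loss is the unreduced cross entropy from the validation sketch.)

```python
import pytorch_lightning as pl


class IMDBSentimentClassifier(pl.LightningModule):
    def training_step(self, batch, batch_idx):
        logits = self.forward(batch['input_ids'])
        loss = self.loss(logits, batch['label']).mean()  # scalar batch loss
        return {'loss': loss, 'log': {'train_loss': loss}}
```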
This is very simple; think about how complicated this was just a year ago. Crazy. Then we need to put set format, and set format tells the data set how it needs to output its samples, and we tell it: please output torch tensors, and we want these columns right here. We make a train and a test data set from the train and test splits accordingly. So we have this, this goes into a data loader, and PyTorch Lightning will take the data loader and run training on it using this training step function. In this training step function we get a batch; in the batch there are these two columns that we specified previously, input ids and label. The input ids we put through the forward function of the model itself. This is the forward function: we construct a mask and run it through the model, we wouldn't actually need to construct a mask, but okay, and we get back the logits of the classification. Then we run this through a cross entropy loss and take the mean of the batch, and there we go. In the validation step we do the same thing, but we also calculate the accuracy and don't calculate the mean, we want to keep it per sample, and only at the end do we concatenate everything and calculate the mean. If we've done everything correctly, you see right here our train loss goes down, down, down until it is almost zero, and the validation accuracy is super high. Is this because all the labels are equal? Okay, so all the labels are equal. Like, for real. Okay, so we'll do something else: we'll make an integer flag called percent, and this was five, right, so that we loaded five percent of the data set, but let's load some more. This might take longer, but let's load 50 percent of the data set and just see what happens. No, 'present', I called it 'present', very good. So we'll load up 50 percent of the data set, do the same thing, and we can track in real time what happens in TensorBoard. And, unrecognized instruction format, okay, can we make a format string in a format string? This is nasty. Does it work? Please work. We can make a format string in a format string, absolutely bonkers. Okay, so it takes a little bit longer, and you could actually speed up this mapping of the data set: maybe you can stream it, and I'm pretty sure you can batch it, you can do batch processing of this, but for our case right here I think it's enough. It was like 1250 samples at five percent, so now it should be something like 12,500. So let's continue with the recap of what we did here. We have the train data set, the validation data set, and, yes, we have everything like this. In configure optimizers you can put an optimizer, and you can also put a learning rate scheduler if you want to. Then in the main function we load this PyTorch Lightning module, we specify a trainer, we tell the trainer the max epochs and so on, we set up the logger, and we just run fit on this model. This runs epochs of the model, and after each epoch it does a validation epoch and minimizes our loss. Very cool, very effective. So now, please, if you would... all right, here we go, this is my laptop training BERT. Oh okay, we don't seem to make too much progress. Let's check the TensorBoard: training loss goes down, training loss goes to zero. Training loss goes down, training loss goes to zero. I have the sneaking suspicion that this is not entirely shuffled. Is there like a shuffle thing somewhere? Because this seems a bit fishy.
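(The nested f-string from this step, assuming the mistyped 'present' flag is meant to be called percent: debug mode slices a fixed number of samples, otherwise a percentage of the split.)

```python
from absl import flags

flags.DEFINE_integer('percent', 5, '')
FLAGS = flags.FLAGS

# a format string in a format string: 'train[:8]' in debug, 'train[:50%]' otherwise
split = f"train[:{FLAGS.batch_size if FLAGS.debug else f'{FLAGS.percent}%'}]"
```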
This IMDB data set right here, it just seems like we could use a bit of shuffling, because all the labels... yeah, the training loss instantly goes to zero. So maybe it's not shuffled. Can we shuffle here? Let's look at the load data set function. Load data set: batched, no; keep in memory, no; none of that. Okay, this does not seem to want to continue right here. Data sets, nlp, data sets... I hope we can find this load data set somewhere. Builder, features, load, load data set, split. Can we not shuffle anywhere? We'll search for shuffle. Builder, okay. So, generate examples: this function, pre-processed examples, key will be hashed. Okay, we are not able to shuffle this just like that. But I hope this at least gives you an impression. I guess if you were to take the full data set and map it, then it would actually work. We'll just try again with 10% of the data, just to see it go down. TensorBoard. See, this is now good, because we always delete the logs folder, so we don't have any remnant old TensorBoard logs. All right, come on, come on. So 10% should be, yeah, about this, about this. Okay, train loss looking good, looking good so far. Look at these models, how large is that? How large is bert-base-cased? Hugging Face, pre-trained models: bert-base-cased, that's the one we have, 12 layers, 110 million parameters. Easy, easy, easy. Oh no, it's too large: training loss goes to zero again. Okay, so we've determined that this data set very probably isn't entirely shuffled, it might just have all the good labels first and all the bad labels last, and yeah, just to confirm this, let's go with 100%, but let's put an IPython shell down just before we map the data set, so we don't have to go through the whole mapping procedure. Actually, that would be here, right? Yes. Can we not map this asynchronously? Map... I might be doing something really wrong with this library, but I think that's how it should go. So, map. Def map, right here: we can do batched, we could do batched, and then I think Hugging Face has a function to encode batched. Encode batch, batch encode? Encode batch, no. Let's go to the tokenizer: build inputs, create token type ids, get special tokens mask, save... where is encode? Right here. Can we have batch encode? Build inputs... no. This might be it: batch encode. Yes, there is a batch encode, where you have batches of these things. So, okay, what if we look at the sample at index negative one? See, here's the label, zero. So I'm pretty sure, I'm pretty sure the data set is sorted. Batched equals true, let's do that. And in our function here we'll say batch encode. So let's see how fast this is with 100%. 'Tokenizer has no...', but we just had batch encode. Oh, but this might be it: we have batch encode plus, batch encode plus, for texts or text pairs. Okay, we need this batch encode plus, but then that gives us a dictionary, right? This gives us a dictionary with the field input ids right here. So, like this. How about that? And then we still might want to limit the actual data set once we have mapped it, because we need to train on it as well, but I just want to see how fast this batch encoding is. Yes, okay, reasonably fast, but it still takes like three minutes. So we won't go on here; I will put this as is on GitHub, and I hope you can profit from that in any way you want. The Hugging Face site has a tutorial on SQuAD where they also use the metrics, so they have basically these pre-defined metrics, like BLEU or ROUGE, I think.
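(A sketch of the batched tokenization, reusing the tokenizer and flags from the earlier sketches: with batched=True the map function receives a dict of lists, and batch_encode_plus processes the whole list at once, returning a dict whose input_ids we keep.)

```python
def tokenize(x):
    # x['text'] is now a list of strings, not a single string
    x['input_ids'] = tokenizer.batch_encode_plus(
        x['text'],
        max_length=FLAGS.seq_length,
        pad_to_max_length=True)['input_ids']
    return x


train_ds = train_ds.map(tokenize, batched=True)
```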
And you can just use them the way you use these data sets. It's very, very convenient to work with these things. NLP has come a long way, and I absolutely invite you to check out the transformers, tokenizers and nlp repos. With that, that's it for me. I hope you enjoyed this. Again, leave a comment if you see improvements, or if I maybe should edit this a bit more, or if I should add a little bit more. I thought the entire process of just going through and making mistakes would be entertaining to some. All right, bye bye.
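(A hedged sketch of the metrics interface mentioned here: load_metric mirrors load_dataset, though the exact metric names and compute signature may differ between versions of the nlp library.)

```python
import nlp

metric = nlp.load_metric('rouge')  # assumed metric name
# compute takes predictions and references; signature varies by version
score = metric.compute(['the cat sat'], ['the cat sat on the mat'])
```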
All we need to do is to actually pass our data through the BERT model." }, { "end": 1651.16, "start": 1647.3200000000002, "text": " So this forward thing here we're just going to leave not implemented." }, { "end": 1662.44, "start": 1652.76, "text": " Maybe we can implement it. Okay so we do need a model as you can see right here this batch" }, { "end": 1668.6000000000001, "start": 1662.44, "text": " let's say this batch is going to, let's go right here, right, so if you sometimes" }, { "end": 1676.2, "start": 1668.6000000000001, "text": " don't know what to do you just go to where you should be. Okay, optimizer got an empty parameter list, we" }, { "end": 1685.0800000000002, "start": 1676.2, "text": " don't have parameters yet. All right so what do we do, we go up here and we make a model, we need to" }, { "end": 1694.4399999999998, "start": 1685.08, "text": " actually make the BERT model right here, so from transformers we can use the BERT model, now they" }, { "end": 1704.04, "start": 1694.4399999999998, "text": " have a lot of BERT models and we'll go back right here to the BERT models because, as you know," }, { "end": 1710.6799999999998, "start": 1704.04, "text": " BERT is just an encoder so we need to build a classifier on top of BERT, but they already have" }, { "end": 1716.6000000000001, "start": 1710.68, "text": " done this, so they have a bunch of different BERT configurations and the one we're looking for here" }, { "end": 1722.44, "start": 1716.6000000000001, "text": " would be this BERT for sequence classification, right, this is BERT model transformer with a" }, { "end": 1729.16, "start": 1722.44, "text": " sequence classification or regression head on top, right, so this is exactly what we need, a classifier" }, { "end": 1738.3600000000001, "start": 1729.16, "text": " on top of BERT, and I think we can also load this with this from pre-trained and just put" }, { "end": 1748.9199999999998, "start": 1738.36, "text": " in the same name, so we can use this BERT for sequence classification and we'll load up the same model" }, { "end": 1760.9199999999998, "start": 1748.9199999999998, "text": " that we had. Okay so this is our model, easy as that. So what do we do with this BERT, if" }, { "end": 1768.52, "start": 1760.92, "text": " we put in data, what happens? For that we quickly go back again, so in the forward method we can" }, { "end": 1776.92, "start": 1768.52, "text": " input the input ids, right, which is a batch size by sequence length tensor, we can input the" }, { "end": 1782.92, "start": 1776.92, "text": " attention mask that basically tells you where there's padding and where there isn't," }, { "end": 1790.3600000000001, "start": 1782.92, "text": " masks to avoid performing attention on padding tokens, mask values selected in zero one, one for tokens" }, { "end": 1796.76, "start": 1790.3600000000001, "text": " that are not masked, zero for tokens that are masked, then we can input the token type ids which we don't" }, { "end": 1801.64, "start": 1796.76, "text": " have here, we just have one sentence, but usually in BERT you have the capability of inputting two" }, { "end": 1806.6000000000001, "start": 1801.64, "text": " different types, like a question and a paragraph, or a first sentence and a second sentence," }, { "end": 1816.9199999999998, "start": 1806.6, "text": " position ids are optional, blah blah blah, none of that. Okay, we could also" }, { "end": 1825.8799999999999, "start": 1816.9199999999998, "text": " input the labels, these 
are optional and it would already compute a loss for us uh which we we don't" }, { "end": 1831.8799999999999, "start": 1825.8799999999999, "text": " this that's almost cheating so let's just focus on putting in the input ids and i think that's" }, { "end": 1837.4, "start": 1831.88, "text": " gonna be enough since we basically truncate our long text to 32 tokens we don't need to worry about" }, { "end": 1845.4, "start": 1837.4, "text": " masking right here otherwise you would input a mask for um actually we we can do it we can do it" }, { "end": 1855.64, "start": 1846.1200000000001, "text": " okay so what you could input a mask for basically where um your tokens are not pad tokens and the" }, { "end": 1862.6000000000001, "start": 1855.64, "text": " pad tokens in BERT are zero so basically your mask should just be whatever's non-zero uh but" }, { "end": 1869.5600000000002, "start": 1863.3200000000002, "text": " maybe also your model learns um to ignore the pad tokens i might be wrong here and it does it" }, { "end": 1877.0800000000002, "start": 1869.5600000000002, "text": " automatically right so in your forward pass what do you do actually let's go to the training step" }, { "end": 1884.6799999999998, "start": 1877.08, "text": " we'll put something here you can see it so if you if you didn't have BERT um it would actually" }, { "end": 1890.6799999999998, "start": 1885.6399999999999, "text": " uh BERT you it BERT you up it would download BERT right here but since i have it you can see here" }, { "end": 1897.8799999999999, "start": 1890.6799999999998, "text": " this is the smaller BERT model um pytorch lightning i don't have enough space in my console" }, { "end": 1903.48, "start": 1897.8799999999999, "text": " right here but it would give you a nice overview over your model how many parameters it has how" }, { "end": 1910.28, "start": 1903.48, "text": " what kind of layers it has and so on so uh we also need a validation step if we have a validation" }, { "end": 1920.44, "start": 1910.28, "text": " data loader validation step and we need the um validation epoch end function so" }, { "end": 1927.88, "start": 1922.3600000000001, "text": " usually in training you don't really care about epochs too much because you just have many batch" }, { "end": 1934.0400000000002, "start": 1927.88, "text": " after mini batch but in validation uh what you want is kind of one single metric across your entire" }, { "end": 1940.6000000000001, "start": 1934.0400000000002, "text": " test data set or validation data set and therefore you sort of in the validation step you'll just" }, { "end": 1946.44, "start": 1940.6000000000001, "text": " kind of output things you output local things per batch and then in the epoch end function you" }, { "end": 1954.2800000000002, "start": 1946.44, "text": " aggregate them into one big number so um we'll we'll we'll just put" }, { "end": 1960.84, "start": 1954.28, "text": " we'll put things into each thing thing thing so i'm pretty sure we're going to end up in the" }, { "end": 1966.44, "start": 1960.84, "text": " validation step first because if especially if we do this debug run it basically it tries to" }, { "end": 1973.6399999999999, "start": 1966.44, "text": " run a validation first uh at the very start of training so we can look at a batch right here" }, { "end": 1980.36, "start": 1974.44, "text": " so what's a batch um the batch seems to be a dictionary if you look at its keys we can see" }, { "end": 1987.1599999999999, "start": 1980.36, "text": " um the batch seems to be a 
dictionary if you look at its keys we have label and input ids okay so" }, { "end": 1996.04, "start": 1987.1599999999999, "text": " that's pretty cool so if we go for the input ids that gives us a tensor and the tensors of shape" }, { "end": 2002.76, "start": 1996.04, "text": " eight which is our batch size and 32 which is our sequence length and we should be able to pretty" }, { "end": 2011.8799999999999, "start": 2002.76, "text": " much input that into the BERT model that we created boom okay and what do we get out we get out a tuple" }, { "end": 2018.2, "start": 2011.8799999999999, "text": " and the first entry is going to be this looks like logits all right okay let's check the shape" }, { "end": 2023.96, "start": 2019.08, "text": " and this is eight so this is our batch size and two is the logit so one for the negative class" }, { "end": 2030.6, "start": 2023.96, "text": " and one for the positive class and this is this we can basically input into a cross entropy loss" }, { "end": 2042.1999999999998, "start": 2030.6, "text": " given our labels so we also have our label here and their label is all ones nice um is this maybe" }, { "end": 2049.24, "start": 2042.1999999999998, "text": " sorted is the data set sorted into good and bad things because that would be that would be bad" }, { "end": 2059.7999999999997, "start": 2050.12, "text": " in any case um so what do we have to do so in the forward method we get the input ids let's let's" }, { "end": 2068.04, "start": 2059.8, "text": " say we get the input ids and we run this through our model and we can actually construct a mask" }, { "end": 2080.28, "start": 2068.04, "text": " here and the mask is going to be wherever the input ids are not zero and um that as a what does" }, { "end": 2089.1600000000003, "start": 2080.28, "text": " it need to be so these mask this attention mask is going to be a float tensor okay so we'll put it" }, { "end": 2103.64, "start": 2089.16, "text": " as a float tensor cool um right like this so our logits are going to be that and yeah tuple with" }, { "end": 2109.48, "start": 2103.64, "text": " one entry so the comma here is important we're going to return the logits so this is our forward" }, { "end": 2115.3999999999996, "start": 2109.48, "text": " function so in the validation and the training step the first thing we got to do is we got to" }, { "end": 2122.12, "start": 2115.4, "text": " uh call this forward function with the input ids and these of course are in our batch" }, { "end": 2132.92, "start": 2124.44, "text": " like this so these are going to be our logits and then in the validation what we want to do is we" }, { "end": 2138.44, "start": 2132.92, "text": " first of all want to compute our loss right so we have to construct this up here in the init" }, { "end": 2150.28, "start": 2138.44, "text": " we can actually fold this prepare data um loss is going to be a cross entropy loss yes that exists" }, { "end": 2160.36, "start": 2150.28, "text": " with read reduction i like to put reduction none i don't think there's like an a deprecated reduce" }, { "end": 2165.64, "start": 2160.36, "text": " and there is like a reduction where you can put mean or something i like to not reduce the loss" }, { "end": 2170.12, "start": 2165.64, "text": " at first because then i can agro i can use the same thing for validation and training" }, { "end": 2182.44, "start": 2171.64, "text": " so in the validation step i just want to compute my loss right here with self so loss loss um and" }, { "end": 2184.92, "start": 
2183.3199999999997, "text": " we'll have to cheat a bit" }, { "end": 2192.92, "start": 2184.92, "text": " so look up the cross entropy loss and" }, { "end": 2202.92, "start": 2196.92, "text": " come on, okay, where is the cross entropy loss" }, { "end": 2215.48, "start": 2202.92, "text": " loss, cross entropy loss, it takes, yes, it's reduction, ha, tada, and" }, { "end": 2224.28, "start": 2219.32, "text": " so the input to the function that we construct is going to be first" }, { "end": 2231.48, "start": 2224.28, "text": " n by c, first the input and then the targets, so first the logits and then the targets, right," }, { "end": 2243.4, "start": 2233, "text": " criterion that combines LogSoftmax and NLLLoss in one single class, nice nice nice, okay okay cool," }, { "end": 2254.6800000000003, "start": 2243.4, "text": " so first logits and then labels, label, okay, that's our loss, so if we check out what our loss is going" }, { "end": 2264.28, "start": 2254.6800000000003, "text": " to be, it's probably going to be a vector of size eight because we have reduction none" }, { "end": 2275.6400000000003, "start": 2264.28, "text": " none, loss, yes, see, vector of size eight, very nice, so we can just basically return" }, { "end": 2285.0800000000004, "start": 2277.1600000000003, "text": " i'll say we can return this loss just as is, and then in the validation epoch end the outputs here" }, { "end": 2291.7200000000003, "start": 2285.0800000000004, "text": " is going to be a list, and every entry in the list is going to be one of these validation steps" }, { "end": 2301.72, "start": 2291.72, "text": " for one batch, so we can aggregate those, so losses, we will concatenate them since they're" }, { "end": 2313.64, "start": 2301.72, "text": " going to be chunks of eight outputs at the dimension zero, and then we can calculate the mean, right, so" }, { "end": 2318.92, "start": 2313.64, "text": " we can do that and then" }, { "end": 2332.92, "start": 2321.96, "text": " we can, oh no, we need to do more, we also want to know the accuracy, right, so the accuracy is" }, { "end": 2346.76, "start": 2332.92, "text": " going to be whether or not the logits dot argmax is equal to the label" }, { "end": 2352.44, "start": 2347.7200000000003, "text": " so the accuracy for each sample is going to be that, it's either going to be one or zero, and" }, { "end": 2364.76, "start": 2352.44, "text": " we want that as a float, so here let's output a dictionary with loss and accuracy, all right," }, { "end": 2375, "start": 2366.04, "text": " excellent, so here then we can aggregate, so the loss is going to be, and i like to have" }, { "end": 2386.2, "start": 2375, "text": " a construction here that aggregates this still, so we go out loss for o in outputs, so these are now" }, { "end": 2394.44, "start": 2386.2, "text": " going to be entries, each one is going to be a dictionary, right, so our loss, losses, we have" }, { "end": 2403.88, "start": 2394.44, "text": " concatenation to the mean, okay, our accuracy is going to be the same thing for the accuracy" }, { "end": 2408.04, "start": 2403.88, "text": " nice, so our output here is going to be a dictionary" }, { "end": 2418.12, "start": 2412.2000000000003, "text": " and i think in pytorch lightning, if you put validation accuracy, select val_acc," }, { "end": 2425.88, "start": 2418.76, "text": " it selects the model according to this, but i'm not sure, so also in pytorch lightning i can now" }, { "end": 2433.48, "start": 2425.88, "text": " output this here but also if you have a 
log entry it will forward this to the logger, which is" }, { "end": 2439.48, "start": 2433.48, "text": " the logger which we can actually do and make a tensorboard logger out of this, so what have we" }, { "end": 2446.2, "start": 2439.48, "text": " done, we have first of all set up the validation step, so pytorch lightning is going to" }, { "end": 2452.12, "start": 2446.2, "text": " run through the data loader, for each batch do this, so we forward it through the bert model to" }, { "end": 2457.8, "start": 2452.12, "text": " get our logits and then we compute our loss by the cross entropy loss of the logits and the" }, { "end": 2463.4, "start": 2457.8, "text": " labels, and we also compute our accuracy by seeing how much the logits agree with the labels, or the" }, { "end": 2469.8, "start": 2463.4, "text": " maximum logit, and then we aggregate all of this over the entire epoch and output that, now let's" }, { "end": 2478.92, "start": 2469.8, "text": " set up a logger, so for the logger we can put this i think in the trainer here, pytorch lightning" }, { "end": 2489.48, "start": 2478.92, "text": " logger dot, and i think there is a tensorboard logger, pretty sure, pytorch lightning, is there" }, { "end": 2494.04, "start": 2489.48, "text": " tensorboard, no, pytorch" }, { "end": 2503.2400000000002, "start": 2495.4, "text": " lightning logger, i'm pretty sure that exists, that's not the newest version, i hate these" }, { "end": 2511.72, "start": 2503.96, "text": " these old docs, so latest, come on, oh this was called logging, logger" }, { "end": 2513.72, "start": 2511.72, "text": " log" }, { "end": 2520.3599999999997, "start": 2519.8799999999997, "text": " loggers" }, { "end": 2530.2799999999997, "start": 2523.48, "text": " tensorboard logger right here, nice, so our save_dir is going to be called logs, and then" }, { "end": 2540.2000000000003, "start": 2530.28, "text": " what do we want, we want the name imdb, and there's also this version thing" }, { "end": 2548.76, "start": 2542.6000000000004, "text": " where if you don't put version zero it will just make a new kind of folder each time, but i" }, { "end": 2552.76, "start": 2548.76, "text": " guess we delete the logs anyway, we delete the logs folder at the beginning, so we don't have" }, { "end": 2558.2000000000003, "start": 2552.76, "text": " this problem, but i generally like to overwrite my logs and not make new runs, but if you like" }, { "end": 2565.56, "start": 2558.2, "text": " something different, that's you know fine, all right, so let's run this again and we're cool" }, { "end": 2573.08, "start": 2565.56, "text": " though, this is the BERT configuration that we loaded, and then we have no attribute logger" }, { "end": 2577.3199999999997, "start": 2573.7999999999997, "text": " pytorch lightning loggers, loggers" }, { "end": 2582.76, "start": 2577.32, "text": " loggers" }, { "end": 2592.6000000000004, "start": 2584.6800000000003, "text": " okay, again loading the weights, very cool, blah blah blah, and we're in" }, { "end": 2598.44, "start": 2592.6000000000004, "text": " the IPython shell, and do we have an IPython shell remaining only in the training step" }, { "end": 2604.1200000000003, "start": 2598.44, "text": " okay so we're at the training step right here and we can actually check whether or not" }, { "end": 2607.96, "start": 2604.12, "text": " ah, now we have lightning logs and logs, my" }, { "end": 2618.2799999999997, "start": 2612.44, "text": " okay so these appear to be our tensor 
board logs so we are maybe able to run the tensor board here" }, { "end": 2626.8399999999997, "start": 2618.2799999999997, "text": " later um let's run it logs we don't have tensor board okay" }, { "end": 2636.6800000000003, "start": 2626.84, "text": " oh yeah i've uninstalled it because i was angry at it oh come on what's going on um" }, { "end": 2647.8, "start": 2639, "text": " tensor board i should have tensor board somewhere uh it's it's like in um in local bin or something" }, { "end": 2655.88, "start": 2647.8, "text": " um in local bin or something local bin no it's not in local bin" }, { "end": 2662.6000000000004, "start": 2657.8, "text": " oh oh we'll find it we'll figure it out" }, { "end": 2670.2000000000003, "start": 2664.92, "text": " uh how to get a tensor board maybe we need to install tensor flow" }, { "end": 2677.0800000000004, "start": 2672.92, "text": " well that's gonna take a while okay so back to the training step in the training step we" }, { "end": 2682.6, "start": 2677.08, "text": " basically need to do the same as in the validation step so we'll need to forward our batch through" }, { "end": 2687.56, "start": 2682.6, "text": " the model but here we don't need to compute an accuracy but we do need to compute a actually a" }, { "end": 2693.64, "start": 2687.56, "text": " batch loss that we can back propagate on now in the training step you can either specify how you" }, { "end": 2702.2, "start": 2693.64, "text": " back propagate um per se or what you can do is you can just output this log loss attribute and then" }, { "end": 2707.96, "start": 2702.2, "text": " pytorch lightning will basically do the back propagation for you we have the tensor board" }, { "end": 2720.12, "start": 2709.96, "text": " now please all right there we go and we can we can put this into a different uh thing right here" }, { "end": 2731.96, "start": 2720.12, "text": " um git uh lp demo yes um" }, { "end": 2740.6, "start": 2735.16, "text": " okay so this is running and if everything goes correctly" }, { "end": 2752.12, "start": 2740.6, "text": " 06 shaboom we have a tensor board okay so we need to forward our training step and we need to" }, { "end": 2758.36, "start": 2752.12, "text": " calculate a loss for the batch so these loss here we do the same thing but here we call mean on it" }, { "end": 2765.96, "start": 2758.36, "text": " so this is the mean loss from this batch and we can now return um the loss right here and we can" }, { "end": 2773.16, "start": 2765.96, "text": " also in the training step you can also output a log dictionary and we'll output the loss again" }, { "end": 2780.44, "start": 2773.8, "text": " here in order so this is our going to be our training loss that we output right here um let's" }, { "end": 2787.16, "start": 2780.44, "text": " call it train loss and this also will go into the tensor board so if we run this right now we don't" }, { "end": 2794.84, "start": 2787.16, "text": " have an ipython shell simply by outputting this loss attribute we already instruct pytorch lightning" }, { "end": 2800.84, "start": 2794.84, "text": " to now run backprop on this loss uh using the optimizer that we have defined okay" }, { "end": 2808.36, "start": 2802.84, "text": " and by outputting the log we instructed to put this into the tensor board so now we have a scalar" }, { "end": 2816.36, "start": 2808.36, "text": " entry and um you can see this it only contains the valid no it contains everything very cool very very" }, { "end": 2821.32, "start": 2816.36, "text": " cool so let's 
remove the debug flag and we'll just see what happens" }, { "end": 2831.48, "start": 2821.32, "text": " so to recap, right, to recap we have, oh, now you go, see, epoch one, epoch two, go go go go go" }, { "end": 2841.1600000000003, "start": 2833.48, "text": " ah very cool, what we've done is we've set up this pytorch lightning module, it needs a bunch of" }, { "end": 2846.52, "start": 2841.1600000000003, "text": " functions, but in the init we've basically just set up our BERT model from the Hugging Face" }, { "end": 2852.52, "start": 2846.52, "text": " transformers library, we've loaded in a pre-trained BERT model that we're going to fine tune, the main" }, { "end": 2859.56, "start": 2852.52, "text": " thing that the pytorch lightning module needs is a training step function where you define what it" }, { "end": 2866.52, "start": 2859.56, "text": " should do with the data, and this data loader function, so in the data loader function we've" }, { "end": 2874.12, "start": 2866.52, "text": " loaded up a data set and we basically specified the batch size, this is very easy," }, { "end": 2879.08, "start": 2874.12, "text": " where does the data set come from, we do it here in prepare data, this is a special function in" }, { "end": 2885.64, "start": 2879.08, "text": " pytorch lightning that's basically called after the init but before anything else runs, and here" }, { "end": 2892.3599999999997, "start": 2886.3599999999997, "text": " we are loading this data set from the nlp library, and this is kind of the magic part," }, { "end": 2899.16, "start": 2893.16, "text": " we specify the split and the size that we want inside of the string, and you can do this in percent" }, { "end": 2904.68, "start": 2899.16, "text": " or in a number of samples that you want, i'm sort of sure you can do more things but i haven't" }, { "end": 2910.68, "start": 2904.68, "text": " explored that, then we run map on the data set in order to tokenize it, and that's right here," }, { "end": 2919.24, "start": 2910.68, "text": " and we use a tokenizer, again from Hugging Face, and just run this encode function," }, { "end": 2930.04, "start": 2919.24, "text": " this is very simple, like, how complicated was this just a year ago, crazy, then we need" }, { "end": 2937, "start": 2930.04, "text": " to put set format, and set format tells the data set how it needs to output its samples, and we tell" }, { "end": 2943.56, "start": 2937, "text": " it please output torch tensors and we want these columns right here, and we make a train and" }, { "end": 2951.88, "start": 2943.56, "text": " test data set from the train and test split accordingly, so we have this, this goes into a data" }, { "end": 2957.56, "start": 2951.88, "text": " loader, pytorch lightning will take the data loader and run training on it using this train step" }, { "end": 2963.08, "start": 2957.56, "text": " function, in this train step function we get a batch, in the batch there are these two columns" }, { "end": 2968.52, "start": 2963.08, "text": " that we specified previously, input ids and label, the input ids we'll put through the forward function" }, { "end": 2974.7599999999998, "start": 2968.52, "text": " of the model itself, this is the forward function, we'll construct a mask and run it through the model," }, { "end": 2981.56, "start": 2976.44, "text": " we wouldn't actually need to construct a mask, but okay, and we get back the logits of the" }, { "end": 2988.36, "start": 2981.56, "text": " classification and then we run this through a cross entropy loss 
uh get the mean of the batch" }, { "end": 2994.92, "start": 2988.36, "text": " and there we go in the validation step we do the same thing but also calculate the accuracy but" }, { "end": 2999.88, "start": 2994.92, "text": " don't calculate the mean we want to keep it per sample and only at the end we want to concatenate" }, { "end": 3008.52, "start": 2999.88, "text": " everything and calculate the mean if we've done everything correctly you see right here our train" }, { "end": 3016.28, "start": 3008.52, "text": " loss goes down down down until it is almost zero because we've just and the validation accuracy is" }, { "end": 3022.12, "start": 3016.28, "text": " super high is this is this because all the labels are equal okay so we have a" }, { "end": 3031.7999999999997, "start": 3022.12, "text": " all the labels are equal like for real um okay so we'll do something else we'll make an integer" }, { "end": 3040.8399999999997, "start": 3031.7999999999997, "text": " um with percent and this was five right so that we loaded five percent of the data set" }, { "end": 3050.44, "start": 3040.84, "text": " um but let's load some more and this might take longer but let's load" }, { "end": 3054.6000000000004, "start": 3051.1600000000003, "text": " 50 percent of the data set and just see what happens" }, { "end": 3061.48, "start": 3057.1600000000003, "text": " no present i called it present" }, { "end": 3066.84, "start": 3066.28, "text": " very good" }, { "end": 3072.44, "start": 3066.84, "text": " so we'll load up 50 percent of the data set and um we'll do the same thing and we can track in" }, { "end": 3079.2400000000002, "start": 3072.44, "text": " real time what happens in tensorboard and unrecognized instruction format um" }, { "end": 3084.36, "start": 3083.8, "text": " okay" }, { "end": 3093.2400000000002, "start": 3085.96, "text": " who can we make a format string in a format string this is nasty does it work" }, { "end": 3101.24, "start": 3093.24, "text": " please work we can make a format string in a format string absolutely bonkers okay so it takes a" }, { "end": 3106.52, "start": 3101.24, "text": " little bit longer and you could actually i think you can speed this up this mapping of the data set" }, { "end": 3115.3999999999996, "start": 3107.08, "text": " maybe you can stream it um i'm pretty sure you can batch it you can do a batch um processing of this" }, { "end": 3124.92, "start": 3115.4, "text": " but for our case right here uh we think it's enough so it was like what 1200 if we had five percent so" }, { "end": 3134.12, "start": 3124.92, "text": " now it should be something like 12 000 um so let's continue with the recap of what we did here" }, { "end": 3140.12, "start": 3135.4, "text": " we have the train data set the validation data set and on yes so we have everything like" }, { "end": 3145.24, "start": 3140.12, "text": " this the configure optimizers you can put an optimizer you can also put a learning rate" }, { "end": 3152.8399999999997, "start": 3145.24, "text": " scheduler if you want to and then in the main function we load this pytorch lightning module" }, { "end": 3159.16, "start": 3153.24, "text": " and we specify a trainer and the trainer we tell it you know the max epochs and so on" }, { "end": 3164.8399999999997, "start": 3159.96, "text": " and we set up the logger and we just run fit on this model and this runs" }, { "end": 3171.96, "start": 3164.84, "text": " epochs of the model and after each epoch it does a validation epoch and uh minimizes our loss" }, { "end": 
3185.96, "start": 3173.1600000000003, "text": " very cool very effective so now if if please if you would all right here we go this is my laptop" }, { "end": 3195.2400000000002, "start": 3185.96, "text": " training burnt oh okay we don't seem to make too much progress" }, { "end": 3200.36, "start": 3199.08, "text": " let's check the tensor board" }, { "end": 3206.28, "start": 3202.92, "text": " training loss goes down training loss goes to zero" }, { "end": 3214.52, "start": 3206.28, "text": " training loss goes down training loss goes to zero i have the sneaking suspicion that this is not" }, { "end": 3223.5600000000004, "start": 3214.92, "text": " entirely shuffled so um is there like a shuffle like a shuffle thing" }, { "end": 3230.52, "start": 3225, "text": " because this seems a bit this seems a bit bit fishy um" }, { "end": 3235.24, "start": 3230.52, "text": " this imdb data set right here it just seems like" }, { "end": 3244.2, "start": 3236.84, "text": " you know we could use a bit of shuffling because all the labels yeah the training loss instantly" }, { "end": 3254.6, "start": 3244.2, "text": " goes to zero so maybe maybe it's not we" }, { "end": 3262.6, "start": 3254.6, "text": " can we shuffle here let's look at the load data set function" }, { "end": 3277, "start": 3268.52, "text": " load data set batched uh no keeping memory no none of that okay" }, { "end": 3290.28, "start": 3277, "text": " this does not seem to go to continue right here data sets nlp data sets" }, { "end": 3299.64, "start": 3292.84, "text": " i hope here we know we should find this load data set somewhere builder features load" }, { "end": 3307, "start": 3299.64, "text": " load data set split" }, { "end": 3318.52, "start": 3309.7999999999997, "text": " can we not shuffle anywhere we'll search shuffle builder" }, { "end": 3327.48, "start": 3318.52, "text": " okay so generate examples this function pre-processed examples key will be hashed" }, { "end": 3338.92, "start": 3331.32, "text": " okay we are not able to shuffle this um just like that and we can't do that" }, { "end": 3351.7200000000003, "start": 3338.92, "text": " okay we are not able to shuffle this um just like that but i hope this at least gives you an" }, { "end": 3360.2000000000003, "start": 3351.7200000000003, "text": " impression i guess if you were to take the full data set and map it then it would actually work" }, { "end": 3367.56, "start": 3360.2, "text": " we'll just try again with 10% of the data just uh to see it go down" }, { "end": 3376.2, "start": 3368.9199999999996, "text": " tensorboard see this is now good because we always delete the logs folder we don't have any uh remnant" }, { "end": 3381.64, "start": 3377.96, "text": " uh old tensorflow logs" }, { "end": 3392.68, "start": 3381.64, "text": " all right come on come on so 10% should be yeah about this about this okay" }, { "end": 3405.7999999999997, "start": 3399.56, "text": " train loss looking good looking good so far" }, { "end": 3409.2400000000002, "start": 3405.8, "text": " looking good looking good so far" }, { "end": 3420.28, "start": 3413.96, "text": " look at these models how large is that how large is the bert" }, { "end": 3425.7200000000003, "start": 3420.28, "text": " base case hugging face pre-trained models" }, { "end": 3435.8799999999997, "start": 3425.72, "text": " pre-trained models bert based on case that's the one we have 12 layers 110 million parameters easy" }, { "end": 3437.8799999999997, "start": 3435.8799999999997, "text": " easy easy" }, { "end": 
3449.48, "start": 3442.4399999999996, "text": " oh no it's too large training loss goes to zero again okay so we've determined that this data set" }, { "end": 3457.88, "start": 3449.48, "text": " very probably isn't entirely shuffled it might just have all the good labels first and all the" }, { "end": 3470.44, "start": 3457.88, "text": " bad labels last and um yeah just to confirm let's confirm this uh right here let's go with 100%" }, { "end": 3478.6, "start": 3471.08, "text": " but let's put an ipython shell down um just before we map the data set so we don't have to go through" }, { "end": 3489.08, "start": 3478.6, "text": " the whole mapping procedure actually that would be here right yes" }, { "end": 3496.36, "start": 3492.6, "text": " can we not map this asynchronously map" }, { "end": 3504.6, "start": 3499.08, "text": " i might be doing something really wrong with this library but i think that's that's how it should go" }, { "end": 3511.96, "start": 3504.6, "text": " so map def" }, { "end": 3524.44, "start": 3515, "text": " map right here we can do batched we could do batched and then i think hugging face has a function to" }, { "end": 3530.92, "start": 3526.52, "text": " encode batched encode batch encode" }, { "end": 3541.32, "start": 3530.92, "text": " encode batch no um let's go to the tokenizer" }, { "end": 3553.32, "start": 3546.52, "text": " build inputs create token type ids get special token mask save where is encode" }, { "end": 3556.36, "start": 3553.32, "text": " code" }, { "end": 3570.76, "start": 3559.96, "text": " right here can we have batch encode build inputs no this might be it batch encode yes there is a" }, { "end": 3578.6000000000004, "start": 3570.76, "text": " batch encode where you have batches of these things so okay what if we do the negative one" }, { "end": 3587.16, "start": 3578.6, "text": " see here's the label zero um i'm pretty sure i'm pretty sure uh" }, { "end": 3597.3199999999997, "start": 3589.72, "text": " batch true let's do that and in our function here we'll say batch encode" }, { "end": 3607.96, "start": 3597.32, "text": " so let's see how fast this is with 100%" }, { "end": 3614.28, "start": 3613, "text": " where tokenizer has no" }, { "end": 3618.44, "start": 3616.76, "text": " but we just had batch encode" }, { "end": 3623.7200000000003, "start": 3620.52, "text": " oh but this might be we have batch encode plus" }, { "end": 3633, "start": 3623.72, "text": " batch encode plus or text pairs okay we need this batch encode plus" }, { "end": 3645.72, "start": 3636.6, "text": " but then that gives us a dictionary right this gives us a dictionary with the fields input ids" }, { "end": 3647.8799999999997, "start": 3646.3599999999997, "text": " right here so" }, { "end": 3657.08, "start": 3647.88, "text": " like this how about that and then we still might want to limit the actual data set um once we have" }, { "end": 3659.08, "start": 3657.08, "text": " once we have" }, { "end": 3665.1600000000003, "start": 3661, "text": " mapped it because we need to train on it as well" }, { "end": 3672.12, "start": 3668.2000000000003, "text": " but i just want to see how fast this batch uh encoding is" }, { "end": 3675.64, "start": 3672.12, "text": " yes" }, { "end": 3685.24, "start": 3676.8399999999997, "text": " okay reasonably fast but it takes like three minutes um yeah so we won't go on here i will put" }, { "end": 3696.2799999999997, "start": 3685.24, "text": " i will put this as is on um i'll put this as is on github and i will put this as is on github" 
}, { "end": 3706.36, "start": 3696.28, "text": " and i hope you can profit from that in any way you want the hugging face site has a tutorial on" }, { "end": 3711.88, "start": 3706.36, "text": " squad where they also use the metrics so they have basically these pre-defined metrics like blur" }, { "end": 3721.4, "start": 3712.6000000000004, "text": " or rouge i think and you can just use them as you use these data sets so" }, { "end": 3727.2400000000002, "start": 3721.4, "text": " it's very very very convenient to work with these things in nlp so nlp has come a long way" }, { "end": 3735.32, "start": 3727.2400000000002, "text": " absolutely invite you to check out the um the transformers and tokenizers and nlp repos" }, { "end": 3741.88, "start": 3735.7200000000003, "text": " and with that that's it for me i think i hope you enjoyed this again leave a comment if you" }, { "end": 3747.8, "start": 3741.88, "text": " see improvements or if i maybe should edit this a bit more or if i should add a little bit more" }, { "end": 3753.4, "start": 3747.8, "text": " see improvements or if i maybe should edit this a bit more i thought the entire process" }, { "end": 3778.52, "start": 3753.4, "text": " of just going through and making mistakes um would be entertaining to some all right bye bye" } ]
IiBFqnNu7A8
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Planning to Explore via Self-Supervised World Models (Paper Explained)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "rl", "deep rl", "deep reinforcement learning", "novelty", "curiosity", "intrinsic reward", "dreamer", "planet", "control", "walker", "run forward", "imaginary", "imagination", "planning", "google", "neural network", "actor", "critic", "uncertainty", "information gain", "mutual information", "model" ]
What can an agent do without any reward? Explore the world! While many formulations of intrinsic rewards exist (Curiosity, Novelty, etc.), they all look back in time to learn. Plan2Explore is the first model that uses planning in a learned imaginary latent world model to seek out states where it is uncertain about what will happen. OUTLINE: 0:00 - Intro & Problem Statement 3:30 - Model 5:10 - Intrinsic Motivation 9:05 - Planning in Latent Space 11:15 - Latent Disagreement 16:30 - Maximizing Information Gain 21:00 - More problems with the model 26:45 - Experiments 32:10 - Final Comments Paper: https://arxiv.org/abs/2005.05960 Website: https://ramanans1.github.io/plan2explore/ Code: https://github.com/ramanans1/plan2explore Abstract: Reinforcement learning allows solving complex tasks, however, the learning tends to be task-specific and the sample efficiency remains a challenge. We present Plan2Explore, a self-supervised reinforcement learning agent that tackles both these challenges through a new approach to self-supervised exploration and fast adaptation to new tasks, which need not be known during exploration. During exploration, unlike prior methods which retrospectively compute the novelty of observations after the agent has already reached them, our agent acts efficiently by leveraging planning to seek out expected future novelty. After exploration, the agent quickly adapts to multiple downstream tasks in a zero or a few-shot manner. We evaluate on challenging control tasks from high-dimensional image inputs. Without any training supervision or task-specific interaction, Plan2Explore outperforms prior self-supervised exploration methods, and in fact, almost matches the performances oracle which has access to rewards. Videos and code at this https URL Authors: Ramanan Sekar, Oleh Rybkin, Kostas Daniilidis, Pieter Abbeel, Danijar Hafner, Deepak Pathak Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Hi there! Today we're looking at Planning to Explore via Self-Supervised World Models by Ramanan Sekar, Oleh Rybkin, Kostas Daniilidis, Pieter Abbeel, Danijar Hafner and Deepak Pathak. This is a paper that concerns reinforcement learning, and specifically self-supervised reinforcement learning. So what do they mean? Here's a graphic right here. In reinforcement learning you usually have an environment and an agent. So you have this environment, and let's ignore the "without rewards" part for now, and you have the agent, and the agent needs to interact with the environment in order to achieve a maximum reward. The reward is given by a certain task: you have to do something in this environment. In this case they consider these types of tasks where I think the top task might be called "run forward", so your reward is higher the further you go with this walker. And the way you can influence the walker is that you can apply a bit of force to its joints right here, and you have a bunch of sensors. So the main task is actually to balance it on its feet and then walk forward such that it never falls over; otherwise you get negative reward, you lose. What they want to do here is say: wait, if we just train a reinforcement learning agent for each of these tasks individually, that will use a lot of data, and we basically can't reuse the learned reinforcement learning agent across the individual tasks. It's sort of like having many image tasks or NLP tasks: you don't want to learn one model for each one individually, but you might do something like a common joint pre-training. And this is exactly that, but for reinforcement learning. It's even called self-supervised, like the self-supervised learning we are used to in the classification setting. What does it mean? It means that at first you're in an environment without rewards. The agent is just dropped into an environment and there are no rewards; it can just take actions and observe states from this environment. And after a while of that, the tasks come in. So task A, task B and task C are three different tasks, all in the same environment, but all requiring the agent to do different things, like running forward or running backwards or doing a front flip or things like this. How fast the agent can adapt to these individual tasks very much depends on what it has learned during this phase where there were no rewards. So the agent is tasked to just explore the world, via what they call task-agnostic exploration, to learn something about the world in order to then generalize to these tasks. And in their case they learn this global world model. The agent is supposed to learn somehow how the world works, and this is what lets this agent adapt really quickly. So in essence, what does this agent do? The agent works as follows: it gets an input observation and runs that through an encoder, which is usually something like a convolutional neural network, and that gives you a set of features, a sort of embedding of the state that you're in, which you can incorporate into a latent state at time t. Now, usually in these RL algorithms, what happens is that you also incorporate the last latent state, so the latent state from the previous step also goes into the latent state of the next step. (A tiny sketch of this recurrent update follows below.)
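(As a rough sketch of that recurrence; this is a deterministic GRU stand-in for Dreamer's full recurrent state-space model, with sizes and names made up by me.)

```python
import torch
import torch.nn as nn

class RecurrentLatent(nn.Module):
    """Fold the encoder features of the current observation and the previous
    action into the previous latent state: h_t = f(h_{t-1}, a_{t-1}, e(o_t)).
    Dreamer's actual RSSM also carries a stochastic part, omitted here."""

    def __init__(self, feat_dim=1024, action_dim=6, latent_dim=200):
        super().__init__()
        self.cell = nn.GRUCell(feat_dim + action_dim, latent_dim)

    def step(self, feat, prev_action, prev_latent):
        # feat = e(o_t) would come from a CNN encoder over the observation
        return self.cell(torch.cat([feat, prev_action], dim=-1), prev_latent)
```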
So here was the last observation; the observation comes in, features, latent state and so on, and ultimately it comes in from here, and there's usually an RNN going over the time steps. But ultimately the agent has to decide on an action using this policy network. Now how is this policy network trained? It has to come up with an action, but there are no rewards. Usually we would train this policy network with an actor-critic method, so we would also train some sort of a value function, and then the policy would try to maximize the value function. And if we don't have rewards, how are we going to do that? People have thought about this for a bit, and people have come up with things like intrinsic motivation. Intrinsic motivation is a term where you're trying to say something like: if you're in a room right here, your agent is right here, then you do something, and maybe your agent goes down here. If your agent were to go down there again, it would not really learn anything, because it has already gone there and has already learned from those states. So you might want to explore some different space, like over here, and in the next episode you might want to explore this other room right here. This notion of intrinsic motivation to explore has a bunch of different formulations of how exactly you can formulate it. But just imagine basically that the entire state space is filled with a bunch of coins, and I'm going to draw this as green dots, sort of like Pac-Man. Everything is filled with these green dots. What the agent wants to do, if it has no rewards, is simply collect those green dots. And once one is collected, so if I go here and collect all these green dots, they are no longer there, so that area doesn't give me any reward anymore. So as an intrinsic reward, you simply reward the agent every time it finds itself in a new state that it hasn't seen before; you train it to seek out novel states. (A toy version of this bonus is sketched below.) Usually you do this with an actor-critic method, and that's what this paper criticizes: it's called retrospective novelty. That means if you train a model-free algorithm, an actor-critic — if we just plug in something like A3C here — that will simply have a policy and a value function. And in this case, if we train it on intrinsic reward, the policy will simply tell you where to go to find more green stuff. But you can only train it retrospectively: you use the policy to run an episode, and then you observe how many green things you found in that episode — say your episode goes here — and then you put that back into your buffer to learn from. But at that point you've already collected the green things, so the reward signal is actually a bit off, because you want to train your agent to seek out novel things, but as soon as you've explored them, they're not really novel anymore, because you have now explored them. Still, you're going to train your agent telling it that this area right here has given me lots of rewards, so the agent is going to be encouraged to repeat that. They say this right here, the retrospective novelty: model-free exploration methods not only require large amounts of experience to adapt to downstream tasks, they can also be inefficient during exploration.
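(Here is a toy version of such a count-based bonus; the discretization and the 1/sqrt(count) schedule are common choices in the intrinsic-motivation literature, not something from this paper.)

```python
from collections import defaultdict
import numpy as np

class CountNovelty:
    """Retrospective novelty bonus in the spirit of the 'green dots':
    the reward for a (coarsely discretized) state shrinks every time
    the state is revisited."""

    def __init__(self, decimals=1):
        self.decimals = decimals
        self.counts = defaultdict(int)

    def reward(self, obs):
        key = tuple(np.round(np.asarray(obs), self.decimals))  # crude state binning
        self.counts[key] += 1
        return 1.0 / np.sqrt(self.counts[key])  # novel states pay out the most
```

The crucial point of the criticism is visible right in this sketch: the number is only computed after the agent has already stood in the state, so the policy gradient always chases yesterday's novelty.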
These agents usually first act in the environment, collect trajectories, and then calculate an intrinsic reward as the agent's current estimate of novelty. This approach misses out on efficiency by operating retrospectively: the novelty of inputs is computed after the agent has already reached them. Hence it seeks out previously novel inputs that have already been visited and would not be novel anymore. Instead, one should directly seek out future inputs that are expected to be novel. So what this paper is doing is basically asking: can we build a model that estimates the future novelty of a state that we maybe haven't even seen so far? And here is where that goes. What do they do in this policy? The policy isn't just trained to maximize the novelty in the world. Instead, it uses planning — planning in latent space. What this model does is learn a world model in latent space. The world model takes as input these features that you saw right here, the ones the encoder gives you, and it predicts the future hidden latent states — the things you saw before, the states that are always made by incorporating the new features with the old state. It tries to predict them, so technically these things here should have some sort of a tick or hat to indicate that they are estimated future states. And this model right here is a world model. They use Dreamer for this, and I have made a video about Dreamer. Dreamer tries exactly that: it tries to estimate the future, but not in actual observation space — in latent space. And the cool thing here is that this is probabilistic, or you can make it probabilistic, so from this one h that you have here you can roll out many futures in your imagination. Since you don't need the observations, only the latent space, you can simply forward-roll your RNN and sample from it, and you have many trajectories into the future. Now, the fact that you have many trajectories leads to yet another thing. For each of these hidden states, they have a head here that predicts the so-called latent disagreement. What does this do? It consists of a whole bunch of models — ensemble models, the same for each time step. What they take in is the latent state of the model and the action that you're about to do, the action that you imagine you would do: this is the imagined state and this is the imagined action in that state. And then each will compute the next features. So — where do we put it — if I have this h and I have this state, it tells me: if I do action a1, and if I were to execute this in the real world, what would be the next h that I would get? Basically, by performing an action I will get the next observation, and I will encode that to get the next features, and this small model tries to predict the features of the next state if I were to execute this action in this state. So it's kind of a future predictor, but again not in observation space — in latent space: it tries to predict the latent features of the next observation. (A sketch of such an ensemble follows below.)
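(A minimal sketch of such an ensemble of one-step predictors, with made-up sizes and names; in the paper these heads live on top of the Dreamer world model and are trained alongside it.)

```python
import torch
import torch.nn as nn

class OneStepEnsemble(nn.Module):
    """K small MLPs that each predict the next encoder features from the
    current latent state and an (imagined) action."""

    def __init__(self, k=5, latent_dim=200, action_dim=6, feat_dim=1024):
        super().__init__()
        self.members = nn.ModuleList(
            nn.Sequential(nn.Linear(latent_dim + action_dim, 256), nn.ELU(),
                          nn.Linear(256, feat_dim))
            for _ in range(k))

    def forward(self, h, a):
        x = torch.cat([h, a], dim=-1)
        return torch.stack([m(x) for m in self.members])  # (k, batch, feat_dim)

    def training_loss(self, h, a, next_feat):
        # every member regresses the same real transitions; they differ only
        # through random initialization, which is what creates disagreement
        return ((self(h, a) - next_feat.unsqueeze(0)) ** 2).mean()
```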
Now, the fact that you have many trajectories enables something else. For each of these hidden states, they have a head that predicts the so-called latent disagreement. What does this do? It consists of a whole bunch of models, an ensemble, the same ensemble at each time step. Each model takes in the latent state and the action you imagine you would do in that state, and it computes the next features. So if I have this latent state and this imagined action a1, the model tells me: if I were to execute this in the real world, what would be my next features h? By performing an action, I would get the next observation and I would encode that to get the next features, and this small model tries to predict what those features of the next state would be if I executed this action in this state. So it's a one-step future predictor, but not in observation space, in latent space: it tries to predict the latent features of the next observation. On this split, you might think the latent state and the features are almost the same thing. But as we discussed before, the latent state can incorporate the history of latent states, while the features are only a function of the current observation. That's why they predict the features. Really, they would like to predict the observation itself, but history has shown that predicting, for example, the raw pixels of the observation won't serve you well; predicting the latent features of the observation works much better. So they have a bunch of these models with different parameterizations: they instantiate k different models and run the same inputs through all of them. Because these models have been initialized at different points, they will make slightly different predictions. And here is the crucial part. If it's really deterministic what the next state is going to be, say you have a ball in your hand and you drop it, the ball is going to fall down, then these estimated next features, if the models are any good, will all agree, and the variance between the estimates is very small. If the uncertainty over the next state is high instead, which can be due to two facts, either it is actually uncertain what's going to happen (maybe you drop a piece of paper and due to the wind you can't know where it lands), or your model has simply not learned yet what's going to happen, then in either case the predictions will differ from each other, and the variance will be high. This variance is what you take as the intrinsic reward. So at each step, you basically ask, over the next actions I could do, which one leads me to a situation where I don't know what's going to happen, where the variance in my prediction is high? Those are the states you seek out.
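Here is a minimal sketch of that disagreement signal, the variance across an ensemble of one-step predictors used as intrinsic reward (the linear predictors and the ensemble size are my stand-ins for the paper's trained networks):

import numpy as np

rng = np.random.default_rng(1)
K, H, A = 5, 8, 2           # ensemble size, latent size, action size

# K one-step predictors with different initializations (stand-ins for trained nets).
ensemble = [rng.normal(0, 0.5, (H, H + A)) for _ in range(K)]

def disagreement_reward(h, a):
    """Intrinsic reward = variance of the ensemble's next-feature predictions."""
    x = np.concatenate([h, a])
    preds = np.stack([np.tanh(Wk @ x) for Wk in ensemble])   # shape (K, H)
    return preds.var(axis=0).mean()    # large where the models disagree

h = rng.normal(size=H)
for a in (np.zeros(A), np.ones(A)):
    print(a, "->", disagreement_reward(h, a))
# A planner would prefer whichever imagined action yields higher disagreement.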
Okay, so this is the core of the paper. You do planning in latent space in order to find the actions that lead you to states where you don't know what's going to happen, and you measure that by trying to predict those states using slightly different models. If they disagree a whole bunch, you say: I don't know what's going to happen there, and therefore I want to go there, because I want to learn about that state. That's the entire thing, and it has a bunch of problems, as you can imagine. Now, they make a point of the fact that their latent disagreement agrees with maximizing the expected information gain, and they go into the theory right here. Take a state s and an action a; w are the dynamics parameters of the world, so w characterizes how the world works; h is the features of the next observation; and I is the mutual information between h and w. This mutual information measures how much information about the next state is contained in the dynamics of the world. If it is really low and I have a good world model, then I should be able to predict the next state really well. They say: for selecting the most promising data during exploration, we want to select the action that maximizes this information gain. And they decompose this mutual information into two things. The first is simply the entropy of the next state given the current state and action. This is the total uncertainty, including both the fact that the world could actually be stochastic, like dropping a paper, and the fact that you haven't learned yet what happens, like dropping a ball before you've learned what balls do. From this total uncertainty you subtract the second term: the uncertainty that remains even if you know the dynamics. That's the wind, basically, in the paper example. So you want something where the total uncertainty is high but the uncertainty from the stochasticity of the world is low. If you maximize this entire quantity, that means you are going to seek out actions where what's left is only the uncertainty that you yourself don't know. You say: well, this state has a pretty high total uncertainty, but it's not due to the fact that the world itself is uncertain; it must be due to the fact that I don't know yet.
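Written out (in my own notation, which may differ slightly from the paper's), the information gain and its decomposition are:

I(h_{t+1}; w \mid s_t, a_t) = \underbrace{H(h_{t+1} \mid s_t, a_t)}_{\text{total uncertainty}} - \underbrace{H(h_{t+1} \mid s_t, a_t, w)}_{\text{inherent stochasticity}}

If the second term is assumed roughly constant across transitions, maximizing the information gain reduces to maximizing the first term, the total predictive entropy, which is exactly what the ensemble variance estimates.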
And they claim their model is actually going after this quantity. Because they have these Gaussians as estimators, the disagreement somehow reduces to this uncertainty, but only, as far as I understand it, because they just assume that the second term is constant: they basically assume that every transition in the world has about the same amount of inherent stochasticity, and therefore we can focus on the total amount of uncertainty. So if we can predict next state A better than next state B, and both have about the same amount of intrinsic stochasticity, that must mean we should go to B, because that's where our model hasn't learned yet. Now, of course, in the real world, that is absolutely not the case. I think this model works mainly because they test it in environments where that assumption might be very close to accurate, where most transitions really do have the same stochasticity as any other transition. The second reason why this is a bit difficult is that you have to somehow maintain this couple of models that make the disagreement prediction. You rely on the fact that you can capture uncertainty by looking at how those models disagree with each other, and again, they employ Gaussians here. But it is not guaranteed that these models will actually give you the true disagreement among themselves. If you initialize them wrongly, they might miss modes: if your distribution has three modes, all of them might just focus on one of them, and then your disagreement will be completely out of whack. Or you could initialize them not far enough apart, or too close together, which is the same problem. So it all depends on how you manage to handle this uncertainty. All of this seems a bit problematic, but the whole setup is pretty cool, because all of it is shifting constantly: the policy tries to maximize these disagreement rewards. And here is something I don't understand. In the paper, they make it explicitly clear that the policy tries to maximize this quantity right here, the uncertainty at the next step; the planning objective is to maximize the expected novelty. However, I don't actually see why you'd need planning in that case, because with planning, your goal is to look ahead more than one step. What I would expect is that they aggregate over the future: that they maximize not this one-step quantity but something like the sum over t' of r_t', the total future disagreement, just as with a real reward, where you want to maximize the total reward across your episode. I would imagine they'd use planning to maximize the total future uncertainty they encounter along their trajectories, because as they state it, they only maximize the uncertainty after the first step. That first step might be only a bit uncertain, but if you go down the path, there might be a state further along that's super uncertain, and you would like to find that through your different rollouts. So I'm not sure the paper is correct or consistent here, actually. I might be wrong, though; they do have the code, which is a really good thing, so I'll link to it and you can go and explore that. They do have this algorithm down here, which, I mean, says almost nothing: while still exploring, train the world model, train the latent disagreement ensemble, train the policy in imagination. Well, it helps a bit. Okay.
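As a loop, that algorithm box amounts to roughly the following skeleton. All function names here are hypothetical placeholders (stubbed out so the snippet runs), not the actual functions from their codebase:

# Hypothetical placeholders; each would be a full training routine in reality.
def collect_episode(env, policy):            return {"obs": [], "actions": []}
def train_world_model(buffer):               pass
def train_disagreement_ensemble(buffer):     pass
def train_policy_in_imagination(reward_fn):  pass

def plan2explore_loop(env=None, policy=None, iterations=3):
    """Skeleton of the self-supervised exploration phase."""
    buffer = []
    for _ in range(iterations):
        buffer.append(collect_episode(env, policy))  # act in the real environment
        train_world_model(buffer)                    # Dreamer-style latent dynamics
        train_disagreement_ensemble(buffer)          # the K one-step predictors
        train_policy_in_imagination("disagreement")  # actor-critic on the variance
    return buffer

print(len(plan2explore_loop()))                      # 3 stored episodes, no rewards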
But one other thing: you use planning to look ahead to where the uncertainty is, but how do you do the planning? You need a policy in imagination space. This latent disagreement policy is trained to decide how you act in latent space, how the action you imagine comes to be. You can't plan in imagination space and, inside that imagination, use planning again; that's just an infinite recursion. At some point, you need a model that tells you what to do, and in imagination, they just use an actor-critic model; you see they have a value function here. They use an actor-critic model to basically one-shot predict the next best action at each imagined step. So as they themselves rag on these model-free methods for being retrospective, how is that not exactly the same as me ragging on the fact that they use a model-free method in imagination space? Because your world model certainly is retrospective: it learns from the past. So the model-free method that learns on your imagined world model learns from retrospective imagination, and therefore it has the same problem itself, just one layer deeper: it learns from retrospective data and not from data ahead, because your uncertainty about the future might exist exactly because of your retrospective data. I see the value in having this uncertainty, but I think there are other methods that are also model-free and don't just maximize an intrinsic reward, but actually maximize a sort of uncertainty. Okay, enough ragging, let's go to the experiments. The cool thing you can do with this is what's called zero-shot performance. What they do is: in the first step, they just explore, task-agnostic, without any task. While they explore, they store their episodes; there is no reward, but they save everything they do into a buffer. Then, in the second step, someone comes with a task, and the task is simply specified, like "you have to run forward". They go to this buffer and now label every episode with its reward under that task. So this is like offline reinforcement learning. They call it zero-shot, but really it asks: how well can an algorithm that has explored with this kind of self-supervision perform in offline reinforcement learning on the trajectories it has already experienced? Which is different from collecting those same trajectories with the reward, because then you would learn from the reward and your experience would be different; you would seek out different things if you were going after a reward. So this is harder.
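A minimal sketch of that relabeling step: take the reward-free exploration buffer and attach a task's rewards to the stored transitions so offline RL can be run on them (the episode format and the run-forward reward are invented for illustration):

import numpy as np

rng = np.random.default_rng(2)

# Reward-free exploration buffer: episodes of states only, no reward stored.
buffer = [{"states": rng.random((100, 3))} for _ in range(4)]

def run_forward_reward(state, next_state):
    """Invented task reward: distance covered along the first coordinate."""
    return next_state[0] - state[0]

def relabel(buffer, reward_fn):
    """Attach task rewards to already-collected, reward-free episodes."""
    for episode in buffer:
        s = episode["states"]
        episode["rewards"] = np.array(
            [reward_fn(s[t], s[t + 1]) for t in range(len(s) - 1)])
    return buffer       # now usable for offline RL on the labeled trajectories

labeled = relabel(buffer, run_forward_reward)
print(labeled[0]["rewards"].shape)   # (99,) one reward per stored transition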
They compare this to Dreamer, and Dreamer is a fully supervised method; Dreamer is actually cheating, it actually goes after the reward. All the other methods here don't have a reward; they are intrinsic-reward methods generalized to this zero-shot offline RL setting. You see the green is Plan2Explore, and it outperforms almost all the other methods down here, and even comes close to Dreamer up there. You'll see in pretty much every graphic that Dreamer is the one that's able to cheat, so it performs pretty well. But the zero-shot generalized Plan2Explore is sometimes on par with it and certainly outperforms the other intrinsic-reward methods. Now how does it go when you actually allow the model to fine-tune on the task? You can see the performance on few-shot adaptation from raw pixels without state-space input. Basically, you learn without reward for this many steps, until this shaded area back here, and then all of a sudden you say: okay, now I'll give you reward, now please learn. You have this many steps where you can learn from the reward. So now we're no longer in the offline RL setting; this is now online RL, but pre-trained with all the experience gathered without the reward. Again, the orange here is the cheater. Before, the graphs were higher because at each point, how this works is: you train until here without reward, and then you do the offline RL training, and that's how those points come about. In the graphs down here, I think they don't do that; they just measure how well you're doing at the task. And of course, if after this many steps you've never been able to look at the reward, your reward at the task will be fairly low, because you don't know what to do to get a high reward. Dreamer, again the orange line, is able to cheat; that's why it basically goes up from the beginning: it can look at the reward from the start, and it's here as a baseline comparison. You see that as soon as you give the reward to the models, they generally shoot up, and Plan2Explore generally shoots up much harder than the others, pretty much everywhere. Again, it gets competitive, and here it even outperforms Dreamer. Why could it outperform the supervised method? Maybe because that method is stuck in a local optimum, which can happen very easily in reinforcement learning, whereas Plan2Explore has never seen the reward, has therefore never tried to single-mindedly maximize it, and has explored a bunch of different things to do in the world; now it can use that knowledge to outperform the Dreamer baseline. The other thing I would like to draw your attention to is that sometimes Plan2Explore or the other curiosity methods actually get a reward before the reward even kicks in, for example right here. This tells me it is probably a property of the environment itself: these reinforcement learning environments don't really have much noise going on; it's a simulator with one figure that can walk or not. So it might be that the only interesting thing to do in these environments is to actually perform one of these tasks, and that's why the methods sometimes already collect reward. It's true that they don't see the reward for this entire duration, but implicitly, via the developers building the simulator, things have been set up such that the only interesting thing to do coincides with getting a reward. So I'm somewhat skeptical that this is a general exploration policy, because in the real world there are combinatorially hugely many actions to take and paths to follow, and if you just go by "what do I not know yet", I don't think you can put all of that into one model; it's just too much. The states where something really interesting happens are few and far between, and they don't compare to the sheer number of states where you simply don't know what's going to happen but where probably nothing interesting will happen, just different things, and that will screw this method over completely. In any case, this was just one of the experiments; they have a bunch of others. And yeah, that was my review of the paper. Tell me if you agree or disagree, or if I've misunderstood something; that's entirely possible. I'm just always a bit skeptical of these things: the experiments are very compute-intensive, so you never know, and they're in these specific environments, and the real world actually has very different stochasticity, which they simply assume away right here. But other than that, big props to the fact that the code is out. And as I said, leave a comment if you agree or disagree.
Please subscribe and share this video if you liked it, and I'll see you next time. Bye bye.
}, { "end": 1952.44, "start": 1950, "text": " For example, right here." }, { "end": 1959.76, "start": 1952.44, "text": " And this tells me that this is probably a property of the environment itself, namely" }, { "end": 1964.16, "start": 1959.76, "text": " these reinforcement learning environments, they don't really have much noise going on," }, { "end": 1965.16, "start": 1964.16, "text": " right?" }, { "end": 1970.64, "start": 1965.16, "text": " They pretty much just have, it's a simulator with one figure that can walk or not." }, { "end": 1978.12, "start": 1970.64, "text": " And therefore, it might be that the only interesting thing to do in these models is to actually" }, { "end": 1979.72, "start": 1978.12, "text": " perform one of these tasks." }, { "end": 1986.04, "start": 1979.72, "text": " And that's why it might be that sometimes they already get a reward." }, { "end": 1995, "start": 1986.04, "text": " So it's true that they don't see the reward for this entire duration, but also implicitly" }, { "end": 2000.32, "start": 1995, "text": " via the developers building the simulator, they have made it such that the only interesting" }, { "end": 2005.44, "start": 2000.32, "text": " thing to do is the same thing as getting a reward, right?" }, { "end": 2013.44, "start": 2005.44, "text": " So I'm sort of skeptical that this is like a general exploration policy, because also" }, { "end": 2021.0800000000002, "start": 2013.44, "text": " in the real world, there are just combinatorically hugely many, many actions to do many paths" }, { "end": 2022.44, "start": 2021.0800000000002, "text": " to follow." }, { "end": 2030.72, "start": 2022.44, "text": " And if you just go by what do I not know yet, I think you can't you can't put that all into" }, { "end": 2032.76, "start": 2030.72, "text": " one model is just too much." }, { "end": 2040.16, "start": 2032.76, "text": " And the states where you really were really something interesting happens are so few and" }, { "end": 2045.76, "start": 2040.16, "text": " far in between, and that it doesn't compare to the amount of states where you simply don't" }, { "end": 2051.4, "start": 2045.76, "text": " know most states, you don't know what's going to happen, but probably nothing's nothing" }, { "end": 2056.52, "start": 2051.4, "text": " interesting is going to happen just different things, which will screw over this method" }, { "end": 2058.6, "start": 2056.52, "text": " completely." }, { "end": 2065.64, "start": 2058.6, "text": " In any case, they Yeah, sorry, this is just this is another experiment." }, { "end": 2072.2799999999997, "start": 2065.64, "text": " They have a bunch of other experiments. And yeah, that that was my this was my review" }, { "end": 2078.04, "start": 2072.2799999999997, "text": " of the paper. Tell me if you agree or disagree or if I've misunderstood something that's" }, { "end": 2079.6, "start": 2078.04, "text": " entirely possible." }, { "end": 2088, "start": 2079.6, "text": " I'm just always a bit skeptical of these things a bit." }, { "end": 2093.52, "start": 2088, "text": " So the experiments, they're very compute intensive, of course, so you never know there and then" }, { "end": 2096.28, "start": 2093.52, "text": " these specific environments right here." }, { "end": 2101.12, "start": 2096.28, "text": " You never know there and then the fact that the real world actually has very different" }, { "end": 2106.96, "start": 2101.12, "text": " stochasticity, which they simply assume away right here." 
}, { "end": 2112.08, "start": 2106.96, "text": " Yeah, but other than that, big props to the fact that the code is out." }, { "end": 2116.6, "start": 2112.08, "text": " And as I said, leave a comment if you agree or disagree." }, { "end": 2120.48, "start": 2116.6, "text": " Please subscribe and share this video if you liked it, and I'll see you next time." }, { "end": 2123.8, "start": 2120.48, "text": " Bye bye." } ]
XvDzZwoQFcU
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
[News] Facebook's Real-Time TTS system runs on CPUs only!
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "facebook", "fair", "tts", "text-to-speech", "real-time", "wavenet", "rnn", "spectrogram", "mel", "frequency", "vocoder", "linguistic", "features", "speaker", "soft", "style", "tone", "phonemes", "neural", "recurrent", "human", "assistant" ]
Facebook AI's new Text-To-Speech system is able to create 1 second of speech in as little as 500ms, making it real-time. What's even more impressive is the fact that this does not require a rack of GPUs, but runs on merely 4 CPUs. OUTLINE: 0:00 - Intro 1:00 - Problem Formulation 3:20 - System Explanation 15:00 - Speeding up the computation https://ai.facebook.com/blog/a-highly-efficient-real-time-text-to-speech-system-deployed-on-cpus/ Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Hi there, check this out. Modern text-to-speech systems have come a long way in using neural networks to mimic the nuances of human voice. To generate human-like audio, one second of speech can require a TTS system to output as many as 24,000 samples, sometimes even more. The size and complexity of state-of-the-art models require massive computation, which often needs to run on GPUs or other specialized hardware. This is generated by a system that Facebook AI has built, and we're going to look at it. It's called a highly efficient real-time text-to-speech system deployed on CPUs. There's a lot to unpack here, but this is not a paper; it's basically a technical blog post. I think they built this into their products, and it mainly explains on a high level what they did. They have a real-time text-to-speech system, which means that you have a text, like this sentence here, "Share on Facebook". You give it to the system, and the system comes up with a sound wave that says this sentence. If you listen to it, you'll hear "Share on Facebook". It has to do that in a credible way, such that it is a human-like voice, because people like hearing that, not these old-school telephone robot voices that just chunk words together. It has to flow naturally. What you want for this is some sort of recurrent neural network, or any sort of autoregressive network, that outputs these samples one at a time. These points, you're going to output one at a time. For one second of audio, that can require you to output 24,000 of these data points, which means 24,000 forward propagations of your autoregressive model. That's massive. If you want to do it in real time, and that's why real time is so impressive, you have to do this in less than one second: you have to do these 24,000 forward passes in less than a second. This was already possible before, I think, but it required a big data center with many, many GPUs in it; you would basically send your text to it and it would stream back the audio. They can do this just on CPUs. In fact, they can do this on a quad-core CPU in real time: they can generate this many samples in half a second. Pretty impressive. Let's dive into how they do it. They say it's deployed in Portal, our video calling service, and available for use across a range of other Facebook applications, from reading support for the visually impaired to virtual reality experiences. First, they show this graph right here. What are they doing? This is their entire system, and it is chunked into multiple parts. If you're a deep learning practitioner, you're very keen on taking this text, building one giant neural network, running the text through it, and generating the audio end-to-end. This doesn't really work in this case, first of all because there would be too many parameters to evaluate this many times, but also because text and audio are such different modalities that you basically have to chunk this into individual parts. That's what they do. They have this linguistic front-end. The linguistic front-end generates two different things: what needs to be said, and how it needs to be said. The what needs to be said, they call linguistic features. The linguistic features are things like phonemes and so on; they don't even have a title for this one. The linguistic front-end converts the input text, probably sentence by sentence, so one sentence of text, into a sequence of linguistic features such as phonemes and sentence type.
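To make that compute budget concrete, here is a minimal sketch (not Facebook's actual model) of what autoregressive synthesis means: one forward pass per output sample, 24,000 strictly sequential passes for a single second of audio. The tiny GRU cell and linear projection are hypothetical stand-ins.

```python
# Toy illustration of the autoregressive bottleneck described above.
# The GRU cell is a stand-in, not the actual Facebook TTS model.
import time
import torch

SAMPLE_RATE = 24_000                       # samples per second of audio
cell = torch.nn.GRUCell(input_size=1, hidden_size=128)
proj = torch.nn.Linear(128, 1)

h = torch.zeros(1, 128)
sample = torch.zeros(1, 1)

start = time.time()
with torch.no_grad():
    for _ in range(SAMPLE_RATE):           # strictly sequential: each sample
        h = cell(sample, h)                # depends on the previous one
        sample = proj(h)
elapsed = time.time() - start
print(f"{elapsed:.2f}s of compute for 1s of audio (real-time factor {elapsed:.2f})")
```

Even this toy model has to pay the per-step overhead 24,000 times, which is why batching tricks don't help and the per-sample cost dominates.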
These linguistic features, if it's like "Share on Facebook", it would be like: okay, sh is one, and then a, and then r; these are the phonemes of "share", right? It would kind of chunk it into that. That is much closer to what we think of as an audio signal: this will make one sound, this will make a sound, and this will make a sound. We chunk it into that, and then here is the how it is said. This would be, for example, the fact that "Share on Facebook" is sort of an instruction, which is different from a question. If it said "Do you want to share this on Facebook?", then the what would output much the same phonemes right here, except it's a different word now, but the how would output: this is a question. As you can see, the information flow right here informs the later stages. This would then cause the voice to go up at the end of the sentence, because it's a question. This linguistic front-end, as you can see, still deals with text. It deals with text, and it outputs these how features and these linguistic, these what features. The what features now go into an acoustic model. What does the acoustic model do? The acoustic model is meant to generate a spectrum, a spectrogram, of the sound. We'll skip this for the moment and go to the neural vocoder. The neural vocoder is kind of a standard thing in any speech-producing system. It takes a spectrogram of the sound and turns it into actual audio. This here, I think, they achieve with, they say, something like a WaveRNN plus a CNN, so we'll look into that quickly. The spectrogram of the sound is going to be a bit of an image. The image has time on this axis and frequency on this axis, and then it's usually somewhat color-coded, but it's just intensity. Whenever a frequency at a given time is expressed strongly, it will light up. So it could be something like this: over time, the mid frequency is always there, but this frequency right here is not there, and this one is here at the beginning. So there is a sort of a way to read a spectrogram. I'm not even sure how much sound this represents, but it can represent maybe something like 200 milliseconds of audio, and you essentially have to perform an inverse Fourier transform to turn that into 200 milliseconds of actual wave audio. But since the audio has to be output at 24,000 samples per second, what's coming in here is not that much: the acoustic model does not output 24,000 spectra per second. So first of all, this has to be upsampled. This time dimension here has just too few samples, so you use a CNN, because this is basically an image, in order to upsample it, in order to make a long image out of it. The CNN basically has to impute, in a learned way, how this spectrogram would look if it were sampled much more densely. Now it has basically the correct number of samples, and now this WaveRNN can step through it and look at these slices, and look at the last slices, or actually it can maybe also look at the last spectrograms. I don't know, they don't really say what the RNN exactly goes over, but this is how I imagine it. And the WaveRNN is based on a WaveNet architecture. WaveNet is sort of like: to produce this thing right here, you could look back all the way to the beginning, but that would be too many connections, right? It would be too much memory.
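As a small illustration of the spectrogram idea just described, here is a hedged sketch that turns a waveform into a time-by-frequency image of intensities. The parameter values (n_fft, hop_length) are illustrative assumptions, not the ones Facebook uses.

```python
# Compute a magnitude spectrogram: rows are frequency bins, columns are time frames.
import math
import torch

sample_rate = 24_000
t = torch.arange(int(0.2 * sample_rate)) / sample_rate   # 200 ms of audio
wave = torch.sin(2 * math.pi * 440.0 * t)                # a pure 440 Hz tone

spec = torch.stft(wave, n_fft=512, hop_length=128,
                  window=torch.hann_window(512), return_complex=True)
magnitude = spec.abs()            # intensity at each (frequency, time) point
print(magnitude.shape)            # (257 frequency bins, number of time frames)
```

For a steady 440 Hz tone, only the row near 440 Hz lights up across all time frames, which is exactly the "this frequency is always there" picture described above.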
So what you do is look back at the things right before you in very great detail, but as you go back further and further, you basically lose detail. See, there are only two connections here in this long stretch, whereas there are many, many connections right before you. That means you look in great detail at what you've produced recently, but you also look back, in a more blurry way, at what you've produced a long time ago. So this is a WaveNet architecture, and they say they use something like this in order to then actually produce the final audio from the spectrograms. The neural vocoder, actually, you can train by itself, right? You simply feed in spectrograms and make it produce audio, and you know there's a lot of audio on the internet; you can simply produce spectrograms from that and then train the vocoder to produce the audio. So the good thing about this pipeline is that you can train a lot of these things independently of each other. I don't actually know whether they do that or not, but you can. You see this box up here, this prosody model, and that will take in these how features. So how does something need to be said? How? Come on. Well, this is an H. How does something need to be said? It will transform that into features that a neural network can understand. These neural networks need features, they need embeddings. And as you can see here, it also takes into account the speaker embedding, which is not only what the sentence is, or the fact that the sentence is a question. You would also get the information that the speaker should be, you know, kind of calm (sorry, that would be the style); the speaker should maybe be a female voice. And then the language: it should have like an English sound or a German sound, and so on. So this model here will take in all of that and emit features that these neural networks here can understand. As you see here, the neural vocoder not only transforms spectrograms to audio, but takes into account how you want the audio to sound, and so does the acoustic model. The acoustic model, along with the prosody model, is a bit of the heart of the thing here. It takes in these linguistic features, so what needs to be said, right? It again takes into account the speaker embedding, language embedding and so on, and it takes into account the output from the prosody model, the how you would like it to be said, which includes what type of sentence this is. It synthesizes all of this in order to come up with the spectrum right here, in order to come up with these spectrograms. So this is sort of the heart of the thing. OK, so about the prosody model, they say: we use style embeddings that allow us to create new voice styles, including assistant, soft, fast, projected, and formal, using only a small amount of additional data with the existing data sets. We don't have to create a separate model for each style. We need only 30 to 60 minutes of training data for each voice style. That works because you actually take in an embedding of the speaker and you train these things independently, so that means you can sort of generalize to a new style really quickly. Here they describe in essence how their acoustic model works.
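This look-recent-in-detail, look-far-back-blurrily structure is what dilated convolutions give you. Here is a rough sketch of the WaveNet-style idea under the assumption of kernel size 2 and doubling dilations; the channel counts are made up for illustration.

```python
# Stacked dilated causal 1-D convolutions: the receptive field grows
# exponentially with depth, so a few layers cover a long stretch of past samples.
import torch.nn as nn

layers = [nn.Conv1d(32, 32, kernel_size=2, dilation=2 ** i) for i in range(8)]
net = nn.Sequential(*layers)       # dilations 1, 2, 4, ..., 128

# Receptive field = 1 + sum over layers of (kernel_size - 1) * dilation
receptive_field = 1 + sum((2 - 1) * 2 ** i for i in range(8))
print(receptive_field)             # 256 past samples with only 8 layers
```

Eight layers already see 256 past samples, while the parameter and compute cost stays linear in depth; that is the memory saving the transcript alludes to.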
They output 13-dimensional MFCC features concatenated with the fundamental frequency and a 5-dimensional periodicity feature, which is much easier for the acoustic model to generate. So that's what the acoustic model generates. OK, and then the conditional neural vocoder, the final part of the pipeline, consists of two components: a convolutional neural network that upsamples, or expands, the input feature vectors from the frame rate, 200 predictions per second (so that's not 200 milliseconds; it's 200 spectrogram frames a second), to the sample rate, 24,000 predictions per second; and something similar to WaveRNN, which synthesizes audio samples autoregressively, that means one sample at a time, at 24,000 samples per second. Crazy, right? So this all seems like a lot of computation, and now they describe how they get this to run faster than real time, listing their individual contributions. If they just run this on one CPU core, you see it takes 80 seconds to produce one second of audio. Then they do optimized inference operators, which means they basically use the PyTorch JIT, where you can sort of compile your deep learning model to an optimized form that runs faster, and they also customize this. So they get to 20 seconds per one second. Then they do parameter reduction by sparsification, and they are able to use a sparse matrix compute operator; I think they implemented a custom one to do that. This here is very much like: if you have a neural network that's somehow connected, connected, connected, you want to train it in such a way that it's sparse, meaning that only very few of these connections have non-zero weights. Because of that, you only have to store the non-zero weights, and you also don't have to compute with the zeros, because something multiplied by zero is going to be zero, so you don't have to compute that. They achieve sparsity of 96 percent. You do this basically by some sort of teacher-student model, or there are many ways to do this; you can sparsity-regularize a neural network and basically force most of the connections to be sparse while still maintaining a good training error, or a good generalization error. So they supercharge this, and with this sparsity they bring it down to five seconds per second. Then they go further and do blockwise sparsification and distillation. They distill it to an entirely smaller model that is then also blockwise sparse; they enforce this blockwise sparsification. Not only do they implement this block-sparse matrix compute operator, specifically designed to multiply block-sparse matrices, but in the text they also describe that it optimizes cache access. If you know about CPUs, they have these level one, level two, level three caches, and you can optimize your computations (that's what libraries like LAPACK and BLAS do) such that your cache access is optimized. Therefore you can speed up your computations a lot, because the number of times you actually have to go to your RAM and retrieve something is minimized, and that tends to be the slow part of the process, when your cache misses.
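To make the sparsification step concrete, here is a rough sketch of magnitude pruning plus a sparse multiply in plain PyTorch. This is an assumption-laden stand-in, not Facebook's custom operator; the 96 percent figure is just reused for illustration, and PyTorch's generic sparse ops replace their hand-tuned kernel.

```python
# Prune ~96% of a weight matrix by magnitude, then multiply using only the
# surviving non-zero entries.
import torch

W = torch.randn(1024, 1024)
k = int(0.96 * W.numel())                          # index of the cutoff weight
threshold = W.abs().flatten().kthvalue(k).values   # 96th-percentile magnitude
W_pruned = torch.where(W.abs() > threshold, W, torch.zeros_like(W))

W_sparse = W_pruned.to_sparse()                    # store non-zeros only
x = torch.randn(1024, 1)
y = torch.sparse.mm(W_sparse, x)                   # sparse matrix product
print((W_pruned != 0).float().mean())              # ~0.04 remaining density
```

In practice the pruning happens gradually during training (or via distillation, as the post describes), not in one shot like this, so treat this as the data-layout idea only.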
So now they parallelize the operators that are doing the heavy heavy lifting to four CPU cores. And that doesn't divide it by four, of course, because there is an overhead in parallelization and synchronization. But that gets you to this one half a second needed to produce one second of audio. So there you're at real time. And that is pretty impressive. And they go on to describe so in detail how they did it. But they also give some examples of what they can do and would like to achieve even better in the future. So here is an example where they can adapt their model to a given style. And here they have a British accent. Recently, we successfully applied our new approach to create a British accented voice. This is the first of more accents and languages to come. And also you can adapt it. Their idea is that you have sort of an assistant and this assistant will be able to adapt to, let's say, your mood. We're also exploring features to make our voice respond intelligently with different styles of speaking based on the context. For example, when you're rushing out the door in the morning and need to know the time, your assistant would match your hurried pace. When you're in a quiet place and you are speaking softly, your assistant would reply to you in a quiet voice. And later, when it gets noisy in the kitchen, your assistant would switch to a projected voice so you can hear the call from your mom. Right. So this ties in very much with sort of conversational AI, so assistance and so on. But it also ties into wearables, I think. So the fact that you are now smaller than real time means you can run this potentially directly on your smartphone. You could run this on your fridge or something, on your stove, in your car without having to stream, basically. So you'll get much more real time on device assistance, maybe even in your watch. And I'm excited by this technology. So far, it seems you can get it in Facebook products, but I'm sure this will come to places. All right. If you enjoyed this, please consider subscribing. Thank you for listening and watching. Leave a like if you liked it and leave a comment if you have something to comment with that. Bye bye.
[ { "end": 3, "start": 0, "text": " Hi there, check this out." }, { "end": 10, "start": 3, "text": " Modern text-to-speech systems have come a long way in using neural networks to mimic the nuances of human voice." }, { "end": 19, "start": 10, "text": " To generate human-like audio, one second of speech can require a TTS system to output as many as 24,000 samples, sometimes even more." }, { "end": 29, "start": 19, "text": " The size and complexity of state-of-the-art models require massive computation, which often needs to run on GPUs or other specialized hardware." }, { "end": 34, "start": 29, "text": " This is generated by a system that Facebook AI has built." }, { "end": 43, "start": 34, "text": " We're going to look at it. It's called a highly efficient real-time text-to-speech system deployed on CPUs." }, { "end": 49, "start": 43, "text": " There's a lot to unpack here, but this is not a paper. This is basically a technical blog post." }, { "end": 57, "start": 49, "text": " I think they built this into their products and it's mainly explaining on a high level what they did." }, { "end": 67, "start": 57, "text": " They have a real-time text-to-speech system, which means that you have a text like this sentence here, share on Facebook." }, { "end": 77, "start": 67, "text": " You give it to the system and the system comes up with a sound wave that says this sentence." }, { "end": 80, "start": 77, "text": " If you listen to it, you'll hear share on Facebook." }, { "end": 91, "start": 80, "text": " It has to do that in a credible way, such that it is a human-like voice, because people like hearing that." }, { "end": 99, "start": 91, "text": " Not these kind of robot, old-school telephone robot voices where they just chunk together words. It has to flow naturally." }, { "end": 109, "start": 99, "text": " What you want to do for this is you want to have some sort of recurrent neural network or any sort of autoregressive network that outputs basically these samples here." }, { "end": 116, "start": 109, "text": " One at a time. These points you're going to output one at a time." }, { "end": 127, "start": 116, "text": " For one second of audio, it can require you to output 24,000 of these data points." }, { "end": 133, "start": 127, "text": " 24,000 forward propagations of your autoregressive model. That's massive." }, { "end": 142, "start": 133, "text": " If you want to do it in real-time, and that's why real-time is so impressive, you have to do this in less than one second." }, { "end": 150, "start": 142, "text": " You have to do these 24,000 forward passes in less than a second." }, { "end": 163, "start": 150, "text": " Even more so, this was already possible, I think, but it required a big data center with many, many, many, many GPUs in it." }, { "end": 169, "start": 163, "text": " You basically would send your text to this and it would stream back the audio." }, { "end": 178, "start": 169, "text": " They can do this just on CPUs. In fact, they can do this on a quad-core CPU in real-time." }, { "end": 185, "start": 178, "text": " They can generate this many samples in half a second. Pretty impressive." }, { "end": 189, "start": 185, "text": " Let's dive into how they do it. They say," }, { "end": 195, "start": 189, "text": " It's deployed in Portal, our video calling service, and available for use across a range of other Facebook applications." }, { "end": 202, "start": 195, "text": " From reading support for the visually impaired to visual reality experiences." 
}, { "end": 209, "start": 202, "text": " First, they show this graph right here. What are they doing?" }, { "end": 213, "start": 209, "text": " This is their entire system. It is chunked in multiple parts." }, { "end": 226, "start": 213, "text": " If you're a deep learning practitioner, you're very keen on taking this text and just like a giant neural network and just run it through and generate audio end-to-end." }, { "end": 236, "start": 226, "text": " This doesn't really work in this case, first of all, because it would be too many parameters to evaluate this many times." }, { "end": 247, "start": 236, "text": " But also, especially text and audio are such different modalities that you'll have to basically chunk this into individual parts." }, { "end": 251, "start": 247, "text": " That's what they do. They have this linguistic front-end." }, { "end": 256, "start": 251, "text": " The linguistic front-end generates two different things." }, { "end": 263, "start": 256, "text": " It generates what needs to be said and how does it need to be said." }, { "end": 267, "start": 263, "text": " What needs to be said, they call this linguistic features." }, { "end": 276, "start": 267, "text": " The linguistic features are things like phonemes and so on." }, { "end": 282, "start": 276, "text": " They don't even have a title for this one." }, { "end": 289, "start": 282, "text": " The linguistic front-end converts the input text, probably it's a sentence by sentence thing." }, { "end": 299, "start": 289, "text": " This is one sentence of text into a sequence of linguistic features such as phonemes and sentence type." }, { "end": 310, "start": 299, "text": " These linguistic features, if it's like share on Facebook, it would be like, okay, is one and then a and then r." }, { "end": 314, "start": 310, "text": " These are the phonemes of share, right? It would kind of chunk it into that." }, { "end": 318, "start": 314, "text": " That is much closer to what we think of an audio signal." }, { "end": 322, "start": 318, "text": " This will make one sound, this will make a sound and this will make a sound." }, { "end": 329, "start": 322, "text": " We chunk it into that and then here the how is it said." }, { "end": 337, "start": 329, "text": " This would be, for example, the fact that share on Facebook is sort of an instruction." }, { "end": 341, "start": 337, "text": " This is different from if this would be a question." }, { "end": 345, "start": 341, "text": " If it said, do you want to share this on Facebook?" }, { "end": 352, "start": 345, "text": " Then the what would output much the same phonemes right here, except it's a different word now." }, { "end": 356, "start": 352, "text": " But the how would output, this is a question." }, { "end": 362, "start": 356, "text": " As you can see, the information flow right here informs the later stages." }, { "end": 370, "start": 362, "text": " This would then cause at the end of the sentence the voice to go up because it's a question." }, { "end": 375, "start": 370, "text": " This linguistic front end, as you can see, it still deals with text." }, { "end": 381, "start": 375, "text": " It deals with text and it outputs these how features and these linguistic, these what features." }, { "end": 385, "start": 381, "text": " The what features now go into an acoustic model." }, { "end": 387, "start": 385, "text": " What does the acoustic model do?" }, { "end": 394, "start": 387, "text": " The acoustic model is meant to generate a spectrum, a spectrogram of the sound." 
}, { "end": 399, "start": 394, "text": " We'll skip this for the moment and go to the neural vocoder." }, { "end": 405, "start": 399, "text": " The neural vocoder is a kind of a standard thing in text in any speech producing." }, { "end": 409, "start": 405, "text": " It takes a spectrogram of the sound and turns it into actual audio." }, { "end": 422, "start": 409, "text": " So this here, I think they achieve it with, they say something similar like a wave RNN based on plus like a CNN." }, { "end": 425, "start": 422, "text": " So we'll look into that quickly." }, { "end": 431, "start": 425, "text": " So the spectrum, the spectrogram of the sound is going to be a bit of an image." }, { "end": 439, "start": 431, "text": " And the image has time on this axis and frequency on this axis." }, { "end": 447, "start": 439, "text": " And then there is a, it's usually somewhat like color coded, but it's just intensity." }, { "end": 454, "start": 447, "text": " So whenever there is, whenever a frequency at a given time is expressed strongly, it will light up." }, { "end": 459, "start": 454, "text": " So it could be something like this." }, { "end": 465, "start": 459, "text": " So over time, the mid frequency is always there, but this frequency right here is not there." }, { "end": 467, "start": 465, "text": " And this is here at the beginning." }, { "end": 469, "start": 467, "text": " So there is a sort of a way to read the spectrogram." }, { "end": 479, "start": 469, "text": " But this represents maybe something like this represents the not even sure how much sound this represents," }, { "end": 487, "start": 479, "text": " but this can represent maybe something like 200 milliseconds of audio." }, { "end": 497, "start": 487, "text": " And you have to basically perform a Fourier transform to transform that into 200 milliseconds of actual wave audio." }, { "end": 508, "start": 497, "text": " But since the audio has to be output at what this 24,000 samples per second, what's coming in here is not that much." }, { "end": 515, "start": 508, "text": " The acoustic model outputs not 24,000 spectrums per second." }, { "end": 519, "start": 515, "text": " So first of all, this has to be up sampled." }, { "end": 522, "start": 519, "text": " So this time dimension here has just too few samples." }, { "end": 533, "start": 522, "text": " So we use the CNN because this is basically an image in order to up sample this in order to make a long image out of this." }, { "end": 538, "start": 533, "text": " And basically, the CNN will have to impute." }, { "end": 541, "start": 538, "text": " This is learned, right?" }, { "end": 547, "start": 541, "text": " This is learned how this spectrogram would look if it were sampled much more densely." }, { "end": 552, "start": 547, "text": " And now it has basically the correct number of samples here." }, { "end": 559, "start": 552, "text": " And now this wave RNN can step through it and look at these slices and look at the last slices" }, { "end": 564, "start": 559, "text": " or actually also can look at the last spectrograms, maybe." }, { "end": 569, "start": 564, "text": " I don't know. They don't really say what the RNN exactly goes over." }, { "end": 570, "start": 569, "text": " But this is how I imagine it." }, { "end": 574, "start": 570, "text": " And the wave RNN is based on a wave net architecture." }, { "end": 582, "start": 574, "text": " And that means wave net is sort of like if you look to produce this thing right here, you can look back all the way to the beginning." 
}, { "end": 585, "start": 582, "text": " But this would be too many connections, right?" }, { "end": 587, "start": 585, "text": " It would be too much memory." }, { "end": 595, "start": 587, "text": " So what you do is you can look back at the directions at the things right before you in very great detail." }, { "end": 600, "start": 595, "text": " But then as you go back further and further, you basically lose detail." }, { "end": 609, "start": 600, "text": " See, there's only two connections here in this long stretch where there is many, many connections here at the very beginning." }, { "end": 610, "start": 609, "text": " So right before you." }, { "end": 615, "start": 610, "text": " So that means you look in more detail what you've produced recently." }, { "end": 621, "start": 615, "text": " But you also sort of look back in a more blurry way at what you've produced a long time ago." }, { "end": 623, "start": 621, "text": " So this is a wave net architecture." }, { "end": 632, "start": 623, "text": " And that's they say they use something like this in order to then actually produce the final audio from the spectrograms." }, { "end": 636, "start": 632, "text": " So actually, neural vocoder, you can you can train this thing." }, { "end": 640, "start": 636, "text": " You can train it by itself." }, { "end": 644, "start": 640, "text": " Right. You simply feed spectrums and you make it go audio." }, { "end": 648, "start": 644, "text": " And you know, there's a lot of audio on the Internet." }, { "end": 654, "start": 648, "text": " You can simply produce spectrograms from that and then train the vocoder to produce the audio." }, { "end": 659, "start": 654, "text": " So the good thing about this pipeline here is you can train a lot of these things independently from each other." }, { "end": 664, "start": 659, "text": " I don't actually know whether they do that or not, but you can." }, { "end": 672, "start": 664, "text": " You see this box up here, this prosody model, and that will take in." }, { "end": 677, "start": 672, "text": " These how features. So how does something need to be said?" }, { "end": 682, "start": 677, "text": " How? Come on." }, { "end": 687, "start": 682, "text": " Well, this is an H. How does something need to be said?" }, { "end": 693, "start": 687, "text": " And it will transform it into features that the sort of neural network can understand." }, { "end": 698, "start": 693, "text": " These these neural networks, they need features like they need embeddings." }, { "end": 707, "start": 698, "text": " And as you can see here, it also takes into account the speaker embedding, which is not only how what the sentence is, the fact that the sentence is a question." }, { "end": 715, "start": 707, "text": " You would also get the information that the speaker should be, you know, kind of calm." }, { "end": 720, "start": 715, "text": " Sorry, that would be the style. The speaker should be maybe a woman voice." }, { "end": 727, "start": 720, "text": " And then the language should be it should have like an English sound or a German sound and so on." }, { "end": 736, "start": 727, "text": " So this model here will take in all of that and emit features that these neural networks here can understand." }, { "end": 747, "start": 736, "text": " So as you see here, the neural vocoder, not only does it transform spectrograms to audio, but it takes into account how you want the audio to sound." }, { "end": 749, "start": 747, "text": " And so does this acoustic model." 
}, { "end": 755, "start": 749, "text": " So the acoustic model is sort of along with the prosody model is a bit of the heart of the thing here." }, { "end": 757, "start": 755, "text": " It takes in these linguistic features." }, { "end": 759, "start": 757, "text": " So what needs to be said, right?" }, { "end": 767, "start": 759, "text": " The acoustic model, it takes into account the again, the speaker embedding, language embedding and so on." }, { "end": 776, "start": 767, "text": " It takes into account the output from the prosody model of the how you would like it to be said." }, { "end": 779, "start": 776, "text": " This includes what type of sentence this is." }, { "end": 788, "start": 779, "text": " And this it synthesizes it all in order to come up with the spectrum right here in order to come up with these spectrograms." }, { "end": 798, "start": 788, "text": " So this is sort of the heart of the of the thing." }, { "end": 807, "start": 798, "text": " OK, so about the prosody model, they say we use style embeddings that allow us to create new voice styles," }, { "end": 813, "start": 807, "text": " including assistant soft fast project and formal using only a small amount of additional data with the existing data sets." }, { "end": 816, "start": 813, "text": " We don't have to create a separate model for each style." }, { "end": 819, "start": 816, "text": " We need only 30 to 60 minutes of training data for each voice style." }, { "end": 828, "start": 819, "text": " So that happens because you you take into you take in actually an embedding of the feet of the speakers and you train these things independently." }, { "end": 836, "start": 828, "text": " So that means you can sort of generalize to a new style really quickly." }, { "end": 841, "start": 836, "text": " Here the they describe in essence how their acoustic model works." }, { "end": 851, "start": 841, "text": " The fact that they output 13 dimensional MFCC features concatenated with the fundamental frequency and a five dimensional periodicity feature," }, { "end": 854, "start": 851, "text": " which is much easier for the acoustic model to generate." }, { "end": 857, "start": 854, "text": " So that's what the acoustic model generates." }, { "end": 867, "start": 857, "text": " OK, and then here are conditional neural vocoders is the final part of the pipeline," }, { "end": 876, "start": 867, "text": " consists of two components, a convolutional neural network that up samples or expands the input feature vectors from the frame rate 200 predictions per second." }, { "end": 878, "start": 876, "text": " So that's not it's not 200 milliseconds." }, { "end": 888, "start": 878, "text": " It would be 200 spectrograms a second to the sample rate 24000 predictions per second. And this similar to wave RNN," }, { "end": 892, "start": 888, "text": " which synthesizes audio samples autoregressively." }, { "end": 896, "start": 892, "text": " That means one sample at a time at 24000 samples per second." }, { "end": 899, "start": 896, "text": " Crazy, right?" }, { "end": 903, "start": 899, "text": " So this all seems like it's a lot of computation." }, { "end": 912, "start": 903, "text": " And now they describe how they get this to run faster than real time." }, { "end": 918, "start": 912, "text": " And they list their their individual contributions here." }, { "end": 924, "start": 918, "text": " So if they just run this on one CPU core, you see it takes 80 seconds to produce one second of audio." 
}, { "end": 932, "start": 924, "text": " Then they do optimized inference operators, which means they basically use a pie torch JIT along with some." }, { "end": 946, "start": 932, "text": " So these pie torches, these pie torch JIT, which is kind of where you can sort of compile your deep learning model to an optimized to an optimized form that runs faster." }, { "end": 948, "start": 946, "text": " And they also customize this." }, { "end": 952, "start": 948, "text": " So they get to 20 seconds per one second." }, { "end": 956, "start": 952, "text": " Then they do parameter reduction by sparsification." }, { "end": 960, "start": 956, "text": " And they are able to abuse the sparse matrix compute operator." }, { "end": 964, "start": 960, "text": " I think they implemented a custom one to do that." }, { "end": 973, "start": 964, "text": " So this here is very much like if you have a neural network and." }, { "end": 976, "start": 973, "text": " That's somehow connected, connected, connected." }, { "end": 985, "start": 976, "text": " You want to train it in such a way that it's sparse, meaning that only very few of these connections have non zero weights." }, { "end": 994, "start": 985, "text": " And because of that, you don't have to store the non zero weights and you don't also have to compute because something multiplied by zero is going to be zero." }, { "end": 996, "start": 994, "text": " So you don't have to compute that." }, { "end": 1003, "start": 996, "text": " So they're they achieve sparsity 60, 96 percent." }, { "end": 1008, "start": 1003, "text": " And this basically you do by some sort of teacher, teacher, student model." }, { "end": 1023, "start": 1008, "text": " Or there are many ways to do this, but you can you can sparsity regularize a neural network and basically force most of the connections to be sparse while still maintaining a good training error or a good generalization error." }, { "end": 1025, "start": 1023, "text": " So they supercharge this." }, { "end": 1031, "start": 1025, "text": " And with this sparsity, they bring this down to five seconds per second." }, { "end": 1036, "start": 1031, "text": " And then they go further and do blockwise sparsification and distillation." }, { "end": 1042, "start": 1036, "text": " So they distill it to an entirely smaller model that then is also blockwise sparse." }, { "end": 1044, "start": 1042, "text": " So they also enforce this blockwise sparsification." }, { "end": 1057, "start": 1044, "text": " And that not only can you then have a better operator, they implement this block sparse matrix compute operator that is specifically designed to multiply block sparse matrices." }, { "end": 1064, "start": 1057, "text": " They also in the text, they describe that it also optimizes cache access." }, { "end": 1072, "start": 1064, "text": " So if you know about CPUs, they have these level one, level two, level three caches and you can optimize your computations." }, { "end": 1076, "start": 1072, "text": " That's what libraries like LAPOC and BLAS do." }, { "end": 1082, "start": 1076, "text": " You can optimize your computations such that your cache access is optimized." }, { "end": 1094, "start": 1082, "text": " And therefore you can speed up a lot your computations because the amount of times that you actually have to go to your RAM and retrieve something is minimized." }, { "end": 1099, "start": 1094, "text": " And that tends to be the slow part of the process when your cache misses." 
}, { "end": 1108, "start": 1099, "text": " So again, they achieve 94 percent block sparsity and that almost gets them to real time." }, { "end": 1118, "start": 1108, "text": " And it's still one CPU. So now they parallelize the operators that are doing the heavy heavy lifting to four CPU cores." }, { "end": 1125, "start": 1118, "text": " And that doesn't divide it by four, of course, because there is an overhead in parallelization and synchronization." }, { "end": 1132, "start": 1125, "text": " But that gets you to this one half a second needed to produce one second of audio." }, { "end": 1138, "start": 1132, "text": " So there you're at real time. And that is pretty impressive." }, { "end": 1142, "start": 1138, "text": " And they go on to describe so in detail how they did it." }, { "end": 1151, "start": 1142, "text": " But they also give some examples of what they can do and would like to achieve even better in the future." }, { "end": 1160, "start": 1151, "text": " So here is an example where they can adapt their model to a given style." }, { "end": 1163, "start": 1160, "text": " And here they have a British accent." }, { "end": 1169, "start": 1163, "text": " Recently, we successfully applied our new approach to create a British accented voice." }, { "end": 1175, "start": 1169, "text": " This is the first of more accents and languages to come." }, { "end": 1178, "start": 1175, "text": " And also you can adapt it." }, { "end": 1188, "start": 1178, "text": " Their idea is that you have sort of an assistant and this assistant will be able to adapt to, let's say, your mood." }, { "end": 1194, "start": 1188, "text": " We're also exploring features to make our voice respond intelligently with different styles of speaking based on the context." }, { "end": 1199, "start": 1194, "text": " For example, when you're rushing out the door in the morning and need to know the time, your assistant would match your hurried pace." }, { "end": 1206, "start": 1199, "text": " When you're in a quiet place and you are speaking softly, your assistant would reply to you in a quiet voice." }, { "end": 1216, "start": 1206, "text": " And later, when it gets noisy in the kitchen, your assistant would switch to a projected voice so you can hear the call from your mom." }, { "end": 1224, "start": 1216, "text": " Right. So this ties in very much with sort of conversational AI, so assistance and so on." }, { "end": 1227, "start": 1224, "text": " But it also ties into wearables, I think." }, { "end": 1236, "start": 1227, "text": " So the fact that you are now smaller than real time means you can run this potentially directly on your smartphone." }, { "end": 1244, "start": 1236, "text": " You could run this on your fridge or something, on your stove, in your car without having to stream, basically." }, { "end": 1250, "start": 1244, "text": " So you'll get much more real time on device assistance, maybe even in your watch." }, { "end": 1256, "start": 1250, "text": " And I'm excited by this technology." }, { "end": 1262, "start": 1256, "text": " So far, it seems you can get it in Facebook products, but I'm sure this will come to places." }, { "end": 1266, "start": 1262, "text": " All right. If you enjoyed this, please consider subscribing." }, { "end": 1269, "start": 1266, "text": " Thank you for listening and watching." }, { "end": 1276, "start": 1269, "text": " Leave a like if you liked it and leave a comment if you have something to comment with that. Bye bye." } ]
p-zOeQCoG9c
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Weight Standardization (Paper Explained)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "normalize", "batchnorm", "groupnorm", "layernorm", "mean", "center", "std", "standardize", "backpropagation", "convergence", "gradients", "norm", "convolution", "cnn", "convolutional neural networks", "filters", "kernel", "channel", "architecture" ]
It's common for neural networks to include data normalization such as BatchNorm or GroupNorm. This paper extends the normalization to also include the weights of the network. This surprisingly simple change leads to a boost in performance and - combined with GroupNorm - new state-of-the-art results. https://arxiv.org/abs/1903.10520 Abstract: In this paper, we propose Weight Standardization (WS) to accelerate deep network training. WS is targeted at the micro-batch training setting where each GPU typically has only 1-2 images for training. The micro-batch training setting is hard because small batch sizes are not enough for training networks with Batch Normalization (BN), while other normalization methods that do not rely on batch knowledge still have difficulty matching the performances of BN in large-batch training. Our WS ends this problem because when used with Group Normalization and trained with 1 image/GPU, WS is able to match or outperform the performances of BN trained with large batch sizes with only 2 more lines of code. In micro-batch training, WS significantly outperforms other normalization methods. WS achieves these superior results by standardizing the weights in the convolutional layers, which we show is able to smooth the loss landscape by reducing the Lipschitz constants of the loss and the gradients. The effectiveness of WS is verified on many tasks, including image classification, object detection, instance segmentation, video recognition, semantic segmentation, and point cloud recognition. The code is available here: this https URL. Authors: Siyuan Qiao, Huiyu Wang, Chenxi Liu, Wei Shen, Alan Yuille Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Hi there! Today we're looking at Weight Standardization by Siyuan Qiao, Huiyu Wang, Chenxi Liu, Wei Shen, and Alan Yuille of Johns Hopkins University. Weight standardization is a normalization technique for training neural networks, and it basically goes in conjunction with another technique called group normalization. If you haven't seen my video on group normalization and don't know what it is, I suggest you go watch that first, or read the GroupNorm paper or some blog post, because weight standardization is usually used together with GroupNorm in order to work well, and that's what this paper also says, even though the technique itself is pretty much independent. Here you can see their main results. They compare BatchNorm, GroupNorm, and weight standardization used with GroupNorm, and as you can see here, the latter outperforms the other two models in ImageNet top-1 accuracy. The important part, as you can see, is that BatchNorm is trained with large batch sizes, while GroupNorm and GroupNorm plus weight standardization are trained with one image per GPU; they have a multi-GPU setup and this is just one image per GPU. And these results over here are on a Mask R-CNN, where the model is large and therefore you can only have very small batches per worker, which means BatchNorm will work less well. Now, again, we've discussed why BatchNorm is not a good thing when you have to go to small batch sizes. Basically, what people have discovered is that it is very beneficial in machine learning to normalize your data before working with it. What do we mean by that? If you have a bunch of data points right here, let's say like this, it is usually beneficial to first center the data, so basically calculate its mean and shift it, and then to standardize the axes, so basically divide by the standard deviation in each direction, and your data will look something like this. For many classical methods this will improve the condition number of the problem to solve, and so on, and even for deep learning methods we just know that if you standardize your data like this, it works better. So people have basically come up with these methods where they say: well, if it helps for the data at the beginning of a neural network, then, if after a layer the data is kind of out of whack (which can happen after a layer of a neural network), we should maybe center and standardize it again before we send it to the next layer; and if after the next layer it's out of whack again, we should maybe center and standardize it again before sending it through the layer after that. So in each layer you have these transformations that center and standardize the data, and for the longest time this was BatchNorm. BatchNorm does this across the mini-batches of the data, since you can't pass the entire data set. Now GroupNorm has come along to replace BatchNorm, because BatchNorm is very dependent on the batch size, while GroupNorm isn't. The GroupNorm paper has sort of made it clear that at competitive batch sizes, in the large batch size regime, BatchNorm is still the king, BatchNorm still works better. It's only when you go to very small batch sizes that GroupNorm takes over, and that's what you can see here. So here, OK, it's a bit unfair because BatchNorm is trained with a larger batch size, but even if GroupNorm were to be trained with the large batch size, it would still be in the same place... no, it wouldn't, it would not, sorry, that is not the case, because the batch size still influences the gradient stochasticity and so on. Still, BatchNorm is better than GroupNorm, as you can see here, but over here, where you have to go to the small batch sizes, BatchNorm is all of a sudden worse than GroupNorm. And weight standardization is a technique to actually make GroupNorm better than BatchNorm in any of these, so even in the large batch regime. Okay, so we'll now explore weight standardization. In the GroupNorm paper we've looked at the diagram on the left. Basically, in BatchNorm, here is the number of data points, this is your batch; this is the channels of the individual images; and this is the height and width of the image, so this is the image itself, a single channel. A single channel in the image would be a column in this thing right here. BatchNorm normalizes across the data points in a single channel. LayerNorm, which is a precursor to GroupNorm, normalizes only within a single data point instance, but across all of the channels, as you can see here. That removes its dependence on the batch size: each data point is treated individually, but of course it sort of conflates all the channels with each other, it doesn't distinguish them. InstanceNorm tries to fix this. InstanceNorm, down here, says it was a good idea to normalize each feature individually and takes it to the extreme: it basically normalizes a single image by each of these single features. But that loses too much information. GroupNorm comes along and says: maybe some of the features naturally depend on each other, naturally exhibit the same responses, therefore we should normalize them in groups. So we still take a single image, but we take groups (in this case groups of three channels together) and normalize across that. Now, this is all in data space. This all normalizes the data, like we said up here when we drew this; this is all normalizing the data before passing it through the next layer. Now, what actually happens in these layers? What happens here in a convolutional neural network is that the images get convolved with kernels; that's what a neural network layer is. So if you have an image right here of our trusty cat (I haven't drawn whiskers in a while; that nose is very high, the eyes must be like up here, sorry cat), the layer inherently has these things called kernels. I'm just going to draw one of these kernels right here. It's a three-by-three kernel, and what you'll do is slide the kernel across the image, like this, across, across, across, and at each point you convolve the kernel: you convolve the values here with the pixels here and sum them up, and for each position in the image that means you'll basically get a new value, and that will be your next layer's data point. In these normalization techniques, we usually normalize the data points. So here you have multiple channels, maybe a red, a green, and a blue, and so on, and in the intermediate layers you have even more. But you also have multiple kernels: you can see here you have multiple of these kernels, which will then result in multiple output channels. The old normalization methods, BatchNorm, LayerNorm, GroupNorm, all work in this space, the space of data, whereas weight standardization works in kernel space. So weight standardization means you want to normalize the weights of the neural network, not the data. And that's why it can be used in conjunction with something like GroupNorm, or actually BatchNorm or LayerNorm; it could be used with any of these, but these authors use it in conjunction with GroupNorm. So what does it do? A kernel is actually characterized by four numbers: first of all, the height and width of the kernel, which in our case was three by three; and then two more numbers, C_in and C_out, the in channels and the out channels. The in channels is the number of channels that come into the layer, and the out channels is the number of channels you want to transform that into. Here you can see the in channels are listed here and the out channels are listed here, and in the up-down direction, which is not labeled here, is the height and width. So this here would actually be a two-by-two kernel: each of these slivers here is a two-by-two kernel in the convolutional network; that would be the orange sliver here, and the sliver behind that would be the next two-by-two kernel. Weight standardization says: hey, just as we normalize the data, it might be a good idea to... sorry, that was wrong. One column here, one of these columns, is a two-by-two filter, and then the column behind it and the column next to it, they're all two-by-two filters, right? So for each input-output channel combination you have a two-by-two filter; you have an entire matrix of two-by-two filters, if you can imagine that, across the out and across the in direction. Weight standardization says: hey, it might be a good idea to make sure that the weights for a given output channel (that is, we take one output channel and look at all the filters that transform the input into that one output channel, which is going to be this many times this many times this many numbers, or this many filters) are normalized, so that they don't get out of whack. Because one could imagine that during training, if we initialize our filters somewhere, you know, maybe this one number here we initialize randomly, we draw it at random, then as we train, it might actually get very large. That's actually plausible, because after this neural network layer we have this procedure to re-center the data, right? So I could make a very large weight here, multiply the data by a very large weight, because it gets re-centered anyway; but of course, if my weights get large, I'll basically increase the variance and the instability, the gradients might be high, and so on. So these authors think it might be a good idea to normalize these weights. Just as you normalize the data, you normalize the weights, and this actually turns out to be fairly easy in terms of how you would do it. So instead of transforming X, which is the input to a layer, into Y using W directly (this is W, this is your actual parameter; usually you just do X times W and that gives you Y, this is a convolution operation right here), you won't do this now.
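Since weight standardization amounts to just a couple of extra lines, here is a minimal sketch of the idea: before each forward pass, standardize all the weights that feed one output channel to zero mean and unit variance, then convolve as usual. This follows the paper's description, though the exact epsilon value and module layout here are my assumptions.

```python
# Weight-standardized convolution: normalize the kernel weights per output
# channel, then run the usual convolution with the standardized weights.
import torch
import torch.nn as nn
import torch.nn.functional as F

class WSConv2d(nn.Conv2d):
    def forward(self, x):
        w = self.weight                                   # (C_out, C_in, H, W)
        mean = w.mean(dim=[1, 2, 3], keepdim=True)        # one mean per output channel
        std = w.std(dim=[1, 2, 3], keepdim=True) + 1e-5   # epsilon avoids division by zero
        w = (w - mean) / std                              # standardized weights
        return F.conv2d(x, w, self.bias, self.stride,
                        self.padding, self.dilation, self.groups)

# Used as a drop-in replacement for nn.Conv2d, typically paired with GroupNorm:
block = nn.Sequential(WSConv2d(64, 128, kernel_size=3, padding=1),
                      nn.GroupNorm(num_groups=32, num_channels=128))
```

Note that the raw parameter W is still what the optimizer updates; only the standardized version of it ever touches the data, which is exactly the "X times standardized W" reparameterization the transcript is building toward.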
So here, okay, it's a bit unfair because batch norm is trained with a larger batch size. And even if group norm were to be trained with the large batch size, would it still be in the same place? Actually no, it would not, because the batch size still influences the gradient stochasticity and so on. But still, batch norm is better than group norm, as you can see here; but over here, where you have to go to the small batch sizes, batch norm is all of a sudden worse than group norm. And weight standardization is a technique to actually make group norm better than batch norm in any of these settings, so even in the large batch regime. Okay, so we'll now explore weight standardization. In the group norm paper, we've looked at the diagram on the left. Basically, in batch norm, this axis is the number of data points, your batch; this axis is the channels of the individual images; and this axis is the height and width of the image. So a single channel of a single image would be a column in this thing right here. Batch norm normalizes across the data points in a single channel. Layer norm, which is a precursor to group norm, normalizes only within a single data point instance, but across all of the channels, as you can see here. Now that frees it of its dependence on the batch size: each data point is treated individually. But of course it sort of lumps all the channels together; it doesn't distinguish them. Instance norm, down here, tries to fix this by saying it was a good idea to normalize each feature individually, and takes that to the extreme: it basically normalizes each single channel of a single image on its own. But that loses too much information. Group norm comes along and says: maybe some of the features naturally depend on each other, naturally exhibit the same responses, therefore we should normalize them in groups. So we still take a single image, but we take groups, in this case groups of three channels, together and normalize across that. Now, this here is all in data space. This all normalizes the data, like we said up here when we drew this; this is all normalizing the data before passing it through the next layer. Now, what actually happens in these layers? What happens in a convolutional neural network is that the images get convolved with kernels; that's what a neural network layer is. So if you have an image right here of our trusty cat... I haven't drawn whiskers in a while, that nose is very high, the eyes must be like up here. Sorry, cat. The layer inherently has these things called kernels. Now I'm just going to draw one of these kernels right here. It's a three by three kernel, and what you'll do is slide the kernel across the image, position by position, and at each position you convolve the kernel: you multiply the kernel values with the pixels underneath and sum them up. That means for each position in the image you'll basically get a new value, and that will be your next layer's data point. Now, in these normalization techniques, we usually normalize the data points. So here you have multiple channels, maybe a red, a green and a blue, and so on, and in the intermediate layers you have even more. But you also have multiple kernels: you can see here you have multiple of these kernels, which will then result in multiple output channels.
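Before we move from data space to weight space, here is a minimal sketch of what these data-space variants look like in PyTorch on an (N, C, H, W) tensor (my own illustration of which axes each method reduces over, not code from the paper):

```python
import torch
import torch.nn as nn

x = torch.randn(4, 6, 8, 8)  # (batch N, channels C, height H, width W)

# Batch norm: statistics per channel, across the batch and spatial dimensions.
bn = nn.BatchNorm2d(num_features=6)

# Layer-norm-style over (C, H, W): statistics per sample, across all channels.
ln = nn.GroupNorm(num_groups=1, num_channels=6)

# Instance norm: statistics per sample and per single channel.
inorm = nn.GroupNorm(num_groups=6, num_channels=6)

# Group norm: statistics per sample, over groups of channels (here 3 per group).
gn = nn.GroupNorm(num_groups=2, num_channels=6)

for norm in (bn, ln, inorm, gn):
    print(type(norm).__name__, norm(x).shape)  # the shape stays (4, 6, 8, 8)
```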
The old normalization methods, batch norm, layer norm, group norm, they all work in this space, the space of data. Weight standardization, in contrast, works in kernel space. So weight standardization means you want to normalize the weights of the neural network, not the data. And that's why it can be used in conjunction with something like group norm, or actually batch norm or layer norm; it could be used with any of these, but these authors use it in conjunction with group norm. So what does it do? If you have these kernels... a kernel is actually characterized by four numbers. First of all, there's the height and width of the kernel, which in our case was three by three. Then it is characterized by two more numbers, C-in and C-out, the in channels and the out channels. The in channels is the number of channels that come into the layer, and the out channels is the number of channels that you want to transform that into. So here you can see the in channels are listed here and the out channels are listed here, and the up-down direction, which is not labeled here, is the height and width. So this here would actually be a two by two kernel. Sorry, I said that wrong: one column here, one of these columns, is a two by two filter, and the column behind it and the column next to it, they're all two by two filters, right? So for each input-output channel combination you have a two by two filter; you have an entire matrix of two by two filters, if you can imagine that, across the out direction and across the in direction. Weight standardization says: hey, just as we normalize the data, it might be a good idea to look at the weights for a given output channel. We take one output channel and collect all the filters that transform the input into that one output channel, which is going to be this many times this many times this many numbers, and maybe we should normalize all of these so they don't get out of whack. Because one could imagine the following during training: we initialize our filters randomly, we draw each number from some random distribution, and maybe one number here actually gets very large as we train. That's actually plausible, because after this neural network layer we have this procedure that re-centers the data anyway, right? So I could have a very large weight and multiply the data by it, because the output gets re-centered anyway; but of course, if my weights get large, I basically increase the variance and the instability, the gradients might get high, and so on. So the authors think it might be a good idea to normalize these weights. Just as you normalize the data, you normalize the weights, and this actually turns out to be fairly easy in terms of how you would do it. So usually you transform X, the input to a layer, into Y using W, where W is your actual parameter: you just do X times W, and that gives you Y; this is a convolution operation right here.
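To see which numbers get normalized together, here's a small sketch (my own, with made-up shapes): for a conv weight of shape (C_out, C_in, kH, kW), one mean and one standard deviation are computed per output channel, over all the numbers that feed into that channel.

```python
import torch

# Hypothetical conv weight: 8 output channels, 3 input channels, 2x2 kernels.
w = torch.randn(8, 3, 2, 2)

# One mean and one std per output channel, each computed over the
# 3 * 2 * 2 = 12 numbers belonging to that output channel.
mean = w.mean(dim=(1, 2, 3), keepdim=True)  # shape (8, 1, 1, 1)
std = w.std(dim=(1, 2, 3), keepdim=True)    # shape (8, 1, 1, 1)

w_hat = (w - mean) / (std + 1e-5)           # the standardized weight
print(w_hat.mean(dim=(1, 2, 3)))            # approximately 0 for every output channel
```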
Now you don't do this directly anymore. You take W and first subtract the mean of W (this is per single output channel), then divide by the standard deviation of W, so W-hat = (W - mean(W)) / std(W), and that entire thing you now multiply by X. Now, since these are just deterministic operations, you can actually backpropagate through them. So the forward path of data now looks as follows: my data comes in, I take my layer weights, I first center them, then scale them by their standard deviation, and then I use that standardized weight together with X to obtain my layer output, and I send that to the next layer. Now the backprop signal here is interesting, because it comes in from here and splits up in two ways. Basically, you have to backprop through the X times W-hat operation; we know how to do that, that's just the backprop through a convolution operation, back to the previous layer. Usually, when you backprop through the convolution operation, you get two things: you get the derivative with respect to X and the derivative with respect to the weights, you send both on, and you would update your weights with that gradient. But now, because W-hat is not your actual parameter of the network, you have to take that particular signal and basically backprop it through the standardization and the centering as well before you can apply the gradient to W. That's all doable; modern frameworks will actually do it by themselves. It just means the method introduces two new operations into the forward and into the backprop path that you didn't have before, but I imagine you won't even notice this is happening, it's so fast. So the idea is basically pretty simple, especially since the entire discussion around normalization has already happened. I enjoy that this paper goes into the theory a bit more: they analyze what effect weight standardization has on the Lipschitz constant of the loss, for example, and they also research what contributes more, the centering of the weights or the scaling by the standard deviation. So they run all these ablations where they figure out: if we just do group norm, we have this trajectory here; if we run group norm plus equation five, which is subtracting the mean, you can see the blue and the orange, that is quite a bit of difference; if we only do the dividing by the standard deviation, you can see the curves are pretty close together, but there is a difference; and if you do both, then again there is a difference to only doing the centering. So they say: even though subtracting the mean probably gives you most of the benefit, since it is so easy, you should just do both. And honestly, I think here, in the validation error, it makes basically no difference at all. They do quite a number of these ablations, which I'm not going to go into too much. As for the Lipschitz constant of the loss and the Lipschitz constant of the gradients, they basically show that the loss and the gradients are more well-behaved when you use this weight standardization technique together with group norm. They also do quite a bit of experiments where they show that their method outperforms batch norm, especially in the small batch size regime.
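To put the forward path described earlier into code, here is a minimal sketch of a weight-standardized convolution in PyTorch (my own illustration of the idea, not the authors' official code); autograd takes care of the extra backprop steps through the centering and scaling automatically:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class WSConv2d(nn.Conv2d):
    """Conv2d that standardizes its weight per output channel before convolving."""

    def forward(self, x):
        # self.weight has shape (C_out, C_in, kH, kW); standardize over (C_in, kH, kW).
        mean = self.weight.mean(dim=(1, 2, 3), keepdim=True)
        std = self.weight.std(dim=(1, 2, 3), keepdim=True)
        w_hat = (self.weight - mean) / (std + 1e-5)
        # The convolution itself is unchanged; only the weight is replaced by w_hat.
        return F.conv2d(x, w_hat, self.bias, self.stride,
                        self.padding, self.dilation, self.groups)

# Typical usage: a weight-standardized conv followed by group norm.
layer = nn.Sequential(WSConv2d(3, 16, kernel_size=3, padding=1),
                      nn.GroupNorm(num_groups=4, num_channels=16))
out = layer(torch.randn(2, 3, 32, 32))
print(out.shape)  # torch.Size([2, 16, 32, 32])
```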
And that is something I absolutely believe. Okay, we actually don't even need to go further down here, because if you want the details, I invite you to read the paper; it's a very good paper, I enjoyed reading it. Ultimately they suggest this new method, and I have also seen it replicated across the community a number of times, so it seems to be a thing. I would expect one of two outcomes: either it fizzles out and the community decides it's about the same as batch norm and therefore not worth it, or it becomes rather standard in the future. I believe the latter, since we're also going in the direction of larger models, which means smaller batches per worker, and generally batch norm is a pain. So I'll actually incorporate this, if I can, into my next projects. That was it for me. If you liked this, consider subscribing, consider leaving a like on the video. Thank you for listening; if you have any comments, I will very probably read them. Bye bye!
zt_R85Ife_U
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
[Trash] Automated Inference on Criminality using Face Images
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "trash", "wrong", "phrenology", "physiognomy", "face", "facial", "criminal", "violent", "features", "body", "physical", "visible", "intuition", "smile", "micro", "expression" ]
This paper sets out to build a classifier to distinguish criminals from non-criminals using nothing but a face picture. I explore why the research is trash and what lessons we can learn from it. https://arxiv.org/abs/1611.04135 Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Hi there! Take a look at these faces and try to decide which of these faces are criminals and which ones are law-abiding citizens. I'll give you a second. Okay, got it? So if you decided that these four here are the criminals, you would be correct, and that makes these three the law-abiding citizens. As for this one, maybe the crime is being too cool. Of course, none of these faces actually exist in real life; these are eigenface composites built from the datasets of criminals and non-criminals. Today's paper is an absolute controversy. This is going to get me into so much trouble. So if you see something like this in the news, always, always, always go and check. Now, we're going to look at Automated Inference on Criminality Using Face Images by Xiaolin Wu and Xi Zhang. On a high level, they're trying to separate criminals from non-criminals using face images, so basically using classifiers on ID photos. This, of course, has generated quite the uproar. I suggest we just dive into the paper and look at what's happening right here. They say: we study, for the first time, automated inference on criminality based solely on still face images, which is free of any biases and subjective judgments of human observers. They say: we train a bunch of models, including, as you can see, a CNN, using facial images of one thousand eight hundred and fifty six real persons, controlled for race, gender, age and facial expressions, nearly half of whom were convicted criminals, for discriminating between criminals and non-criminals. So this is the outset; this is the research question here. Now, immediately you have people jumping up saying that's not possible, and I would agree. But I think there are actually very, very interesting lessons to be learned from this paper. So they're saying they actually managed to do this with their classifiers, with all of these classifiers in fact, deep learning of course being the best. Also, some discriminating structural features for predicting criminality have been found by machine learning; so they even tell you why. Above all, the most important discovery of this research is that criminal and non-criminal face images populate two quite distinctive manifolds. The variation among criminal faces is significantly greater than that of non-criminal faces. The two manifolds consisting of criminal and non-criminal faces appear to be concentric, with the non-criminal manifold lying in the kernel with the smaller span, exhibiting a law of normality for faces of non-criminals. Oh, I'm going to be canceled. I don't advocate for this; I'm not a fan of this. In other words: the faces of the general law-abiding public have a greater degree of resemblance compared with the faces of criminals, or criminals have a higher degree of dissimilarity in facial appearance than non-criminals. So basically, what they're saying is that the similarity among the non-criminals in their data set is larger than the similarity among the criminals. Okay, so that's the outset, right? Then they go into the introduction, and we won't go through it fully, but they basically introduce the concept of facial recognition. They try to build up an argument where they say: faces are different; some people have hypothesized that it's possible to infer personality traits from facial features; and some studies exist that show that people agree on the perception of these traits.
So not the actual traits, but people will kind of agree that a face looks extroverted or more agreeable; people tend to agree that the appearance exists. And then they make the next step and say: okay, can facial features also be used not just for predicting the perceived appearance, but to predict the actual personality trait? For validating the hypothesis on the correlations between the innate traits and social behaviors of a person and the physical characteristics of that person's face, it would be hard pushed to find a more convincing experiment than examining the success rate of discriminating between criminals and non-criminals. So actually, you could agree with this, right? Since this is sort of a distinction one can make about behavior: whether or not someone breaks the law, or in this case is caught and convicted, and so on. There are many, many hurdles in this, but in essence the statement sort of makes sense. If you could actually do this from facial features, that would be, first of all, very surprising and, second of all, very drastic. People immediately jump to the conclusion that, if such a thing were found, it means you could somehow predict criminality in advance, which I don't think has to follow, because something else could also be the case. They have a quote from Aristotle right here: it is possible to infer character from features, if it is granted that body and soul are changed together by the natural affections. One interpretation of mine is this: let's say you break the law, for whatever reason; it could even be completely moral, like you steal the medicine for the old lady in your house. But you know you broke the law, you know you did something that society doesn't want you to do, and that will exert stress on you. You now have to lie to people about this, you have to make sure you're not caught, you have to worry, maybe there's a security tape or something like this. And we know that stress will physically change you, and that could in turn show in your features. For example, the stress of being in jail could change your physical features, and since these are all convicted criminals, one might think that it might be possible. It might. Again, not saying it is; it might. So if we throw away all of the prejudgments on this, it could be an interesting research question, right? Could. Now, whether we want to pursue it or not, that's a different question. But the way they build this up here is as if they only have the best of intentions in mind, and I feel like that might not be the case. So they say something like this right here: at the onset of this study, our gut feeling is that modern tools of machine learning and computer vision will refute the validity of physiognomy, although the outcomes turn out otherwise. And this is the part where I just stopped believing that their intentions were all good, that it's just about disproving this so we can lay it to rest, because they then very quickly switch when they find something else. Non-criminals are the normals, and the criminals are, like, the deviation; that just rubs me the wrong way, where you'll have to say: no. It's like the KKK saying: oh no, you know, we have many social gatherings, and our gut feeling is that people aren't really different; and the robes are actually personal protective equipment; it's all actually just a community thing; we all have good intentions.
Oh, and every now and then we lynch someone. So I'm going into this with sort of a mixed bag of feelings, where you'd have a hypothetically valid research question. But even the introduction makes it very clear, because it's somewhat over the top in promising to just be neutral and well-intended. Not going to fall for it, sorry. In order to conduct their experiments, they have one thousand eight hundred and fifty six ID photos satisfying the following criteria: Chinese, male, between the ages of 18 and 55, no facial hair, no facial scars or other markings. The data set is called S, and there are two subsets: S-n for non-criminals and S-c for criminals. The non-criminal subset contains ID photos of one thousand one hundred and twenty six non-criminals that were acquired from the Internet using a web spider tool. They're from a wide gamut of professions and social statuses, including waiters, construction workers, blah, blah, blah. Okay. The subset of the criminals contains ID photos of seven hundred and thirty criminals, of which three hundred and thirty are published as wanted suspects by the Ministry of Public Security of China and by the Departments of Public Security for the provinces of Guangdong, Jiangsu, Liaoning, et cetera. The others are provided by a city police department in China under a confidentiality agreement. And here's an important point: they stress that the criminal face images in S-c are normal ID photos, not police mugshots. They also distinguish violent crimes from non-violent crimes, and so on. So they have these examples here of those images; the top ones are the criminals and the bottom ones are the non-criminals. Now, people immediately see differences here. And if you spotted that all of these have white collars and none of those have white collars, then you would be correct. Well, you're on the right path; you're not actually correct, but you're on the right path, because what they actually do is mask away the collars. They only extract the face part and the upper neck part, so this white collar part will actually not be on the image that they analyze, to control for clothing, which is good. But it gives you sort of an indication that the origins of the two image groups might not actually be the same. So what you'll have is basically two databases of criminals: one database is the wanted suspects, let's call them W, released by the police; the other database is the convicted criminals, let's call that C. And on the other side, you have the database of non-criminals, and the non-criminals come from the Internet. So you have three different databases, and of course these two together are going to make up the criminals and this one will make up the non-criminals. And herein lies the problem, right? Even though the white collars are masked out, you have to make sure that whatever you find isn't just a property of how you collected the data, and this doesn't really come through in this paper. So they do data preparation: again, they mask, they resize, and so on. They stress again that all are ID images with frontal lighting. Okay, so now they test the classifiers. They say: we test logistic regression, KNN, SVM and CNN on the image data set. For the CNN, you can just input the original image, but for the other classifiers, you need a set of features.
What they do is concatenate three different image feature vectors. The first one is facial landmark points that you extract with some sort of tool; you can extract, whatever, the corners of the mouth and so on. The second is a facial feature vector generated by a modular PCA. And the third is a facial feature vector based on local binary pattern histograms. So these are the sort of face features that people use for recognizing faces. They concatenate them, that gives you a feature vector, and you feed that into the machine learning algorithm. And they say: we perform a tenfold cross validation for all possible combinations of the three feature classifiers and the four types of feature vectors, plus the data-driven CNN. So they do a tenfold cross validation, which basically means you partition your data into 10 parts, you take nine to train and predict on the one you left out, then you take the next nine to train and predict on the newly left-out one, and so on. This way you get a train-test split across all sorts of splits of your data, which is a valid thing to do. And they discover here that their CNN classifier performs at almost 90 percent accuracy, as you can see here, and even their SVM and the other classifiers perform fairly well in recognizing these criminality faces. And they analyze the ROC curves, and by the ROC curves, this really is a classifier that works, right? You can see that the other models, but especially the CNN classifier here, work really well. Of course, the question is: what does it work for? They basically say: all right, we now have a classifier that distinguishes criminals from non-criminals. And I would say: you have a classifier that discriminates your particular pictures of criminals from your particular pictures of non-criminals. If this were submitted to me as a reviewer, I would expect that any sane author would then go and try to invalidate that. So here's what you'd have to do if you want to convince me that this is not just due to how you collected your data. You need to go and say: okay, I have these different methods of collecting data right here. Now, maybe I can go to the police and ask them for a picture from the same database of someone who is not a criminal: someone that was arrested, but then not convicted. That gives me someone from this source that I can put in the non-criminal data set, and then you have to show me that your classifier correctly predicts that that's a non-criminal; if it predicts it's a criminal, the effect is due to the data set. You can also take one of the criminals, but find their picture on the Internet, like you collected the non-criminals; that will give you someone from this database in that data set, and then you have to show me that your classifier correctly predicts that's a criminal. You could further convince me by showing that your classifier is neutral to this separation right here, between the wanted and the convicted criminals, because they all should be criminals; if your classifier is neutral to that, it basically doesn't care where the picture comes from. This would be a weaker argument, but still one that one could investigate. So what do they do to validate their method? Here is where it gets funky. They say: given the high social sensitivities and repercussions of our topic and skeptics on physiognomy, we try to exercise maximum caution before publishing our results. Yeah, you failed.
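As an aside, the tenfold cross validation itself is easy to set up. Here is a minimal sketch with scikit-learn on made-up feature vectors (my own illustration, nothing to do with their actual data), roughly the protocol they describe:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Made-up stand-in for their concatenated face feature vectors and labels.
rng = np.random.default_rng(0)
features = rng.normal(size=(200, 64))
labels = rng.integers(0, 2, size=200)

# 10-fold CV: train on 9 parts, evaluate on the held-out part, 10 times over.
scores = cross_val_score(SVC(kernel="rbf"), features, labels, cv=10)
print(scores.mean())  # with random features and labels, this hovers near 0.5
```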
In playing devil's advocate, we design and conduct the following experiments to challenge the validity of the tested classifiers for the task of discriminating between criminals and non-criminals. All right, this is it, right here. Here is where you tell us that it's not because of how you collected the data, which is the obvious explanation. We randomly label the faces in the very sample set as negative and positive instances with equal probability and redo all the above experiments of binary classification. How crazy is this? They're basically saying: well, if our classifier were not a criminality classifier, we could invalidate it by shuffling the labels; and if that comes out to 50-50, then our classifier obviously works, because it's not 50-50 on this data set. So basically, they're just validating that a classification algorithm can classify something. The criticism here was never that they haven't trained a working classifier; the criticism is: what have they trained a classifier for? But their entire validation procedure basically amounts to: we don't have a bug in our code. The outcomes show that the randomly generated negative and positive instances cannot be distinguished at all. Gee, who would have guessed: a classifier trained on random labels doesn't generalize. They go on: in fact, we go much further along the self-critical path (all right, here it comes) and carry out the same experiments for random labeling on different samples of the same size and with the same variable control, only this time the selection criteria are standard ID photos of Chinese females, young or middle-aged, or standard ID photos of Caucasian males or Caucasian females, young or middle-aged, no facial hair. So basically: if you train on a randomly labeled data set of any sort of pictures, your classifier will not work. Thanks. I think that's the academically most valid statement in the entire paper. Oh man: in none of the three cases did any of the four classifiers manage to achieve a true positive rate higher than 53 percent on randomly labeled positive and negative instances. So the classifier must be valid, because... okay: the above experiments rule out that the good accuracies of the four evaluated classifiers in face inference on criminality are due to data overfitting. No. Otherwise, given the same sample size, they would also be able to distinguish between randomly labeled positive and negative instances with significantly better chances. But they did cross validation! They did. The cross validation already controls for overfitting; no one criticizes them for overfitting. These people have no idea what they're doing. They have no clue of machine learning, they don't know what the problems with their methods are, they don't know what overfitting is and how you control for it. The big jump of the true positive rate from random labeling to truth labeling on the same set of face images can only be explained by intrinsic separability of S-c and S-n. That is true. That is true! But why are they separable? That's the question. Okay: as different source cameras generated the ID photos in the set S... now they might be on the right track here. Different source cameras! Maybe they get the idea that different data sources lead to different results. They might leave their signatures that, although below perception threshold in signal strength, could mislead machine learning. So they're basically saying different cameras could generate different sorts of artifacts.
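To make the actual objection concrete, here is a toy sketch (entirely my own, with made-up numbers) of how a classifier can look great under exactly their validation protocol while only having learned a data-collection artifact, for example a systematic difference in expression between the two sources:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Two sources of otherwise identical "face features"; source B's photos carry a
# collection artifact (say, a smilier expression), shifted in one feature.
source_a = rng.normal(size=(500, 10))  # stands in for the criminal ID photos
source_b = rng.normal(size=(500, 10))  # stands in for the internet photos
source_b[:, 0] += 1.5                  # the expression/collection artifact

x = np.vstack([source_a, source_b])
y = np.array([0] * 500 + [1] * 500)    # the label coincides with the source

# High cross-validated accuracy, and shuffled labels would give ~0.5, so the
# paper's checks pass; yet the model has only learned how the data was collected.
print(cross_val_score(LogisticRegression(), x, y, cv=10).mean())
```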
And they rule this out by adding noise to the images, such that these artifacts would be washed out, and the noise doesn't change their results. Gee, they were so close. They were so close to actually doing something useful. Okay, this section is where it gets even more interesting. Now they're trying to read off from their classifier what the actual features are that make criminals criminals. So, discriminating features right here: having obtained the above strong empirical evidences for the validity of automated face-induced inference on criminality, one cannot resist the following intriguing questions: what features of a human face betray its owner's propensity for crimes? Okay, Shakespeare. And they basically go the explainability route, where they look at what the classifier pays attention to. It turns out the classifier pays attention to the following features; on the left you can see where the classifier pays attention to. No surprise here, it pays attention to face features, but they parse out the following three. First, d, the distance between the eyes, tends to be smaller in criminals than in non-criminals. Second, the angle between the nose and the corners of the mouth tends to be smaller in criminals than in non-criminals. And third, the curvature of the upper lip tends to be higher in criminals than in non-criminals. So let's try, just from this information, to draw the ultimate criminal and non-criminal faces. First, the non-criminal; let's draw the non-criminal as just regular. I'm not very good at this. So here's the nose, and then let's just draw the lips like this. Non-criminal. Perfect. Looks like a law-abiding citizen to me. Criminal, right here. So the eyes are closer together, here's the nose, and the curvature of the upper lip is higher. Hmm. And then the angle between the nose and the outer corners of the mouth is smaller. How can I make the angle smaller? Could it be that if I... oh yes. Ah, that's the trick. Criminal, ladies and gentlemen. So are you telling me that all someone has to do to be a criminal is frown? Yeah, totally valid. They're so close, right? But they say: oh, these are intrinsic facial features. Come on. All right, they go on to show some histogram differences of these features; they basically say these features are what's responsible for the separation. And then they do face clustering, which is beautiful. First of all, they take the average faces for criminals and non-criminals, and these are the average faces: the top are the actual average eigenfaces, and the bottom is when you shift the facial landmarks around. The seeming paradox that S-c and S-n can be classified, but their average faces appear almost the same... The average faces appear almost the same. What a paradox. These are almost the same! I mean, if I just overlay them one over another, they're almost the same. There is no difference at all. I don't see a difference. What could possibly be the difference? I don't think these are the most honest of intentions. So they do some clustering, which I find interesting. I find it interesting, for example, that they don't really explain isomap here. Isomap uses the geodesic distance between two points on the manifold, which is approximated by the shortest path through a neighborhood graph, the sum of the edge weights along that path. So they're kind of wishy-washy about isomap.
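For reference, a minimal isomap call looks like this in scikit-learn (my own sketch on random stand-in data); the geodesic distances are approximated by shortest paths through a k-nearest-neighbor graph:

```python
import numpy as np
from sklearn.manifold import Isomap

x = np.random.default_rng(0).normal(size=(100, 32))  # stand-in feature vectors

# Geodesic distances are approximated via shortest paths in the 5-NN graph.
embedding = Isomap(n_neighbors=5, n_components=2).fit_transform(x)
print(embedding.shape)  # (100, 2)
```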
But they then explain k-means in great detail, with formulas. And again, I mean, OK, non-machine-learning people can do machine learning. That's fine. But they're not really into the matter here. And they try k-means clustering. And they find, in their opinion, four clusters of criminals and three clusters of non-criminals. Now, why three and four? Usually, you can decide something like this by clustering and then measuring the residual variance in your data. So how much does one cluster explain, two clusters, and so on? So here you can see the curves for non-criminals and criminals. Now, they claim that the optimal number of clusters here for non-criminals is three, which makes no sense to me. Like, why three? What you usually want to find is kind of a kink in your curve. Like, if it's steep and then it gets flat, that means that up until then, your clusters actually buy you something good, and from then on, they're basically useless. So if I were to guess, I would divide the criminals into two clusters and the non-criminals into a single cluster, because that curve is pretty flat. Certainly not the non-criminals into three and the criminals into four. That makes no sense at all. Like, why? And they say, OK, these are the clusters right here. And these are the pictures I showed you at the beginning. What a surprise: the bottom ones, the non-criminals, are smiling, and the top ones aren't. Gee, I wonder why the method works. And the interesting part here is: if we decide on one cluster for non-criminals and two clusters for criminals, what does that remind us of? Oh, yes, that is exactly how they collected the data. That is exactly the fact that they collected the non-criminals with one procedure and the criminals with two different procedures. Gee, their cluster structure replicates exactly how they collected the data. And that convinces me that it says absolutely nothing about the actual criminality of people. It's just that in police-issued photos, even if they're ID photos, people don't smile, and in pictures on the internet, sometimes people smile. The rest of the paper is pretty much garbage. They did reply to critics, and they kind of take issue with a number of things. So first, name-calling. I don't mean to name-call, but it's going to happen. I don't get why people call them racist, because it's all the same race in the data set anyway. And then, the smiling critique. Ha. In our experiments, we did control for facial expression, but not faint micro-expressions. The critique that our methods can be reduced to a simple discriminator of smiling versus not smiling has given us a new angle of scrutiny. They say, well, Westerners think that this is smiling, but our Chinese students and colleagues, even after being prompted to consider the cue of smile, fail to detect the same. So basically, their answer is: yeah, you think so, but we don't. And then they say instead, they only find the faces in the bottom row appearing somewhat more relaxed than those in the top row. And then here's the crucial part. All criminal ID photos are government issue, but not mugshots. They are normal government-issue ID portraits, like driver's licenses in the USA. In contrast, most of the non-criminal ID-style photos are taken officially by some organizations, such as real estate companies, law firms, et cetera, for their websites. You know what it always says when you take your picture for a government ID? Please don't smile. Imagine if your law firm comes to you and says, we want a picture for our website. Please don't smile. 
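By the way, the residual-variance curve I was describing for picking the number of clusters is simple to compute yourself. Here is a rough sketch, assuming a data matrix X and using k-means inertia (within-cluster sum of squares) as the residual variance; picking k at the kink of this curve is the usual heuristic, and it's exactly the heuristic their three-and-four choice fails.

import numpy as np
from sklearn.cluster import KMeans

def inertia_curve(X, max_k=8, seed=0):
    # Within-cluster sum of squares for k = 1..max_k. Look for the kink:
    # the k beyond which adding clusters stops buying a large drop.
    return np.array([
        KMeans(n_clusters=k, n_init=10, random_state=seed).fit(X).inertia_
        for k in range(1, max_k + 1)
    ])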
All right, this was it for this paper. If you like this content, please consider subscribing and sharing it out. This is absolute garbage. And there are important lessons to learn here, namely that neglecting Occam's razor is a real problem in research. People often fail to criticize themselves enough and to think: is there maybe a different explanation for why I'm getting the results that I'm getting? And how can I disprove that that is the case? And how can I make sure that the effect that I'm seeing actually comes from the place where I claim it comes from? I think this is a real threat throughout all of research. I've seen many papers that I've reviewed that fall into exactly the same fallacy, not on as touchy subjects as this one, but it definitely exists. And I remind everyone to learn a lesson from this. And have a good day. Thank you.
[ { "end": 10.32, "start": 0, "text": " Hi there, take a look at these faces. Try to decide which of these faces are criminals and which ones are law-abiding citizens." }, { "end": 12.32, "start": 10.32, "text": " I'll give you a second." }, { "end": 20.52, "start": 13.72, "text": " Okay, got it. So if you decided that these four here are the criminals, you would be correct." }, { "end": 23.92, "start": 20.52, "text": " And that makes these three the law-abiding citizens." }, { "end": 28.76, "start": 23.92, "text": " As for this one, maybe if the crime is being too cool." }, { "end": 31.720000000000002, "start": 28.76, "text": " Of course, none of these faces actually exist in real life." }, { "end": 38.6, "start": 31.720000000000002, "text": " These are compositions of eigenfaces, of datasets, of criminals and non-criminals." }, { "end": 45.8, "start": 38.6, "text": " Today's paper is an absolute controversy. This is going to get me into so much trouble." }, { "end": 51, "start": 45.8, "text": " So if you see something like this in the news, always, always, always go and check." }, { "end": 59.96, "start": 51, "text": " Now we're going to look at automated inference on criminality using face images by Xiaolin Wu and Xi Cheng." }, { "end": 66.84, "start": 59.96, "text": " On a high level, they're trying to separate criminals from non-criminals using face images." }, { "end": 71.6, "start": 66.84, "text": " So basically using classifiers on ID photos." }, { "end": 75.44, "start": 71.6, "text": " This, of course, has generated quite the uproar." }, { "end": 80.32, "start": 75.44, "text": " I suggest we just dive into the paper and look at what's happening right here." }, { "end": 88.24, "start": 80.32, "text": " We study for the first time automated inference on criminality based solely on still face images," }, { "end": 95.55999999999999, "start": 88.24, "text": " which is free of any biases and subjective judgments of human observers." }, { "end": 100.16, "start": 95.55999999999999, "text": " So they say we train a bunch of models, including, as you can see, a CNN," }, { "end": 109.32, "start": 100.16, "text": " using facial images of one thousand eight hundred and fifty six real persons controlled for race, gender, age and facial expressions." }, { "end": 117.96, "start": 109.32, "text": " Nearly half of whom were convicted criminals for discriminating between criminals and non-criminals." }, { "end": 122.32, "start": 117.96, "text": " So this is the outset. This is the kind of research question here." }, { "end": 128.12, "start": 122.32, "text": " Now, immediately you have people jumping up saying that's not possible." }, { "end": 130.32, "start": 128.12, "text": " And I would agree." }, { "end": 137.24, "start": 130.32, "text": " But I think actually there are very, very interesting lessons to be learned from this paper." }, { "end": 143.52, "start": 137.24, "text": " So they're saying they actually managed to do this with their classifiers, actually with all of these classifiers." }, { "end": 145.68, "start": 143.52, "text": " Of course, deep learning being the best." }, { "end": 152.68, "start": 145.68, "text": " Also, some discriminating structural features for predicting criminality have been found by machine learning." }, { "end": 154.96, "start": 152.68, "text": " So they even tell you why." 
}, { "end": 165.20000000000002, "start": 154.96, "text": " Above all, the most important discovery of this research is that criminal and non-criminal face images populate two quite distinctive manifolds." }, { "end": 172.32, "start": 165.2, "text": " The variation among criminal faces is significantly greater than that of non-criminal faces." }, { "end": 181.76, "start": 172.32, "text": " The two manifolds consisting of criminal and non-criminal faces appear to be concentric with the non-criminal manifold lying in the kernel" }, { "end": 190.95999999999998, "start": 181.76, "text": " with the smaller span exhibiting a law of normality for faces of non-criminals." }, { "end": 194.79999999999998, "start": 190.95999999999998, "text": " Oh, I'm going to be canceled." }, { "end": 196.48000000000002, "start": 194.8, "text": " I don't advocate for this." }, { "end": 200.56, "start": 196.48000000000002, "text": " This is not, this is not, I'm not a fan of this." }, { "end": 211.96, "start": 200.56, "text": " Just in other words, the faces of general law abiding public have a greater degree of resemblance compared with the faces of criminals." }, { "end": 217.56, "start": 211.96, "text": " Or criminals have a higher degree of dissimilarity in facial appearance than non-criminals." }, { "end": 228.08, "start": 217.56, "text": " So basically what they're saying is that the this kind of similarity among the non-criminals in their data set is larger than the similarity among the criminals." }, { "end": 231.76, "start": 228.08, "text": " OK, so already the outset, right?" }, { "end": 241.4, "start": 231.76, "text": " Then they go into this introduction and in the introduction we won't go through it fully, but they basically introduce the concept of facial recognition." }, { "end": 246.36, "start": 241.4, "text": " They try to build up kind of an argument where they say faces are different." }, { "end": 254.96, "start": 246.36, "text": " Some people have hypothesized that it's possible to infer personality traits from facial features." }, { "end": 261.52000000000004, "start": 254.96, "text": " Some studies exist that show that people agree on the perception of these traits." }, { "end": 269.44, "start": 261.52000000000004, "text": " So not the actual traits, but people will kind of agree that a face looks extroverted or more agreeable." }, { "end": 273, "start": 269.44, "text": " People tend to agree that the appearance exists." }, { "end": 286.08, "start": 273, "text": " And then they sort of make the next step and say, OK, can facial features also be used not just for predicting the appearance, but to predict the actual personality trait?" }, { "end": 297.16, "start": 286.08, "text": " For validating the hypothesis on the correlations between the innate traits and social behaviors of a person and the physical characteristics of that person's face," }, { "end": 306.52000000000004, "start": 297.16, "text": " it would be hard pushed to find a more convincing experiment than examining the success rate of discriminating between criminals and non-criminals." }, { "end": 310.24, "start": 306.52000000000004, "text": " So actually, you could agree with this, right?" }, { "end": 319.8, "start": 310.24, "text": " Since this is sort of a distinction one can make about behavior, whether or not someone breaks the law or in this case is caught and convicted and so on." }, { "end": 322.48, "start": 319.8, "text": " There are like many, many hurdles in this." 
}, { "end": 325.88, "start": 322.48, "text": " In essence, the statement sort of makes sense." }, { "end": 335.88, "start": 325.88, "text": " Like if you could actually do this from facial features, that would be very, first of all, very surprising and second of all, very drastic." }, { "end": 344.44, "start": 335.88, "text": " People immediately jump to the conclusion that, OK, if such a thing were found, that means you could somehow precognate criminality," }, { "end": 354.56, "start": 344.44, "text": " which I don't think it has to be, because what could also be the case is they have a quote from Aristotle right here." }, { "end": 365.16, "start": 354.56, "text": " It is possible to infer character from features if it is granted that body and soul are changed together by the natural affections." }, { "end": 371.08, "start": 365.16, "text": " One interpretation of me is that, let's say you break the law for whatever, it could be completely moral," }, { "end": 378.28, "start": 371.08, "text": " like you steal the medicine from the old lady in your house and but you know you broke the law," }, { "end": 383.64, "start": 378.28, "text": " you know you did something that society doesn't want you to do and that will exert stress on you." }, { "end": 386.32, "start": 383.64, "text": " You now have to lie to people about this." }, { "end": 389.15999999999997, "start": 386.32, "text": " You now have to sort of make sure you're not caught." }, { "end": 392.76, "start": 389.15999999999997, "text": " You have to worry. Maybe there's a security tape or something like this." }, { "end": 398.64, "start": 392.76, "text": " And the stress will, we know that stress will physically change you." }, { "end": 402.47999999999996, "start": 398.64, "text": " And that could be in turn made out by your features." }, { "end": 407.15999999999997, "start": 402.47999999999996, "text": " For example, the stress of being in jail could change your physical features." }, { "end": 413.36, "start": 407.15999999999997, "text": " And since these are all convicted criminals, one might think that it might be possible." }, { "end": 418.36, "start": 413.36, "text": " It might. Again, not saying it is, it might." }, { "end": 428.28000000000003, "start": 418.36, "text": " So if we throw away all of the kind of prejudgments on this, it could be an interesting research question, right?" }, { "end": 432.8, "start": 428.28000000000003, "text": " Could. Now, whether we want to pursue it or not, that's a different question." }, { "end": 437.6, "start": 432.8, "text": " But the way they build this up here is that they only have the best of intentions in mind." }, { "end": 440.44, "start": 437.6, "text": " I feel like this might not be the case." }, { "end": 442.92, "start": 440.44, "text": " So they say something like this right here." }, { "end": 454.24, "start": 442.92, "text": " At the onset of this study, our gut feeling is that modern tools of machine learning and computer vision will refute the validity of physiognomy," }, { "end": 458.08000000000004, "start": 454.24, "text": " although the outcomes turn out otherwise." }, { "end": 463.44, "start": 458.08000000000004, "text": " This and this is the part where I just stopped believing them that their intentions were like all good." }, { "end": 471.36, "start": 463.44, "text": " And it's just about disproving this so we can just lay it to rest because they then very quickly switch when they find something else." 
}, { "end": 480.48, "start": 471.36, "text": " Non criminals are the normals and the criminals are like the that just rubs me the wrong way where you'll have to say no." }, { "end": 488.6, "start": 480.48, "text": " It's like the Pekuk looks like, oh, no, we, you know, we have many social gatherings and our gut feeling is that people aren't really different." }, { "end": 492.04, "start": 488.6, "text": " And the robes are actually personal protective equipment." }, { "end": 494.52000000000004, "start": 492.04, "text": " It's all actually just a community thing." }, { "end": 497.6, "start": 494.52000000000004, "text": " We all have, you know, good intentions." }, { "end": 507.64000000000004, "start": 497.6, "text": " Oh, and every now and then we lynch you guys going into this with sort of a mixed bag of feelings where you'd have a hypothetically valid research question." }, { "end": 517.5600000000001, "start": 507.64000000000004, "text": " But also even the introduction makes it very clear because it's somewhat over the top promising to just be neutral and be good, good intended." }, { "end": 519.12, "start": 517.5600000000001, "text": " Not going to fall for it. Sorry." }, { "end": 524.8000000000001, "start": 519.12, "text": " They say in order to conduct our experiments, they have one thousand eight hundred and fifty six ID photos." }, { "end": 532.3199999999999, "start": 524.8, "text": " The following criteria, Chinese male between ages of 18 and 55, no facial hair, no facial scars or other markings." }, { "end": 534.52, "start": 532.3199999999999, "text": " And the data set is called S." }, { "end": 541.28, "start": 534.52, "text": " Then there's two subsets, S.N. for non criminals and S.C. for criminals." }, { "end": 552.1999999999999, "start": 541.28, "text": " The non criminals contains ID photos of alone hundred twenty six non criminals that were acquired from the Internet using the web spider tool." }, { "end": 557.5200000000001, "start": 552.2, "text": " They're from a wide gamut of professions and social status, including waiters, construction work, blah, blah, blah." }, { "end": 571.24, "start": 557.5200000000001, "text": " OK. The subset of the criminals contains ID photos of seven hundred and thirty criminals of which three hundred and thirty as are published as wanted suspects by the Ministry of Public Security of China" }, { "end": 580.32, "start": 571.24, "text": " and by the Departments of Public Security for the provinces of Guangdong, Jiangsu, Liaoning, et cetera." }, { "end": 586.48, "start": 580.32, "text": " The others are provided by city police department in China under a confidentiality agreement." }, { "end": 589.6800000000001, "start": 586.48, "text": " We stress and here here's an important point." }, { "end": 599.88, "start": 589.6800000000001, "text": " We stress that the criminal face images in S.C. are normal ID photos, not police mugshots." }, { "end": 608.2, "start": 599.88, "text": " So they say they have violent crimes or nonviolent crimes." }, { "end": 612.9200000000001, "start": 608.2, "text": " And so on. So they have these examples here of those images." }, { "end": 620.44, "start": 612.9200000000001, "text": " So the top ones are the criminals and the bottom ones are the non criminals." }, { "end": 625.24, "start": 620.44, "text": " Now, people immediately see differences here." 
}, { "end": 635.4000000000001, "start": 625.24, "text": " And if you spotted that all of these have white colors and none of those have white colors, then you would be correct." }, { "end": 638.3199999999999, "start": 635.4, "text": " Now, you're on the right path. You're not actually correct." }, { "end": 646, "start": 638.3199999999999, "text": " Correct. But you're on the right path here because actually what they do is they they mask away the colors." }, { "end": 651.64, "start": 646, "text": " So they only extract the face part and the upper neck part." }, { "end": 660.68, "start": 651.64, "text": " So these this white collar part will actually not be on the image that they analyze to control for clothing, which is good." }, { "end": 672.16, "start": 660.68, "text": " But it gives you sort of an indication that the origins of the two image groups might not actually be the same." }, { "end": 687.12, "start": 672.16, "text": " So what you'll have is you'll have basically a database, actually have two databases of criminals, which are so the one database is this wanted." }, { "end": 693.24, "start": 687.12, "text": " Let's call them W. These are released by the police for wanted criminals." }, { "end": 698.4, "start": 693.24, "text": " Then the others database is the convicted criminals." }, { "end": 709.76, "start": 698.4, "text": " Let's call that C. And then on the other side, you have the database of non criminals and the non criminals come from the Internet." }, { "end": 711.84, "start": 709.76, "text": " So you have three different databases." }, { "end": 718.88, "start": 711.84, "text": " And of course, these two make will going to make up the criminals and this will make up the non criminals." }, { "end": 724.12, "start": 718.88, "text": " And the herein lies the problem, right?" }, { "end": 735.12, "start": 724.12, "text": " You even though the white collars are masked out, you have to make sure that whatever you find isn't just a property of how you collected the data." }, { "end": 739.5600000000001, "start": 735.12, "text": " And this doesn't really come through in this paper." }, { "end": 745.52, "start": 739.56, "text": " So they they do data preparation as again, they mask, they resize and so on." }, { "end": 749.56, "start": 745.52, "text": " They stress again, all our idea images with frontal lighting." }, { "end": 756.9599999999999, "start": 749.56, "text": " So, yeah. And they OK, so now they test the classifiers." }, { "end": 768.1199999999999, "start": 756.9599999999999, "text": " So they say we test logistic regression, logistic regression, KNN, SVM and CNN on the image data set." }, { "end": 772.64, "start": 768.12, "text": " So for the CNN, you can just input the original image." }, { "end": 775.96, "start": 772.64, "text": " But for the other classifiers, you need a set of features." }, { "end": 781.28, "start": 775.96, "text": " And what they do is they concatenate three different image feature vectors." }, { "end": 787.72, "start": 781.28, "text": " So the first one is facial landmark points that you extract by some sort of tool." }, { "end": 792.36, "start": 787.72, "text": " You can extract whatever corners of mouth and so on." }, { "end": 805.44, "start": 792.36, "text": " Then the second facial feature vector generated by a modular PCA. And the third is a facial feature vector based on local binary pattern histograms." }, { "end": 811.24, "start": 805.44, "text": " So these are these are sort of face features that people use for recognizing faces." 
}, { "end": 815.8000000000001, "start": 811.24, "text": " They concatenate them. That gives you a feature vector. You feed that into the machine learning algorithm." }, { "end": 826.7199999999999, "start": 815.8, "text": " And they do a we perform a tenfold cross validation for all possible combinations of three feature classifiers and the four types of feature vectors plus the data driven CNN." }, { "end": 834.76, "start": 826.7199999999999, "text": " So they do a tenfold cross validation, right, which basically means you do you partition your data into 10 parts." }, { "end": 837.4799999999999, "start": 834.76, "text": " You take nine to train, predict the one." }, { "end": 841.4799999999999, "start": 837.4799999999999, "text": " Then you take the next nine to train, predict the one that you left out and so on." }, { "end": 851.64, "start": 841.48, "text": " But this kind of you get a train test split across all sorts of splits of your data, which is a it's a you know, it's a valid thing to do." }, { "end": 861.04, "start": 851.64, "text": " And they discover here that their CNN classifier performs at almost 90 percent accuracy, as you can see here." }, { "end": 873.76, "start": 861.04, "text": " And even their SVM and the other classifiers, they perform fairly well in recognizing these criminality faces." }, { "end": 879.56, "start": 873.76, "text": " So. And they analyze the ROC curves and the ROC curves." }, { "end": 882.8, "start": 879.56, "text": " This is a really this is a classifier that works right." }, { "end": 892.16, "start": 882.8, "text": " So you can see in the the the other models, but especially the CNN classifier here, works really well." }, { "end": 898.04, "start": 892.16, "text": " Of course, the question is, what does it work for?" }, { "end": 903.92, "start": 898.04, "text": " So they basically say, all right, we now have a classifier that distinguishes criminals from non-criminals." }, { "end": 912.52, "start": 903.92, "text": " And I would say you have a classifier that discriminates your particular pictures of criminals from your particular pictures of non-criminals." }, { "end": 924.1999999999999, "start": 912.52, "text": " And if this were submitted to me as a reviewer, I would expect that any sane author would then go and try to invalidate that." }, { "end": 930.56, "start": 924.1999999999999, "text": " So here's what you'll have to do if you want to convince me that this is not just due to how you collected your data." }, { "end": 939.4, "start": 930.56, "text": " You need to go and you need to basically say, OK, I have these different methods of collecting data right here." }, { "end": 947.68, "start": 939.4, "text": " Now, maybe I can go to the police and ask them for a picture from the same database of a non convicted," }, { "end": 952.52, "start": 947.68, "text": " not of a non-criminal, someone that was arrested, but then not convicted." }, { "end": 958.1999999999999, "start": 952.52, "text": " And I can have someone from from here." }, { "end": 965.1999999999999, "start": 958.1999999999999, "text": " That can put in that data set, and then you have to show me that your classifier will correctly predict that that's a non-criminal." }, { "end": 970.2, "start": 965.2, "text": " And if it predicts it's a criminal, it's due to the data set." }, { "end": 978.5600000000001, "start": 970.2, "text": " You can also find one of the criminals, but find their picture on the Internet, like you collected the non-criminals." 
}, { "end": 983.0400000000001, "start": 978.5600000000001, "text": " And that will give you someone from this database in that data set." }, { "end": 988.1600000000001, "start": 983.0400000000001, "text": " And then you have to show me that your classifier correctly predicts that's a criminal." }, { "end": 998.56, "start": 988.16, "text": " You can further convince me that your classifier is neutral to this separation right here of the wanted and convicted criminals," }, { "end": 1001.36, "start": 998.56, "text": " because they all should be criminals." }, { "end": 1007.8399999999999, "start": 1001.36, "text": " So if your classifier is neutral to that, then it basically doesn't care where it comes from." }, { "end": 1013.6, "start": 1007.8399999999999, "text": " So this will be a weaker argument, but still one that one could investigate." }, { "end": 1017.16, "start": 1013.6, "text": " What do they do for validating their method?" }, { "end": 1020.28, "start": 1017.16, "text": " Here is where it gets funky." }, { "end": 1028.48, "start": 1020.28, "text": " So they say, given the high social sensitivities and repercussions of our topic and skeptics on physiognomy," }, { "end": 1034.8, "start": 1028.48, "text": " we try to exercise maximum caution before publishing our results." }, { "end": 1036.56, "start": 1034.8, "text": " Yeah, you failed." }, { "end": 1047, "start": 1036.56, "text": " In playing devil's advocate, we design and conduct the following experiments to challenge the validity of the tested classifiers." }, { "end": 1050.6, "start": 1047, "text": " For the task of discriminating between criminals and non-criminals." }, { "end": 1052.52, "start": 1050.6, "text": " All right, this is it right here." }, { "end": 1059.04, "start": 1052.52, "text": " Here is where you give us where you tell us it's not because of how we collected the data." }, { "end": 1062.96, "start": 1059.04, "text": " Which is the obvious explanation." }, { "end": 1070.8, "start": 1062.96, "text": " We randomly label the faces in the very sample set as negative and positive instances with equal probability" }, { "end": 1080, "start": 1070.8, "text": " and redo all the above experiments of binary classification." }, { "end": 1081.84, "start": 1080, "text": " Well, how crazy is this?" }, { "end": 1088.68, "start": 1081.84, "text": " All right, they're basically saying, well, if our classifier were not a criminality classifier," }, { "end": 1093.1599999999999, "start": 1088.68, "text": " that means we could invalidate it by shuffling the labels." }, { "end": 1104.8000000000002, "start": 1093.16, "text": " And if that comes out to 50-50, then our classifier obviously works because it's not 50-50 in this data set." }, { "end": 1112.44, "start": 1104.8000000000002, "text": " So basically, they're just validating that a classification algorithm can classify something." }, { "end": 1118.6000000000001, "start": 1112.44, "text": " The criticism here is never that they haven't actually trained a working classifier." }, { "end": 1122.76, "start": 1118.6000000000001, "text": " The criticism is what have they trained a classifier for?" }, { "end": 1132.12, "start": 1122.76, "text": " But their entire validation procedure is basically, we don't have a bug in our code." }, { "end": 1141.16, "start": 1132.12, "text": " The outcomes show that the randomly generated negative and positive instances cannot be distinguished at all." }, { "end": 1143.4, "start": 1141.16, "text": " Gee, who guessed?" 
}, { "end": 1147, "start": 1143.4, "text": " A classifier on random labels doesn't generalize." }, { "end": 1155.88, "start": 1147, "text": " Man, they say, in fact, we go much further along the self-critical path." }, { "end": 1158.56, "start": 1155.88, "text": " All right, here it comes." }, { "end": 1173.08, "start": 1158.56, "text": " And carry out the same experiments for random labeling on different samples of the same size and with the same variable control." }, { "end": 1179.32, "start": 1173.08, "text": " Only this time in the selection criteria are standard ID photos of Chinese female young middle-aged," }, { "end": 1186.3999999999999, "start": 1179.32, "text": " or standard ID photos of Caucasian male young middle-aged, of Caucasian female young middle-aged nofacial." }, { "end": 1198, "start": 1186.3999999999999, "text": " So basically, if you train on a randomly labeled data set on any sort of pictures, your classifier will not work." }, { "end": 1200.12, "start": 1198, "text": " Thanks. Thanks." }, { "end": 1206.52, "start": 1200.12, "text": " Maybe that's, I think that's the academically most valid statement in the entire paper." }, { "end": 1217.36, "start": 1206.52, "text": " Oh, man, in none of the three cases, any of the four classifiers managed to achieve a true positive rate higher than 53% on randomly labeled positive and negative instances." }, { "end": 1222.32, "start": 1217.36, "text": " So the classifier must be valid because..." }, { "end": 1233.4399999999998, "start": 1222.32, "text": " OK, the above experiments rule out that the good accuracies of the four evaluated classifiers in phase inference on criminality are due to data overfitting." }, { "end": 1236.12, "start": 1233.4399999999998, "text": " No." }, { "end": 1246.9199999999998, "start": 1236.12, "text": " Otherwise, given the same sample size, they would also be able to distinguish between randomly labeled positive and negative instances with significantly better chances." }, { "end": 1249.2, "start": 1246.9199999999998, "text": " But they did cross validation." }, { "end": 1254.76, "start": 1249.2, "text": " They did. The cross validation prevents the overfitting." }, { "end": 1257.32, "start": 1254.76, "text": " We no one criticizes that you over." }, { "end": 1260.72, "start": 1257.32, "text": " These people have no idea what they're doing." }, { "end": 1263.0800000000002, "start": 1260.72, "text": " They have no clue of machine learning." }, { "end": 1267.3600000000001, "start": 1263.0800000000002, "text": " They don't know what the problems with methods are." }, { "end": 1277.8400000000001, "start": 1267.3600000000001, "text": " They don't know what overfitting is and and how you control for it." }, { "end": 1287.12, "start": 1277.84, "text": " The big jump of the true positive rate from random labeling to truth labeling on the same set of phase images can only be explained by intrinsic separability of SC and SN." }, { "end": 1289.24, "start": 1287.12, "text": " That is true. That is true." }, { "end": 1292.1599999999999, "start": 1289.24, "text": " But why are they separable?" }, { "end": 1293.6799999999998, "start": 1292.1599999999999, "text": " That's the question." }, { "end": 1300.9199999999998, "start": 1293.6799999999998, "text": " OK, as different source cameras generated the ID photos in the set S, now they might be on the right track here." }, { "end": 1303.3999999999999, "start": 1300.9199999999998, "text": " Different source cameras." 
}, { "end": 1311, "start": 1303.4, "text": " Maybe they get the idea that different data sources lead to different things." }, { "end": 1320.44, "start": 1311, "text": " They might leave their signatures that, although below perception threshold in signal strength, could mislead machine learning." }, { "end": 1326.72, "start": 1320.44, "text": " OK, so they basically saying different cameras could generate different sort of artifacts." }, { "end": 1338.6000000000001, "start": 1326.72, "text": " And they rule this out by basically adding noise to the images such that these artifacts would be washed out and the noise doesn't change their results." }, { "end": 1341.76, "start": 1338.6000000000001, "text": " Gee, they were so close." }, { "end": 1346.64, "start": 1341.76, "text": " They were so close to actually doing something useful." }, { "end": 1349.72, "start": 1346.64, "text": " OK, this section is where it gets even more interesting." }, { "end": 1358.44, "start": 1349.72, "text": " Now they're trying to guess from their classifier what are the actual features that make criminals criminals." }, { "end": 1361.8, "start": 1358.44, "text": " So discriminating features right here." }, { "end": 1369.1200000000001, "start": 1361.8, "text": " Having obtained the above strong empirical evidences for the validity of automated phase induced inference on criminality," }, { "end": 1373.04, "start": 1369.1200000000001, "text": " one cannot resist the following intriguing questions." }, { "end": 1383.8, "start": 1373.04, "text": " What features of a human face betray its owners propensity for crimes?" }, { "end": 1385.36, "start": 1383.8, "text": " OK, Shakespeare." }, { "end": 1392.56, "start": 1385.36, "text": " And they basically they basically go and explain ability route where they see what the classifier pays attention to." }, { "end": 1397.6, "start": 1392.56, "text": " And it turns out the classifier pays attention to the following features on the left." }, { "end": 1400.6399999999999, "start": 1397.6, "text": " You can see where the classifier pays attention to." }, { "end": 1409.68, "start": 1400.64, "text": " No surprise here, it pays attention to face features, but they kind of parse out the following three features." }, { "end": 1422, "start": 1409.68, "text": " First of all, the D, the distance between the eyes in criminals tends to be smaller than in non criminals." }, { "end": 1432.64, "start": 1422, "text": " The angle between the nose and the corners of the mouth tends to be smaller in criminals than in non criminals." }, { "end": 1440.04, "start": 1432.64, "text": " And the curvature of the upper lip tends to be higher in criminals than in non criminals." }, { "end": 1447.16, "start": 1440.04, "text": " So let's let's try just from this information to draw the ultimate criminal and non criminal faces." }, { "end": 1455.44, "start": 1447.16, "text": " So first of all, the non criminal, let's draw the non criminal as just regular." }, { "end": 1456.88, "start": 1455.44, "text": " I'm not very good at this." }, { "end": 1459.1200000000001, "start": 1456.88, "text": " So here's the nose." }, { "end": 1464, "start": 1459.1200000000001, "text": " And then let's just draw the lips like this." }, { "end": 1464.88, "start": 1464, "text": " Non criminal." }, { "end": 1465.68, "start": 1464.88, "text": " Perfect." }, { "end": 1468.5600000000002, "start": 1465.68, "text": " Looks like a law abiding citizen to me." 
}, { "end": 1471.0800000000002, "start": 1468.5600000000002, "text": " Criminal." }, { "end": 1472.44, "start": 1471.0800000000002, "text": " Right here." }, { "end": 1477.44, "start": 1472.44, "text": " So the eyes are closer together." }, { "end": 1480.24, "start": 1477.44, "text": " Here's the nose." }, { "end": 1483.52, "start": 1480.24, "text": " And then the curvature of the upper lip is higher." }, { "end": 1486.3600000000001, "start": 1483.52, "text": " So." }, { "end": 1488.16, "start": 1486.3600000000001, "text": " Hmm." }, { "end": 1496.16, "start": 1488.16, "text": " And then the angle between the nose and the outer corners of the mouth is smaller." }, { "end": 1498.88, "start": 1496.16, "text": " How can I make the angle smaller?" }, { "end": 1503.7600000000002, "start": 1498.88, "text": " Could it be that if I." }, { "end": 1505.8000000000002, "start": 1503.7600000000002, "text": " Oh, yes." }, { "end": 1511.48, "start": 1505.8000000000002, "text": " Ah, that's the trick." }, { "end": 1514.48, "start": 1511.48, "text": " Criminal, ladies and gentlemen." }, { "end": 1523.8000000000002, "start": 1514.48, "text": " So are you telling me that all someone has to do to be a criminal is frown?" }, { "end": 1528.68, "start": 1523.8000000000002, "text": " Yeah, totally valid." }, { "end": 1532.72, "start": 1528.68, "text": " So they're so close, right?" }, { "end": 1535.44, "start": 1532.72, "text": " But they say, oh, these are intrinsic facial features." }, { "end": 1537.5600000000002, "start": 1535.44, "text": " But come on." }, { "end": 1538.0800000000002, "start": 1537.5600000000002, "text": " All right." }, { "end": 1544.68, "start": 1538.0800000000002, "text": " So they go on to say that they have some histogram differences of these features." }, { "end": 1548.28, "start": 1544.68, "text": " So they basically say these features are what's responsible for this." }, { "end": 1551.96, "start": 1548.28, "text": " And then they do face clustering, which is beautiful." }, { "end": 1555.92, "start": 1551.96, "text": " So first of all, what they do is they sort of take the average faces" }, { "end": 1558.92, "start": 1555.92, "text": " for criminals and non-criminals." }, { "end": 1560.52, "start": 1558.92, "text": " And these are the average faces." }, { "end": 1562.88, "start": 1560.52, "text": " So the top are the actual average eigen faces." }, { "end": 1567.2, "start": 1562.88, "text": " And the bottom is when you kind of shift the facial landmarks around." }, { "end": 1573.52, "start": 1567.2, "text": " The seeming paradox that SC and SN can be classified," }, { "end": 1578.3600000000001, "start": 1573.52, "text": " but the average faces appear almost the same can." }, { "end": 1582.1200000000001, "start": 1578.3600000000001, "text": " Sorry, average faces appear almost the same." }, { "end": 1585.3600000000001, "start": 1582.1200000000001, "text": " The average faces appear almost the same." }, { "end": 1586.4799999999998, "start": 1585.36, "text": " What a paradox." }, { "end": 1588.12, "start": 1586.4799999999998, "text": " These are almost the same." }, { "end": 1594.8, "start": 1588.12, "text": " I mean, if I just overlay them one over another, they're almost the same." }, { "end": 1597.8, "start": 1594.8, "text": " There is no difference at all." }, { "end": 1599.9599999999998, "start": 1597.8, "text": " I don't see a difference." }, { "end": 1604.08, "start": 1599.9599999999998, "text": " What could possibly be the difference?" 
}, { "end": 1605.7199999999998, "start": 1604.08, "text": " What could be the difference?" }, { "end": 1609.32, "start": 1605.7199999999998, "text": " I don't think these are the most honest of intentions." }, { "end": 1614.6399999999999, "start": 1609.32, "text": " So they basically do some clustering, which I find interesting." }, { "end": 1618.48, "start": 1614.64, "text": " I find interesting, for example, that they don't really explain isomap here." }, { "end": 1623.1200000000001, "start": 1618.48, "text": " So isomap uses the geodesic distance between two points on the manifold," }, { "end": 1626.1200000000001, "start": 1623.1200000000001, "text": " which is defined between the sum of the weights." }, { "end": 1629.0400000000002, "start": 1626.1200000000001, "text": " So they kind of washy washy isomap." }, { "end": 1635.4, "start": 1629.0400000000002, "text": " But they then explain k-means in great detail with formulas." }, { "end": 1641.88, "start": 1635.4, "text": " And again, I mean, OK, non-machine learning people" }, { "end": 1642.8000000000002, "start": 1641.88, "text": " can do machine learning." }, { "end": 1643.4, "start": 1642.8000000000002, "text": " That's fine." }, { "end": 1647.68, "start": 1643.4, "text": " But they're not really into the matter here." }, { "end": 1650, "start": 1647.68, "text": " And they try k-means clustering." }, { "end": 1656.52, "start": 1650, "text": " And they find, in their opinion, they find four clusters of criminals" }, { "end": 1658.96, "start": 1656.52, "text": " and three clusters of non-criminals." }, { "end": 1661.8000000000002, "start": 1658.96, "text": " Now, why three and four?" }, { "end": 1664.16, "start": 1661.8000000000002, "text": " And usually, you can do something like this by clustering" }, { "end": 1667.0800000000002, "start": 1664.16, "text": " and then measuring the residual variance in your data." }, { "end": 1670.76, "start": 1667.0800000000002, "text": " So how much does one cluster explain, two clusters, and so on?" }, { "end": 1675, "start": 1670.76, "text": " So here you can see the curves for non-criminals and criminals." }, { "end": 1679.24, "start": 1675, "text": " Now, they claim that the optimal number of clusters here for non-criminals" }, { "end": 1683.04, "start": 1679.24, "text": " is three, which makes no sense to me." }, { "end": 1683.8799999999999, "start": 1683.04, "text": " Like, why three?" }, { "end": 1687.64, "start": 1683.8799999999999, "text": " What you usually want to find is kind of a kink in your curve." }, { "end": 1690.08, "start": 1687.64, "text": " Like, if it's steep and then it gets flat," }, { "end": 1694.48, "start": 1690.08, "text": " that means that up until then, your clusters actually buy you something good." }, { "end": 1698.32, "start": 1694.48, "text": " And from then, they basically are useless." }, { "end": 1704.4399999999998, "start": 1698.32, "text": " So if I were to guess, I would divide the criminals into two clusters" }, { "end": 1709.48, "start": 1704.4399999999998, "text": " and the non-criminals into a single cluster, because that's pretty flat." }, { "end": 1714.76, "start": 1709.48, "text": " Certainly not the non-criminals into three and the criminals into four." }, { "end": 1717.72, "start": 1714.76, "text": " That makes no sense at all." }, { "end": 1719.6399999999999, "start": 1717.72, "text": " Like, why?" }, { "end": 1723.12, "start": 1719.6399999999999, "text": " And they say, OK, these are the clusters right here." 
}, { "end": 1727.3999999999999, "start": 1723.12, "text": " And these are the pictures I showed you at the beginning." }, { "end": 1728.76, "start": 1727.4, "text": " What surprise?" }, { "end": 1735.5600000000002, "start": 1728.76, "text": " The bottom ones, the non-criminals, are smiling and the top ones aren't." }, { "end": 1739.64, "start": 1735.5600000000002, "text": " Gee, I wonder why the method works." }, { "end": 1748.52, "start": 1741.96, "text": " And the interesting part here is that how can we justify?" }, { "end": 1752.24, "start": 1748.52, "text": " How can we say if we decide on one cluster for non-criminals" }, { "end": 1755.44, "start": 1752.24, "text": " and two clusters for criminals, what does" }, { "end": 1758.64, "start": 1755.44, "text": " that remind us of?" }, { "end": 1764.56, "start": 1758.64, "text": " Oh, yes, that is exactly how we collected the data." }, { "end": 1768.96, "start": 1764.56, "text": " That is exactly the fact that we collected the non-criminals" }, { "end": 1773.56, "start": 1768.96, "text": " with one procedure and the criminals with two different procedures." }, { "end": 1779.92, "start": 1773.56, "text": " Gee, their data set replicates exactly how they collected the data." }, { "end": 1783.52, "start": 1779.92, "text": " And that convinces me that it says absolutely nothing" }, { "end": 1786.24, "start": 1783.52, "text": " about the actual criminality of people." }, { "end": 1792.4, "start": 1786.24, "text": " It's just that police, even if it's ID photos, they don't smile." }, { "end": 1797.56, "start": 1792.4, "text": " And pictures on the internet, sometimes people smile." }, { "end": 1799.76, "start": 1797.56, "text": " The rest of the paper is pretty much garbage." }, { "end": 1804.72, "start": 1799.76, "text": " They did reply to critics and they kind of take issue with a number of things." }, { "end": 1806.48, "start": 1804.72, "text": " So first, name calling." }, { "end": 1810.68, "start": 1806.48, "text": " I don't mean to name call, but it's going to happen." }, { "end": 1817.28, "start": 1810.68, "text": " I don't get why people call them racist because it's all the same." }, { "end": 1820.16, "start": 1817.28, "text": " Doesn't, no, no, trouble." }, { "end": 1822.28, "start": 1820.16, "text": " And smiley." }, { "end": 1822.78, "start": 1822.28, "text": " Ha." }, { "end": 1828.68, "start": 1826.28, "text": " In our experiments, we did control facial expression," }, { "end": 1832.5600000000002, "start": 1828.68, "text": " but not faint micro expression." }, { "end": 1834.5600000000002, "start": 1832.5600000000002, "text": " The critique that our methods can be reduced" }, { "end": 1837.24, "start": 1834.5600000000002, "text": " to a simple discriminator of smiling versus not smiling" }, { "end": 1842.68, "start": 1837.24, "text": " has given us a new angle of scrutiny." }, { "end": 1845.88, "start": 1842.68, "text": " They say, well, Westerners think that this is smiling," }, { "end": 1849.24, "start": 1845.88, "text": " but our Chinese students and colleagues, even after being prompted" }, { "end": 1855.04, "start": 1849.24, "text": " to consider the cue of smile, fail to detect the same." }, { "end": 1859.08, "start": 1855.04, "text": " So basically, their answer is, yeah, you think so, but we don't." 
}, { "end": 1863.08, "start": 1859.08, "text": " And then they say instead, they only find the faces in the bottom row" }, { "end": 1869.1999999999998, "start": 1863.08, "text": " appearing somewhat more relaxed than those in the top row." }, { "end": 1871.36, "start": 1869.1999999999998, "text": " And then here's the crucial part." }, { "end": 1875.6, "start": 1871.36, "text": " All criminal ID photos are government issues, but not mock shots." }, { "end": 1878.32, "start": 1875.6, "text": " They are normal government issue ID portraits" }, { "end": 1881, "start": 1878.32, "text": " like those driver license in the USA." }, { "end": 1883.98, "start": 1881, "text": " In contrast, most of the non-criminal ID style photos" }, { "end": 1886.3999999999999, "start": 1883.98, "text": " are taken officially by some organizations," }, { "end": 1893.44, "start": 1886.4, "text": " such as real estate companies, law firms, et cetera, for their website." }, { "end": 1896.8400000000001, "start": 1893.44, "text": " You know what it always says when you take your picture for a government ID?" }, { "end": 1898.44, "start": 1896.8400000000001, "text": " Please don't smile." }, { "end": 1900.3200000000002, "start": 1898.44, "text": " Imagine if your law firm comes to you and say," }, { "end": 1901.92, "start": 1900.3200000000002, "text": " we want a picture for our website." }, { "end": 1904, "start": 1901.92, "text": " Please don't smile." }, { "end": 1906, "start": 1904, "text": " All right, this was it for this paper." }, { "end": 1911.2800000000002, "start": 1906, "text": " If you like this content, please consider subscribing and sharing it out." }, { "end": 1913.0800000000002, "start": 1911.2800000000002, "text": " This is absolute garbage." }, { "end": 1916.0800000000002, "start": 1913.0800000000002, "text": " And there is important lessons to learn here," }, { "end": 1921.52, "start": 1916.08, "text": " namely Occam's razor is a real problem in research." }, { "end": 1925.12, "start": 1921.52, "text": " People often fail to criticize themselves enough" }, { "end": 1928.56, "start": 1925.12, "text": " and to think, is there maybe a different explanation for why" }, { "end": 1930.4399999999998, "start": 1928.56, "text": " I'm getting the results that I'm getting?" }, { "end": 1934.32, "start": 1930.4399999999998, "text": " And how can I disprove that that is the case?" }, { "end": 1938.6799999999998, "start": 1934.32, "text": " And how can I make sure that the effect that I'm seeing actually" }, { "end": 1942.76, "start": 1938.6799999999998, "text": " comes from the place where I claim it comes from?" }, { "end": 1946.6, "start": 1942.76, "text": " I think this is a real threat throughout all of research." }, { "end": 1949.08, "start": 1946.6, "text": " I've seen many papers that I've reviewed that" }, { "end": 1954.2, "start": 1949.08, "text": " are exactly of the same fallacy, not as touchy subjects as this one," }, { "end": 1956.36, "start": 1954.2, "text": " but it definitely exists." }, { "end": 1961, "start": 1956.36, "text": " And I remind everyone that learn a lesson from this." }, { "end": 1962.92, "start": 1961, "text": " And have a good day." }, { "end": 1973.0800000000002, "start": 1962.92, "text": " Thank you." } ]
bFn2xcGi1TQ
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Faster Neural Network Training with Data Echoing (Paper Explained)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "google", "brain", "pipeline", "bottleneck", "speed", "gpu", "tpu", "idle", "network", "distributed", "preprocessing", "augmentation" ]
CPUs are often bottlenecks in Machine Learning pipelines. Data fetching, loading, preprocessing and augmentation can be slow to a point where the GPUs are mostly idle. Data Echoing is a technique to re-use data that is already in the pipeline to reclaim this idle time and keep the GPUs busy at all times. https://arxiv.org/abs/1907.05550 Abstract: In the twilight of Moore's law, GPUs and other specialized hardware accelerators have dramatically sped up neural network training. However, earlier stages of the training pipeline, such as disk I/O and data preprocessing, do not run on accelerators. As accelerators continue to improve, these earlier stages will increasingly become the bottleneck. In this paper, we introduce "data echoing," which reduces the total computation used by earlier pipeline stages and speeds up training whenever computation upstream from accelerators dominates the training time. Data echoing reuses (or "echoes") intermediate outputs from earlier pipeline stages in order to reclaim idle capacity. We investigate the behavior of different data echoing algorithms on various workloads, for various amounts of echoing, and for various batch sizes. We find that in all settings, at least one data echoing algorithm can match the baseline's predictive performance using less upstream computation. We measured a factor of 3.25 decrease in wall-clock time for ResNet-50 on ImageNet when reading training data over a network. Authors: Dami Choi, Alexandre Passos, Christopher J. Shallue, George E. Dahl Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Hi there! Today we're looking at Faster Neural Network Training with Data Echoing by Dami Choi, Alexandre Passos, Christopher J. Shallue and George E. Dahl. So on a high level, this paper basically says you should repeat data that's already in memory in order to speed up the entire process of neural network training. And it also says that this can speed up your wall time without hurting your performance too much. And I have mixed feelings about it. So let's jump in. So they basically make a point of saying that machine learning doesn't happen in just one thing. It's not like sklearn.fit anymore. It is more of a pipeline. So what do we mean by this? If you think of something like you want to train an ImageNet model, what you want to do is you have your data set somewhere. And that could be in a database, it could be in the network somewhere. So if you have something even larger than ImageNet, you'll probably store it on a central server, in an Amazon bucket somewhere. So this is in AWS. And the first thing you actually need to do is you need to read that data set. Now usually you're not going to have enough memory on a machine to just load the entire data set into memory. So that means this process here is streaming. So this is continuously streaming data points. And once you have used a data point, you're going to throw it away, because you need space for the next one, right? And so the streaming is done continuously. The next process is read and decode. That means you have to read it from the network and actually bring it into a format where you can use it, usually something like a NumPy array or a TensorFlow tensor. You need to apply some shuffling, because usually you can't really trust the order that the data is stored in. Oftentimes there is a bias in the ordering, so you need some sort of a shuffle buffer here. Then often you want to apply some data augmentation to it. That means that you have one image, and we know for these models, if this is your cat, we know for these models that what can help is to basically make many, many different images from one image. So this could be by cropping part of it and saying, well, if it was a cat before, the small upper-right part here is still a cat. So this is one such augmentation. It's called data augmentation. And you're going to apply a whole bunch of these things. So you can crop, you can rotate the image a bit, it's still a cat, right? And you can also change its luminance and a bit of its colors. You can jitter the colors, you can horizontally flip it, and it'll still be a cat. And that's basically how you make many data points from one data point. And we know that helps. Then what you want to do is you want to batch this data. So you want to put it into mini-batches. Since you've shuffled here, that means the next time the same data point comes along, it's going to be batched together with a different group of images, and of course augmented differently. And that basically means it's a different training batch for the model. So this entire pipeline is basically a way that we take the data points that we have and make a whole bunch of variations and various groupings and batchings of them. And we know that helps enormously with the generalization capability of your final models. And then here you apply your SGD update. So that's usually where you forward-propagate your data through your network, which here is f, and you'll get some y-hat as an output. 
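To make this concrete, here is a sketch of what such an input pipeline typically looks like in tf.data, covering the stages just described: stream, read and decode, shuffle, augment, batch. The record format, feature keys and augmentation choices here are placeholders of mine, not anything specified in the paper.

import tensorflow as tf

def decode_example(record):
    # Read and decode: parse one serialized record into an (image, label) pair.
    feats = tf.io.parse_single_example(record, {
        "image": tf.io.FixedLenFeature([], tf.string),
        "label": tf.io.FixedLenFeature([], tf.int64),
    })
    return tf.io.decode_jpeg(feats["image"], channels=3), feats["label"]

def augment(image, label):
    # Many images from one image: random flip plus a bit of color jitter.
    image = tf.image.random_flip_left_right(image)
    image = tf.image.random_brightness(image, max_delta=0.2)
    return image, label

def make_pipeline(file_pattern, batch_size=256):
    ds = tf.data.TFRecordDataset(tf.io.gfile.glob(file_pattern))   # stream from storage
    ds = ds.map(decode_example, num_parallel_calls=tf.data.AUTOTUNE)
    ds = ds.shuffle(buffer_size=10_000)                            # shuffle buffer
    ds = ds.map(augment, num_parallel_calls=tf.data.AUTOTUNE)
    ds = ds.batch(batch_size)
    return ds.prefetch(tf.data.AUTOTUNE)                           # keep the accelerator fed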
And then you have your labels that also come through the pipeline, and you have some sort of loss function L that takes both as an input and gives you an output. And then you do your backpropagation. So the backpropagation goes through your loss function, through your network, and updates the network parameters such that your network learns something. Now, this step to the right here is usually what we focus on when we do deep learning. This step on the right, all of this, is usually done on something like a GPU or a TPU. And these things are getting faster and faster. The point the paper makes is that the TPUs and GPUs of the world are getting faster, but this entire other thing right here, this is basically CPU land. Now, I know there is some data augmentation happening on the GPU nowadays and so on, but in essence, you can think of a pipeline where the thing to the left is happening on CPU and the thing to the right is happening on TPU. And even worse, let's say the speed is continuously increasing along the pipeline. So here is the network reading, and over here is the GPU SGD step, and this axis is speed. Basically, the further to the right in your pipeline you go, the faster your hardware gets. And since this is a continuous pipeline, that basically means that if I input something here, it goes through the pipeline, and even if this is all running in parallel, the thing over here is going to idle a lot, because since it is the fastest part of the pipeline, it can only consume things as fast as the slower stages can produce them. Now, if you have some sort of a multi-GPU machine and just train ImageNet, like you just run the code, usually this is not the bottleneck. Usually your GPUs are at 100% capacity. So this paper is not for you. But if you are, let's say, a big company with network storage, a big data set, and very expensive data augmentation, and this happens, for example, in NLP and so on, this can be quite your situation, where the earlier in the pipeline you are, the slower it is. And don't you just love these graphics? So here's time. And apparently it goes in both directions? I think what they mean is just that time goes in this direction. And here you have the upstream. So your upstream is your network reading or your preprocessing, and the downstream, this is the GPU. To correct this right here: this means idle, and this means running. So as your upstream processes images, right at the beginning your GPU is idle. But then, as soon as it ships off the first batch of images, your GPU can run. Now it's running. While you're doing that, your upstream, your network, is still reading new images, preprocessing them and so on, but it is too slow to deliver a new batch by the time the GPU is done. When the GPU is done, the upstream is still processing its batch, so the GPU is idle until here, where the upstream finally manages to deliver that batch, and then the GPU is running again. I think that would have been a much better graphic. 
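A back-of-the-envelope way to see how bad this idle time gets, under my own simplifying assumption of a perfectly overlapped pipeline where each stage has a fixed per-batch latency:

def gpu_idle_fraction(upstream_time_per_batch, sgd_time_per_batch):
    # In steady state, the cycle time is the slower of the two stages;
    # the accelerator is only busy for the SGD time within each cycle.
    cycle = max(upstream_time_per_batch, sgd_time_per_batch)
    return 1.0 - sgd_time_per_batch / cycle

# E.g. if upstream needs 0.4s per batch but one SGD step takes 0.1s,
# gpu_idle_fraction(0.4, 0.1) == 0.75: the GPU sits idle 75% of the time.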
But you know, so their goal basically is the following. Right here, for example after the batching, you scrap this connection, and you put the output into a smaller buffer, a repeat buffer. What it does is simply repeat whatever is in the buffer until something new comes in. So a new data point comes in, and you just output that data point again, again, again. For the GPU it's going to feel like these are all new batches that continuously come in, but it's always the same one until the next data point arrives, and then you output that one again and again. Now the actual repeat factor you can of course tune by hand, or you can just say repeat until something else comes in. In this paper, they use an explicit factor where they say we repeat each data point four times, or three times, and so on. So this is data echoing: you basically echo each data point multiple times. And this can be done in various places, so they experiment with echoing in any of these places right here: after reading and decoding, after augmentation, and after batching. In each case the echoing goes before the shuffling, because if you have a shuffle buffer anyway, they say it makes sense to put the shuffle buffer after the echoing. So they experiment with these three insertion locations. Now what could be the downside of something like this? The downside, of course, is that this SGD procedure right here relies on the incoming data being an IID sample from your data distribution. That's how we formulate SGD: there's always new data coming in. Now, if you just output the same data point all the time, that is no new information, first of all, and second of all, it could bias the SGD update, because SGD sees the same information over and over and is going to act as if that were the whole data set. So potentially it can take too many steps in the wrong direction, in whatever direction happens to be the bias of this particular data point. So the IID assumption is violated. Now, why do they experiment with this in different locations? Because what you expect is that it hurts more or less depending on how early you introduce the echoing. If you introduce echoing right here, so if you echo your data until new data from the network comes in, each copy is still going to be shuffled differently and augmented differently. So each time the data point comes out of the echo buffer, it is going to be shuffled and augmented in a different way than the last time the same data point came out. And because you've shuffled differently, it's going to be batched together with a different bunch of data points. That means SGD gets new information. But if you go to the very last position, where you echo right after the batching, SGD just gets to see the same batch of data, augmented in the same way, over and over. So where exactly you echo is a trade-off: you have to trade off how much you violate the IID fresh-data assumption against where in your data pipeline the bottleneck is.
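One natural way to express this echo buffer in a tf.data-style pipeline, reusing the dataset and augment from the sketch above, is a flat_map that repeats each element. Again, this is my own hedged sketch of the technique, not the authors' code, and where you place the transform gives you the three variants they study:

```python
import tensorflow as tf

echo_factor = 2  # e: how many times each element is repeated

def echo(ds, e):
    # The echo buffer: emit every element of `ds` e times in a row.
    return ds.flat_map(
        lambda x, y: tf.data.Dataset.from_tensors((x, y)).repeat(e))

# Example echoing BEFORE augmentation: each copy is still shuffled,
# augmented and batched differently, so it behaves almost like fresh data.
ds_before_aug = (echo(dataset, echo_factor)
                 .shuffle(1000).map(augment).batch(64))

# Example echoing AFTER augmentation: copies share one augmentation,
# but can still land in different batches.
ds_after_aug = (echo(dataset.map(augment), echo_factor)
                .shuffle(1000).batch(64))

# Batch echoing: echo whole batches, so SGD sees the identical batch
# echo_factor times in a row (cheapest to insert, but most correlated).
ds_batch = echo(dataset.shuffle(1000).map(augment).batch(64), echo_factor)
```

Note how the three variants differ only in how much of the downstream randomness (shuffling, augmentation, batching) each repeated copy still passes through, which is exactly the IID trade-off just discussed.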
So if your bottleneck is the data augmentation, it may make little sense to echo before it. And if the bottleneck is simply that you don't have enough GPUs, then it probably doesn't make sense to data echo at all, though their experiments are somewhat wonky on this point. So let's dive in. They make the following claims; let's go through them really quick. First, data echoing reduces the amount of upstream computation (think network reading or augmentation) needed to reach a competitive out-of-sample error rate on various data sets and model architectures. Second, data echoing can provide a wall-time speedup in practice. Third, data echoing can support a wide range of echoing factors, where the echoing factor is how often you repeat the data. Fourth, the effectiveness of data echoing depends on the insertion point in the training pipeline, which is what our hypothesis was. Fifth, data echoing can benefit from additional shuffling after echoing, but does not require it. And sixth, countering expectations, data echoing reaches the same final error rate as well-tuned baselines. Now, I can absolutely accept one through five, especially in an actual practical in-the-wild setting. But six, we'll see about six. So let's jump into their models. They train a transformer on two language modeling data sets, LM1B and Common Crawl (so I guess technically it's five models), a ResNet-32 on CIFAR-10, a ResNet-50 on ImageNet, and SSD on COCO. Now here are the accuracies they get, and, sorry, this is the target. What they do is train these models and then ask, okay, what accuracy do we reach, and then they set a target value. So for ResNet-50 on ImageNet, a very common number to reach is something like 76.5%; if you look at, for example, the torchvision models, they reach something like this. And they say, well, our target accuracy here is just a little bit below that. Then they measure how many steps, or rather, their measurement here is fresh data points: how many actual fresh training samples do we need to reach this target? And this is where it gets wonky, because, for example, take the 91% here on CIFAR-10. That is quite low. And also the ResNet-50, I mean, this is standard, but still, ImageNet is much further along nowadays. I think the effectiveness of something like this has a lot to do with how competitive you want to get. Maybe this is all just an effect of how far under par this target performance really is. And I would expect that even though they say it doesn't hurt performance in their experiments, it will at least hurt your performance in general if you try to get competitive, because these models, at least the ones I know, like the ResNets, aren't really competitive anymore. So what do they do? They measure data echoing with an echoing factor of 2. That means every data point that comes in is emitted twice from the buffer, then the next data point is emitted twice, and so on. And what they measure, again, is the fresh examples read: how many fresh data points do you need to achieve the target? This is a good measurement because it is largely independent of hardware.
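To see why the wall-time speedup is capped by the pipeline asymmetry, here is a back-of-envelope sketch. This is my own simplification, assuming the upstream and downstream halves run fully overlapped and that a repeated example is exactly as useful as a fresh one; the numbers are illustrative, not from the paper:

```python
def walltime_per_fresh_example(t_up, t_down, e):
    # Idealized wall time to consume one fresh example when each example is
    # echoed e times and the two pipeline halves run in parallel: upstream
    # produces one fresh example every t_up seconds, while downstream spends
    # e * t_down seconds doing e SGD-side passes on it.
    return max(t_up, e * t_down)

t_up, t_down = 2.0, 1.0  # upstream twice as slow as the SGD step

baseline = walltime_per_fresh_example(t_up, t_down, e=1)  # 2.0: GPU idles half the time
echoed   = walltime_per_fresh_example(t_up, t_down, e=2)  # 2.0: GPU now fully busy

# Under the as-useful-as-fresh assumption you need half as many fresh
# examples at e=2, so total training time halves: a 2x speedup. In general
# the best case is min(e, t_up / t_down); repeats beyond the asymmetry
# ratio only add downstream time.
print(baseline, echoed)
```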
So if you're really in the situation where your GPU is twice as fast as the rest of your pipeline, then an echoing factor of 2 will speed up your training procedure by at most a factor of 2. All right, so you have the baseline in red. Then you have batch echoing, which is where you echo at what we said is the worst possible place, right after batching. This might hurt your performance the most, but it also has the potential to be the fastest if your augmentation, or your batching, is very expensive. You have example echoing after augmentation, which would make sense if the augmentation is very expensive: you save the augmented data point and then emit it multiple times, but each time it is batched differently, because it is shuffled and then batched with different other data points (you have a shuffle buffer after it). And then you have example echoing before data augmentation, which means the same data point emitted multiple times will be augmented in different ways, and basically leads to slightly different data points. The results here are pretty much what you could expect, in that the earlier you do the echoing, as you can see here, the more it helps. For example, on the object detection task, the baseline needs this many fresh examples to reach the target accuracy. With batch echoing, you need fewer fresh training examples. So even though you kind of train on the same data twice, this still helps. It doesn't help you fully: the dashed line here is where you would be if a repeated data point were as useful as a fresh data point, and it is exactly half of the baseline because the echoing factor is two. As you can see right here, you're not at the dashed line, but at least it doesn't hurt. You might expect echoing to hurt, but it doesn't; it actually speeds things up, so the repeated data points at least have some utility. Again, this is only useful if you have this asymmetry in your pipeline. If your pipeline is actually symmetric and you do an echoing factor of two, the wall-time plot would look like this for the baseline and then almost twice as high for the batch echoing: even though it needs almost the same number of fresh examples, you echo each one twice, so the downstream has to process everything twice and it takes much longer. So again, this is useful if you have this asymmetry and if the echoing factor is smaller than your asymmetry; otherwise you're simply wasting time repeating data points. Then, if you do example echoing after augmentation, you use even fewer fresh data points. And if you do it before augmentation, this is really surprising: you almost get the full benefit of fresh data points. In a way you might expect this, because an augmented, newly shuffled data point is almost a new data point, but it's still quite surprising that you almost reach the theoretical optimum. The same holds here on the ImageNet task. Now here is actually an example where you can see that it hurts to do batch echoing. The reason it can hurt is that you violate the IID assumption: you have correlated data points.
This is a big, big problem, for example, in reinforcement learning, where already by the nature of running episodes and then feeding those episodes back into the training procedure, you have correlated data points. And here it actually hurts your performance compared to the baseline. But if you go to example echoing, and especially example echoing before augmentation, you again get a speedup, which is pretty cool. Okay, so they do a bunch of other experiments, and I appreciate these experiments, because they really show what's going on and how far you can push this. So here they have a plot showing that example echoing before augmentation can reduce training time for ResNet-50 on ImageNet. So this is before augmentation, and the echoing factor, which describes how often you repeat each data point, goes from two to five. And you can see that you basically get the speedup for free. The dashed line, again, is where you would be if a repeated data point were as useful as a fresh data point, and you can see that you stay just above this dashed line. So this can help a lot. This is the fresh examples read, and this is the wall time in their particular situation; for wall time it doesn't help as much, but again, that very much depends on the asymmetry in your pipeline. Now, in these experiments, I would actually appreciate something like what they do down here, where I would always like to see where it breaks. How far can you go with the echoing factor until it doesn't help anymore? Because this plot tells me pretty much nothing. I want to see where the low point is, where the optimal echoing factor lies, and what you can tell me about that optimal echoing factor. How can we determine it beforehand, and how does it connect to the different parts of the architecture? So if I had to point out a flaw in this paper, it would be that right here I would expect them to keep increasing the echoing factor until it breaks, sort of like they do down here. This, I believe, is the transformer on LM1B. Here they have a batch size of 1024, which is their standard setting for the transformer, and you can see that the baseline uses about 1.5 times 10 to the 7 fresh examples to train until their target. If you increase the echoing factor to two, you basically need half as many fresh examples, as long as you echo each one twice. Again, it is very surprising how close you get to each repeated batch being worth a perfectly fresh one. But as you increase the echoing factor further, and here is exactly what I said, at some point it hurts: you get to the point where the non-IID-ness, the correlation of successive data points, actually hurts you. And they make a point of saying that this is, for example, dependent on batch size. In this experiment over here, they have a larger batch size. Here again is the baseline number of data points needed to reach the target, and you can see it goes down again. But now, where before the curve turned back up with increasing echoing factor, it continues to decrease. Again, it would be interesting to see where it turns up here, and how the echoing factor at the lowest point (here the four, and here, I don't know, maybe the 16) depends on your batch size. And here is another problem.
And that's what I alluded to at the beginning: this performance dependence. Now, I have not read anything to the contrary in the paper, so I have to assume that the number of fresh examples to reach the target here still refers to the target they determined at the beginning, that 3.9 in the table, and that 3.9 was achieved with the batch size of 1024. And we know, especially for language models, that larger batch sizes lead to better performance, even if you need, let's say, more samples. So here you can see that the number of samples is 1.5 here and actually 4 here, because you increased the batch size. And that tells you something: 1.5 versus 4 is a factor of about 2.5. So you go to a batch size four times larger, and you need about 2.5 times more fresh training samples to reach the same target accuracy. First of all, we know that larger batch sizes can reach higher target accuracies. So again, the dependence of these results on the gap between the target and the maximum achievable value, to me that's kind of shady territory, to always ask how long it takes to reach that particular target. Because we know that this model right here can reach a much higher target, but we don't know that about these models here; what is their performance in the limit? They try to make these experiments, but I don't really believe them. Maybe. And second, and this is already interesting: this ratio right here, this 2.5 versus 4, must mean something. I go to a four-times-higher batch size, and I need 2.5 times more fresh training samples to reach the same target. That must somehow tell you something about the usefulness of a single data point versus a succession of data points. If the factor were one, meaning no increase at all, it would mean I need the same number of fresh training samples no matter how I batch them, so each data point would be fully useful on its own. If the factor were four, it would mean that it doesn't really matter how many training points I have in a batch, as long as I have enough, and 1024 seems to be enough; it would just matter how many SGD steps I do. So basically, SGD isn't getting the most out of these data points. And this ratio, this 2.5, tells you something about the information content of an additional data point versus the usefulness of an additional SGD step. I would expect that to be intrinsically connected to where the low point of this echoing factor is, because that's exactly what echoing does: it trades off the freshness of data points against doing more steps on the same information. And for a paper, especially a paper by Google Brain, this is a connection that I would love to see investigated. But enough ranting. They do investigate other things, for example what happens if you just increase the batch size. And you can see here, yeah, this is interesting: the baseline needs more fresh samples as you increase the batch size. At the beginning, batch echoing doesn't hurt, but doesn't help either. But as you go to higher and higher batch sizes, batch echoing starts to help more and more.
Again, I believe this is connected to the usefulness of a single data point: at some point, your batch size is just too large for the problem, and you'd rather do more steps, which is why this helps. But the larger-batch model right here might also have a higher ceiling accuracy, so the question is whether this batch-echoing model would actually fall back to the ceiling accuracy of one of these models over here. In any case, their point is basically that as you increase the batch size, echoing tends to help more, relatively speaking, maybe because of what I said. They say: as batch size increases, the performance of batch echoing relative to the baseline either stays the same or improves, while for example echoing it either stays the same or gets worse. Dashed lines indicate the expected values if repeated examples were as useful as fresh examples. So I believe there is an intrinsic connection here between the usefulness of more data and the usefulness of doing additional steps. And example echoing you can almost see as more data, because especially here you're going to do augmentation on top of it, and you see the non-augmented versus augmented ratio changes dramatically from here to here. Okay, final set of experiments. As you can tell, this is mostly an experimental paper, and it is always easy to criticize experimental papers, and rightfully so, because I would not trust this very much on its own. But given that it comes from a big institution and it is a very well-written paper, I would trust it more than a regular paper, and I would say: if you're in practice, this is certainly worth trying. Absolutely. I just think that some of my questions aren't answered by it. So they investigate buffer sizes. We have batch echoing, but they say, ah, we can do batch echoing with shuffle buffers. So after the batch echoing, that is, after the batching and then the echoing, where the echo buffer outputs each data point multiple times, they add another buffer, a shuffle buffer, which collects data points and shuffles them around before outputting them. That means even though a data point is output, say, five times, it might not come out five times in a row; it might come out once, then another data point that was already in the shuffle buffer comes out, and so on. In total it still comes out five times, but shuffled together with a bunch of other data points first. Of course this uses more memory, but it brings you back toward the IID setting. And you can see here that as the buffer size increases, the performance gets closer and closer to the performance you would have with completely fresh data. So again, you're trading off freshness against doing multiple steps, basically repeating data points back-to-back versus repeating data points shuffled. And you have the same option with example echoing.
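To make the echo-plus-shuffle-buffer combination concrete, here is one more hedged sketch, continuing the tf.data illustration from earlier; the buffer sizes are arbitrary placeholders, not values from the paper:

```python
shuffle_buffer = 16  # how many echoed batches are held for re-shuffling

# Batch echoing followed by a shuffle buffer: echoed copies of a batch get
# interleaved with other batches instead of arriving back-to-back. The larger
# the buffer, the closer the stream gets to IID / fresh-data behavior, at the
# cost of holding more batches in memory.
ds_batch_shuffled = (echo(dataset.map(augment).batch(64), echo_factor)
                     .shuffle(shuffle_buffer))

# The same applies to example echoing: echo single examples, then shuffle
# them before batching, so copies land in different batches.
ds_example_shuffled = (echo(dataset.map(augment), echo_factor)
                       .shuffle(1000)
                       .batch(64))
```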
So if you apply the shuffle buffer to example echoing and you increase its size, you can get very, very close to the performance you would get with fresh data. And of course, if you increase the shuffle buffer to the size of the data set, then with example echoing you are, in the limit, in the fresh-data situation. Right. So here is where it gets into the funky part, where they actually measure the validation cross entropy and the validation accuracy versus the number of fresh examples read. And here I want to concentrate on the ResNet-50 on ImageNet. As you can see, most of these models pretty much end up in the same place; it's just that the echoing models end up there faster. And this is where it gets a bit confusing, honestly, because why do you have this super sharp drop here? Usually, and here it sort of speeds up in the middle, you see that, and then it sharply declines; is this maybe because they drop the learning rate or something? Now, my main issue is that the performance here, even though the target is the same for everyone, is lower than the best reachable accuracy. And this is just confusing. If this is really true, whoa, if this is really true, I think we still have a lot to learn about SGD, and about how we're not actually doing SGD correctly, because it almost seems like the echo versions reach a better accuracy than the baseline. I don't know, do they just cap it at the target performance? I don't think so; I think they say they let it run. They also have these curves right here, where they show the best value they reach. And this is the ResNet-32 on CIFAR-10. Again, 91% on CIFAR-10 is just very, very low, and I'm almost thinking that, okay, this might help if you just throw an overpowered model at the problem, because we can reach 99%, or at least you can reach something like 94% on CIFAR-10 easily, easily, with a network smaller than ResNet-32. Maybe this effect only manifests if you have something that could reach higher, but for some reason you only aim this low. I'm not sure, but this is confusing. And if this is really true, yeah, if it's true, which I believe, because I believe this paper, it might just be an effect of not reaching the actual ceiling. And again, look at this: the curves are just strange. You have the echoing before augmentation, and it seems like it's outperforming the fresh data points. I don't know, there's a little bell ringing in my head that doesn't like this. If it's actually true, then, you know, that's cool. So my main criticisms are a bit with the experimental methodology, for example where they increase the batch size but still target the same accuracy, even though we know there is a higher ceiling if you increase the batch size for language models, and with the non-investigation of this connection, this connection right here. But all in all, it's a pretty cool paper. If I had a big company with these pipeline issues, I would absolutely implement this; it seems like a no-brainer and can help you tremendously. Alright, that was it. Thank you for listening. If you're still here, subscribe, like, tell a friend. Bye bye.
[ { "end": 5.12, "start": 0, "text": " Hi there! Today we're looking at faster neural network training with data" }, { "end": 11.9, "start": 5.12, "text": " echoing by Damme Choi, Alexander Passo, Christopher J. Shalu and George E. Dahl." }, { "end": 17.34, "start": 11.9, "text": " So on a high level this paper basically says you should repeat data that's" }, { "end": 21.86, "start": 17.34, "text": " already in memory in order to speed up the entire process of neural network" }, { "end": 27.76, "start": 21.86, "text": " training. And it also says that this can speed up your wall time without hurting" }, { "end": 33.52, "start": 27.76, "text": " your performance too much. And I have mixed feelings about it. So let's jump in." }, { "end": 38.800000000000004, "start": 33.52, "text": " So they basically make a point of saying that machine learning doesn't happen in" }, { "end": 46.8, "start": 38.800000000000004, "text": " just one thing. It's not like sklearn.fit anymore. It is more of a pipeline. So" }, { "end": 51.6, "start": 46.8, "text": " what do we mean by this? If you think of something like you want to" }, { "end": 57.160000000000004, "start": 51.6, "text": " train an ImageNet model, what you want to do is you have like your data set" }, { "end": 63.279999999999994, "start": 57.16, "text": " somewhere. And that could be in a database, it could be in the network" }, { "end": 68.44, "start": 63.279999999999994, "text": " somewhere. So if you have even something larger than ImageNet, you'll probably" }, { "end": 72.16, "start": 68.44, "text": " store it on a central server on an Amazon bucket somewhere. So this is in" }, { "end": 77.03999999999999, "start": 72.16, "text": " AWS. And the first thing you actually need to do is you need to read that" }, { "end": 82.47999999999999, "start": 77.03999999999999, "text": " data set. Now usually you're not gonna have enough memory on a machine to just" }, { "end": 87.52000000000001, "start": 82.48, "text": " load in the entire data set into memory. So that means this process here is" }, { "end": 93.32000000000001, "start": 87.52000000000001, "text": " streaming. So this is continuously streaming data points. And once you have" }, { "end": 96.82000000000001, "start": 93.32000000000001, "text": " used the data points, you're gonna throw it away because you need space for the" }, { "end": 103.24000000000001, "start": 96.82000000000001, "text": " next one, right? And so the streaming is done continuously. The next process is" }, { "end": 107.68, "start": 103.24000000000001, "text": " read and decode. That means you have to read it from the network and" }, { "end": 111.52000000000001, "start": 107.68, "text": " actually bring it into a format where you can use it. Usually something like an" }, { "end": 117.75999999999999, "start": 111.52, "text": " umpire array or a tensorflow tensor. You need to apply some shuffling because" }, { "end": 123.88, "start": 117.75999999999999, "text": " usually the order, you can't really trust the order that it is stored in." }, { "end": 127.39999999999999, "start": 123.88, "text": " Oftentimes there is like a bias in the ordering, so you need some sort of a" }, { "end": 133.2, "start": 127.39999999999999, "text": " shuffle buffer here. Then often you want to apply some data augmentation to it." }, { "end": 139.35999999999999, "start": 133.2, "text": " That means that you have one image. 
And we know for these models, if this is your" }, { "end": 146.4, "start": 139.36, "text": " cat, we know for these models that what can help is to basically make many many" }, { "end": 151.60000000000002, "start": 146.4, "text": " different images from one image. So this could be by cropping part of it and" }, { "end": 156.8, "start": 151.60000000000002, "text": " saying, well, if it was a cat before, the small upper right part here is still a" }, { "end": 162.72000000000003, "start": 156.8, "text": " cat. So this is one update. It's called data augmentation. And you're gonna apply" }, { "end": 166.76000000000002, "start": 162.72000000000003, "text": " a whole bunch of these things. So you can crop, you can rotate the image a bit. It's" }, { "end": 171.79999999999998, "start": 166.76, "text": " still a cat, right? And you can also change its luminance and a bit of its" }, { "end": 176.6, "start": 171.79999999999998, "text": " colors. You can jitter the colors, you can horizontally flip it and it'll still be a" }, { "end": 181.56, "start": 176.6, "text": " cat. And that's basically how you make many data points from one data point. And" }, { "end": 187.2, "start": 181.56, "text": " we know that helps. Then what you want to do is you want to batch this data. So you" }, { "end": 193.04, "start": 187.2, "text": " want to put it into mini batches. Since you've shuffled here, that means when the" }, { "end": 196.72, "start": 193.04, "text": " next time the same data point comes along, it's going to be batched with a" }, { "end": 202.7, "start": 196.72, "text": " different group of images and of course augmented differently with a" }, { "end": 206.95999999999998, "start": 202.7, "text": " different group of images. And that basically means it's a different training" }, { "end": 212.48, "start": 206.95999999999998, "text": " batch for the model. So this entire pipeline is basically a way that we take" }, { "end": 217.2, "start": 212.48, "text": " data points that we have and we make a whole bunch of variations and various" }, { "end": 221.84, "start": 217.2, "text": " groupings and batchings of it. And we know that helps enormously with the" }, { "end": 227.48, "start": 221.84, "text": " generalization capability of your final models. And then here you do your apply" }, { "end": 233.08, "start": 227.48, "text": " your SGD update. So that's usually where you forward propagate your data through" }, { "end": 239.8, "start": 233.08, "text": " your network, which here is F. You'll get like some Y hat as an output. And then" }, { "end": 244.56, "start": 239.8, "text": " you have your labels that also come through the pipeline. And you have some" }, { "end": 250.72, "start": 244.56, "text": " sort of loss function L that takes both as an input and gives you an output. And" }, { "end": 256.8, "start": 250.72, "text": " then you do your back propagation. So the back propagation would go through your" }, { "end": 262.96, "start": 256.8, "text": " loss function through your network and update the network parameters such that" }, { "end": 268.32, "start": 262.96, "text": " your network learned something right now this step to the right here is usually" }, { "end": 275.48, "start": 268.32, "text": " what what we focus on when we do deep learning. The step on the right here, all" }, { "end": 282.84000000000003, "start": 275.48, "text": " of this, this can be done on a GPU or is usually done on something like a GPU or a" }, { "end": 291.72, "start": 282.84000000000003, "text": " TPU. Right. 
But in these things are getting faster and faster. The point the" }, { "end": 297.44, "start": 291.72, "text": " paper makes is that the TPUs and GPUs of the world are getting faster. But this" }, { "end": 303.16, "start": 297.44, "text": " entire other thing right here, this is basically CPU land. Now I know there is" }, { "end": 309.32000000000005, "start": 303.16, "text": " some data augmentation now happening on the GPU and so on. But in essence, you can" }, { "end": 313.36, "start": 309.32000000000005, "text": " think of a pipeline where the thing to the left is happening on CPU and the" }, { "end": 319.40000000000003, "start": 313.36, "text": " thing to the right is happening on TPU. And even even worse, let's say the speed" }, { "end": 326.36, "start": 319.40000000000003, "text": " is continuously is continuously increasing. So in your pipeline, the kind" }, { "end": 337.36, "start": 326.36, "text": " of speed would be so here is the network reading. And here over here is the GPU SGD" }, { "end": 344.52000000000004, "start": 337.36, "text": " step. And this is speed. Basically, the further to the right in your pipeline," }, { "end": 353.36, "start": 344.52000000000004, "text": " you go the faster your the faster your hardware gets. And that means that if if" }, { "end": 358.28000000000003, "start": 353.36, "text": " this since this is a continuous pipeline, right, that basically means that if I" }, { "end": 364.6, "start": 358.28000000000003, "text": " input something here, it goes through the pipeline. And even if this is all running" }, { "end": 370.08000000000004, "start": 364.6, "text": " in parallel, at this thing over here is going to idle this since this is the" }, { "end": 376.52000000000004, "start": 370.08000000000004, "text": " fastest part of the pipeline, it is going to just idle a lot. Right, it because" }, { "end": 380.64, "start": 376.52000000000004, "text": " because it can only consume things as fast as this thing can produce. Now if" }, { "end": 386.24, "start": 380.64, "text": " you if you have some sort of a multi GPU machine and just train image net, like" }, { "end": 392.12, "start": 386.24, "text": " you just run the code, usually your this is not the bottleneck. Usually, your GPU" }, { "end": 398.24, "start": 392.12, "text": " is here are at 100% capacity. So this paper is not for you. But if you are," }, { "end": 402.64, "start": 398.24, "text": " let's say a big company have this network storage, have a big data set, have" }, { "end": 407.76, "start": 402.64, "text": " very expensive data augmentation. This happens, for example, this can happen in" }, { "end": 415.56, "start": 407.76, "text": " NLP and so on. This can be quite your situation, where the earlier in the" }, { "end": 423.08, "start": 415.56, "text": " pipeline, the slower it is. And don't you just love these graphics? So here's" }, { "end": 431.56, "start": 423.08, "text": " here's time. And apparently, it goes in both directions. And so does it go like" }, { "end": 437.52, "start": 431.56, "text": " this? I think what they mean is just time goes in this direction. And you're" }, { "end": 444.35999999999996, "start": 437.52, "text": " here. And you're upstream. So your upstream is your network. This is your" }, { "end": 450.71999999999997, "start": 444.35999999999996, "text": " network reading or your pre processing and the downstream, this is the GPU. 
So as" }, { "end": 457.24, "start": 450.71999999999997, "text": " you are pre processing things, you you and this should be this should be" }, { "end": 464.12, "start": 457.64, "text": " different. It should mean okay, to correct this right here, this is idle." }, { "end": 470.92, "start": 464.12, "text": " And this is running. So as you're upstream processes images, right at the" }, { "end": 475.88, "start": 470.92, "text": " beginning, your GPU is idle. But then as soon as it ships off the first batch of" }, { "end": 480.84000000000003, "start": 475.88, "text": " images, your GPU can run now it's running. While you're doing that, your" }, { "end": 484.76, "start": 480.84000000000003, "text": " upstream your network is still reading new images, pre processing them and so" }, { "end": 491.16, "start": 484.76, "text": " on, but it cannot is too slow to insert a batch at the time that the GPU is done." }, { "end": 496.68, "start": 491.16, "text": " The time the GPU is done, it's still processing this batch. So the GPU is idle" }, { "end": 502.32000000000005, "start": 497.68, "text": " until here where it finally manages to process that batch and then the GPU is" }, { "end": 507.40000000000003, "start": 502.32000000000005, "text": " running again. I think that would have been a much better graphic. But you know," }, { "end": 515, "start": 507.88, "text": " so their goal basically is that what you'll have is right here, for example," }, { "end": 520.36, "start": 515, "text": " after the batch, what you'll do is you scrap this connection, you take this" }, { "end": 528.4, "start": 520.36, "text": " and you put it into a smaller buffer. And the buffer is a repeat buffer. So what it" }, { "end": 536.6, "start": 528.4, "text": " does is it simply will repeat the whatever you have in the buffer until" }, { "end": 541.12, "start": 536.6, "text": " something new comes in, right? So new data point comes in, you just output that" }, { "end": 547.6, "start": 541.12, "text": " data point again, again, again, again, the for the GPU, it's gonna feel like these" }, { "end": 552.6800000000001, "start": 547.6, "text": " are all new batches and they continuously come in. But it's always the same until" }, { "end": 557.28, "start": 552.6800000000001, "text": " the next data point comes in. And then you output that one again and again and" }, { "end": 562.32, "start": 557.28, "text": " again and again. Now the the actual factor here you can, of course, tune by" }, { "end": 567.48, "start": 562.32, "text": " hand or you can just say repeat until something else comes in. In this paper," }, { "end": 572.2, "start": 567.48, "text": " they have an explicit factor where they say we repeat each data point four times" }, { "end": 577.24, "start": 572.2, "text": " or three times or so on. So this is data echoing, you basically echo the data" }, { "end": 584.88, "start": 577.24, "text": " point multiple times. And this can be done in various places. So they" }, { "end": 589.88, "start": 584.88, "text": " experiment with echoing in any of these places right here. So the egg they" }, { "end": 596.52, "start": 589.88, "text": " experiment with it right here, after reading and decoding, after shuffling. No," }, { "end": 602.84, "start": 596.52, "text": " I think always before shuffling. 
Because if you if you have a shuffle buffer" }, { "end": 607.52, "start": 602.84, "text": " anyway, they say it makes sense that if you do the echoing, you you do your" }, { "end": 612.72, "start": 607.52, "text": " shuffle buffer after you're echoing. So here, then after augmentation and after" }, { "end": 620.5600000000001, "start": 612.72, "text": " batching. So they experiment with these three locations in in echoing. Now what" }, { "end": 625.44, "start": 620.5600000000001, "text": " could be the downturn of something like this, the downturn, of course, is that" }, { "end": 633.72, "start": 625.44, "text": " this SGD procedure right here, basically, it relies on the data incoming being an" }, { "end": 640.1600000000001, "start": 633.72, "text": " IID sample from your data distribution, right? That's that's how we formulate SGD" }, { "end": 644.72, "start": 640.1600000000001, "text": " is that there's always new data incoming. Now, if you just output the same data" }, { "end": 650.8800000000001, "start": 644.72, "text": " point all the time, that could that is like no new information, first of all," }, { "end": 657.96, "start": 650.88, "text": " and second of all, it could bias the SGD update, such that you it because it sees" }, { "end": 661.56, "start": 657.96, "text": " the same data, it doesn't it sees the same information over and over, is going" }, { "end": 666.04, "start": 661.56, "text": " to think that's the whole data set, right? So potentially, it can make too many" }, { "end": 671.88, "start": 666.04, "text": " steps into the wrong direction. That just happens to be the bias in this particular" }, { "end": 681.4, "start": 671.88, "text": " data point. So the IID assumption is is is invalid. Now, why do you experiment" }, { "end": 687.48, "start": 681.4, "text": " with this in different locations? Because what you expect is that it hurts more or" }, { "end": 692.48, "start": 687.48, "text": " it hurts less the earlier you introduce this. So if you introduce echoing right" }, { "end": 698.36, "start": 692.48, "text": " here, so if you echo your data until new data from the network comes in, it's" }, { "end": 702.76, "start": 698.36, "text": " still going to be shuffled differently, right? It's and it's still going to be" }, { "end": 707.48, "start": 702.76, "text": " augmented differently. So each time the data point comes out of the echo buffer," }, { "end": 712.5600000000001, "start": 707.48, "text": " it is going to be shuffled. And it is going to be augmented in a different way" }, { "end": 716.96, "start": 712.5600000000001, "text": " than the last time the same data point came out. And this is going to be batched" }, { "end": 720.6, "start": 716.96, "text": " together because you've shuffled differently, it's going to be batched" }, { "end": 725.6800000000001, "start": 720.6, "text": " together with a different bunch of data points. And that means SGD gets new" }, { "end": 730.88, "start": 725.68, "text": " information. But if you go on to the very last thing, where you just after the" }, { "end": 736.2399999999999, "start": 730.88, "text": " batch right here, where you input the echo, that means SGD just gets to see" }, { "end": 745.52, "start": 736.2399999999999, "text": " the same batch of data augmented in the same way all the time, right? So the of" }, { "end": 750.68, "start": 745.52, "text": " course, where you exactly have to echo, you have to trade this off. 
So you have" }, { "end": 757.1999999999999, "start": 750.68, "text": " to trade off the how much you basically violate the IID fresh data assumption" }, { "end": 762.4799999999999, "start": 757.1999999999999, "text": " against where in your data pipeline is the bottleneck. So if your bottleneck is" }, { "end": 768.4799999999999, "start": 762.4799999999999, "text": " in the data augmentation, it may make little sense to echo before that because" }, { "end": 772.88, "start": 768.4799999999999, "text": " your bottleneck is the data augmentation. And that being said, if the bottleneck is" }, { "end": 777.16, "start": 772.88, "text": " that you don't have enough GPUs, then it probably doesn't make sense to data" }, { "end": 785, "start": 777.16, "text": " echo at all, though their experiments are somehow wonky on this. But so let's dive" }, { "end": 791.1999999999999, "start": 785, "text": " in, they make the following claims. Let's just go through them really quick. Data" }, { "end": 796.52, "start": 791.1999999999999, "text": " echo reduces the amount of upstream that think of network reading or" }, { "end": 802.12, "start": 796.52, "text": " augmentation computation needed to reach a competitive out of sample error rate" }, { "end": 806.28, "start": 802.12, "text": " on various data sets and model architectures. Second, data echoing can" }, { "end": 811.24, "start": 806.28, "text": " provide a wall time speed up in practice. Third, data echoing can support a wide" }, { "end": 815.72, "start": 811.24, "text": " range of echoing factors. And that's the echoing factor is how often you repeat" }, { "end": 821.24, "start": 815.72, "text": " the data. Fourth, the effectiveness of data echoing depends on the intersection" }, { "end": 826.36, "start": 821.24, "text": " point in the training pipeline, sorry, in the insertion point. That's what" }, { "end": 832.12, "start": 826.36, "text": " our hypothesis was, right? Fifth, data echoing can benefit from additional" }, { "end": 837.92, "start": 832.12, "text": " shuffling after echoing, but does not require it. And six, countering" }, { "end": 843.2, "start": 837.92, "text": " expectations, data echoing reaches the same final error rate as well tuned" }, { "end": 851.2, "start": 843.2, "text": " baselines. So I am can absolutely accept one through five, especially in like an" }, { "end": 863.5600000000001, "start": 851.2, "text": " actual practical in the wild setting. But six, we'll see about six. So let's jump" }, { "end": 870.9200000000001, "start": 863.5600000000001, "text": " into their models. They, sorry about that, they train the following four models. So" }, { "end": 876.8000000000001, "start": 870.9200000000001, "text": " they train a transformer on these two data sets LM1B and common crawl. So I" }, { "end": 884.4, "start": 876.8, "text": " guess technically it's five models on language modeling. They train the ResNet" }, { "end": 891.5999999999999, "start": 884.4, "text": " 32 on CIFAR 10. They train the ResNet 50 on ImageNet and they train SD on Coco." }, { "end": 899.3599999999999, "start": 891.5999999999999, "text": " Now here is the accuracies they get and here is, sorry, this is the target. So" }, { "end": 903.9599999999999, "start": 899.3599999999999, "text": " what they do is they train these models and then they say, okay, what's the" }, { "end": 910.48, "start": 903.96, "text": " accuracy we reach? And then they set a target value. 
So on ResNet 50 on ImageNet," }, { "end": 918.6, "start": 910.48, "text": " a very common number to reach is something like 76.5. If you look at, for" }, { "end": 924.1600000000001, "start": 918.6, "text": " example, torch vision models, they reach something like this. And so they say," }, { "end": 930.2, "start": 924.1600000000001, "text": " well, our target accuracy here is just a little bit below that. So and then we" }, { "end": 935.84, "start": 930.2, "text": " just measure how many steps or how many their measurement here is fresh data" }, { "end": 940.8000000000001, "start": 935.84, "text": " points. So how many actual fresh training samples do we need to reach this target?" }, { "end": 949.08, "start": 940.8000000000001, "text": " And this is where it gets wonky because, for example, take the 91% here on CIFAR 10." }, { "end": 958.1600000000001, "start": 949.08, "text": " That is quite, quite low. And also the ResNet 50 is, I mean, this is standard," }, { "end": 965.3199999999999, "start": 958.16, "text": " but still ImageNet is much further nowadays. And I think the effectiveness of" }, { "end": 970.8, "start": 965.3199999999999, "text": " something like this has a lot to do with how competitive you want to get. Maybe" }, { "end": 976.76, "start": 970.8, "text": " this is all just an effect of how much under par your, this target performance" }, { "end": 984.6, "start": 976.76, "text": " really is. And I would expect that even though they say it doesn't hurt their" }, { "end": 989.9200000000001, "start": 984.6, "text": " performance in their experiments, I would at least expect it will hurt your" }, { "end": 997.5600000000001, "start": 989.9200000000001, "text": " performance in general if you try to get competitive. Because these things aren't," }, { "end": 1003.84, "start": 997.5600000000001, "text": " as of now, at least the ones I know, like the ResNets, aren't really" }, { "end": 1012.6, "start": 1003.84, "text": " competitive. But so what do they do? They measure data echoing with an echoing" }, { "end": 1018.84, "start": 1012.6, "text": " factor of 2. So that means data that's incoming is output twice in a row. And" }, { "end": 1023.76, "start": 1018.84, "text": " every data point that's coming in is just emitted twice from the buffer. And" }, { "end": 1029.48, "start": 1023.76, "text": " then the next data point is emitted twice and so on. And what they measure," }, { "end": 1035.32, "start": 1029.48, "text": " again, is the fresh examples read. So how many fresh data points do you need to" }, { "end": 1038.52, "start": 1035.32, "text": " achieve something? This is a good measurement because this is kind of" }, { "end": 1046.16, "start": 1038.52, "text": " independent of hardware. So if you're really in the situation where your GPU is" }, { "end": 1053.56, "start": 1046.16, "text": " twice as fast as the rest of your pipeline, then an echoing factor of 2 will" }, { "end": 1061.92, "start": 1053.56, "text": " speed up at most your training procedure by a factor of 2. All right, so you have" }, { "end": 1068.04, "start": 1061.92, "text": " the baseline in red. And then you have batch echoing, which is where you echo" }, { "end": 1072.6, "start": 1068.04, "text": " what we said at the worst possible time right after batching. So this might hurt" }, { "end": 1079.8, "start": 1072.6, "text": " your performance the most, but also it has the potential to be the fastest if" }, { "end": 1085.1599999999999, "start": 1079.8, "text": " maybe your augmentation is very expensive. 
Then, sorry, or your batching." }, { "end": 1090.6399999999999, "start": 1085.1599999999999, "text": " You have example echoing after augmentation. So that would mean the" }, { "end": 1095.56, "start": 1090.6399999999999, "text": " augmentation is very expensive. So you save the augmented data point. And then" }, { "end": 1103.6399999999999, "start": 1095.56, "text": " you emit it multiple times, but each time it is batched differently. So it is" }, { "end": 1107.32, "start": 1103.6399999999999, "text": " shuffled and then batched with different other data points. So you have a shuffle" }, { "end": 1111.52, "start": 1107.32, "text": " buffer after it. And then you have example echoing before data augmentation." }, { "end": 1115.24, "start": 1111.52, "text": " So that means the same data point emitted multiple times will be augmented in" }, { "end": 1120.1599999999999, "start": 1115.24, "text": " different ways and basically will lead to slightly different data points. So the" }, { "end": 1125.1599999999999, "start": 1120.1599999999999, "text": " results here are pretty much what you could expect in that the earlier you do" }, { "end": 1132.4, "start": 1125.16, "text": " the echoing, as you can see here, the more this echoing helps. So the number, if" }, { "end": 1138.28, "start": 1132.4, "text": " you, for example, this is the object segmentation task, the baseline needs" }, { "end": 1144.0800000000002, "start": 1138.28, "text": " this many fresh examples to reach this target accuracy. With batch echoing, not" }, { "end": 1151.44, "start": 1144.0800000000002, "text": " only do you, sorry, with batch echoing, you need less fresh training examples. So" }, { "end": 1159.56, "start": 1151.44, "text": " that means even though you kind of train on the same data twice, this" }, { "end": 1164.8400000000001, "start": 1159.56, "text": " helps you more, or this helps you. It doesn't help you fully because the dashed" }, { "end": 1172.6000000000001, "start": 1164.8400000000001, "text": " line here is the, if it would help you as much as a fresh data point, you'd be at" }, { "end": 1176.2, "start": 1172.6000000000001, "text": " the dashed line, right? This is exactly half of this because the echoing factor" }, { "end": 1183.4, "start": 1176.2, "text": " is two. So if a repeated data point was as useful as a fresh data point," }, { "end": 1187.24, "start": 1183.4, "text": " you'd be at the dashed line. As you can see right here, you're not at the dashed" }, { "end": 1192.1200000000001, "start": 1187.24, "text": " line, but at least it doesn't hurt. You might expect that it hurts, but it" }, { "end": 1196.1200000000001, "start": 1192.1200000000001, "text": " doesn't hurt. It actually speeds up. So the repeated data points at least have" }, { "end": 1203.48, "start": 1196.1200000000001, "text": " some utility. Again, this is only useful if you have this asymmetry" }, { "end": 1207.28, "start": 1203.48, "text": " in your pipeline. If your pipeline is actually symmetric and you do an echoing" }, { "end": 1212.04, "start": 1207.28, "text": " factor of two, the wall time here, the wall time plot would look this for the" }, { "end": 1218.72, "start": 1212.04, "text": " baseline and then almost twice as high for the batch echoing. Because even" }, { "end": 1223.28, "start": 1218.72, "text": " though it needs the same amount of fresh, or almost the same amount of fresh" }, { "end": 1231.68, "start": 1223.28, "text": " example, you echo each one twice. 
So it needs to process it twice so it'll" }, { "end": 1236.88, "start": 1231.68, "text": " take much longer. So again, this is useful if you have this asymmetry and if" }, { "end": 1242.6000000000001, "start": 1236.88, "text": " the echoing factor is kind of smaller than your asymmetry. Otherwise you're" }, { "end": 1249.64, "start": 1242.6000000000001, "text": " simply wasting time repeating data points. Then if you do example echoing" }, { "end": 1254.72, "start": 1249.64, "text": " here after augmentation, you use even less fresh data points. And if you do it" }, { "end": 1260.44, "start": 1254.72, "text": " before augmentation, this is really surprising. You almost get the benefit of" }, { "end": 1266.1200000000001, "start": 1260.44, "text": " fresh data points, which is something you might expect, right? Because an" }, { "end": 1272.3600000000001, "start": 1266.1200000000001, "text": " augmented newly shuffled data point is kind of almost a new data point. But" }, { "end": 1278.96, "start": 1272.3600000000001, "text": " still, it's quite surprising that you almost get to the level of the of the" }, { "end": 1285.8400000000001, "start": 1278.96, "text": " theoretical possible. And also here on the image net task. Now here is actually" }, { "end": 1291.24, "start": 1285.84, "text": " an example where you can see that it hurts to do this batch echoing. Because" }, { "end": 1295.9199999999998, "start": 1291.24, "text": " the reasons why it could hurt is just that you have you violate this IID" }, { "end": 1300.84, "start": 1295.9199999999998, "text": " assumption, you basically have correlated data points. This is a big, big problem," }, { "end": 1307.12, "start": 1300.84, "text": " for example, in reinforcement learning, where already by nature of you running" }, { "end": 1311.76, "start": 1307.12, "text": " episodes and then feeding the episodes back into the training procedure, you" }, { "end": 1316.84, "start": 1311.76, "text": " have correlated data points. And that hurts your performance here actually" }, { "end": 1323.2, "start": 1316.84, "text": " compared to the to the baseline. But then if you go to example echoing, and the" }, { "end": 1329.76, "start": 1323.2, "text": " example echoing before augmentation, again, you get a speed up, which is pretty" }, { "end": 1336.4, "start": 1329.8, "text": " cool. Okay, so they do a bunch of other experiments. And I appreciate these" }, { "end": 1340.72, "start": 1336.4, "text": " experiments here to really show what's going on. And until when can you push" }, { "end": 1346.48, "start": 1340.72, "text": " this? So here they have a plot of example echoing before augmentation can reduce" }, { "end": 1352.88, "start": 1346.52, "text": " training time for ResNet 50 on image net. So this is before augmentation. And the" }, { "end": 1358.1200000000001, "start": 1352.92, "text": " echoing factor describes how often you repeat each data point. So this goes from" }, { "end": 1365.3600000000001, "start": 1358.1200000000001, "text": " two to five. And you can see that basically you you get the speed up, you" }, { "end": 1372.6799999999998, "start": 1365.36, "text": " just sort of get it for free. As you can see, the dashed line again is as if if at" }, { "end": 1378.28, "start": 1372.7199999999998, "text": " repeated data point were as useful as a fresh data point, you'd be at the dashed" }, { "end": 1385.52, "start": 1378.28, "text": " line. And you can see right here that you are just above this dashed line. 
So this" }, { "end": 1391.8, "start": 1385.8799999999999, "text": " can help a lot. And so this is the fresh examples read and this is the wall time" }, { "end": 1397.32, "start": 1391.8, "text": " in their particular situation. In this case, it doesn't help as much. But again," }, { "end": 1404.68, "start": 1397.3999999999999, "text": " it if that very much depends on how the asymmetry in your pipeline is. Now, in" }, { "end": 1410.28, "start": 1404.68, "text": " these experiments, I would actually appreciate something like they do down" }, { "end": 1417.24, "start": 1410.32, "text": " here, where I would always like to see where it breaks. So how far can you go" }, { "end": 1423.04, "start": 1417.24, "text": " with the echoing factor until it doesn't help anymore? Because this sort of tells" }, { "end": 1427.2, "start": 1423.04, "text": " me pretty much nothing. I want to see where is the low point? Where's kind of" }, { "end": 1433.24, "start": 1427.2, "text": " the optimal echoing factor? And what can you tell me about this optimal echoing" }, { "end": 1438.4, "start": 1433.24, "text": " factor? How can we determine it sort of beforehand? Or how can you reason how" }, { "end": 1442.4, "start": 1438.4, "text": " does it connect to the different parts of your architecture? So if I had to point" }, { "end": 1448, "start": 1442.4, "text": " out a flaw in this paper, it would be that right here, I would expect the them" }, { "end": 1455.4, "start": 1448, "text": " to continue this echoing factor increase until it breaks, sort of like they do" }, { "end": 1464.44, "start": 1455.44, "text": " down here. This is for I believe this is for the transformer on LM 1B. Now here" }, { "end": 1472.88, "start": 1464.44, "text": " they have a batch size of 1024. And you can see, and this is the this is their" }, { "end": 1477.16, "start": 1472.88, "text": " standard setting for the transformer, the 1024 batch size, you can see that the" }, { "end": 1485.52, "start": 1477.16, "text": " baseline uses this many 1.5 times 10 to the seventh fresh examples to train" }, { "end": 1491.64, "start": 1485.52, "text": " until their target. If you increase the echoing factor by two, you basically need" }, { "end": 1498.8400000000001, "start": 1491.64, "text": " half as many fresh examples, as long as you echo each one twice. Again, very" }, { "end": 1507.4, "start": 1498.8400000000001, "text": " surprising the fact how close you can get to the as if each batch were a a" }, { "end": 1514.2, "start": 1507.4, "text": " perfect fresh data point. But you can see as you increase this echoing factor, and" }, { "end": 1519.5600000000002, "start": 1514.2, "text": " here is exactly what I said, right, you at some point, this hurts at some point," }, { "end": 1525.32, "start": 1519.56, "text": " you get to the point where the non IID ness, the correlation of date of" }, { "end": 1530.76, "start": 1525.32, "text": " successive data points will actually hurt you. And they make a point here of" }, { "end": 1537.9199999999998, "start": 1530.76, "text": " saying that this is, for example, dependent on batch size. Now in this" }, { "end": 1544.44, "start": 1537.9199999999998, "text": " experiment over here, they have a larger batch size. And here is again the the" }, { "end": 1551, "start": 1544.44, "text": " baseline number of data points to reach the target. And you can see again it goes" }, { "end": 1558.52, "start": 1551, "text": " down. 
But now, with the echoing factor, where before you had an" }, { "end": 1562.68, "start": 1558.52, "text": " increase again, now it continues to decrease. Again, it will be interesting to" }, { "end": 1568.74, "start": 1562.68, "text": " see where it goes up here and how the number at the lowest, like here the four" }, { "end": 1573.64, "start": 1568.74, "text": " and here, I don't know what it's gonna be, maybe the 16, how this will" }, { "end": 1579.0800000000002, "start": 1573.64, "text": " kind of depend on your batch size. And here is another problem. And that's what" }, { "end": 1584.2, "start": 1579.0800000000002, "text": " I alluded to at the beginning, this performance dependence. Now, I have not" }, { "end": 1589.96, "start": 1584.2, "text": " read anything differently in the paper. So I have to assume that they trained" }, { "end": 1595.38, "start": 1589.96, "text": " this here, and this number of fresh examples to reach the target is still the target" }, { "end": 1601.64, "start": 1595.38, "text": " that they determined at the beginning. So it's that 3.9 in the table, and that 3.9" }, { "end": 1607.3600000000001, "start": 1601.64, "text": " was achieved with this batch size, with 1024. And we know, especially in language" }, { "end": 1614.2800000000002, "start": 1607.3600000000001, "text": " models, that larger batch sizes will lead to a better performance, even if you need," }, { "end": 1619.4, "start": 1614.2800000000002, "text": " let's say, more samples. So here you can see that the samples here is 1.5 and" }, { "end": 1625.48, "start": 1619.4, "text": " here it's actually four, because you increase that batch size. So that will" }, { "end": 1633, "start": 1625.48, "text": " tell you something: 1.5 and four, that is like, okay, that's like" }, { "end": 1641.4, "start": 1633, "text": " a times 2.5. So you go with a batch size of times four, and you need 2.5" }, { "end": 1647.2, "start": 1641.4, "text": " times more fresh training samples to reach the same target accuracy. First of all, we" }, { "end": 1653, "start": 1647.2, "text": " know that the larger batch sizes can reach higher target accuracies. So again," }, { "end": 1658.84, "start": 1653, "text": " these results, the dependence of them on the chosen target relative to the" }, { "end": 1665.6, "start": 1658.84, "text": " maximum achievable value, to me that's kind of shady, to always" }, { "end": 1672.36, "start": 1665.6, "text": " say, okay, how long does it take to reach that particular target? Because we" }, { "end": 1677.8, "start": 1672.36, "text": " know that this model right here can reach a much higher target, but we don't" }, { "end": 1682.72, "start": 1677.8, "text": " know this about these models here. What is their kind of performance in the limit?" }, { "end": 1688.68, "start": 1682.72, "text": " And they try to make these experiments, but I don't really believe them. Maybe," }, { "end": 1695.68, "start": 1688.68, "text": " yeah. And second, right, that is" }, { "end": 1702.82, "start": 1695.68, "text": " already interesting. So this ratio right here, this 2.5 to 4, this ratio must mean" }, { "end": 1707.72, "start": 1702.82, "text": " something, right? It's: I go to a higher batch size, four times higher" }, { "end": 1714.64, "start": 1707.72, "text": " batch size, and I need 2.5 times more fresh training samples to reach the same" }, { "end": 1719.48, "start": 1714.64, "text": " target. 
That must somehow tell you something about the usefulness of a" }, { "end": 1724.96, "start": 1719.48, "text": " single data point versus a succession of data points, right? It doesn't seem so," }, { "end": 1728.64, "start": 1724.96, "text": " because if each data point were fully valuable, I would expect this" }, { "end": 1738.48, "start": 1728.64, "text": " to be times four. And if it were times one, so if there were no speed" }, { "end": 1744.8400000000001, "start": 1738.48, "text": " up at all (sorry, not times four), if it were times one, it would mean I'd" }, { "end": 1748.88, "start": 1744.8400000000001, "text": " need the same number of fresh training samples, right? No matter how I batch" }, { "end": 1754.44, "start": 1748.88, "text": " them. But if it were times four, that would basically mean it doesn't really matter" }, { "end": 1759.48, "start": 1754.44, "text": " how many training points I have in a batch as long as I have enough, and the" }, { "end": 1765.8, "start": 1759.48, "text": " 1024 seems to be enough. It just matters how many SGD steps I do." }, { "end": 1769.92, "start": 1765.8, "text": " So basically, what we're saying is: SGD isn't getting the most out of these data" }, { "end": 1774.52, "start": 1769.92, "text": " points. And this ratio, this 2.5, tells you something about the" }, { "end": 1779.0800000000002, "start": 1774.52, "text": " information content of an additional data point versus the" }, { "end": 1784.76, "start": 1779.08, "text": " usefulness of an additional step of SGD. And I would expect that to" }, { "end": 1790.76, "start": 1784.76, "text": " be intrinsically connected to where the low point of this echoing" }, { "end": 1794.08, "start": 1790.76, "text": " factor is, because that's exactly what the echoing does. It trades off" }, { "end": 1800.4399999999998, "start": 1794.08, "text": " freshness of data points versus doing more steps on the same" }, { "end": 1809.44, "start": 1800.44, "text": " information. And for a paper, especially a paper by Google Brain, this is a" }, { "end": 1815.72, "start": 1809.44, "text": " connection that I would love to see investigated. But enough of the ranting;" }, { "end": 1820.16, "start": 1815.72, "text": " they do investigate other things. They investigate, for example, what happens if" }, { "end": 1825.2, "start": 1820.16, "text": " we just up the batch size. And you can see here, yeah, this is interesting:" }, { "end": 1832.48, "start": 1825.2, "text": " the baseline needs more fresh samples as you up the batch size. And at the" }, { "end": 1836.04, "start": 1832.48, "text": " beginning, this batch echoing, for example, doesn't hurt, but" }, { "end": 1840.88, "start": 1836.04, "text": " doesn't help. But as you go to higher and higher batch sizes, this batch echoing" }, { "end": 1848.1200000000001, "start": 1840.88, "text": " starts to help more and more. Again, I believe this is connected to the" }, { "end": 1852.1200000000001, "start": 1848.1200000000001, "text": " usefulness of a single data point: at some point, your batch size is just too" }, { "end": 1858.32, "start": 1852.12, "text": " large for the problem, you'd rather do more steps. And that's why this helps. But" }, { "end": 1865.3999999999999, "start": 1858.32, "text": " also, this model right here might have a higher ceiling accuracy. 
So indeed" }, { "end": 1869.9599999999998, "start": 1865.3999999999999, "text": " the question is whether this model right here has the same, or whether this model" }, { "end": 1874.2399999999998, "start": 1869.9599999999998, "text": " right here, the batch echoing model, would actually fall back to the ceiling" }, { "end": 1882.88, "start": 1874.24, "text": " accuracy of one of these models over here. Yeah, in any case, their point is" }, { "end": 1888.64, "start": 1882.88, "text": " basically that as you increase the batch size, this echoing tends to help more" }, { "end": 1895.96, "start": 1888.64, "text": " relatively. Maybe it's because of what I said, right. They say: as batch" }, { "end": 1899.36, "start": 1895.96, "text": " size increases, the performance of batch echoing relative to the baseline" }, { "end": 1905.8, "start": 1899.36, "text": " either stays the same or improves, while for example echoing it either stays the" }, { "end": 1910.6799999999998, "start": 1905.8, "text": " same or gets" }, { "end": 1916.52, "start": 1910.6799999999998, "text": " worse. Dashed lines indicate the expected values if repeated examples were as" }, { "end": 1920.52, "start": 1916.52, "text": " useful as fresh examples. Yeah, so I believe there is an intrinsic connection" }, { "end": 1927.9599999999998, "start": 1920.52, "text": " here between the usefulness of more data and the usefulness of doing additional steps." }, { "end": 1931.8, "start": 1927.96, "text": " And here, the example echoing, you can almost see it as more data, because" }, { "end": 1936.3600000000001, "start": 1931.8, "text": " especially here you're going to do augmentation on top of it, and you see" }, { "end": 1942.3600000000001, "start": 1936.3600000000001, "text": " the non-augmented versus the augmented ratio changes dramatically from here to" }, { "end": 1950.1200000000001, "start": 1942.3600000000001, "text": " here. Okay, final set of experiments. As you can tell, this is mostly an" }, { "end": 1956, "start": 1950.1200000000001, "text": " experimental paper. And it is always easy to criticize experimental papers, and" }, { "end": 1964.56, "start": 1956, "text": " rightfully so, because I would not trust this very much. But given that it comes" }, { "end": 1971.72, "start": 1964.56, "text": " from a big institution, and it is a very well written paper, I would trust it" }, { "end": 1976.16, "start": 1971.72, "text": " more than I would a regular paper. And I would say, if you're in practice, this is" }, { "end": 1983.12, "start": 1976.16, "text": " certainly worth trying. Absolutely. I just think that some of the" }, { "end": 1988.52, "start": 1983.12, "text": " things aren't researched, like some of my questions aren't answered by" }, { "end": 1995.6399999999999, "start": 1988.52, "text": " this. So they investigate buffer sizes. So they now build shuffle buffers. So we have" }, { "end": 2001.32, "start": 1995.6399999999999, "text": " batch echoing, but they say: ah, but we can do batch echoing with shuffle buffers." }, { "end": 2004.84, "start": 2001.32, "text": " So after the batch echoing, right, we have this state where we have the" }, { "end": 2011.28, "start": 2004.84, "text": " batching. And then we have the echoing. So this is our echo buffer, where we" }, { "end": 2016.44, "start": 2011.28, "text": " output each data point multiple times. 
And then we have another buffer," }, { "end": 2020.3999999999999, "start": 2016.44, "text": " which is a shuffle buffer. A shuffle buffer just collects data points and" }, { "end": 2024.6399999999999, "start": 2020.3999999999999, "text": " then shuffles them around before outputting them. And that means even" }, { "end": 2031.44, "start": 2024.6399999999999, "text": " though we, you know, output this five times, it might not come out five times" }, { "end": 2036.2, "start": 2031.44, "text": " after each other, it might be that it comes out once, and then another data" }, { "end": 2039.76, "start": 2036.2, "text": " point that was already in the shuffle buffer comes out. And then it will just" }, { "end": 2044.32, "start": 2039.76, "text": " be that in total, it comes out five times. But it is first shuffled together" }, { "end": 2050.92, "start": 2044.32, "text": " with a bunch of other data points. Of course, this uses more memory, but it" }, { "end": 2056.12, "start": 2050.92, "text": " returns to that more IID setting. And you can see here, as the buffer size" }, { "end": 2061.52, "start": 2056.12, "text": " increases, the performance gets closer and closer to the performance that you" }, { "end": 2066.68, "start": 2061.52, "text": " would have with completely fresh data. Right. So again, trading off freshness" }, { "end": 2075.48, "start": 2066.68, "text": " versus doing multiple steps, by basically repeating" }, { "end": 2082.12, "start": 2075.96, "text": " data points straight out versus repeating data points shuffled." }, { "end": 2088.7599999999998, "start": 2083.3999999999996, "text": " And also here you have the same with example echoing. So if you apply the" }, { "end": 2094.3599999999997, "start": 2088.7599999999998, "text": " shuffle buffer to example echoing and you increase its size, you can get very," }, { "end": 2101.1600000000003, "start": 2094.36, "text": " very, very close to the performance that you would get with fresh data. And of" }, { "end": 2105.8, "start": 2101.1600000000003, "text": " course, if you increase the shuffle buffer to the size of the data set, you" }, { "end": 2109.96, "start": 2105.8, "text": " are, in the limit, at the situation of fresh data," }, { "end": 2115.48, "start": 2110.2000000000003, "text": " right? If you do example echoing. Right. So here is where it gets into the funky" }, { "end": 2121.56, "start": 2115.48, "text": " part, where they say: we actually measure the validation cross entropy and the" }, { "end": 2127.4, "start": 2121.56, "text": " validation accuracy versus the number of fresh examples read. And here I want to" }, { "end": 2133.48, "start": 2127.56, "text": " concentrate on the ResNet 50 on ImageNet. And as you can see, most of these" }, { "end": 2139.7999999999997, "start": 2133.56, "text": " models, they pretty much end up in the same place here. It's just that the" }, { "end": 2148.68, "start": 2139.88, "text": " echoing models end up there faster. Right. And this is, I mean, this is" }, { "end": 2155.48, "start": 2148.68, "text": " where it gets a bit confusing, honestly, because why do you have this super sharp" }, { "end": 2162.2, "start": 2155.48, "text": " thing here? Usually, and here it sort of speeds up in the middle, you" }, { "end": 2167.48, "start": 2162.2, "text": " see that, and then it kind of sharply declines. 
Is this maybe because they" }, { "end": 2173.8799999999997, "start": 2167.48, "text": " drop the learning rate or something? Now, my main thing is that the performance" }, { "end": 2181.4, "start": 2173.88, "text": " here: even though this" }, { "end": 2185.6400000000003, "start": 2181.4, "text": " target is the same for everyone, it is lower than the best reachable" }, { "end": 2193.48, "start": 2185.6400000000003, "text": " accuracy. And this is just confusing, if this is really true." }, { "end": 2202.2000000000003, "start": 2194.2000000000003, "text": " Whoa. If this is really true, I think we have a lot to learn about SGD yet, and" }, { "end": 2207.24, "start": 2202.2, "text": " how we're not actually doing SGD correctly. Because it seems like" }, { "end": 2214.68, "start": 2207.24, "text": " the echoed versions are almost better, or reach a better accuracy than the" }, { "end": 2219.96, "start": 2214.68, "text": " baseline. I don't know, do they just cap it at the performance? I don't think so." }, { "end": 2225.64, "start": 2219.96, "text": " I think they say they let it reach. They also have these things right here." }, { "end": 2229.8799999999997, "start": 2225.64, "text": " These curves where they say: this is the best we reach. And this is the" }, { "end": 2239, "start": 2229.88, "text": " ResNet 32 on CIFAR-10. Again, 91% on CIFAR-10 is just very, very low. And I'm almost" }, { "end": 2244.6800000000003, "start": 2239, "text": " thinking that, okay, this might help if you just throw something at it that we know is" }, { "end": 2249.96, "start": 2244.6800000000003, "text": " kind of overpowered, because we can reach 99%. Or at least you can reach" }, { "end": 2256.04, "start": 2249.96, "text": " something like 94% on CIFAR-10 easily, with a network smaller than ResNet 32." }, { "end": 2260.7599999999998, "start": 2256.04, "text": " Maybe this effect manifests if you actually have something that could" }, { "end": 2268.2799999999997, "start": 2260.7599999999998, "text": " reach higher, but for some reason you only reach this low. I'm not sure. But" }, { "end": 2275.56, "start": 2268.2799999999997, "text": " this is confusing. And if this is really true, yeah, if it's true," }, { "end": 2280.36, "start": 2275.56, "text": " and I do believe this paper, it might just be an effect of not reaching" }, { "end": 2286.6, "start": 2280.36, "text": " the actual ceiling. And again, look at this. The curves are just" }, { "end": 2293.48, "start": 2286.6, "text": " strange, right? You have the echoing before augmentation, like it seems like" }, { "end": 2301.48, "start": 2293.48, "text": " it's outperforming the fresh data points. I don't know, there's a little" }, { "end": 2306.44, "start": 2301.48, "text": " bell in my head that doesn't like this. If it's actually true, then, you know," }, { "end": 2310.92, "start": 2306.44, "text": " that's cool. But yeah, so my main criticisms are a bit with the" }, { "end": 2315.96, "start": 2310.92, "text": " experimental methodology, for example, where they increase the batch size, but" }, { "end": 2320.36, "start": 2315.96, "text": " still reach the same target accuracy, even though we know that there is a" }, { "end": 2324.68, "start": 2320.36, "text": " higher ceiling if you increase the batch size for language models. My other" }, { "end": 2331.8, "start": 2324.68, "text": " criticism is the non-investigation of this connection. 
This connection right" }, { "end": 2336.84, "start": 2331.8, "text": " here, maybe. But all in all, it's a pretty cool paper. If I had a big company" }, { "end": 2341.32, "start": 2336.84, "text": " with these pipeline issues, I would absolutely implement this. This seems" }, { "end": 2348.52, "start": 2341.32, "text": " like a no-brainer and can help you tremendously. Alright, that was" }, { "end": 2352.52, "start": 2348.52, "text": " it. Thank you for listening. If you're still here, subscribe, like, tell a" }, { "end": 2361.8, "start": 2352.52, "text": " friend. Bye bye." } ]
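For reference, the echoing-plus-shuffle-buffer mechanics discussed in the transcript above can be sketched in a few lines of plain Python. This is a toy illustration, not the paper's tf.data implementation; the function names and defaults are mine.

import random

def echo(examples, echo_factor=2):
    # "example echoing": emit each example echo_factor times before moving on
    for ex in examples:
        for _ in range(echo_factor):
            yield ex

def shuffle_buffer(examples, buffer_size=1024):
    # break up runs of identical examples so successive batches are less correlated
    buf = []
    for ex in examples:
        buf.append(ex)
        if len(buf) >= buffer_size:
            yield buf.pop(random.randrange(len(buf)))
    while buf:
        yield buf.pop(random.randrange(len(buf)))

# a pipeline in the spirit of the paper: read -> augment -> echo -> shuffle -> batch
stream = shuffle_buffer(echo(range(10), echo_factor=3), buffer_size=8)
print(list(stream))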
l_3zj6HeWUE
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Group Normalization (Paper Explained)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "batchnorm", "groupnorm", "layer norm", "group norm", "batch norm", "instance norm", "fair", "normalization", "mean", "standard deviation", "minibatch", "batch statistics", "kernel", "cnn", "convolutional neural network" ]
The dirty little secret of Batch Normalization is its intrinsic dependence on the training batch size. Group Normalization attempts to achieve the benefits of normalization without batch statistics and, most importantly, without sacrificing performance compared to Batch Normalization. https://arxiv.org/abs/1803.08494 Abstract: Batch Normalization (BN) is a milestone technique in the development of deep learning, enabling various networks to train. However, normalizing along the batch dimension introduces problems --- BN's error increases rapidly when the batch size becomes smaller, caused by inaccurate batch statistics estimation. This limits BN's usage for training larger models and transferring features to computer vision tasks including detection, segmentation, and video, which require small batches constrained by memory consumption. In this paper, we present Group Normalization (GN) as a simple alternative to BN. GN divides the channels into groups and computes within each group the mean and variance for normalization. GN's computation is independent of batch sizes, and its accuracy is stable in a wide range of batch sizes. On ResNet-50 trained in ImageNet, GN has 10.6% lower error than its BN counterpart when using a batch size of 2; when using typical batch sizes, GN is comparably good with BN and outperforms other normalization variants. Moreover, GN can be naturally transferred from pre-training to fine-tuning. GN can outperform its BN-based counterparts for object detection and segmentation in COCO, and for video classification in Kinetics, showing that GN can effectively replace the powerful BN in a variety of tasks. GN can be easily implemented by a few lines of code in modern libraries. Authors: Yuxin Wu, Kaiming He Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Hi there! Today we'll look at group normalization by Yuxin Wu and Kaiming He of Facebook AI Research. So this paper is basically an engineering paper about a new normalization technique called group normalization. So what's the issue here? The issue is that pretty much throughout neural network learning we're using this technique called batch normalization. Now batch normalization is a pretty reasonable thing and it works very, very well. So what's the idea behind batch normalization? The idea is: if you have data points for machine learning methods, and your data is in a 2D coordinate system somewhere down here, and you're trying to separate that from the dots which are here, it is often very beneficial to shift that distribution before you do anything. You want to shift it to the middle of the... Basically you want to center it, first of all, such that the origin point is in the middle of the data. And then sometimes you also want to do what's called normalize it. And by normalizing we mean you want to kind of rescale the axes such that things are more or less sort of like Gaussians. So if you look at this distribution, first is the centering, and then second is what is called a normalization. And usually we know that any sort of machine learning method works better if you do that. That's mostly in classic machine learning methods, with the conditioning number of the data being better, and so on. But if you just want to learn, let's say, a linear classifier, you can see here you can even save one parameter, because you can make it just go through the origin. And that's true in general. So if we draw this in 1D, you'd have a distribution that maybe is very peaky right here. You first center it to the middle of the coordinate system (and sorry, that's not really centered), and then you would divide it by its standard deviation, such that afterwards it is a unit standard deviation Gaussian, so a normal distribution. The closer your data seems to be to a multivariate normal distribution, the better these machine learning methods work, especially if you look at how signal in a deep network propagates through the layers. So the idea is: if it's good for the general machine learning method that the input has a multivariate normal distribution or is normalized, then it's probably good that the input to each layer is normalized. So when you look at how feature signals are in between layers (so this is, for example, the conv5_3, a layer somewhere in the middle of a convolutional neural network), and if you look at the spread of feature signals throughout training, you'll see that the more training progresses, the larger the kind of spread of features is. So you might get really large numbers or really large negative numbers or maybe really small numbers in your neural networks. And it would be better if you had a layer whose input you've normalized, right? And the output then is again a distribution, but maybe shifted, that you would first transform back into a unit normal distribution before you put it through the next layer. So what batch norm does is, before each layer, it will do a normalization procedure on the data before giving it to the next layer. And you can basically do backprop through that. It's also common to learn a bias and a scale parameter to apply after that. But the important thing is that after each layer, the data is normalized such that it is kind of in the most comfortable regime.
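To make the centering and scaling concrete, here is a minimal NumPy sketch of exactly this operation (the data here is made up for illustration):

import numpy as np

# 1000 points in 2D, deliberately off-center and unevenly scaled
x = np.random.randn(1000, 2) * np.array([5.0, 0.5]) + np.array([3.0, -2.0])

mean = x.mean(axis=0)               # per-feature mean (the centering target)
std = x.std(axis=0)                 # per-feature standard deviation
x_norm = (x - mean) / (std + 1e-5)  # centered, roughly unit-variance data

The small epsilon in the denominator is the usual guard against division by zero; normalization layers carry the same constant.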
What's the problem? The problem with this is that you actually need the distribution, right? If you want to center this data up here, you need to know what the data is. So you need to know the entire data. If I want to figure out what is the mean of this distribution, I need all of the data points to decide: here's the mean, I need to shift that up to here. If I just have a mini-batch, like we usually do in machine learning (so if I just have this and this and this and this point, I just have four points), I can't determine the mean. But what I can do is sort of guess the mean from the four points, right? So my guesstimation of the mean would be somewhere here. And that would usually be close enough. And you can also see that the larger your batch is, if you sample at random, the more accurate your mean estimation is going to be.
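This shrinking of the estimation error with batch size is easy to check numerically; a toy sketch with made-up numbers:

import numpy as np

rng = np.random.default_rng(0)
population = rng.normal(loc=2.0, scale=1.0, size=100_000)  # "true" mean is 2.0

for batch_size in [2, 8, 32, 256]:
    estimates = [rng.choice(population, size=batch_size).mean()
                 for _ in range(1000)]
    # the spread of the batch-mean estimate shrinks roughly like 1/sqrt(batch_size)
    print(batch_size, np.std(estimates))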
So people have been training neural networks with large batch sizes; basically, batch sizes have gotten larger and larger in the last years, so that has not been a problem. But what people are doing now is distributed machine learning, where you do have your data set and you draw a batch. And the batch might be large: the data set might be, I don't know, one million images, and there might still be 4000 images in your batch. But what they'll do, especially with things like TPUs, is they'll distribute that across many, many, many machines into batches of sometimes as small as eight samples per unit. And if this is not images, but maybe something longer, like a sequence of text, or a sequence of speech or something like this, you can sometimes even go down to two or one sample per unit of computation. And of course, then you can't do batch normalization. You can't calculate the mean of one sample; it's just going to be that one sample itself. So you have two options. If you're at small batch sizes, let's say two, either you take the hit and have your very bad estimate of the mean from just two samples or eight samples. Or, after each layer, you basically do a synchronization step such that everyone communicates their statistics to everyone else, and you can aggregate the statistics across the batch. Both are not cool. Usually these frameworks don't do this synchronization because it's just too slow, so they'll go with the bad statistics. And you can see this right here in this graph. They have the ImageNet classification error versus batch sizes. So this is a ResNet-50 model trained on the ImageNet dataset using eight workers, so eight GPUs. Now just look at the blue line here. If they do 32 images per worker, so it's eight workers, it's eight times 32; that's the batch size, which is a large number: 256. All right. So if they do that, then you can see the error is at state of the art for a ResNet-50. If they go to 16, it's still pretty good. But then as they go lower and lower and lower, so if they go to smaller and smaller batches and spread them out over the workers, the error starts going up. And this is because the batch norm statistics get worse and worse. So the goal of this paper is to find this group norm thing here. Group norm, this paper claims, is another normalization technique that pretty much does the same thing, this centering and the normalization, the scaling, but it does it without relying on the batch statistics. It basically does it within a data point. And that means that the performance, even though it's a bit lower at the beginning for this particular example, will stay constant even in small batch size regimes. So this is potentially applicable, as I said, to things where you have to go to like two or one sample per worker, because the single data points are just too large; so if you maybe want to train something like BERT on a GPU. So what is group normalization? Group normalization, as I said, works within a sample. Now, there have been other methods that work within a sample instead of across the batch, and they tend to not work as well as batch norm. This paper here claims that group norm works on par with batch norm for large batch sizes and better on small batch sizes. So here they have a schematic of what's happening. In batch norm, as you can see here, you have this cube. Now, this cube here: N means the batch size, so these are the data points in your mini-batch. This is the thing that is going to get small if you don't have enough memory. Then C would be your channels. So we are talking about convolutional neural networks here, but this is generalizable to other neural networks. The channels are going to be the independent feature maps that you're going to have. So in a convolutional neural network, usually each layer has these things called kernels, and these might be three by three matrices like this. And if you have an image, the kernel will be slided. Or slid? Is it slid? Okay, it will be slid across the image. And then the numbers in here will be convolved with the pixels, and that will give you the next layer's representation. So whatever the convolution operation is, you'll slide that over, and that sliding over will give you the values in the next layer. Now you not only have one kernel, but you actually have many kernels. Sorry about this. Let's draw that. So you have more and more kernels, a whole stack of kernels. And the different kernels are also called your different channels. Now, the kernels refer to the weights and the channels refer to the image, but the i-th kernel is going to be convolving the i-th channel of the image. So at the beginning, the input image has three channels, because red, green and blue. But then the intermediate images can have more channels; you basically have as many as you have kernels in the layer right before. Okay, and the H and the W mean the height and width of the image. So combined, the image is kind of unrolled across the height and the width in this direction. So what does batch norm do? Batch norm takes, as you can see here, one channel. So maybe this image, this is one channel. Let's just say this is the red channel, because I've drawn it in red. It takes that and it calculates the mean and the standard deviation of it. It calculates those two statistics and it uses them to do this centering and scaling operation. So all of these methods are going to calculate the mean and the variance and then do the same scaling transformation; the question is just how you calculate the mean. Batch norm does this across the data points. So it looks at a single feature, at a single channel, and it asks: what's the mean across all the data points? What are the data statistics of this channel, what was the mean and standard deviation?
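In code, batch norm's statistics for a convolutional feature map look roughly like this. This is an inference-style NumPy sketch, without the learned scale and shift and without the running averages a real implementation keeps:

import numpy as np

def batch_norm(x, eps=1e-5):
    # x: (N, C, H, W). One mean and variance per channel,
    # shared across the batch and the spatial dimensions.
    mean = x.mean(axis=(0, 2, 3), keepdims=True)  # shape (1, C, 1, 1)
    var = x.var(axis=(0, 2, 3), keepdims=True)
    return (x - mean) / np.sqrt(var + eps)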
Now, actually, I didn't even know that batch norm works like this in convolutional layers. You can also imagine batch norm as really just taking one single feature, and that means really just taking one of these things right here, going to the back, and normalizing across that. The important part is that it is in fact normalizing across the data points. So it looks at your batch, looks at the mean and the variance in that batch, and it normalizes by that. I think for convolutional layers this makes sense because you have this invariance in height and width. Yeah, so that makes sense. But in a fully connected layer, you'd simply look at one feature at a time. Layer norm is different. Layer norm has basically been proposed as an alternative to batch norm, with the same reasoning that this paper has. So layer norm, as you can see here, does each data point individually. So here we only have one data point that is normalized by itself. You do this for each data point independently, and therefore it's not dependent on the batch size anymore. But what you'll do is look across all of the channels right here, so all of the channels and all of the width and height. So this entire thing here is basically one channel, and then the next channel of the image is here, and the next. No, that's the next image. Well, that is a bad drawing, because the image is unrolled. In any case, if you have a filter bank like this, you have an image, and the image is composed of multiple channels: this is the red, then you'll have the green, and then you'll have the blue channel. And what you'll do is simply calculate the mean across all of the pixels and across all of the channels; you just take this whole NumPy array and you just say dot mean, and that gives you one number. And whatever that number is, you subtract it, and then you take the standard deviation and you divide by that. That's layer norm. So an entire layer's representation of one image is just normalized to its mean. Now, this seems a bit drastic, and that's why instance norm did the exact opposite. They said: wait a minute, instead of normalizing across all of the features, we'll go back and do what batch norm does. Batch norm looks at each feature individually. So basically, it looks at all of these different axes in the data distribution, and it looks at them differently. If one axis is scaled very widely, we want to normalize it differently than another axis that is scaled very narrowly. And that's why we'll look at each feature individually, like batch norm; but also, we only look at one data point at a time. Now, as you can imagine, this doesn't work anymore in a fully connected network. This basically works in a convolutional network, where you have a feature map channel. So you look at one individual channel and one data point. That means here you would normalize the red channel individually, you would normalize the green channel individually, and you normalize the blue channel individually. So the image you're going to end up with is simply the red channel with its own mean subtracted and then divided by its own standard deviation, and just within that data point, right. So maybe I should say here: across the number of features or something. So I hope that's clear.
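Layer norm and instance norm then just move the axes over which those statistics are taken; a sketch in the same style as above:

import numpy as np

def layer_norm(x, eps=1e-5):
    # x: (N, C, H, W). One mean/variance per data point,
    # computed across all channels and all pixels.
    mean = x.mean(axis=(1, 2, 3), keepdims=True)  # shape (N, 1, 1, 1)
    var = x.var(axis=(1, 2, 3), keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

def instance_norm(x, eps=1e-5):
    # x: (N, C, H, W). One mean/variance per data point AND per channel.
    mean = x.mean(axis=(2, 3), keepdims=True)  # shape (N, C, 1, 1)
    var = x.var(axis=(2, 3), keepdims=True)
    return (x - mean) / np.sqrt(var + eps)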
So layer norm drops the dependence on the batch size, but instead says we should normalize across all of the features. And instance norm says: wait a minute, batch norm had a good idea, normalizing only across the features individually, because the individual features might have different scales, and we should account for that; but also, we don't want to be dependent on the batch size. And now this is where group norm comes in. Group norm is basically a mix between layer norm and instance norm. Group norm says: layer norm and instance norm have good ideas; they only go across one sample. In essence, instance norm has a good idea in that the features should be normalized individually, but it goes sort of too far. You might not get good enough statistics, because you're now normalizing each of these things individually. Whereas with layer norm, you're too restricted. You're basically saying it's fine if the features relative to each other are like this, right: one is maybe very high variance and one is very low variance, and layer norm would keep that. And group norm would say: maybe that's not so good; we should normalize the individual features individually. But their argumentation here is that there may be some features that by their nature already have the same sort of scaling and variance. They give an example: if you, for example, have a filter (again, we deal with convolutional layers here), and that filter is, let's say, an edge filter, a horizontal edge filter. So it's very low value here, and let me mark the high value with blue. If you slide this over a window, and these are high numbers and these are low numbers, it will respond to edges, because edges have high, low, high, or vice versa. So it will give you very positive and very negative numbers every time you slide across an edge. Now you can imagine that on natural images, that filter, whatever image you put in and however you normalize, would give you pretty much the same response as a vertical edge filter. So for the horizontal and the vertical edge filter, you'll see that whatever their responses are, they're probably about equal in size. So we could expect that in a neural network, there will be groups of filters that together exhibit the same scale, and therefore we can normalize across them, like in layer norm. The more things we normalize across, the better statistics we can gather. That's why instance norm doesn't work: it only normalizes across a very small thing, gathering very little statistics. But if we could gather good statistics, we should normalize different features differently. And group norm says: well, since some of the features are almost guaranteed to behave the same, we could normalize across those. Now, of course, you don't know at the beginning which ones those are. But with group norm you decide a priori, at the beginning of training, what the groups are. And naturally, it's just whichever channels are next to each other, those are the groups. And you hope that through the training procedure, those groups will learn the features that are of equal scale. Well, you basically enforce that; you kind of constrain the architecture to do that. So that's the idea behind group norm.
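Written out the same way, group norm only adds one reshape to carve the channels into groups (a NumPy sketch; num_groups is the new hyperparameter discussed below):

import numpy as np

def group_norm(x, num_groups, eps=1e-5):
    # x: (N, C, H, W); C must be divisible by num_groups.
    # Channels that sit next to each other land in the same group.
    N, C, H, W = x.shape
    x = x.reshape(N, num_groups, C // num_groups, H, W)
    mean = x.mean(axis=(2, 3, 4), keepdims=True)  # one mean per sample, per group
    var = x.var(axis=(2, 3, 4), keepdims=True)
    x = (x - mean) / np.sqrt(var + eps)
    return x.reshape(N, C, H, W)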
You basically build these groups of channels, and then you normalize within the groups of channels, across the entire height and width, only in a single data point. And therefore, you gain the advantage of layer norm, of normalizing within a single data point, and you retain the advantage of batch norm, of normalizing across single features, which is what instance norm attempted. But yeah, so you get the best of both worlds, sort of. That's group norm. And now we go and look at what it does. So they say: basically, all the normalization techniques do this. They subtract a mean and divide by a standard deviation. That's what we saw, and the difference is just across what you collect your statistics. So group norm is the following code in TensorFlow. As you can see, you simply reshape your data and expand this part right here, where you put the extra dimension: this entire thing used to be C, and you divide it into group and index within group. Then you just normalize across that and reshape to the original dimensions again. And the cool thing is: in batch norm, you have to keep track of these running means, because at test time, you sort of don't want the batch statistics to influence anything. You don't have that here. You just backpropagate through this operation, you don't need to keep these running averages going, and you never have to care whether you're in test or in train mode right now. You just do this. This operation is per data point, so it's just part of your model. Right.
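The TensorFlow listing referred to here is essentially the following. This is reconstructed from the description rather than copied verbatim, so treat the details as approximate; also note that keep_dims was the TF1 spelling of what is keepdims in TF2:

import tensorflow as tf

def group_norm_tf(x, gamma, beta, G, eps=1e-5):
    # x: [N, C, H, W]; gamma, beta: learned scale and shift, shape [1, C, 1, 1]
    N, C, H, W = x.shape
    x = tf.reshape(x, [N, G, C // G, H, W])  # split C into (group, index within group)
    mean, var = tf.nn.moments(x, [2, 3, 4], keepdims=True)
    x = (x - mean) / tf.sqrt(var + eps)
    x = tf.reshape(x, [N, C, H, W])          # back to the original dimensions
    return x * gamma + beta

Note there is no train/test switch and nothing stateful: the normalization is just a differentiable function of the current data point.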
And they do an experiment where they have 32 images per GPU, so it's reasonably sized, and they can basically show that group norm and batch norm are comparable in their performance. Now, I usually don't believe the experiments that you see in single papers, but I think this has been replicated a couple of times now. You see, this is the training error, where group norm even behaves a bit better, and then in the validation error it behaves a bit worse. But one could say the two are more closely together than the other methods, instance norm and layer norm, are to group norm or to each other. So at least it's better than instance norm and layer norm. And then once you go into the smaller batch size regime, of course, that's where group norm starts to shine. So if you go from the 32 images per GPU, which is this low black curve here, all the way to two images per GPU (and I believe you could even do one image per GPU with group norm, but of course you can't do that with batch norm, because you need batch statistics), you can see that the performance of batch norm degrades drastically. Whereas with group norm... this experiment is just funny; they just had to do this, even though you know exactly how it turns out. Look, the lines are all exactly in the same place. I mean, come on; probably one of the reviewers was like, but did you really do the experiment? So they put it in. Yeah. So you can see that batch norm beats group norm in this setting when you have the larger batch sizes, but group norm pulls ahead quite drastically when you have the smaller batch sizes. And that is the main advantage. So now you can turn to models that require small batch sizes, or small batches per worker. And generally, it's a pain in the ass to keep track of those statistics for test time. They do verify, which I find pretty cool, that batch norm counteracts this phenomenon of the responses drifting apart during training in the internal feature maps. So with batch norm, you'll actually get a convergence of responses during training: the more you train, the more normalized your internal features will basically be. And they show that this is exactly the same with group norm. So group norm, as it seems, is a replacement, not an addition. The gains don't come from a different place; it seems to be a substitute for batch norm, though they don't have an experiment where they do both, I believe. Maybe I'm wrong, maybe they do. But yeah, it seems like you just kind of have to bring some calmness, some standardization, into your signal, and how exactly you do that doesn't seem that important, as long as you do it with some precision and some real overall statistics. Yeah. What I don't like about this is that now you have, of course, a new hyperparameter, which is this number of groups. That seems rather annoying. And gains like this usually come with the introduction of new hyperparameters, and it's just not that ideal for a method to introduce a new hyperparameter; at least layer norm and instance norm didn't. And now, as you can see, the number of groups is not super influential, but it does have a bit of an influence on the performance. So you can go by the number of groups, or here, the number of channels per group; of course, these two numbers are inversely related: the more groups you have, the fewer channels per group you have. If you go to one extreme, you will get to layer norm, basically: layer norm is an extreme case of group norm where you just have one group, all the channels are in the same group. Then the performance, as you can see here, is quite a bit worse. If you go to the other extreme, where every channel is its own group, that's equivalent to instance norm. Again, the performance is quite bad. And somewhere in the middle here, with 32 groups, seems to be a good sweet spot. So again, I don't like that the hyperparameter seems to be somewhat of a thing where you really have to hit a good value, and, well, I guess we'll see over time whether that value is always going to be about the same (you know, like the beta-2 of Adam: people never change it from 0.999 because it just tends to work), or whether that's really going to be another hyperparameter to fit. That would be annoying. They do a bunch of ablation studies and tests on, for example, object detection and segmentation, so models where you almost must go to small batch sizes, and video classification. So if you want to classify an entire video, that's a lot of data, and you almost have to go to small batch sizes for that. They do a lot of experiments, and generally, as I said, I believe these results for group norm have been replicated across the community a bunch of times now. And I would definitely consider group norm if you are thinking of, especially, a distributed machine learning project. All right. With that, I hope you enjoyed this paper. I've been talking for way too long now. I wish you a nice day. If you haven't already, please subscribe, like, share, comment or whatever you feel like doing. Bye bye.
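As a footnote on the two extremes discussed above, a quick numerical check using the NumPy sketches from earlier: one single group recovers layer norm, and one group per channel recovers instance norm.

import numpy as np

x = np.random.randn(4, 32, 8, 8)  # toy activations: N=4, C=32, H=W=8

print(np.allclose(group_norm(x, num_groups=1), layer_norm(x)))      # expected: True
print(np.allclose(group_norm(x, num_groups=32), instance_norm(x)))  # expected: True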
[ { "end": 10, "start": 0, "text": " Hi there! Today we'll look at group normalization by Yuxin Wu and Kaiming He of Facebook AI Research." }, { "end": 18, "start": 10, "text": " So this paper is basically an engineering paper about a new normalization technique called group normalization." }, { "end": 27, "start": 18, "text": " So what's the issue here? The issue is that pretty much throughout neural network learning we're using this technique called batch normalization." }, { "end": 33, "start": 27, "text": " Now batch normalization is a pretty reasonable thing and it works very, very well." }, { "end": 36, "start": 33, "text": " So what's the idea behind batch normalization?" }, { "end": 47, "start": 36, "text": " The idea is if you have data points for machine learning methods and your data is in a 2D coordinate system somewhere down here," }, { "end": 51, "start": 47, "text": " and you're trying to separate that from the dots which are here," }, { "end": 57, "start": 51, "text": " it is often very beneficial to shift that distribution before you do anything." }, { "end": 61, "start": 57, "text": " You want to shift it to the middle of the..." }, { "end": 71, "start": 61, "text": " Basically you want to center it, first of all, such that the origin point is in the middle of the data." }, { "end": 75, "start": 71, "text": " And then sometimes you also want to do what's called normalize it." }, { "end": 85, "start": 75, "text": " And by normalizing we mean you want to kind of rescale the axes such that things are more or less sort of like Gaussians." }, { "end": 97, "start": 85, "text": " So if you look at this distribution, first is the centering and then second is what is called a normalization." }, { "end": 106, "start": 97, "text": " And usually we know that any sort of machine learning method works better if you do that. And that's mostly in classic machine learning methods," }, { "end": 110, "start": 106, "text": " with the conditioning number of the data being better and so on." }, { "end": 119, "start": 110, "text": " But if you just want to learn, let's say, a linear classifier, you can see here you can even save one parameter because you can make it just go through the origin." }, { "end": 122, "start": 119, "text": " And that's true in general." }, { "end": 129, "start": 122, "text": " So if we draw this in 1D, you'd have a distribution that maybe is very peaky right here." }, { "end": 134, "start": 129, "text": " You first center it to the middle of the coordinate system." }, { "end": 137, "start": 134, "text": " And sorry, that's not really centered." }, { "end": 147, "start": 137, "text": " And then you would divide it by its standard deviation such that afterwards, it is a unit standard deviation Gaussian, so a normal distribution." }, { "end": 155, "start": 147, "text": " The closer your data seems to be to a multivariate normal distribution, the better these machine learning methods work," }, { "end": 160, "start": 155, "text": " especially if you look at how signal in a deep network propagates through the layers." }, { "end": 172, "start": 160, "text": " So the idea is: if it's good for the general machine learning method that the input has a multivariate normal distribution or is normalized," }, { "end": 177, "start": 172, "text": " then it's probably good that the input to each layer is normalized." }, { "end": 188, "start": 177, "text": " So when you look at how feature signals are in between layers, so this is, for example, the conv5_3." 
}, { "end": 193, "start": 188, "text": " This is a layer somewhere in the middle of a convolutional neural network." }, { "end": 200, "start": 193, "text": " And if you look at the spread of feature signals throughout training," }, { "end": 206, "start": 200, "text": " you'll see that the more training progresses, the larger the kind of spread of features is." }, { "end": 215, "start": 206, "text": " So you might get really large numbers or really large negative numbers or maybe really small numbers in your neural networks." }, { "end": 221, "start": 215, "text": " And it would be better if you had a layer whose input you've normalized, right?" }, { "end": 228, "start": 221, "text": " And the output then is again a distribution, but maybe shifted, that you would first transform back" }, { "end": 234, "start": 228, "text": " into a unit normal distribution before you put it through the next layer." }, { "end": 246, "start": 234, "text": " So what batch norm does is, before each layer, it will do a normalization procedure on the data before giving it to the next layer." }, { "end": 250, "start": 246, "text": " And you can basically do backprop through that." }, { "end": 255, "start": 250, "text": " It's also common to learn a bias and a scale parameter to apply after that." }, { "end": 263, "start": 255, "text": " But the important thing is that after each layer, the data is normalized such that it is kind of in the most comfortable regime." }, { "end": 269, "start": 263, "text": " What's the problem? The problem with this is that you actually need the distribution, right?" }, { "end": 276, "start": 269, "text": " If you want to center this data up here, you need to know what the data is." }, { "end": 282, "start": 276, "text": " So you need to know the entire data. If I want to figure out what is the mean of this distribution," }, { "end": 288, "start": 282, "text": " I need all of the data points to decide: here's the mean, I need to shift that up to here." }, { "end": 292, "start": 288, "text": " If I just have a mini-batch like we usually do in machine learning." }, { "end": 299, "start": 292, "text": " So if I just have this and this and this and this point, I just have four points. I can't determine the mean." }, { "end": 304, "start": 299, "text": " But what I can do is sort of guess the mean from the four points, right?" }, { "end": 309, "start": 304, "text": " So my guesstimation of the mean would be somewhere here. And that would usually be close enough." }, { "end": 319, "start": 309, "text": " And you can also see that the larger your batch is, if you sample at random, the more accurate your mean estimation is going to be." }, { "end": 327, "start": 319, "text": " So people have been training neural networks with large batch sizes; basically, batch sizes have gotten larger and larger in the last years." }, { "end": 337, "start": 327, "text": " So that has not been a problem. But what people are doing now is distributed machine learning, where you do have your data set and you draw a batch." }, { "end": 341, "start": 337, "text": " And the batch might be large. So the data set might be, I don't know, one million images." }, { "end": 352, "start": 341, "text": " There might still be 4000 images in your batch. But what they'll do, especially with things like TPUs, is they'll distribute that across many, many, many machines" }, { "end": 359, "start": 352, "text": " into batches of sometimes as small as eight samples per unit." 
}, { "end": 368, "start": 359, "text": " And if this is not images, but maybe something longer, like a sequence of text, or if this is a sequence of speech or something like this," }, { "end": 376, "start": 368, "text": " you can sometimes even go down to two or one sample per unit of computation." }, { "end": 382, "start": 376, "text": " And of course, then you can't do batch normalization. You can't calculate the mean of one sample." }, { "end": 388, "start": 382, "text": " It's just going to be that one sample itself. So you have two options." }, { "end": 399, "start": 388, "text": " If you're at small batch sizes, let's say two, either you take the hit and have your very bad estimate of the mean from just two samples or eight samples." }, { "end": 407, "start": 399, "text": " Or after each layer, you basically do a synchronization step such that everyone communicates their statistics to everyone else." }, { "end": 412, "start": 407, "text": " And you can basically aggregate the statistics across the batch. Both are not cool." }, { "end": 418, "start": 412, "text": " Usually these frameworks don't do this synchronization because it's just too slow." }, { "end": 424, "start": 418, "text": " So they'll go with the bad statistics. And you can see this right here in this graph." }, { "end": 428, "start": 424, "text": " They have the ImageNet classification error versus batch sizes." }, { "end": 435, "start": 428, "text": " So this is a ResNet-50 model trained on the ImageNet dataset using eight workers, so eight GPUs." }, { "end": 442, "start": 435, "text": " And if they do 32 images per... Now just look at the blue line here." }, { "end": 450, "start": 442, "text": " If they do 32 images per worker, so it's eight workers, it's eight times 32. That's the batch size." }, { "end": 456, "start": 450, "text": " That is a large number: 256." }, { "end": 466, "start": 456, "text": " All right. So if they do that, then you can see the error is at state of the art for a ResNet-50." }, { "end": 472, "start": 466, "text": " If they go to 16, it's still pretty good. But then as they go lower and lower and lower," }, { "end": 481, "start": 472, "text": " so if they go to smaller and smaller batches and spread them out over the workers, then the error starts going up." }, { "end": 485, "start": 481, "text": " And this is because the batch norm statistics get worse and worse." }, { "end": 489, "start": 485, "text": " So the goal of this paper is this group norm thing here." }, { "end": 496, "start": 489, "text": " Group norm, this paper claims, is another normalization technique that pretty much does the same thing," }, { "end": 506, "start": 496, "text": " this centering and the normalization, the scaling. But it does it without relying on the batch statistics." }, { "end": 511, "start": 506, "text": " It basically does it within a data point. And that means that the performance," }, { "end": 516, "start": 511, "text": " even though it's a bit lower at the beginning for this particular example," }, { "end": 521, "start": 516, "text": " will stay constant even in small batch size regimes." }, { "end": 528, "start": 521, "text": " So this is potentially applicable, as I said, to things where you have to go to like two or one sample per worker," }, { "end": 534, "start": 528, "text": " because the single data points are just too large." 
}, { "end": 541, "start": 534, "text": " So if you maybe want to train something like BERT on a GPU." }, { "end": 548, "start": 541, "text": " So what is group normalization? Group normalization, as I said, works within a sample." }, { "end": 553, "start": 548, "text": " Now, there have been other methods that work within a sample instead of across the batch." }, { "end": 557, "start": 553, "text": " And they tend to not work as well as batch norm." }, { "end": 563, "start": 557, "text": " Now, this paper here claims that group norm works on par with batch norm for large batch sizes" }, { "end": 569, "start": 563, "text": " and better on small batch sizes. So here they have a schematic of what's happening." }, { "end": 573, "start": 569, "text": " In batch norm, as you can see here, you have this cube." }, { "end": 580, "start": 573, "text": " Now, this cube here: N means the batch size. So these are the data points." }, { "end": 590, "start": 580, "text": " Points in your mini-batch. This is the thing that is going to get small if you don't have enough memory." }, { "end": 594, "start": 590, "text": " Then C would be your channels." }, { "end": 604, "start": 594, "text": " So we are talking about convolutional neural networks here, but this is generalizable to other neural networks." }, { "end": 609, "start": 604, "text": " The channels are going to be the independent feature maps that you're going to have." }, { "end": 614, "start": 609, "text": " So in a convolutional neural network, usually each layer has these things called kernels." }, { "end": 617, "start": 614, "text": " And these might be three by three matrices like this." }, { "end": 623, "start": 617, "text": " And if you have an image, the kernel will be slided." }, { "end": 629, "start": 623, "text": " This thing right here will be slided across the image. Or slid? Is it slid?" }, { "end": 631, "start": 629, "text": " Okay, it would be slid across the image." }, { "end": 634, "start": 631, "text": " And then the numbers in here will be convolved with the pixels." }, { "end": 639, "start": 634, "text": " And that will give you the next layer's representation." }, { "end": 643, "start": 639, "text": " So whatever the convolution operation is, you'll slide that over." }, { "end": 647, "start": 643, "text": " And that sliding over will give you the values in the next layer." }, { "end": 652, "start": 647, "text": " Now you not only have one kernel, but you actually have many kernels." }, { "end": 655, "start": 652, "text": " Sorry about this. Let's draw that." }, { "end": 665, "start": 655, "text": " So you have more and more kernels." }, { "end": 672, "start": 665, "text": " You have a whole stack of kernels. And the different kernels are also called your different channels." }, { "end": 676, "start": 672, "text": " Now, the kernels refer to the weights and the channels refer to the image." }, { "end": 681, "start": 676, "text": " But the i-th kernel is going to be convolving the i-th channel of the image." }, { "end": 687, "start": 681, "text": " So at the beginning, the input image has three channels, because red, green and blue." }, { "end": 697, "start": 687, "text": " But then the intermediate images can have more channels; you basically have as many as you have kernels in the layer right before." }, { "end": 703, "start": 697, "text": " Okay, and the H and the W mean the height and width of the image." 
}, { "end": 709, "start": 703, "text": " So it combined so the image is kind of unrolled across the height or the width in this direction." }, { "end": 711, "start": 709, "text": " So what does batch norm do?" }, { "end": 716, "start": 711, "text": " Batch norm takes, as you can see here, one channel." }, { "end": 720, "start": 716, "text": " And it it takes one channel." }, { "end": 723, "start": 720, "text": " So maybe this image, this is one channel." }, { "end": 727, "start": 723, "text": " Let's just say this is the red channel because I drawn it in red." }, { "end": 736, "start": 727, "text": " It takes that and it calculates the mean one over and the standard deviation of that." }, { "end": 742, "start": 736, "text": " It calculates those two statistics and it uses that to do this centering and scaling operation." }, { "end": 748, "start": 742, "text": " So all of these methods are going to calculate the mean and the variance and then do the same scaling transformation." }, { "end": 752, "start": 748, "text": " The question is just how do you calculate the mean?" }, { "end": 754, "start": 752, "text": " Batch norm does this across the data points." }, { "end": 761, "start": 754, "text": " So it looks at a single feature at a single channel and it asks what's the mean across all the data points?" }, { "end": 769, "start": 761, "text": " What are the data statistics of this channel and what was the mean and standard deviation?" }, { "end": 775, "start": 769, "text": " Now, actually, batch norm, I'm not I didn't even know that in convolutional layer this works like this." }, { "end": 779, "start": 775, "text": " You can also imagine batch norm of really just taking one single feature." }, { "end": 785, "start": 779, "text": " And that means of really just taking one of these things right here." }, { "end": 794, "start": 785, "text": " So if this goes to the back and normalizing across that, the important part is that it is in fact normalizing across the data points." }, { "end": 800, "start": 794, "text": " So it looks at your batch, looks at the mean and the variance in that batch and it normalizes by that." }, { "end": 805, "start": 800, "text": " I think convolutional layers make sense because you have this invariance in height and width and therefore." }, { "end": 807, "start": 805, "text": " Yeah, so that makes sense." }, { "end": 813, "start": 807, "text": " But in a fully connected layer, you'd simply go look at one feature at a time." }, { "end": 815, "start": 813, "text": " Layer norm is different." }, { "end": 821, "start": 815, "text": " Layer norm has basically been proposed as an alternative to batch norm with the same reasoning that this paper has." }, { "end": 827, "start": 821, "text": " So layer norm, as you can see here, it does each data point individually." }, { "end": 833, "start": 827, "text": " So here we only have one data point that is normalized by itself." }, { "end": 839, "start": 833, "text": " So you do this for each data point independently and therefore it's not dependent on the batch size anymore." }, { "end": 844, "start": 839, "text": " But what you'll do is you look across all of the channels right here." }, { "end": 848, "start": 844, "text": " So all of the channels and all of the width and height." }, { "end": 854, "start": 848, "text": " So this entire thing here, this entire thing is basically one channel." }, { "end": 859, "start": 854, "text": " Right. And then the next channel is here of the image and the next." 
}, { "end": 862, "start": 859, "text": " No, that's the next image." }, { "end": 867, "start": 862, "text": " Well, that is a bad drawing because the image is unrolled." }, { "end": 872, "start": 867, "text": " In any case, what you'll do is you look at." }, { "end": 878, "start": 872, "text": " So if you have a filter bank like this, you have an image and the image composed of multiple channels." }, { "end": 880, "start": 878, "text": " Right. This is the red." }, { "end": 883, "start": 880, "text": " And then you'll have the green." }, { "end": 885, "start": 883, "text": " Right. This is in the green." }, { "end": 890, "start": 885, "text": " And then you'll have the blue channel." }, { "end": 898, "start": 890, "text": " And what you'll do is simply you'll calculate the mean across all of the pixels." }, { "end": 905, "start": 898, "text": " And across all of the channels, you just take this whole NumPy array and you just say dot mean." }, { "end": 907, "start": 905, "text": " And that gives you one number." }, { "end": 912, "start": 907, "text": " And it's just whatever that number is, you subtract it and then you say standard deviation and you divide by that." }, { "end": 913, "start": 912, "text": " That's layer norm." }, { "end": 920, "start": 913, "text": " So an entire layers representation of one image is just normalized to the mean." }, { "end": 922, "start": 920, "text": " Now, this seems a bit drastic." }, { "end": 926, "start": 922, "text": " And that's why instance norm did the exact opposite." }, { "end": 935, "start": 926, "text": " They said, wait a minute, instead of normalizing across all of the features, right, we'll go back and do what batch norm does." }, { "end": 937, "start": 935, "text": " Batch norm looks at each feature individually." }, { "end": 942, "start": 937, "text": " So basically, it looks at all of these these different axes in the data distribution." }, { "end": 943, "start": 942, "text": " It looks at them differently." }, { "end": 954, "start": 943, "text": " So if one axis is scaled very widely, we want to normalize that differently than if than the other axis that is just scaled very shortly." }, { "end": 959, "start": 954, "text": " And that's why we'll look at each feature individually like batch norm." }, { "end": 962, "start": 959, "text": " But also, we only look at one data point at a time." }, { "end": 969, "start": 962, "text": " Now, as you can imagine, this doesn't work anymore in in a fully connected network." }, { "end": 974, "start": 969, "text": " This basically works in a convolutional network where you have a feature map channel." }, { "end": 980, "start": 974, "text": " So you look at one individual channel and one data point." }, { "end": 985, "start": 980, "text": " So that means here you would normalize the red channel individually." }, { "end": 990, "start": 985, "text": " You would normalize the green channel individually and you normalize the blue channel individually." }, { "end": 1000, "start": 990, "text": " So the image you're going to end up with is simply the red channel subtracted by its own mean and then divided by its own standard deviation." }, { "end": 1002, "start": 1000, "text": " And just within that data point, right." }, { "end": 1010, "start": 1002, "text": " So maybe I should here say across the number of features or something." }, { "end": 1012, "start": 1010, "text": " So I hope that that's clear." 
}, { "end": 1021, "start": 1012, "text": " So the layer norm drops the dependence on the batch size, but instead says we should normalize across all of the features." }, { "end": 1030, "start": 1021, "text": " And the instance norm says, wait a minute, batch norm had a good idea normalizing only across the features individually because the individual features might have different scales." }, { "end": 1032, "start": 1030, "text": " And we should account for that." }, { "end": 1036, "start": 1032, "text": " But also, we don't want to be dependent on the batch size." }, { "end": 1038, "start": 1036, "text": " And now is this where group norm comes in?" }, { "end": 1043, "start": 1038, "text": " Group norm is basically a mix between layer norm and instance norm." }, { "end": 1048, "start": 1043, "text": " What group norm says, layer norm and instance norm have good ideas." }, { "end": 1051, "start": 1048, "text": " They only go across one sample." }, { "end": 1052, "start": 1051, "text": " They take that." }, { "end": 1064, "start": 1052, "text": " They say, in essence, instance norm has a good idea in that the features should be normalized individually, but it goes sort of too far from it goes too far." }, { "end": 1071, "start": 1064, "text": " You might get not good enough statistics because you're now normalizing each of these things individually." }, { "end": 1073, "start": 1071, "text": " Whereas with layer norm, you're too restricted." }, { "end": 1083, "start": 1073, "text": " You're basically saying that the features, it's fine if the features relative to each other are like this, right." }, { "end": 1086, "start": 1083, "text": " One is maybe very high variance and one is very low variance." }, { "end": 1088, "start": 1086, "text": " Feature norm would keep that." }, { "end": 1091, "start": 1088, "text": " And group norm would say, maybe that's not so good." }, { "end": 1096, "start": 1091, "text": " We should have, we should normalize the individual features, maybe individually." }, { "end": 1107, "start": 1096, "text": " But their argumentation here is that maybe there are some features that by their nature already have the same sort of scaling and variance." }, { "end": 1109, "start": 1107, "text": " They give an example." }, { "end": 1119, "start": 1109, "text": " If you, for example, have a filter, again, we deal with convolutional layers here, and that filter is a let's say an edge filter, right." }, { "end": 1121, "start": 1119, "text": " So a horizontal edge filter." }, { "end": 1123, "start": 1121, "text": " So it's very low value here." }, { "end": 1126, "start": 1123, "text": " And let me mark the high value with blue." }, { "end": 1129, "start": 1126, "text": " So this is a horizontal edge filter." }, { "end": 1140, "start": 1129, "text": " If you slide this over a window and these are high numbers and these are low numbers, it will respond to edges because edges have high, low, high, right." }, { "end": 1141, "start": 1140, "text": " Or vice versa." }, { "end": 1146, "start": 1141, "text": " So it will give you very positive and very negative number every time you slide across an edge." }, { "end": 1161, "start": 1146, "text": " Now you can imagine that in natural images, that filter, whatever image you put in would, and however you normalize, would give you pretty much the same response as a vertical edge filter." 
}, { "end": 1169, "start": 1161, "text": " So the horizontal and the vertical edge filter, you'll see whatever their response is, they're probably about equal in size." }, { "end": 1178, "start": 1169, "text": " So we could expect that in a neural network, there will be groups of filters that together exhibit the same scale." }, { "end": 1184, "start": 1178, "text": " And therefore we can normalize across them like in layer norm." }, { "end": 1188, "start": 1184, "text": " So the more things we normalize across, the better statistics we can gather." }, { "end": 1195, "start": 1188, "text": " That's why instance norm doesn't work because it only normalizes across a very small thing, getting very little statistics." }, { "end": 1204, "start": 1195, "text": " But we should normalize, if we could gather good statistics, we should normalize different features differently." }, { "end": 1211, "start": 1204, "text": " And group norm says, well, since some of the features are almost guaranteed to behave the same, we could normalize across those." }, { "end": 1216, "start": 1211, "text": " Now, of course, you don't know at the beginning which ones those are." }, { "end": 1226, "start": 1216, "text": " But you hope that by doing group norm, by basically at a priori, so at the beginning of training, you decide what the groups are." }, { "end": 1231, "start": 1226, "text": " And naturally, it's just whichever ones are next to each other, those are the groups." }, { "end": 1239, "start": 1231, "text": " And you'll hope that through the training procedure, basically those groups will learn the features that are equal of size." }, { "end": 1246, "start": 1239, "text": " Well, you basically enforce that, so you kind of constrain the architecture to do that." }, { "end": 1249, "start": 1246, "text": " So that's the idea behind group norm." }, { "end": 1258, "start": 1249, "text": " You basically build these groups of channels and then you normalize across those, across the groups of, within the groups of channels," }, { "end": 1264, "start": 1258, "text": " across the entire height and width, only in a single data point." }, { "end": 1270, "start": 1264, "text": " And therefore, you gain the advantage of layer norm, of normalizing within a single data point." }, { "end": 1278, "start": 1270, "text": " You retain the advantage of batch norm, of normalizing across single features." }, { "end": 1281, "start": 1278, "text": " And that's what instance norm attempted." }, { "end": 1285, "start": 1281, "text": " But yeah, so you get the best of both worlds, sort of." }, { "end": 1287, "start": 1285, "text": " That's group norm." }, { "end": 1291, "start": 1287, "text": " And now we go and look what it does." }, { "end": 1294, "start": 1291, "text": " So they say, OK, basically all the normalization techniques do this." }, { "end": 1296, "start": 1294, "text": " They subtract a mean and divide by a standard deviation." }, { "end": 1298, "start": 1296, "text": " That's what we saw." }, { "end": 1303, "start": 1298, "text": " And the difference is just across what you collect, your statistics." }, { "end": 1308, "start": 1303, "text": " So the group norm is the following code in TensorFlow." }, { "end": 1316, "start": 1308, "text": " As you can see, you simply reshape your data and basically expand this part right here where you built, where you put the extra." }, { "end": 1318, "start": 1316, "text": " So this is C." }, { "end": 1324, "start": 1318, "text": " This entire thing used to be C. 
And you divide it into group and index within group." }, { "end": 1331, "start": 1324, "text": " And then you just normalize across that and reshape to the original dimension again." }, { "end": 1339, "start": 1331, "text": " And the important, the cool thing is in batch norm, you have to keep track of these, of these running means," }, { "end": 1343, "start": 1339, "text": " because at test time, you sort of don't want the batch statistic to influence anything." }, { "end": 1345, "start": 1343, "text": " You don't have that here." }, { "end": 1349, "start": 1345, "text": " So you just back propagate through this observation, through this operation." }, { "end": 1353, "start": 1349, "text": " And you don't need to keep these running, running averages going." }, { "end": 1357, "start": 1353, "text": " And you always care, am I in test or am I in train mode right now?" }, { "end": 1358, "start": 1357, "text": " You just do this." }, { "end": 1361, "start": 1358, "text": " This operation is per data point." }, { "end": 1363, "start": 1361, "text": " So it's just part of your model." }, { "end": 1365, "start": 1363, "text": " Right." }, { "end": 1372, "start": 1365, "text": " And they do a an experiment where they have 32 images per GPU." }, { "end": 1374, "start": 1372, "text": " So it's reasonably sized." }, { "end": 1381, "start": 1374, "text": " And they can basically show that the group norm and the batch norm, they compare in their performance." }, { "end": 1388, "start": 1381, "text": " Now, I do usually don't believe the experiments that you see in single papers." }, { "end": 1391, "start": 1388, "text": " But I think this has been replicated a couple of times." }, { "end": 1395, "start": 1391, "text": " Now, you see, this is the train error where group norm even behaves a bit better." }, { "end": 1398, "start": 1395, "text": " And then in the validation error, it behaves a bit worse." }, { "end": 1408, "start": 1398, "text": " But one could say it is it is kind of more closely together than the other methods are to the group norm or to each other." }, { "end": 1410, "start": 1408, "text": " These instance norm and layer norm." }, { "end": 1415, "start": 1410, "text": " So it at least it's better than instance norm and layer norm." }, { "end": 1423, "start": 1415, "text": " And then once you go into the smaller batch size regime, of course, that's where the group norm starts to shine." }, { "end": 1431, "start": 1423, "text": " So if you go from the 32 images per GPU, which is this low black curve here, all the way to two images per GPU." }, { "end": 1436, "start": 1431, "text": " And I believe they could even do one image per GPU with group norm." }, { "end": 1441, "start": 1436, "text": " But of course, you can't do that with batch norm because you need batch statistics." }, { "end": 1446, "start": 1441, "text": " You can see that the performance of batch norm degrades drastically." }, { "end": 1450, "start": 1446, "text": " Whereas with group norm, this experiment is just funny." }, { "end": 1454, "start": 1450, "text": " They just had to do this, even though you know exactly what turns out." }, { "end": 1459, "start": 1454, "text": " So look at the lines are all exactly in the in the same place." }, { "end": 1468, "start": 1459, "text": " I mean, come on, like, you know, you're just having time to probably one of the reviewers was like, but did you really do the experiment?" }, { "end": 1472, "start": 1468, "text": " They put it in." 
}, { "end": 1474, "start": 1472, "text": " So, yeah." }, { "end": 1482, "start": 1474, "text": " So you can see that the batch norm beats the group norm in this setting with the when you have the larger batch sizes." }, { "end": 1488, "start": 1482, "text": " But the group norm pulls ahead quite drastically when you have the smaller batch sizes." }, { "end": 1490, "start": 1488, "text": " And that is the main advantage." }, { "end": 1498, "start": 1490, "text": " So now you can turn to models that require small batch sizes or small batch per worker." }, { "end": 1505, "start": 1498, "text": " And generally, it's a pain in the ass to just keep track of those statistics for test time." }, { "end": 1514, "start": 1505, "text": " They do verify, which I find pretty cool, that this phenomenon of the responses going apart during training in the internal feature maps," }, { "end": 1517, "start": 1514, "text": " batch norm counteracts that." }, { "end": 1522, "start": 1517, "text": " So with batch norm, you'll get actually a convergence of responses during training." }, { "end": 1528, "start": 1522, "text": " So the more you train, the more normalized basically your internal features will be." }, { "end": 1531, "start": 1528, "text": " And they show that this is exactly the same with group norm." }, { "end": 1535, "start": 1531, "text": " So group norm is as it seems, it is a replacement." }, { "end": 1537, "start": 1535, "text": " It's not an addition." }, { "end": 1540, "start": 1537, "text": " It doesn't the gains don't come from different place." }, { "end": 1549, "start": 1540, "text": " It seems to be a substitute for batch norm, though they don't have an experiment where they do both." }, { "end": 1551, "start": 1549, "text": " I believe maybe I'm wrong." }, { "end": 1554, "start": 1551, "text": " Maybe they do." }, { "end": 1560, "start": 1554, "text": " But yeah, it seems like you just kind of have to bring some calmness on some standardization into your signal." }, { "end": 1572, "start": 1560, "text": " And how exactly you do that doesn't seem that important as long as you do it with some precision and some some real overall statistics." }, { "end": 1573, "start": 1572, "text": " Yeah." }, { "end": 1580, "start": 1573, "text": " What I don't like about this is now you have, of course, a new hyper parameter, which is this number of groups." }, { "end": 1583, "start": 1580, "text": " So that that seems rather annoying." }, { "end": 1589, "start": 1583, "text": " And the gains like this usually come from the introductions of new hyper parameters." }, { "end": 1600, "start": 1589, "text": " And that just it's not so it's not that ideal for a method to introduce a new hyper parameter, at least layer norm and instance norm didn't." }, { "end": 1613, "start": 1600, "text": " And now, as you can see, the number of groups is is not super influential, but does have a bit of an influence on the on the performance." }, { "end": 1619, "start": 1613, "text": " So if you go a number of groups or here number of channels per group, of course, these two numbers are inversely related." }, { "end": 1622, "start": 1619, "text": " The more groups you have, the less number of channels per group you have." }, { "end": 1627, "start": 1622, "text": " If you go to one extreme, you will get to the layer norm, basically." }, { "end": 1632, "start": 1627, "text": " So the layer norm is an extreme case of group norm where you just have one group." 
}, { "end": 1634, "start": 1632, "text": " All the channels are in the same group." }, { "end": 1638, "start": 1634, "text": " Then the performance, as you can see here, is quite a bit worse." }, { "end": 1644, "start": 1638, "text": " If you go to the other extreme where every channel is its own group, that's equivalent to instance norm." }, { "end": 1647, "start": 1644, "text": " Again, the performance is quite bad." }, { "end": 1654, "start": 1647, "text": " And somewhere in the middle here with 32 groups is seems to be a good sweet spot." }, { "end": 1664, "start": 1654, "text": " So I don't again I don't like the hyper parameter seems to be some somewhat of a thing where you really have to hit a good value." }, { "end": 1673, "start": 1664, "text": " And well, I guess we'll see over time if that value is always going to be about the same, you know, like the beta two of Adam." }, { "end": 1680, "start": 1673, "text": " It's it's always like people never change it from point nine nine nine because it just tends to work." }, { "end": 1684, "start": 1680, "text": " Or whether that's really going to be another hyper parameter to fit." }, { "end": 1687, "start": 1684, "text": " That seems to be annoying." }, { "end": 1694, "start": 1687, "text": " They do a bunch of ablation studies and tests on, as we said, the, for example, object detection and segmentation." }, { "end": 1703, "start": 1694, "text": " So so models where you must go almost to small batch sizes just because so video classification." }, { "end": 1707, "start": 1703, "text": " So if you want to classify an entire video, that's a lot of data." }, { "end": 1711, "start": 1707, "text": " And you almost have to go small batch sizes for that." }, { "end": 1713, "start": 1711, "text": " They do a lot of experiments." }, { "end": 1722, "start": 1713, "text": " And generally, as I said, I believe these results for group norm have been replicated and across the community a bunch of times now." }, { "end": 1732, "start": 1722, "text": " And I would definitely consider group norm if you are thinking of a especially a distributed machine learning project." }, { "end": 1735, "start": 1732, "text": " All right. With that, I hope you enjoyed this paper." }, { "end": 1737, "start": 1735, "text": " I've been talking for way too long now." }, { "end": 1739, "start": 1737, "text": " I wish you a nice day." }, { "end": 1745, "start": 1739, "text": " If you haven't already, please subscribe, like, share, comment or whatever you feel like doing." }, { "end": 1766, "start": 1745, "text": " Bye bye." } ]
Cs_j-oNwGgg
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Concept Learning with Energy-Based Models (Paper Explained)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "openai", "ebm", "energy function", "gradient descent", "relational neural network", "latent", "attention", "entities", "spatial relation", "inference time", "reasoning", "demonstration" ]
This is a hard paper! Energy functions are typically a mere afterthought in current machine learning. A core property of the energy - its smoothness - is usually not exploited at inference time. This paper takes a stab at it. Inferring concepts, world states, and attention masks via gradient descent on a learned energy function leads to an interesting framework with many possibilities. Paper: https://arxiv.org/abs/1811.02486 Blog: https://openai.com/blog/learning-concepts-with-energy-functions/ Videos: https://sites.google.com/site/energyconceptmodels/ Abstract: Many hallmarks of human intelligence, such as generalizing from limited experience, abstract reasoning and planning, analogical reasoning, creative problem solving, and capacity for language require the ability to consolidate experience into concepts, which act as basic building blocks of understanding and reasoning. We present a framework that defines a concept by an energy function over events in the environment, as well as an attention mask over entities participating in the event. Given few demonstration events, our method uses inference-time optimization procedure to generate events involving similar concepts or identify entities involved in the concept. We evaluate our framework on learning visual, quantitative, relational, temporal concepts from demonstration events in an unsupervised manner. Our approach is able to successfully generate and identify concepts in a few-shot setting and resulting learned concepts can be reused across environments. Example videos of our results are available at https://sites.google.com/site/energyconceptmodels/ Authors: Igor Mordatch Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Hi there, what you're seeing here is an energy-based model that learns the concept of a shape from a demonstration on the left. So on the left you can see a demonstration of data points sampled from a shape, in these cases circles or squares, and then the corresponding energy function that the model infers from that. And then it can replicate that shape on the right using that energy function. So the paper we're going to analyze today is called Concept Learning with Energy-Based Models by Igor Mordatch of OpenAI. And this is a very cool paper, or at least I think it's a very cool paper, but it is also a very hard paper. So therefore I first want to make a bit of an introduction into the concepts that we are facing in this paper. The first thing you need to know are energy functions, or energy-based models. What is an energy function? An energy function, sometimes called E, is simply a function with one or multiple inputs, let's call them X. If the energy function is happy with X, it will output the value 0, and if the energy function is not happy with X, it will output a high value, larger than 0. So this is happy, this is not happy. Let's give some examples of this. We can formulate almost any machine learning problem in terms of an energy function. Let's say we have a classifier. The classifier takes as input an image, maybe of a cat, and a label. If the label is cat, then the energy will be 0, assuming the energy function is working correctly. And if we give the energy function the same image but a wrong label, dog, then the energy is very high. In the case of the classifier, of course, we can simply take the loss function as the energy function, and we automatically get an energy-based model. The loss function here would be something like the negative log probability of the correct class. But in any case it is just going to be a high number, let's call it 10 to the 9. So the energy function says: the entire thing you input is very bad. It won't tell you yet what's bad about it. That also means you can change either of the two things to make the classifier happy. Now usually we're concerned with changing the label: tell me, which other label do I need to input to make you happy? And if we make the labels differentiable (of course we never input a true label, we actually input a distribution, a softmax distribution over labels, and that's differentiable), we can use gradient descent to update the dog label, we can use gradient descent to find a label that makes the energy function happier. So we could use gradient descent to get the cat label if we had a good classifier. But we can also optimize the image to make it compatible with the dog label. If you ever saw Deep Dream or something like this, those models do exactly that: they optimize the input image for a particular label. And there you can view the entire neural network, including the loss function, as the energy function. So what's another example? Another example: let's say you have a k-means model, and the energy function simply takes a data point as input. For that data point, you're going to find the closest cluster, the minimum over the cluster indices k: you have your multiple clusters here and your data point might be here, so you find the cluster that's closest, and then the distance to it, this distance d, will be the energy of that data point.
So the model is very happy when your data point comes from one of the clusters, but it is not happy when the data point is far away. And that would be the cost function of the k-means model. So that's an energy-based model too. Now, currently energy-based models have come into fashion through things like GANs or any sort of noise contrastive estimation. So in a GAN, what you have is a discriminator, and the discriminator will basically learn a function to differentiate data from non-data. That by itself is an energy function. The discriminator will learn a function, and that function will be low wherever the discriminator thinks there is data. So it will usually be low around the data points; the data points form the valleys right here. And then the generator will basically take that discriminator function and try to infer points that are also in these valleys, to produce points that are also in the valleys. And then you basically have an energy learning competition. The discriminator tries to push down on the energy where the true data is and push up on the energy where the generated data is, and that will give you a steeper energy function in the future. So in this case the discriminator neural network is the energy function, and the generator just tries to produce data that is compatible with that energy function. So I hope the concept of what an energy function is is a bit clear. Again, any machine learning problem can be formulated in terms of an energy function. Now, what is not done so far is what we alluded to a little bit before, in the classifier example and also here. Right now, when we want to train a GAN, we simply train the generator to produce data. What's the generator's goal? The generator's goal is to hit those valleys in the energy function, and it produces this data in one shot. But what we could also do is just start somewhere. Let's say we pick a random data point here, and then we use gradient descent, because the energy function in this case is smooth. We use gradient descent to just drop down this valley and then find ourselves in this valley. So without ever training a generator, we can use this method to produce points that are in the valleys of the energy function. And I guess people have trained GANs like this. The reason why it doesn't work, let's say, in the real world is that this procedure will just produce adversarial examples for the discriminator, and those usually look nothing like data. Because if you keep the discriminator fixed and do gradient descent against it, what you'll get isn't really qualitatively good. But in principle, if the discriminator were a good energy function that describes the data, we could use gradient descent. The same up here: in order to find a good label for an image, given that we have a good energy function, we could simply do gradient descent on the label to find a better label. So in this paper we're going to have a situation where we say: we're given an energy function, and we're given a bunch of inputs. They are called X, A, and W. And if I am given my energy function and any two of those three things, I can infer the last thing simply by gradient descent on my energy function, because I know the energy function is zero when it is happy with the input.
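To make this concrete, here is a minimal sketch of producing a low-energy point by gradient descent, assuming a hand-written toy energy (a ring of radius one) standing in for a learned discriminator; all names here are mine, not the paper's.

import torch

def energy(x):
    # Toy energy: near zero on the unit circle, high everywhere else.
    # This stands in for a learned discriminator / energy network.
    return (x.norm() - 1.0) ** 2

x = torch.randn(2, requires_grad=True)  # start from a random point
opt = torch.optim.SGD([x], lr=0.1)
for _ in range(100):
    opt.zero_grad()
    energy(x).backward()  # the energy is smooth, so gradients are informative
    opt.step()
# x has now dropped into a "valley" of the energy, close to the circle.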
So when all of these things agree, the energy function is happy and will output zero; otherwise it will output a high value. Therefore, if I'm given any two of those three things, I can find a compatible third thing by gradient descent. And then, of course, over here in these machine learning problems, the task was always actually to learn an energy function. Usually, in the training data set we are given images and labels, and we want to learn this energy function, which would be parameterized. So we want to learn the parameters. And the same here in our general case: if we are given the three inputs but not the parameters of the energy function, we don't know what those are. As long as we're given all of the inputs in our training data set, and the training data set guarantees that these inputs are compatible with each other, so the energy function should be low on them, we can simply do gradient descent on the parameters of the energy function. So in a sense there are four things: there are these three inputs, and then there are the parameters of the energy function. If we're given any three of those four, we can do gradient descent on the rest. And that's going to be the basis. So the X here is going to be the so-called state. And the state in this paper is going to be entities; sorry, it's not going to be images, the entities are these little circles that you're going to see. Each of those entities has an X position, a Y position, and I believe a color, so R, G, and B. The concatenation of all of those attributes is one big vector, and that is your X, that's your state. So the state is a number of entities and their attributes. A is going to be an attention mask over the state. So here you have four entities, so A will have four entries, telling you which of these entities you should pay attention to right now. And W is going to be a so-called concept vector. W is going to be the embedding of a concept. Now, what a concept is in this case is very general. I can give you an example. One concept is: are the entities that the A pays attention to close to each other? So in this case you see we have two entities that A has a high value on, and this is this ball up here and this ball down here. Now, if the concept vector is the embedding for the concept of being close to each other, then the energy function would be very happy if those two things are close to each other, and it would be very unhappy if they aren't. But in the very same situation, so the same X, the same attention mask, but with a different concept, a different W vector right here, the energy function would maybe be very happy if the two things are far apart and unhappy if the two things are close. So the question is always: how compatible are the three things that you put into the energy function? And given all but one of these things, you can infer the remaining one. So let's say you have a perfect energy function for this situation; you're just given the energy function, and you can trust it. And you are given, let's make an example, the X, so you're given the state. I'm going to draw the state down here, right? Okay, this is the state, and you're given the W. The W is the embedding, a vector in embedding space, but the embedding is for a line, so the geometric concept of a line.
Now your task is to find A, the attention mask that will make the energy function happy. And as you can see right here, what you would do is put a lot of weight on this, this, this, and this ball, and no weight on that ball, because those four make a line. And since everything here is differentiable (the state is differentiable, the attention is differentiable, and the concepts are vectors, so they're differentiable), you can use gradient descent to find that. Another example: you're given again the same W, so line, and you're given this following situation, with the attention on these three, and you say: please find the X, the state that makes this energy function happy. This here you would call the starting state, the X0, and your task is going to be to find the X1: how do you have to change this state such that the energy function is happy? And of course the answer is going to be to push this ball here inward until it is in the middle of the two others, so that these three form a line. You don't have to do anything to this ball up here, because there is no attention on it. The concept only applies to the things that you put attention on: if the state, the attention, and the concept are in agreement, the energy function is happy. Okay, we have covered the basics. Now let's dive into the paper. I think this is the longest introduction ever, but I think it will pay off once you see it. So this author (I think it's a single author) identifies two different things that you can do with an energy function here. Of course you can do more, as we saw, but they identify two. The first is where you are given the initial state and an attention mask, and you want to find the X1, the state that satisfies the concept and the attention the most. This the author calls generation. As you can see here, these four things that you have the attention on are pushed around until they make a square, because the concept right now is square. In the other case, you are given this X0 and X1 (just call this whole thing X right here), you are given the concept square, and you're tasked with finding A, the attention mask. Of course you're going to put the attention on these right here, and that is going to happen through gradient descent. Again, we're not learning a model to give you that attention, like in a GAN, where we learn a generator to just give it to you in one shot. What we're going to do is optimize, by gradient descent, on our smooth energy function to get the attention mask that satisfies the energy function. Alright, so this is the difference right here: gradient descent is part of the output procedure of the model. Usually we just use it to learn, and we learn a one-shot model, but here gradient descent is part of the model. So they introduce energy functions here, and they say: okay, we can have a policy on X. If we're given a concept W and an A, we can have a policy over X, which basically means we can find X's that are compatible with those by running gradient descent. You see, there is an X at step k minus 1, and we are running gradient descent on the energy function with respect to X to find a better X that satisfies the energy function given those inputs.
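As a sketch of these two procedures (all names, step sizes, and the hand-written energy are my assumptions; in the paper the energy is a learned network that actually reads the concept vector W, whereas this toy energy hard-codes "the attended entities are close together" and ignores W):

import torch

def energy(x, a, w):
    # Toy stand-in: low when the entities under attention share a position.
    pos = x[:, :2]
    center = (a[:, None] * pos).sum(0) / a.sum().clamp(min=1e-6)
    return (a * ((pos - center) ** 2).sum(-1)).sum()

n, alpha, steps = 4, 0.1, 20
x0 = torch.rand(n, 5)    # state X: per entity (x, y, r, g, b)
w = torch.randn(16)      # concept embedding W (ignored by the toy energy)
a = torch.tensor([1., 1., 1., 0.])

# Generation: keep A and W fixed, run gradient descent on the state X.
x1 = x0.clone().requires_grad_(True)
for _ in range(steps):
    grad, = torch.autograd.grad(energy(x1, a, w), x1)
    x1 = (x1 - alpha * grad).detach().requires_grad_(True)

# Identification: keep X and W fixed, run gradient descent on attention
# logits, squashed by a sigmoid so the mask stays between 0 and 1.
logits = torch.zeros(n, requires_grad=True)
for _ in range(steps):
    grad, = torch.autograd.grad(energy(x0, torch.sigmoid(logits), w), logits)
    logits = (logits - alpha * grad).detach().requires_grad_(True)

The point of the sketch is only the pattern: the same energy is used in both cases, and what changes is which argument we descend on.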
And the same if we want to find an attention mask: we run gradient descent on the attention mask, again in order to satisfy the same energy function. So you see, the inputs are the same both times; the concept here we can input square, here we can input square, but the difference is what we run gradient descent on and what we keep constant. And I would actually add a third line here, because if we're given an X and an A, we can also infer a W, and that's going to be an integral part. So if I have this situation right here, with attention on these four, I can ask the model: I'm given X and I'm given A, please infer W. And the model should ideally output: ha, this is square. Now, the model isn't going to output the word square; the model is going to output a vector representation of square, a vector of numbers, because that's how we've trained it. W is an embedding. But what we can do later is say: okay, I'm not going to tell you it's a square, you just come up with a vector W to describe this situation. And now I'm going to take that vector W that you came up with, mister or missus model, and show you a new situation, this situation right here. I'm going to give you the X, and I'm going to give you the W that you yourself have output, and now please tell me: what's the A? And then the model is of course supposed to tell you: oh, these four here are the A. So without ever being told that it should be a square, the model infers a W from one example situation, and then we transfer that W to a new situation. You can just say: whatever concept I have up here, please apply that same concept, which is the W, down here. And this is the entire paper now. This is concept learning through energy-based models. Okay, so that is the third line I would add down here: you can infer a concept vector if you're given the X and the A. In order to do all this, their energy function is going to be a so-called relational neural network. What you have is a simple neural network, a multi-layer perceptron, that always connects two entities to each other together with the concept vector; then there is, I believe, a sigmoid that combines the attention masks of the two, you sum over all pairs of entities, and then you send that through an MLP again. The exact architecture, I believe, is not so important; what's important is that they can feed this entire situation, the X, the A, and the W, into a neural network, and the neural network comes up with a number saying how well those three things fit together. And then you can transfer these concepts; that's pretty cool. Now, the only question is: we've always said we're given an energy function, we just have it. But of course this is a neural network, the neural network has parameters, and we don't know what good parameters are at the beginning. So we need to train this thing. And again, the reason why these are toy problems right here (we'll get to why, it's computational) is that this is kind of a new field in machine learning, I believe. At least I come from classical machine learning, where we only ever used something like SGD to train, and we only ever produced models that produce something in one shot.
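Before moving on, here is a rough sketch of what such a relational energy network could look like; the layer sizes, the sigmoid gating of the attention values, and the sum pooling are assumptions based on the description above, not the paper's exact architecture.

import torch
import torch.nn as nn

class RelationalEnergy(nn.Module):
    # An MLP scores every ordered pair of entities together with the
    # concept vector; pair terms are gated by the attention on the two
    # entities, summed, and mapped through a second MLP to one energy value.
    def __init__(self, attr_dim=5, concept_dim=16, hidden=64):
        super().__init__()
        self.pair = nn.Sequential(
            nn.Linear(2 * attr_dim + concept_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden))
        self.out = nn.Sequential(
            nn.Linear(hidden, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, x, a, w):
        terms = []
        for i in range(x.shape[0]):
            for j in range(x.shape[0]):
                gate = torch.sigmoid(a[i]) * torch.sigmoid(a[j])
                terms.append(gate * self.pair(torch.cat([x[i], x[j], w])))
        return self.out(torch.stack(terms).sum(0)).squeeze()

E = RelationalEnergy()
value = E(torch.rand(4, 5), torch.rand(4), torch.randn(16))  # scalar energy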
And here, I believe, is a new concept: using gradient descent as part of the output. And that makes a lot of trouble, so that's why we work on toy problems. So this here is the situation I described. You have a demo event, where you're given the X and the A, and you're supposed to infer the W. So the question here is: what's the W? The model will come up with a W, and you're not going to do anything with it right now; you're simply going to take that W and tell the model: well, here is a so-called test event, so please apply the W you came up with in this test event. And please find me the A, in this case, that satisfies the W and the X I give you here. And of course, for the A right here, as you can see, you don't even know that it's a square. The actual concept here is: move the gray ball to the middle of the square. But no one has told me this; I just looked at the picture. So the correct answer here would be to place attention on those four things, and then to take this thing and move it to the middle, right here. That would be the correct answer. Now, the question is: how do you train something like this? This is the loss function right here. They give you a concept and an initial situation, and you're supposed to infer the X1 and the A, and the loss function is simply the negative log likelihood of that. But what does that mean? We'll make it easier. If you have this procedure right here, where you have a demo event, this up here, and a test event, how are you going to learn the energy function from this entire procedure? Well, in this case, this entire procedure, this entire thing, is one training sample. But usually we have input and label. And here it's much more complicated, because we have an input, okay, that's this X and this A, cool. But then we have SGD as an integral part of the procedure to determine the W. Now, what we could do is just apply a loss to the W, but we don't, because we don't know what the embedding space for the concepts is. We could maybe train a classifier, but in this case we want to train the ability to transfer these concepts. So our training sample needs to be one instance of transferring a concept, which means SGD, for one, is part of our process here. And not only that: this X here, of course, is also part of our training sample. This here is X0, and this here is X1. And then we need to find this A, this attention mask, and that is an SGD procedure again. Remember, inferring anything through the energy function is a gradient descent process. So ultimately, our one training example consists of X0, the A at the beginning (let's call that A0), the SGD procedure to find W, X1, and the SGD procedure to find the output A, the A1. So this here is our input; in classical machine learning, this would be our X, and this here would be our label Y. And that's what we train on, such that the output right here, the A (this is of course the Y hat, this is what we predict), matches the label. And for the training data, we just write a little generator that knows what the concept is and, you know, makes this situation, right?
The generator will say: okay, I'm going to make an example for a square. Then it will make the attention mask for a square, and then it will make the new situation, again with a square, but without giving us the attention mask there; that attention mask becomes the true Y. So at the end, we can compare the attention mask our model outputs here, without it ever knowing that this should be a square, with the true label, which comes out of the generator that decided at the beginning that it should be a square. And the distance between those two, that's our loss. This is an enormous procedure to get a loss. And most crucially, you have to backpropagate through optimization procedures, and this is something that we just can't do yet in our models. If you take an image and a ResNet-50 right now, we do one forward propagation to get a label. In this procedure, if you had to backpropagate through the optimization procedure for each sample, you would need to backpropagate through 50 forward passes of the ResNet, if your optimization procedure is 50 steps long, and that is just not feasible right now. So that's why we don't do it. But I believe that once we find a smart way of backpropagating through optimization procedures, a whole lot of these things will become a new wave in machine learning. I'm really excited by this. I'm pretty sure it doesn't work yet, and this is very fiddly work, but I'm excited by the prospect that we can do this. So this is the training procedure, right? You are given X0, X1, and A, and you optimize in order to infer the concept behind it. The generator, the label generator for your training data, knows the concept; it had a concept in mind when it generated this, but you're not telling your model what the concept is, it needs to infer that. And then, using the thing that the model inferred, you can either give it X0 and X1 and infer A, or you can give it X0 and the A and infer X1. These are called identification and generation, respectively. Then you compare the output to what the generator intended at the beginning (again, the generator doesn't tell the model, because that's the label), and that will be your loss to train your energy function parameters. So if you think of this entire thing as one forward pass of the model, then it's just classic machine learning: you have a training sample, which is one forward pass, and you have a corresponding label. So let's jump to the experiments. The experiments are actually pretty cool. What they've done is, for example, taken the concept of being far apart from something, so that the little X needs to be as far away as possible from the ball that has the attention on it. So if you do generation, and you start the little X right here, and you ask the model to infer the next state of the world, it will push that little X away, right here. And in color, you can see the energy function values for the position of the X. So it pushes it away from this thing. But if you take the same concept embedding, the concept embedding of being far away, and you don't do generation but identification, which means you infer the A, then it will simply tell you that this ball right here is the furthest away from the X.
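Coming back to the little generator mentioned above: a hypothetical toy generator for the square concept might look like this (the function and all its details are made up for illustration); the returned attention mask is exactly the "true Y" that the model's inferred mask is compared against.

import torch

def make_square_example(n_entities=6, side=0.3):
    # Place four entities on the corners of a random axis-aligned square,
    # the rest at random, and return the state plus the ground-truth
    # attention mask. The concept ("square") is never handed to the model;
    # it is only implicit in examples like these.
    pos = torch.rand(n_entities, 2)
    offset = torch.rand(2) * (1 - side)
    corners = torch.tensor([[0., 0.], [0., side], [side, 0.], [side, side]])
    pos[:4] = offset + corners
    color = torch.rand(n_entities, 3)
    x = torch.cat([pos, color], dim=1)  # state: per entity (x, y, r, g, b)
    a = torch.tensor([1.] * 4 + [0.] * (n_entities - 4))
    return x, a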
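And on the backpropagation-through-optimization point: the inner gradient descent steps have to stay differentiable so that the outer loss can reach the energy function's parameters. Here is a minimal sketch of that mechanic in PyTorch (the quadratic objective is just a placeholder for the energy):

import torch

def inner_opt(f, v0, steps=5, alpha=0.1):
    # Differentiable inner-loop gradient descent on v. create_graph=True
    # retains the graph of each inner step, so an outer loss on the result
    # can backpropagate through the whole optimization procedure.
    v = v0
    for _ in range(steps):
        grad, = torch.autograd.grad(f(v), v, create_graph=True)
        v = v - alpha * grad
    return v

theta = torch.randn(3, requires_grad=True)  # stands in for energy parameters
v0 = torch.zeros(3, requires_grad=True)
v_star = inner_opt(lambda v: ((v - theta) ** 2).sum(), v0)
loss = ((v_star - torch.ones(3)) ** 2).sum()
loss.backward()  # gradients reach theta through all five inner steps

This is exactly why each training sample is so expensive: the backward pass has to traverse every inner step.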
So you can do all sorts of things like this, including transferring concepts. I find this here pretty interesting. So they have two different concepts. One concept is red as an identification: you need to identify the red ball. The other concept is that you need to turn something red: you take a ball that is maybe now blue and, since you can do gradient descent on the colors, you make it red. And since the energy function just takes the three inputs X, A, and W, you're not going to tell it which of the two situations you're in; it has to create this W embedding space through learning. And if you do it with those two concepts, it will put the make-something-red concept and the is-something-red concept in the same places. So this is a PCA, and in blue, I think, are the attention codes for identifying the red things, and in red are the generation codes for making something red, and they are put in the same place, which is pretty cool. It means that the energy function really learns the feature of something being red. I find this pretty neat. And then here they have some experiments where they basically show that you need this gradient descent optimization procedure, because only after many steps will the energy function be aligned with the concept that you want. So look at the energy function that is supposed to make a circle from samples; this is the example concept right here. If you just have a one-shot model, just one forward pass as we usually do, it cannot, or in this case at least it doesn't, learn to produce this in one shot. Only if you optimize for a few steps will it get this. So you optimize at inference time, and that seems to be very important. You can see demonstrations of this again here. The example is this, and after 20 steps of optimization the model has moved the points to these locations, whereas after only one step it hasn't done that yet. So there are complex things at work here. And this column here is where you don't have a relational neural network, so you basically can't capture dependencies between things, and you have no chance of making a square, because you don't know where the things are in relation to each other. But that's more of an engineering question. Their point is basically that models that do optimization at inference time are much more powerful than models that just do a one-shot forward pass. It's sort of like an autoregressive model in NLP versus a non-autoregressive model that produces all words at once: if you produce all words of a sentence at once, no word can depend on any other word, you just produce independent things, and the sentence will often not make any sense. They also have this KL objective, which is a regularizer, which I believe they built in through trial and error. But it is a regularizer; I don't really want to go into that. And then they do a demonstration and reenact it on a robot. The demonstration here is a situation where two things have attention on them, and you're supposed to move something into the middle of the two things. So that's the concept. You don't tell the robot the concept; it needs to learn that from data, infer that this is the concept that you want, and then transfer it to the other environment.
Now, you know, there's this robot environment, but ultimately they still encode the positions of these things and the position of the actuator. And really, all you have to do differently here is that instead of moving this actuator directly, you need to calculate what to do to the individual joints of the robot. So I think this is maybe because it's OpenAI and it needs to, you know, look robot-y and such, but the problem here is not really different. It's not real-world transfer or anything. So let's go through some of the things they can learn with this. You can see here that they can learn these geometric shapes, and on the left is the example event that the model needs to take the concept from. Now, this, I believe, is very much identification. What they did is train with a data set in which all of these appear: there are squares, there are lines, there are circles. So this is maybe my criticism here: it is not so much about generally inferring a concept, it is more like identifying the concept. The model basically just needs to decide: is this a line, is this a circle, or is this a square? Because those things were in the training data set. It would be nice to see how this generalizes to truly general concepts, or whether we can have zero-shot concept inference and then transfer those concepts to other things. Maybe that's already happening; I don't know. So here, the spatial arrangement is to either be close to something or to be between two things. If the attention is on two things, you want to be in between. You see, the top ones are the demonstrations: the model needs to recognize the concept and then optimize to fulfill that concept. Then shapes, so making shapes. Oh yeah, there's a triangle. Again, this, I believe, very much relies on recognition and not on an actual understanding of what a triangle is. Here you have proximity, being close or being far apart. What else is cool? Oh yeah, you have the recognition for the same task: you need to identify the ball that is closer. Here you also really see the optimization procedure in action, where, for example, at the beginning of each flicker you see the attention being everywhere and then stabilizing on one or two points. So if two points are equally close or far apart, you'll see the attention on multiple points, which is pretty cool, right? That means the model really learns this concept. Here's the count quantity: you can either have one, two, or larger than three, or something like that. Yeah, it seems like they tried three and four and it didn't work, so they just said: we'll just do larger than three. And here is this robot thing, where it also always needs to move in between. Now, this is the part that I'm not really impressed with, but, you know, that's fine. Okay, I hope this was a good introduction to energy functions, what you can do with them, what I think of them, and to this paper. It is a pretty cool paper. Yes, it only works on toy problems so far, but I believe this is one interesting direction of future machine learning and something yet to be very much explored. If you like this content, please subscribe, tell all of your friends about it, share, and I'll see you next time. Bye bye!
[ { "end": 5.84, "start": 0, "text": " Hi there, what you're seeing here is an energy-based model that learns the" }, { "end": 11.92, "start": 5.84, "text": " concept of a shape from a demonstration on the left. So on the left you can see a" }, { "end": 17.84, "start": 11.92, "text": " demonstration of data points sampled from a shape, in these cases circles or" }, { "end": 23.12, "start": 17.84, "text": " squares, and then the corresponding energy function that the model infers" }, { "end": 28.6, "start": 23.12, "text": " from that. And then it can replicate that shape on the right using that energy" }, { "end": 33.760000000000005, "start": 28.6, "text": " function. So the paper we're going to analyze today is called Concept Learning" }, { "end": 40.480000000000004, "start": 33.760000000000005, "text": " with Energy-Based Models by Igor Mordac of OpenAI. And this is a very cool paper," }, { "end": 45.2, "start": 40.480000000000004, "text": " or at least I think it's a very cool paper, but it is also a very hard paper." }, { "end": 51.84, "start": 45.2, "text": " So therefore first I want to kind of make a bit of an introduction into the" }, { "end": 56.72, "start": 51.84, "text": " concepts that we are facing in this paper. So the first thing you need to" }, { "end": 61.04, "start": 56.72, "text": " know are energy functions or energy-based models. What is an energy" }, { "end": 67.44, "start": 61.04, "text": " function? An energy function, sometimes called E, is simply a function with one" }, { "end": 73.2, "start": 67.44, "text": " or multiple inputs, let's call them X. And you can make the... if the energy" }, { "end": 79.72, "start": 73.2, "text": " function is happy with X it will be the value 0. And if the energy function is" }, { "end": 88.48, "start": 79.72, "text": " not happy with X it will be a high value, like larger than 0. So this is happy, this" }, { "end": 94.68, "start": 88.48, "text": " is not happy. So let's give some examples of this. We can formulate almost any" }, { "end": 99.12, "start": 94.68, "text": " machine learning problem in terms of an energy function. Let's say we have a" }, { "end": 110.80000000000001, "start": 99.12, "text": " classifier. The classifier takes as an input an image here, maybe of a cat, and a" }, { "end": 118.68, "start": 110.80000000000001, "text": " label. So if the label is cat then the energy will be 0 if the energy function" }, { "end": 124.44, "start": 118.68, "text": " is of course working correctly. And if we give the energy function the same" }, { "end": 132.04, "start": 124.44, "text": " image but we give it a wrong label, dog, then it is very high. In the case of the" }, { "end": 137.8, "start": 132.04, "text": " classifier of course we can simply take the loss function as the energy function" }, { "end": 141.52, "start": 137.8, "text": " and we automatically get an energy-based model. So the loss function here" }, { "end": 149.48, "start": 141.52, "text": " would be something like the negative log probability of the correct class." }, { "end": 155.35999999999999, "start": 149.48, "text": " But in any case it is just going to be a high number, let's call it 10 to the 9." }, { "end": 161.92, "start": 155.35999999999999, "text": " So the energy function says, this is very bad, this thing here is" }, { "end": 167.2, "start": 161.92, "text": " very bad, the entire thing you input. It won't tell you yet what's bad about it." 
}, { "end": 172.67999999999998, "start": 167.2, "text": " So that also means you can change any of the two things to make the classifier" }, { "end": 176.72, "start": 172.67999999999998, "text": " happy. Now usually we're concerned with changing the label. It's like, tell me" }, { "end": 183.44, "start": 176.72, "text": " which other label do I need to input to make you happy? And if we make the labels" }, { "end": 187.76, "start": 183.44, "text": " differentiable, of course we never input a true label, we actually input like a" }, { "end": 193.32, "start": 187.76, "text": " distribution, softmax distribution over labels, and that's differentiable." }, { "end": 199.8, "start": 193.32, "text": " We can use gradient descent to update the dog label, we can use gradient descent" }, { "end": 204.68, "start": 199.8, "text": " to find a label that would make the energy function more happy. So we could" }, { "end": 213.36, "start": 204.68, "text": " use gradient descent to get the cat level if we had a good classifier. But we can" }, { "end": 220.28, "start": 213.36, "text": " also optimize the image to make it compatible with the dog label." }, { "end": 224.84, "start": 220.28, "text": " That's things that if you ever saw deep dream or something like this, those" }, { "end": 230.28, "start": 224.84, "text": " models do exactly that, they optimize the input image for a particular label." }, { "end": 234.60000000000002, "start": 230.28, "text": " And there you can view the entire neural network including the loss function" }, { "end": 243.56, "start": 234.6, "text": " as the energy function. So what's another example? Another example is, let's say" }, { "end": 249.6, "start": 243.56, "text": " you have a k-means model, and the energy function simply input a data point." }, { "end": 255.07999999999998, "start": 249.6, "text": " And for the data point, what you're going to do is you're going to find the min" }, { "end": 260.88, "start": 255.07999999999998, "text": " cluster index, the min k, over, you know, you have your multiple clusters here and" }, { "end": 265, "start": 260.88, "text": " your data point might be here, so you're going to find the cluster that's closest" }, { "end": 272.64, "start": 265, "text": " and then the distance here, this distance d, will be the energy of that. So the" }, { "end": 277.76, "start": 272.64, "text": " model is very happy when your data point comes from one of the clusters, but your" }, { "end": 281.28, "start": 277.76, "text": " model is not happy when the data point is far away. And that would be the cost" }, { "end": 285.64, "start": 281.28, "text": " function of the k-means function. So that's an energy-based model too. Now" }, { "end": 290.56, "start": 285.64, "text": " currently energy-based models have come into fashion through things like GANs or" }, { "end": 298.04, "start": 290.56, "text": " any sort of noise contrastive estimation. So in a GAN, what you" }, { "end": 303.68, "start": 298.04, "text": " have is you have a discriminator. And the discriminator will basically learn a" }, { "end": 310.4, "start": 303.68, "text": " function to differentiate data from non-data. So that by itself is an energy" }, { "end": 314.68, "start": 310.4, "text": " function. The discriminator will learn a function and that function will be low" }, { "end": 320.88, "start": 314.68, "text": " wherever the discriminator thinks there is data. 
So it will usually do" }, { "end": 324.92, "start": 320.88, "text": " this around the data points, so the data points form the valleys right here. And" }, { "end": 330.96000000000004, "start": 324.92, "text": " then the generator will basically take that discriminator function and will try" }, { "end": 336.56, "start": 330.96000000000004, "text": " to infer points that are also in these valleys, to produce points that are also" }, { "end": 343.24, "start": 336.56, "text": " in the valleys. And then you basically have an energy learning competition. The" }, { "end": 349.12, "start": 343.24, "text": " discriminator now tries to push down on the energy where the true data is and" }, { "end": 354.44, "start": 349.12, "text": " push up on the energy where the generated data is. And that will give you" }, { "end": 362.72, "start": 354.44, "text": " basically a steeper energy-based function in the future. So in this" }, { "end": 368.52, "start": 362.72, "text": " case the discriminator neural network is the energy function. And the" }, { "end": 373.76, "start": 368.52, "text": " degenerator just tries to produce data that is compatible with that energy" }, { "end": 377.64, "start": 373.76, "text": " function. So I hope that the concept of what an energy function is is a bit" }, { "end": 382.91999999999996, "start": 377.64, "text": " clear. Again any machine learning problem can be formulated in terms of" }, { "end": 388.59999999999997, "start": 382.91999999999996, "text": " an energy function. Now what is not done so far is what we alluded to a little" }, { "end": 395.84, "start": 388.59999999999997, "text": " bit before in the classifier example and also here. So right now when we want to" }, { "end": 402.4, "start": 395.84, "text": " train again we simply take the generator to produce data. Now what's the" }, { "end": 406.71999999999997, "start": 402.4, "text": " generator's goal? The generator's goal is to hit those valleys in the energy" }, { "end": 411.88, "start": 406.71999999999997, "text": " function. And we produce a generator in one shot to produce this data. But" }, { "end": 417.15999999999997, "start": 411.88, "text": " what we could also do is of course we could just start somewhere. Let's say" }, { "end": 421.96, "start": 417.15999999999997, "text": " here we pick a random data point and then we use gradient descent because the" }, { "end": 427.2, "start": 421.96, "text": " energy function in this case is smooth. We use gradient descent to just drop" }, { "end": 433.23999999999995, "start": 427.2, "text": " down this valley and then find ourselves in this valley. So without ever training" }, { "end": 438.76, "start": 433.23999999999995, "text": " a generator we can use this methods to produce points that are in the valley of" }, { "end": 445.28, "start": 438.76, "text": " the energy function. And I don't know if people... I guess people have" }, { "end": 448.97999999999996, "start": 445.28, "text": " trained GANs like this. The reason why it doesn't work let's say in the real" }, { "end": 454.08000000000004, "start": 448.98, "text": " world is because that procedure will just produce adversarial examples for" }, { "end": 459.04, "start": 454.08000000000004, "text": " the discriminator. And those usually look like nothing like data. Because if you" }, { "end": 464.36, "start": 459.04, "text": " keep the discriminator just stable and gradient descent against it what you'll" }, { "end": 470.92, "start": 464.36, "text": " get isn't really qualitatively good. 
But in principle if the discriminator was a" }, { "end": 476.28000000000003, "start": 470.92, "text": " good energy function for the data to describe the data we could use gradient" }, { "end": 482.91999999999996, "start": 476.28, "text": " descent. The same up here. In order to find a good label for an image given" }, { "end": 489.17999999999995, "start": 482.91999999999996, "text": " that we have a good energy function, we could simply gradient" }, { "end": 497.71999999999997, "start": 489.17999999999995, "text": " descent on the label in order to find a better label. So in this" }, { "end": 504.11999999999995, "start": 497.71999999999997, "text": " paper we're going to have a situation where we say we're given an energy" }, { "end": 510.52, "start": 504.12, "text": " function and we're given a bunch of inputs. They are then called X, A, and W." }, { "end": 517.92, "start": 510.52, "text": " And if I have my energy function already, if I have given my energy function and I" }, { "end": 525.5600000000001, "start": 517.92, "text": " have given two of those three things, any two, I can infer the last thing" }, { "end": 532.64, "start": 525.5600000000001, "text": " simply by gradient descent on my energy function. Because I know the energy" }, { "end": 538.24, "start": 532.64, "text": " function is zero when the energy function is happy with the input." }, { "end": 543.84, "start": 538.24, "text": " So when all of these things agree, basically the energy function is happy, it" }, { "end": 548.28, "start": 543.84, "text": " will output zero otherwise it will output a high value. Therefore if I'm given any" }, { "end": 554.56, "start": 548.28, "text": " of those two, any two of those three things, I can find a compatible third" }, { "end": 560.24, "start": 554.56, "text": " thing by descending. And then of course over here in these machine learning" }, { "end": 565.16, "start": 560.24, "text": " problems, the task was always actually to learn an energy function. So" }, { "end": 570.16, "start": 565.16, "text": " usually in the training data set we are given images and labels and we want to" }, { "end": 575.04, "start": 570.16, "text": " learn this energy function which would be parameterized. So we want to learn the" }, { "end": 580.72, "start": 575.04, "text": " parameters. And the same here in our general case if we are now given three" }, { "end": 585.08, "start": 580.72, "text": " things but we are not given the parameters of the energy function, we" }, { "end": 590.64, "start": 585.08, "text": " don't know what those are. As long as we're given all of the inputs in our" }, { "end": 594.6800000000001, "start": 590.64, "text": " training data set, and our training data set guarantees these are actually, you" }, { "end": 597.9200000000001, "start": 594.6800000000001, "text": " know, these are inputs that are compatible with each other, the energy" }, { "end": 602.6, "start": 597.9200000000001, "text": " function should be low, we can simply gradient descent on the parameters of" }, { "end": 608.08, "start": 602.6, "text": " the energy function. So in a sense there are four things, right? There are these" }, { "end": 611.1600000000001, "start": 608.08, "text": " three inputs and then there are the parameters of the energy function. If" }, { "end": 618.8399999999999, "start": 611.16, "text": " we're given any three of those four, we can gradient descent on the rest. And" }, { "end": 624.8, "start": 618.8399999999999, "text": " that's going to be the basis. 
So the X here is going to be the so-called state." }, { "end": 632.68, "start": 624.8, "text": " And the state in this paper is going to be images of entities. The entities," }, { "end": 636.88, "start": 632.68, "text": " sorry it's not going to be images, but the entities are these little circles" }, { "end": 643.2, "start": 636.88, "text": " that you're going to see. And each of those entities can have an X position, a" }, { "end": 650.12, "start": 643.2, "text": " Y position, and I believe a color. So R, G and B. So each of those can have that." }, { "end": 655.76, "start": 650.12, "text": " And then the concatenation of all of those attributes is one big vector and" }, { "end": 661, "start": 655.76, "text": " that is your X, that's your state. So state is number of entities and their" }, { "end": 667.32, "start": 661, "text": " attributes. A is going to be an attention mask over the state. So A is" }, { "end": 675.92, "start": 667.32, "text": " going to be... here you have four entities, so A will have four entries telling you" }, { "end": 684.12, "start": 675.92, "text": " which of these entities you should pay attention to right now. And W is going to" }, { "end": 693.48, "start": 684.12, "text": " be a concept vector so called. So W is going to be the embedding of a concept." }, { "end": 699.24, "start": 693.48, "text": " Now what a concept is in this case is very general. I can give you an example." }, { "end": 709.08, "start": 699.24, "text": " One concept is do the entities that the A pays attention to, are they" }, { "end": 714.48, "start": 709.08, "text": " close to each other? So in this case you see we have two entities that A has a" }, { "end": 723.6, "start": 714.48, "text": " high value on and this is this ball up here and this ball down here. Now if the" }, { "end": 729.9200000000001, "start": 723.6, "text": " concept vector is the embedding for the concept of being close to each other" }, { "end": 737.2800000000001, "start": 729.9200000000001, "text": " then the energy function would be very happy if those two things are close to" }, { "end": 740.8, "start": 737.28, "text": " each other and it would be very unhappy if those two things aren't close to each" }, { "end": 745.8399999999999, "start": 740.8, "text": " other. But in the very same situation, so the same X, the same attention mask, but" }, { "end": 753.72, "start": 745.8399999999999, "text": " a different concept, so a different W vector right here, then the energy" }, { "end": 757.1999999999999, "start": 753.72, "text": " function would be maybe very happy if the two things are far apart and maybe" }, { "end": 764.56, "start": 757.1999999999999, "text": " unhappy if the two things are close. So the question is always how are the three" }, { "end": 768.7199999999999, "start": 764.56, "text": " things that you put into the energy function compatible with each other and" }, { "end": 775.92, "start": 768.7199999999999, "text": " given all but one of these things you can infer the other. So let's say you" }, { "end": 781.7199999999999, "start": 775.92, "text": " have a perfect energy function for this situation." }, { "end": 788, "start": 781.7199999999999, "text": " You're just given the energy function, you can trust it. And you are given, let's" }, { "end": 791.8399999999999, "start": 788, "text": " make an example, you are given the X, so you're given the state, I'm going to draw" }, { "end": 802.08, "start": 791.84, "text": " the state down here, right? 
Okay, this is the state and you're given the W and the" }, { "end": 808.34, "start": 802.08, "text": " W is the embedding, it's a vector but in embedding space, but the" }, { "end": 818.36, "start": 808.34, "text": " embedding is for a line, right? So the geometric unit of a line." }, { "end": 825.24, "start": 818.36, "text": " Now your task is to find A, the attention mask that will make the energy function" }, { "end": 829.8000000000001, "start": 825.24, "text": " happy. And as you can see right here, what you would do is you would put a lot of" }, { "end": 836.16, "start": 829.8000000000001, "text": " weight on this, this, this and this ball and no weight on that ball, because those" }, { "end": 841.84, "start": 836.16, "text": " make a line. And since everything here is differentiable, so the state is" }, { "end": 845, "start": 841.84, "text": " differentiable, the attention is differentiable and the concepts are" }, { "end": 849.8, "start": 845, "text": " vectors, they're differentiable, you can use gradient descent to find that. Another" }, { "end": 857.36, "start": 849.8, "text": " example, if you're given again the same W, so line, and you are given this" }, { "end": 865.76, "start": 857.36, "text": " following thing and you are given, now you're given the attention on these" }, { "end": 871.8, "start": 865.76, "text": " three and you say please find the X, please find the X, the state that makes" }, { "end": 877.28, "start": 871.8, "text": " this energy function happy. Now this here you would call the starting state, the X" }, { "end": 885.0799999999999, "start": 877.28, "text": " zero, your task is going to be find the X one, find the state, how do you have to" }, { "end": 888.78, "start": 885.0799999999999, "text": " change this state such that the energy function is happy? And of course the" }, { "end": 893.28, "start": 888.78, "text": " answer is going to be is to push this ball here inward until it is in the" }, { "end": 898.92, "start": 893.28, "text": " middle of the two others, so the three form a line. Right, these three form a" }, { "end": 902.8399999999999, "start": 898.92, "text": " line. You don't have to do anything to this ball up here, because" }, { "end": 908.68, "start": 902.8399999999999, "text": " there is no attention on it. And the attention, it's only, is the concept for" }, { "end": 913.5999999999999, "start": 908.68, "text": " the things that you put attention on and the state, are those three in agreement" }, { "end": 921.8, "start": 913.5999999999999, "text": " and the energy function is happy. Okay, we have covered the basics. Now let's" }, { "end": 928.12, "start": 921.8, "text": " dive into the paper. I think this is the longest introduction ever, but" }, { "end": 936.96, "start": 928.12, "text": " I think it will pay off once you see. So they specifically, or this" }, { "end": 940.96, "start": 936.96, "text": " author, I think it's a single author, identifies two different things that you" }, { "end": 945.4, "start": 940.96, "text": " can do with an energy function here. Of course you can do more as we saw, but they" }, { "end": 952.82, "start": 945.4, "text": " identify two. So here is where you have given the initial state and an attention" }, { "end": 959.9200000000001, "start": 952.82, "text": " mask and you want to find the x1, the state that satisfies the concept and" }, { "end": 965.32, "start": 959.9200000000001, "text": " attention the most. This the author calls generation. 
As you can see here," }, { "end": 970.1600000000001, "start": 965.32, "text": " these four things that you have the attention on are pushed around until" }, { "end": 976.22, "start": 970.1600000000001, "text": " they make a square, because the concept right now is square. And in the other" }, { "end": 983.72, "start": 976.22, "text": " case, where you are given this x0 and x1, just call this x right here, just call" }, { "end": 990.0400000000001, "start": 983.72, "text": " this thing x. If you're given those two, and you are given the concept square, and" }, { "end": 994.4, "start": 990.0400000000001, "text": " you're tasked with finding a, the attention mask, of course you're going to" }, { "end": 999.96, "start": 994.4, "text": " put the attention on these right here. And that is going to happen through" }, { "end": 1004.64, "start": 999.96, "text": " gradient descent. Again, we're not learning a model to give you that" }, { "end": 1009.12, "start": 1004.64, "text": " attention. Like in a GAN, we're learning a generator to just one shot give it to" }, { "end": 1013.4, "start": 1009.12, "text": " you. Right now, what we're going to do is we're going to gradient descent" }, { "end": 1017.8, "start": 1013.4, "text": " optimize on our smooth energy function to give us that perfect attention mask" }, { "end": 1022.4, "start": 1017.8, "text": " that satisfies the energy function. Alright, so this is the difference right" }, { "end": 1028.3, "start": 1022.4, "text": " here. Gradient descent is part of the output procedure of the model. Usually we" }, { "end": 1033.08, "start": 1028.3, "text": " just use it to learn, and we learn a one-shot model. But here gradient descent" }, { "end": 1040.1999999999998, "start": 1033.08, "text": " is part of the model. So they introduce energy functions here, and they say, okay," }, { "end": 1046.6, "start": 1040.1999999999998, "text": " we can have a policy on x. So if we're given a concept W, and if we're given an" }, { "end": 1052.4399999999998, "start": 1046.6, "text": " A, we can have a policy over x, which basically means we can find x's that are" }, { "end": 1058.3999999999999, "start": 1052.4399999999998, "text": " compatible with that by running gradient descent here. You see there is an xk" }, { "end": 1065.8400000000001, "start": 1058.4, "text": " minus one, and we are running gradient descent on the energy function with" }, { "end": 1071.2800000000002, "start": 1065.8400000000001, "text": " respect to x to find a better x that satisfies the energy function given" }, { "end": 1078.1200000000001, "start": 1071.2800000000002, "text": " those inputs. And the same if we want to find an attention mask, we are running" }, { "end": 1085.48, "start": 1078.1200000000001, "text": " gradient descent on the attention mask, again, in order to satisfy the same" }, { "end": 1091.16, "start": 1085.48, "text": " energy function. So you see the inputs are both times the same. The concept here" }, { "end": 1097.16, "start": 1091.16, "text": " we can input square, here we can input square, but the difference is what we're" }, { "end": 1102.14, "start": 1097.16, "text": " running gradient descent on and what we keep constant. And I would get, I would" }, { "end": 1109.1200000000001, "start": 1102.14, "text": " add a third line here actually, because we can also, if we're given an x and an" }, { "end": 1116, "start": 1109.12, "text": " a, we can also infer a W. And that's going to be an integral part. 
So if I" }, { "end": 1124.6399999999999, "start": 1116, "text": " have this right here, and this situation, and I have, say I have attention on these" }, { "end": 1133.2399999999998, "start": 1124.6399999999999, "text": " four, now I can ask the model, so I'm given x and I'm given a, I can ask the" }, { "end": 1141.72, "start": 1133.24, "text": " model to infer W. And the model should ideally output, ha, this is square. Now" }, { "end": 1146.04, "start": 1141.72, "text": " the model isn't going to output square, the model is going to output a vector" }, { "end": 1150.56, "start": 1146.04, "text": " representation of square. So the model is going to output square but as a" }, { "end": 1157.84, "start": 1150.56, "text": " vector of numbers, because that's how we've trained it. W is an embedding. But" }, { "end": 1163.3999999999999, "start": 1157.84, "text": " what we can then do later is we can say, okay, I'm not going to tell you it's a" }, { "end": 1169, "start": 1163.3999999999999, "text": " square, you just come up with a vector W to describe this situation. And now I'm" }, { "end": 1174.1599999999999, "start": 1169, "text": " going to take that vector W that you came up with, miss, mister or missus model," }, { "end": 1182.9599999999998, "start": 1174.1599999999999, "text": " and I'm going to take, tell you a new situation. This situation right here. And" }, { "end": 1189.32, "start": 1182.96, "text": " I'm going to now give you x, and I'm going to give you the W that you" }, { "end": 1196.08, "start": 1189.32, "text": " yourself have output, and now please tell me what's the a. And then the model is of" }, { "end": 1200.88, "start": 1196.08, "text": " course supposed to tell you, oh these four here are the a. So without" }, { "end": 1205.68, "start": 1200.88, "text": " ever telling that it should be a square, what you can do is you can let the model" }, { "end": 1212.32, "start": 1205.68, "text": " infer a W from one example situation, and then transfer that W to a new" }, { "end": 1219.08, "start": 1212.32, "text": " situation. So it can identify, you can just say whatever concept I have up here," }, { "end": 1225.36, "start": 1219.08, "text": " please apply that same concept, which is the W down here. And this is the entire" }, { "end": 1233.24, "start": 1225.36, "text": " paper now. This is the concept learning through energy-based models. Okay, so that" }, { "end": 1238.1599999999999, "start": 1233.24, "text": " is kind of a third line I would add down here. You can infer a concept vector if" }, { "end": 1245.16, "start": 1238.16, "text": " you're given the X and the a. So in order to do all this, their energy function is" }, { "end": 1249.8400000000001, "start": 1245.16, "text": " going to be a so-called relational neural network. So what you'll have is" }, { "end": 1254.44, "start": 1249.8400000000001, "text": " you'll have a simple neural network, a multi-layer perceptron that always" }, { "end": 1261.0400000000002, "start": 1254.44, "text": " connects two entities to each other with the concept vector, and then this is I" }, { "end": 1267.0800000000002, "start": 1261.0400000000002, "text": " believe a sigmoid that connects the attention masks of the two, and you" }, { "end": 1273.6399999999999, "start": 1267.08, "text": " simply sum over all pairs of two entries in your model, and then you send that" }, { "end": 1278.9199999999998, "start": 1273.6399999999999, "text": " through an MLP, sorry, through an MLP again. 
This I believe is not so important," }, { "end": 1284.12, "start": 1278.9199999999998, "text": " it's just important that they can feed this entire situation, the X, the a, and" }, { "end": 1287.36, "start": 1284.12, "text": " the W, they can basically feed into a neural network, and the neural network" }, { "end": 1294.28, "start": 1287.36, "text": " comes up with a number of how well those three things fit together. And then you" }, { "end": 1299.6, "start": 1294.28, "text": " can transfer these concepts. That's pretty cool. Now the only question is, of" }, { "end": 1305.72, "start": 1299.6, "text": " course, we've always said we're given an energy function, we're just, we just have" }, { "end": 1309.6399999999999, "start": 1305.72, "text": " it. But of course, this is a neural network, and the neural network has" }, { "end": 1314.12, "start": 1309.6399999999999, "text": " parameters, and the parameters, we don't know what good parameters are at the" }, { "end": 1320.32, "start": 1314.12, "text": " beginning. So we need to train this thing. And again, the reason why these are toy" }, { "end": 1325.36, "start": 1320.32, "text": " problems right here is, I mean, we'll get to why it's computational, but this is" }, { "end": 1330.6799999999998, "start": 1325.36, "text": " kind of a new field, I believe in machine learning, at least I come from classical" }, { "end": 1336.72, "start": 1330.6799999999998, "text": " machine learning, and we only ever have used, like SGD to train, and we only ever" }, { "end": 1345.04, "start": 1336.72, "text": " have produced models that one shot produce something. And here, we, this is a," }, { "end": 1348.76, "start": 1345.04, "text": " I believe this is a new concept where you use gradient descent as part of the" }, { "end": 1356.56, "start": 1348.76, "text": " output. And that makes a lot of trouble. So that's why we work in toy problems. So" }, { "end": 1362.84, "start": 1356.56, "text": " what this, this here is the situation I described. You have a demo event where" }, { "end": 1368.64, "start": 1362.84, "text": " you're given the X and the A, and you're supposed to infer the W. So the question" }, { "end": 1373.96, "start": 1368.64, "text": " here is, what's the W? And the model will come up with a W, and you're not going to" }, { "end": 1379.24, "start": 1373.96, "text": " do anything, you know, right now, you're simply going to take that W and tell it," }, { "end": 1386.24, "start": 1379.24, "text": " oh, well, here is a so-called test event. So please apply the W you came up with in" }, { "end": 1393.08, "start": 1386.24, "text": " this test event. And please find me the A, in this case, that satisfies the W and" }, { "end": 1398.64, "start": 1393.08, "text": " the X I give you here. And of course, the A right here is, as you can see, even you" }, { "end": 1404.88, "start": 1398.64, "text": " don't know that it's a square. And the actual concept here is move the gray" }, { "end": 1409.8000000000002, "start": 1404.88, "text": " ball to the middle of the square, right? That is it here. But no one has told me" }, { "end": 1415.8400000000001, "start": 1409.8000000000002, "text": " this. I just looked at the picture. So the correct answer here would be to place" }, { "end": 1421.16, "start": 1415.8400000000001, "text": " attention on those four things, and then to take this thing and move it to the" }, { "end": 1428.0400000000002, "start": 1421.16, "text": " middle right here, in this over here. So that would be the correct answer. 
Now," }, { "end": 1436.36, "start": 1428.04, "text": " the question is, how do you train something like this? And they show" }, { "end": 1441, "start": 1436.36, "text": " that they, so this is the loss function right here. The loss function is they" }, { "end": 1447.84, "start": 1441, "text": " give you a concept and an initial situation, and you're supposed to infer" }, { "end": 1453.12, "start": 1447.84, "text": " the X1 and the A. And the loss function is simply the negative log likelihood of" }, { "end": 1463.7199999999998, "start": 1453.12, "text": " that. But what does that mean? So we'll make it easier. If you have this" }, { "end": 1469.52, "start": 1463.7199999999998, "text": " procedure right here, where you have a demo event, this up here, this is demo," }, { "end": 1475.7199999999998, "start": 1469.52, "text": " and this is a test event. How are you going, this entire procedure, how are you" }, { "end": 1482.7199999999998, "start": 1475.7199999999998, "text": " going to learn the energy function? Well, in this case, this entire procedure, this" }, { "end": 1492.22, "start": 1482.72, "text": " entire thing is one training sample. But usually we have input and" }, { "end": 1498.88, "start": 1492.22, "text": " label. And now here, it's much more complicated because, so we have input, okay," }, { "end": 1504.68, "start": 1498.88, "text": " that's this X and this A, cool. But then we have SGD as integral part of the" }, { "end": 1510.78, "start": 1504.68, "text": " procedure to determine the W. And now what we could do is just apply a loss to" }, { "end": 1514.72, "start": 1510.78, "text": " the W, but we don't because we don't know what the embedding space for the concepts" }, { "end": 1520, "start": 1514.72, "text": " is. We could maybe train a classifier, but in this case, we want to train the" }, { "end": 1526.32, "start": 1520, "text": " ability to transfer these concepts. So our training sample needs to be one time" }, { "end": 1533.72, "start": 1526.32, "text": " transferring a concept. So SGD for one is part of our process here. And not only" }, { "end": 1539.12, "start": 1533.72, "text": " that, but then this X here, of course, is also part of our training sample, right?" }, { "end": 1544.28, "start": 1539.12, "text": " This appears X0 and this here is X1. And now we need to find this A, this" }, { "end": 1550.1999999999998, "start": 1544.28, "text": " attention mask. And that is an SGD again. Remember, inferring anything through the" }, { "end": 1555.4799999999998, "start": 1550.1999999999998, "text": " energy function is a gradient descent process. So ultimately, our one training" }, { "end": 1564.3999999999999, "start": 1555.4799999999998, "text": " example consists of X0, A at the beginning, so let's call that A0. It" }, { "end": 1572.76, "start": 1564.4, "text": " consists of the SGD procedure to find W, it consists of X1, and it consists of" }, { "end": 1582.8400000000001, "start": 1572.76, "text": " the SGD procedure to find A, the A1, the output A. And then that will give us the" }, { "end": 1590.44, "start": 1582.8400000000001, "text": " output A, the A1. So this here is our input in the classical machine learning. This" }, { "end": 1596.68, "start": 1590.44, "text": " would be our X, and this here would be our label Y. And that's what we train on." }, { "end": 1602.76, "start": 1596.68, "text": " We train. So such that the output right here, the A, this is of course, sorry, this" }, { "end": 1607.72, "start": 1602.76, "text": " is of course the Y hat. 
This is what we predict. And in the training sample, we" }, { "end": 1614.3200000000002, "start": 1607.72, "text": " just write a little generator that will, you know, make this situation that knows" }, { "end": 1618.2, "start": 1614.3200000000002, "text": " what the concept is, right? It will say, okay, I'm gonna make an example for a" }, { "end": 1622.16, "start": 1618.2, "text": " square, then it will make this, will make the attention mask for a square, and then" }, { "end": 1626.56, "start": 1622.16, "text": " it will make the new situation again with a square, but not tell us the" }, { "end": 1636.64, "start": 1626.56, "text": " attention mask there, and it will make the attention mask into the true Y. So at" }, { "end": 1642.3600000000001, "start": 1636.64, "text": " the end, we can compare what our model output, the attention mask we output" }, { "end": 1647.24, "start": 1642.3600000000001, "text": " here, without ever knowing that this should be a square, right? And we have the" }, { "end": 1653.4, "start": 1647.24, "text": " true label, which comes out of the generator that at the beginning decided" }, { "end": 1658.52, "start": 1653.4, "text": " that it should be a square. And then the loss, the distance between those two," }, { "end": 1666.72, "start": 1658.84, "text": " that's our loss. This is an, this is an enormous procedure to get a loss. And" }, { "end": 1672.84, "start": 1667.16, "text": " most crucially, you have to back propagate through optimization procedures." }, { "end": 1677.8799999999999, "start": 1672.84, "text": " And this is something that we just can't do yet in our models. If you take an" }, { "end": 1682.9599999999998, "start": 1677.8799999999999, "text": " image, a ResNet 50, right, right now, we do one forward propagation to get a" }, { "end": 1688.24, "start": 1682.9599999999998, "text": " label. In this procedure, if you had to back propagate through the optimization" }, { "end": 1693.1999999999998, "start": 1688.24, "text": " procedure, for each sample, you would need to basically back propagate through" }, { "end": 1699.24, "start": 1693.1999999999998, "text": " 50 forward passes of the ResNet, if you if your optimization procedure is 50" }, { "end": 1705.32, "start": 1699.24, "text": " steps long, and that is just not feasible right now. So that's why we don't do it." }, { "end": 1713.08, "start": 1706.04, "text": " But I believe maybe once we find a smart way of back propping through optimization" }, { "end": 1718.08, "start": 1713.08, "text": " procedures, a whole lot of these things will become the new a new wave in" }, { "end": 1722.32, "start": 1718.08, "text": " machine learning. I really I'm excited by this. I'm pretty sure it doesn't work" }, { "end": 1729.08, "start": 1722.32, "text": " yet. And this is very fiddly, fiddly work. But I'm excited by the prospect that" }, { "end": 1735.8, "start": 1729.08, "text": " we can do this. So this is the training procedure, right? You are given x0, x1," }, { "end": 1742.1599999999999, "start": 1735.8, "text": " and a, and you optimize in order to infer the concept behind it, right? The" }, { "end": 1747.1999999999998, "start": 1742.1599999999999, "text": " generator that your level generator of your training data, it knows the concept," }, { "end": 1750.4399999999998, "start": 1747.1999999999998, "text": " it has a concept in mind when it generated this, but you're not telling" }, { "end": 1755.8799999999999, "start": 1750.6799999999998, "text": " your model what the concept is, it needs to infer that. 
And then using the" }, { "end": 1762.48, "start": 1755.88, "text": " model, the thing that the model inferred, you can either give it x0 and x1 and" }, { "end": 1766.88, "start": 1762.48, "text": " infer a, or you can give it the x and the a and infer x, you can do either of" }, { "end": 1769.88, "start": 1766.88, "text": " those, right? These are called identification or generation," }, { "end": 1775.8000000000002, "start": 1769.88, "text": " respectively. And then you compare the output here to what the generator at the" }, { "end": 1781.5600000000002, "start": 1775.8000000000002, "text": " beginning thought, again, it's not telling you it's that's because that's" }, { "end": 1787.48, "start": 1781.56, "text": " the label. And you compare this to that. And that will be your loss to train your" }, { "end": 1792.84, "start": 1787.48, "text": " energy function parameters. So your training samples, if you think of this" }, { "end": 1797.44, "start": 1792.84, "text": " entire thing as one forward pass of the model, then it's just classic machine" }, { "end": 1800.76, "start": 1797.44, "text": " learning, right? You have a training sample, which is one forward pass and you" }, { "end": 1807.32, "start": 1800.76, "text": " have a corresponding label that you infer. So let's jump to the experiments" }, { "end": 1814, "start": 1807.32, "text": " right here. The experiments are actually pretty cool. So what they've done is, for" }, { "end": 1823.36, "start": 1814, "text": " example, taken the concept of being far apart from something now being far apart" }, { "end": 1828.48, "start": 1823.36, "text": " so that the little x needs to be as far away as possible from the ball that has" }, { "end": 1835.56, "start": 1828.48, "text": " the attention on it. So if you do generation, and you start the little x" }, { "end": 1842.04, "start": 1835.56, "text": " right here, and you ask the model, where please infer the next state of the world," }, { "end": 1846.8, "start": 1842.04, "text": " it will push that little x away right here. And in color, you can see the energy" }, { "end": 1853.52, "start": 1846.8, "text": " function values of the position of the x. So it pushes it away from this thing. But" }, { "end": 1859.44, "start": 1853.52, "text": " if you take the same concept embedding, the concept embedding of being far away," }, { "end": 1865.56, "start": 1859.44, "text": " but you don't do generation, you do identification, which means you infer the a," }, { "end": 1871.4, "start": 1865.56, "text": " then it will simply tell you that this ball right here is the furthest away" }, { "end": 1878.96, "start": 1871.4, "text": " from the x. So you can do all sorts of things like this and transferring" }, { "end": 1884.0800000000002, "start": 1878.96, "text": " concepts. I find this here pretty interesting. So they have two different" }, { "end": 1891.6399999999999, "start": 1884.08, "text": " concepts. One concept is red as an identification. You need to identify the" }, { "end": 1897.8, "start": 1891.6399999999999, "text": " red ball. But the other concept is you need to turn something red, right? You" }, { "end": 1902.6, "start": 1897.8, "text": " need to take a ball that is maybe now blue, and of course the color, you can" }, { "end": 1908, "start": 1902.6, "text": " gradient descent on the colors, you need to make it red. And since the energy" }, { "end": 1913.36, "start": 1908, "text": " function, it just takes three input x, a and w. 
You're not gonna tell" }, { "end": 1921.24, "start": 1913.36, "text": " it right now in which situation you are. It has to create this w embedding" }, { "end": 1928.8, "start": 1921.24, "text": " space through learning. And if you do it with those two concepts, then it will put" }, { "end": 1935.7199999999998, "start": 1928.8, "text": " the make something red concept and the is something red concepts in the same" }, { "end": 1941.32, "start": 1935.7199999999998, "text": " places. So this is a PCA. And in blue, I think these blue is the attention codes" }, { "end": 1946.48, "start": 1941.32, "text": " for identify the red things. And in red are the generation code for make" }, { "end": 1951, "start": 1946.48, "text": " something red and they will be put in the same place, which is pretty cool. It" }, { "end": 1954.48, "start": 1951, "text": " means that the energy function really learns the feature of something being" }, { "end": 1962.48, "start": 1954.48, "text": " red. I find this pretty neat. And then here they have some" }, { "end": 1967.36, "start": 1962.48, "text": " experiments where they basically show we need that gradient descent" }, { "end": 1973.3999999999999, "start": 1967.36, "text": " optimization procedure because only after many steps will the energy" }, { "end": 1978.76, "start": 1973.3999999999999, "text": " function basically be aligned with the concept that you want. So if you have a" }, { "end": 1983.6, "start": 1978.76, "text": " zero-shot model, like just one forward pass as we do here, you'll see that the" }, { "end": 1989.26, "start": 1983.6, "text": " energy function that is supposed to make a circle from samples, right? This is the" }, { "end": 1996.1999999999998, "start": 1989.26, "text": " example concept right here. If you just have a one-shot model, it will, it cannot" }, { "end": 2001.64, "start": 1996.2, "text": " or in this case at least, it doesn't learn to one-shot produce. Only if you" }, { "end": 2007.2, "start": 2001.64, "text": " optimize for a few steps will it get this. So you optimize at inference time" }, { "end": 2013.44, "start": 2007.2, "text": " and that seems to be very important. You can see again here demonstrations of" }, { "end": 2021.1200000000001, "start": 2013.44, "text": " this. So the example is this and then the model as you can see after 20 steps" }, { "end": 2027.52, "start": 2021.12, "text": " learn optimizes the points to go to these locations. Whereas after only one" }, { "end": 2032.4799999999998, "start": 2027.52, "text": " step it didn't do that yet. So there are complex things at work here. And this" }, { "end": 2035.56, "start": 2032.4799999999998, "text": " column here is where you don't have a relational neural network. So you can't" }, { "end": 2039.76, "start": 2035.56, "text": " basically capture dependencies between things. So you have no chance of" }, { "end": 2044.4799999999998, "start": 2039.76, "text": " making a square because you don't know where the things are in relation to each" }, { "end": 2048.54, "start": 2044.4799999999998, "text": " other. But that's more of an engineering question. Their point is basically that" }, { "end": 2054.36, "start": 2048.54, "text": " if you have models that do optimization at inference time, they are much more" }, { "end": 2061.14, "start": 2054.36, "text": " powerful than models that just do a one-shot forward pass. 
It's sort of like" }, { "end": 2067.12, "start": 2061.14, "text": " an autoregressive model in NLP versus a non autoregressive model that produces" }, { "end": 2071.8, "start": 2067.12, "text": " all words at once. If you produce all words of a sentence at once, no word can" }, { "end": 2076.38, "start": 2071.8, "text": " depend on any other word and you can just come produce independent or you can" }, { "end": 2081.92, "start": 2076.38, "text": " just produce independent things which will make the sentence often not make" }, { "end": 2088.6800000000003, "start": 2081.92, "text": " any sense. They also have this KL objective which is a regularizer which I" }, { "end": 2094.56, "start": 2088.6800000000003, "text": " believe that's just a trial and error they built it in because. But it is a" }, { "end": 2098.2400000000002, "start": 2094.56, "text": " regularizer. I don't want to really go into that. And then they do" }, { "end": 2105.36, "start": 2098.2400000000002, "text": " demonstration and they reenact it on a robot. The demonstration here is that" }, { "end": 2109.2000000000003, "start": 2105.36, "text": " there is a situation where two things have attention on and you're supposed to" }, { "end": 2113.08, "start": 2109.2000000000003, "text": " move something into the middle of the two things. So that's the concept. You don't" }, { "end": 2118.1600000000003, "start": 2113.08, "text": " tell the robot the concept. It needs to learn that from data and then infer that" }, { "end": 2122.84, "start": 2118.1600000000003, "text": " this is the concept that you want and then transfer that to the other" }, { "end": 2128.2400000000002, "start": 2122.84, "text": " environment. Now you know there's this robot" }, { "end": 2132.88, "start": 2128.2400000000002, "text": " environment but ultimately they still encode the positions of these things and" }, { "end": 2137.96, "start": 2132.88, "text": " the position of that. And really all you have to do different here is that" }, { "end": 2146.56, "start": 2137.96, "text": " instead of moving this actuator directly you need to calculate what you" }, { "end": 2151.1600000000003, "start": 2146.56, "text": " need to do to the individual joints in the robot. So I think this is maybe" }, { "end": 2156.04, "start": 2151.1600000000003, "text": " because it's open AI and it needs to you know look roboty and stuff but the" }, { "end": 2159.88, "start": 2156.04, "text": " problem here is not really different. It's not real-world" }, { "end": 2167.28, "start": 2159.88, "text": " transfer or anything. So yeah let's go through some of the things they can" }, { "end": 2172.76, "start": 2167.28, "text": " learn with this. So you can see here they can learn these regional geometric" }, { "end": 2178.4, "start": 2172.76, "text": " shapes and on the left is the example event that the model needs to take the" }, { "end": 2183.2000000000003, "start": 2178.4, "text": " concept from. Now this is I believe very much identification. So what" }, { "end": 2187.6400000000003, "start": 2183.2000000000003, "text": " they did is they trained with a data set where all of these appear. So" }, { "end": 2193.16, "start": 2187.64, "text": " there are squares, there are lines, there are circles. So this is maybe my" }, { "end": 2200.8399999999997, "start": 2193.16, "text": " criticism here that it is not so much to generally infer a concept. It is more" }, { "end": 2205.72, "start": 2200.8399999999997, "text": " like identify the concept. 
So the model basically just needs to decide is this" }, { "end": 2210.04, "start": 2205.72, "text": " line, is this circle or is this square because those things were in" }, { "end": 2215.04, "start": 2210.04, "text": " the training data set. It would be nice to see how this generalizes to general" }, { "end": 2220.8, "start": 2215.04, "text": " concepts or if we can even make that if we can have a zero-shot concept" }, { "end": 2225.96, "start": 2220.8, "text": " inference and then transfer those concepts to other things. Maybe that's" }, { "end": 2231.6, "start": 2225.96, "text": " already happening. I don't know. So here the spatial arrangement is to" }, { "end": 2238.52, "start": 2231.6, "text": " either be close to something or to be between two things. So if the attention" }, { "end": 2243.8, "start": 2238.52, "text": " is on two things you want in between. So you see the top ones are the" }, { "end": 2248.48, "start": 2243.8, "text": " demonstrations. It needs to recognize the concept and it needs to basically" }, { "end": 2258.4, "start": 2248.48, "text": " optimize to fulfill that concept. Shapes. So to make shapes is..." }, { "end": 2266.2000000000003, "start": 2258.4, "text": " Oh yeah there's a triangle. Again this very much I believe" }, { "end": 2271.36, "start": 2266.2000000000003, "text": " relies on recognition and not actual understanding of what a triangle is. Here" }, { "end": 2279.48, "start": 2271.36, "text": " you have proximity being closer being far apart. What else is cool? Oh yeah you" }, { "end": 2284.04, "start": 2279.48, "text": " have the recognition for the same task. You need to identify the ball that" }, { "end": 2289.2000000000003, "start": 2284.04, "text": " is closer. Here you really also see the optimization procedure in action." }, { "end": 2293.96, "start": 2289.2000000000003, "text": " Where for example at the beginning of each flicker you kind of see the" }, { "end": 2297.96, "start": 2293.96, "text": " attention being everywhere and then stabilizing to one or two points. So if" }, { "end": 2302.16, "start": 2297.96, "text": " two points are equally close or far apart you'll see the attention being on" }, { "end": 2306.48, "start": 2302.16, "text": " multiple points. Which is pretty cool right? So that means the model really" }, { "end": 2316.36, "start": 2306.48, "text": " learns this concept. Here's the count quantity. So you can either have one" }, { "end": 2323.44, "start": 2316.36, "text": " two or larger than three or something. Yeah that seems like they tried three" }, { "end": 2327.16, "start": 2323.44, "text": " and four and didn't work so they just said we'll just do larger than three." }, { "end": 2331.12, "start": 2327.16, "text": " And here is this robot thing where it also always needs to move in between." }, { "end": 2334.8799999999997, "start": 2331.12, "text": " Now this is the part that I'm not really impressed with but you know" }, { "end": 2342.04, "start": 2334.8799999999997, "text": " whatever you want. Okay I hope this was a good introduction to energy" }, { "end": 2346.08, "start": 2342.04, "text": " functions. What you can do with them. What I think of them. And of this paper it is" }, { "end": 2352.12, "start": 2346.08, "text": " a pretty cool paper. Yes it only works on toy problems so far but I believe this" }, { "end": 2358.6, "start": 2352.12, "text": " is one interesting direction of future machine learning and something yet to be" }, { "end": 2363.96, "start": 2358.6, "text": " very much explored. 
If you like this content please subscribe, tell all of" }, { "end": 2384.48, "start": 2363.96, "text": " your friends about it, share and I'll see you next time. Bye bye!" } ]
iZXsWlSdMGY
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
[News] Google’s medical AI was super accurate in a lab. Real life was a different story.
[ "Science & Technology" ]
[ "deep learning", "machine learning", "news", "google", "retina", "diabetes", "computer vision", "neural networks", "production", "devops", "deployment", "legal", "thailand" ]
A closer look at a story of how the deployment of AI brings its own challenges and what can go wrong. https://www.technologyreview.com/2020/04/27/1000658/google-medical-ai-accurate-lab-real-life-clinic-covid-diabetes-retina-disease/ Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Hi there. Today we're looking at this news story from MIT Technology Review: "Google's medical AI was super accurate in a lab. Real life was a different story." The story is that Google had an AI to detect diabetic retinopathy. If you're diabetic and your glucose, or your insulin, isn't properly managed, your blood vessels get damaged, and small blood vessels like the ones in the eye are the first to be damaged. That can lead to this disease called retinopathy, which affects the retina at the back of the eye, and it can make you go blind if it isn't discovered soon enough. An eye doctor can look at a photograph like this and determine whether you have it or not; I guess they would look at a larger resolution, but in any case they can determine it from such an image. So Google built an AI that can spot the relevant signs, that can say whether you have this or not, and they tried to deploy it, and the story is about how this basically failed.

They had the opportunity to deploy it in Thailand. Thailand's Ministry of Health had set an annual goal to screen 60% of people with diabetes for diabetic retinopathy, which can cause blindness if not caught early. Here is where AI comes in, because for the 4.5 million patients who have diabetes there are only about 200 experts who can determine from a photograph whether or not you have the disease. So clinics are struggling to meet the target, and Google has built an AI. The article says the AI developed by Google can identify signs of diabetic retinopathy from an eye scan with more than 90% accuracy, which the team calls "human specialist level", and gives a result in less than 10 minutes. This is pretty cool, right? They've developed an AI: you send in an eye scan, and it tells you whether or not you have this disease.

But then the problems mount. Over several months, the researchers observed nurses conducting eye scans and interviewed them about their experience using the new system. The nurses who conduct the eye scans would try to use the AI; the nurses themselves aren't specialists, and they would otherwise send the scans to a specialist, but now the AI is supposed to handle this. When it worked well, the AI did speed things up, but it sometimes failed to give a result at all. The AI had been trained on high-quality scans; of course, if you want to train an AI system, you want the highest-quality data you can get, but in practice you're not going to get high-quality data. It was designed to reject images that fell below a certain quality threshold, and with photos often taken in poor lighting conditions in the real world, more than a fifth of the images were rejected.

So this is my take on it: if you build something for the real world, you need to take into account what the real world holds in store for you, which means you are probably going to get poor lighting conditions if you build an image recognition system. Now, I'm not saying, as some people do, that whenever you work on AI you must consider all downstream impacts; it's perfectly fine to work on a data set of high-quality images if you're doing something like inventing a new architecture or working on optimization algorithms, nothing wrong with that. But if you are thinking of deploying something in the real world, you need to take this into account.
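The article doesn't say how that quality check is implemented, so the following is purely a hypothetical stand-in to make the mechanism concrete: gate images on a sharpness score, here the variance of the Laplacian, with a made-up threshold. The function name and the threshold value are my own illustration, not Google's pipeline.

```python
# Hypothetical illustration, not Google's actual pipeline: a simple quality
# gate that rejects blurry photos. Variance of the Laplacian is a standard
# sharpness proxy; BLUR_THRESHOLD is an assumed value.
import cv2

BLUR_THRESHOLD = 100.0  # assumed; would be tuned on validation images

def quality_gate(path: str) -> bool:
    """Return True if the scan looks sharp enough to be graded."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    if img is None:
        return False                       # unreadable file: reject
    sharpness = cv2.Laplacian(img, cv2.CV_64F).var()
    return sharpness >= BLUR_THRESHOLD     # low variance = blurry = reject
```

A gate like this silently turns every dim clinic room into a rejection, which is exactly the failure mode the nurses ran into.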
I also think this was particularly poorly designed for the task, and here's why. Google is probably mainly worried about legal culpability here, because the system was designed to reject images that fell below a certain quality threshold. The reason is this: you have a classifier, and it outputs, say, a positive and a negative class: "I am this sure of the positive class and this sure of the negative class." If there's quite a big difference between the two, it goes with the more confident class, say the negative one. But if those two scores are close together, Google doesn't trust its own AI. If the system made a call anyway, say it still went with the negative class, and that result goes back to the patient and turns out to be a mistake, then the system is automatically responsible for that mistake. And since the AI is not a human, these could be rather trivial mistakes that a human would have spotted. So basically, because it's deep learning, we don't really trust it, and because Google doesn't want the legal culpability of being responsible, they simply reject these cases; they only deal with inputs where there's a large discrepancy between the class scores.

If you actually want to design something for the real world, you need to take poor lighting conditions into account, and if I were to build something like this, I would just output the distribution. In this case the system could say: "Look, I'm at 60-40. I'm not sure; I lean towards negative, but I can't rule the disease out." And then the nurse, who may also have experience with when the system fails or when it tends to be unsure, could integrate that information. But this only works, and maybe that's a recommendation for lawmakers, if you don't make the AI system completely culpable for its mistakes. It can output its estimate, and along with it an estimate of its own uncertainty; it can give you something like confidence bounds. These are not going to be statistically true confidence bounds, because it's deep learning, but still: please output all the available information the system has, and then let the humans work with the system, rather than trying to fully replace them by simply saying yes, no, or reject.
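To make that contrast concrete, here is a minimal hypothetical sketch of the two designs side by side. The margin threshold and all names are made up for illustration, and the probabilities stand in for a model's softmax output; this describes the general reject-option pattern, not any real system.

```python
# Hypothetical sketch of the two designs discussed above: abstaining on
# uncertain inputs versus always answering and exposing the uncertainty.
from dataclasses import dataclass

MARGIN_THRESHOLD = 0.8  # assumed; reject when |p_pos - p_neg| falls below it

@dataclass
class Result:
    label: str          # "positive", "negative", or "reject"
    p_positive: float   # always reported, so a nurse can weigh it herself

def reject_option(p_positive: float) -> Result:
    """What the article describes: abstain whenever the model is unsure."""
    margin = abs(p_positive - (1.0 - p_positive))
    if margin < MARGIN_THRESHOLD:
        return Result("reject", p_positive)
    return Result("positive" if p_positive > 0.5 else "negative", p_positive)

def report_distribution(p_positive: float) -> Result:
    """The alternative argued for above: always answer, expose uncertainty."""
    return Result("positive" if p_positive > 0.5 else "negative", p_positive)

print(reject_option(0.40))        # -> Result(label='reject', p_positive=0.4)
print(report_distribution(0.40))  # -> negative, but only at 60-40 confidence
```

The second path keeps the 60-40 information a nurse could act on; the first throws it away.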
They say patients whose images were kicked out of the system were told they would have to visit a specialist at another clinic on another day. If they found it hard to take time off work or did not have a car, this was obviously inconvenient, which I can understand. Nurses felt frustrated, especially when they believed the rejected scans showed no sign of disease and the follow-up appointments were unnecessary. This is exactly what I'm saying: the nurses often have very good experience of their own and could combine a system like this with their sense of when something is wrong and when it isn't. Maybe you even build in some explainability that highlights the relevant part of the image, and then you could alleviate a lot of these problems. They also sometimes wasted time trying to retake or edit an image that the AI had rejected. At that point you've built an AI working against humans rather than with humans. Further, because the system had to upload images to the cloud for processing, poor internet connections in several clinics also caused delays. Patients liked the instant results, but the internet is slow, and patients then complained: we've been waiting here since 6 a.m., and for the first two hours we could only screen 10 patients. Yes, this is the type of thing you have to take into account. So maybe actually put the GPU server into the clinic; it's better anyway for data privacy reasons. But of course the large companies want everything uploaded to their machines, because that's more convenient for them. They say Google is now working with medical staff to design new workflows, and I mean, sometimes you do rely on an internet connection, so I don't want to be too harsh here. Then there are some critics. Michael Abramoff, an eye doctor and computer scientist at the University of Iowa Hospitals and Clinics, has been developing an AI for diagnosing retinal disease for several years and is the CEO of a spin-off. He basically says there is much more to health care than algorithms, and of course we can all see that. He also questions the usefulness of comparing AI tools with human specialists when it comes to accuracy. Of course we don't want an AI to make a bad call, but human doctors disagree all the time, he says, and that's fine. An AI system needs to fit into a process where sources of uncertainty are discussed, rather than simply rejected. This feeds exactly into what I've been saying: if the AI were just to output its sources of uncertainty and everything it thinks about a particular case, then the humans could discuss it, and we could get to a better outcome. But this only works if the legal framework allows it. And I get the regulator's point too, you want to assign blame when something goes wrong, but you have to know that this is what often holds these systems back. Finally, they say the benefits could be huge. There was one nurse who screened 1,000 patients on her own; I don't know over what time span, I guess over the course of the study, but with this tool she's unstoppable. The patients didn't really care that it was an AI rather than a human reading their images; they cared more about what their experience was going to be. And that's a general observation I get from a lot of people working on human-machine interaction: people aren't that attached to having a human in the loop, as long as the machine appears competent. I think we've gotten used to AI being quite good at particular tasks, and we're actually happy to outsource some of them. But again: if you build something for the real world, you have to take the real-world conditions into account. This feeds into papers like ImageNet-v2, where you suddenly have a harder test set, and it feeds into topics like domain shift, transfer learning and domain adaptation, which are all research topics. So problems like this can give rise to entirely new directions of research; if you're looking for a PhD topic, maybe this is something for you. Alright, thanks for watching. This was my blab about the story. I hope you enjoyed this kind of new section; it's a new thing I'm doing. If you liked it, subscribe; if you didn't like it, leave a comment. Bye bye!
[ { "end": 5.24, "start": 0, "text": " Hi there, today we're looking at this new story from MIT Technology Review." }, { "end": 10.84, "start": 5.24, "text": " Google's medical AI was super accurate in a lab, real life was a different story." }, { "end": 17.72, "start": 10.84, "text": " So the story here is that Google had this AI to detect diabetic retinopathy." }, { "end": 25.12, "start": 17.72, "text": " So if you're a diabetic and your glucose isn't or your insulin isn't properly" }, { "end": 31.16, "start": 25.12, "text": " handled, that means you get damaged to your blood vessels and the small blood" }, { "end": 36.160000000000004, "start": 31.16, "text": " vessels like the ones in the eyes here, they're the first ones to get damaged" }, { "end": 41.92, "start": 36.160000000000004, "text": " and that can lead you to get this disease called retinopathy which is in" }, { "end": 46.78, "start": 41.92, "text": " the retina in the back of the eye and that can lead you to go blind if it's" }, { "end": 51.52, "start": 46.78, "text": " not discovered soon enough. So a eye doctor can look at an photograph like" }, { "end": 57.760000000000005, "start": 51.52, "text": " this and can determine whether you have it or not. I guess they would look at" }, { "end": 62.56, "start": 57.760000000000005, "text": " like a larger resolution of it but in any case they could determine from this." }, { "end": 69.36, "start": 62.56, "text": " So Google built an AI that could maybe spot things here that can maybe spot if" }, { "end": 76.92, "start": 69.36, "text": " you had this or not and they tried to deploy this and the story is about how" }, { "end": 85.2, "start": 76.92, "text": " this failed basically. So they said they had this in Thailand, they had the" }, { "end": 90.28, "start": 85.2, "text": " opportunity to deploy this. So Thailand's Ministry of Health had set an annual goal" }, { "end": 94.68, "start": 90.28, "text": " to screen 60% of the people with diabetes for this diabetic retinopathy." }, { "end": 100.76, "start": 94.68, "text": " It can cause blindness if not caught early. So here is where AI comes in" }, { "end": 108, "start": 100.76, "text": " because to 4.5 million patients that have diabetes there are only 200 experts" }, { "end": 114.4, "start": 108, "text": " that can determine from a photograph whether or not you do have that disease." }, { "end": 122.52000000000001, "start": 114.4, "text": " So they say clinics are struggling to meet the target and Google has built an AI." }, { "end": 128, "start": 122.52000000000001, "text": " It says the AI developed by Google have can identify signs of diabetic retinopathy" }, { "end": 133.52, "start": 128, "text": " from an eye scan with more than 90% accuracy which the team calls human" }, { "end": 140.12, "start": 133.52, "text": " specialist level and gives results in less than 10 minutes." }, { "end": 144.48, "start": 140.12, "text": " So this is pretty cool right? They've developed an AI you can send an eye scan" }, { "end": 152.36, "start": 144.48, "text": " and it'll say whether or not you have this disease but then the" }, { "end": 158.56, "start": 152.36, "text": " problems mount. So they followed over several months they observed nurses" }, { "end": 162.76000000000002, "start": 158.56, "text": " conducting eye scans and interviewed them about their expertise using the new" }, { "end": 168.76000000000002, "start": 162.76000000000002, "text": " system. 
So the nurses who conduct the eye scans they would try to use the AI and" }, { "end": 174.24, "start": 168.76000000000002, "text": " the nurses themselves aren't specialists they would otherwise send the" }, { "end": 178.64000000000001, "start": 174.24, "text": " scans to a specialist but now the AI is supposed to handle this up. When it" }, { "end": 184.6, "start": 178.64, "text": " worked well the AI did speed things up but sometimes failed to give a" }, { "end": 193.04, "start": 184.6, "text": " result at all. So this AI had been trained on high quality scans right?" }, { "end": 196.83999999999997, "start": 193.04, "text": " Of course if you want to train an AI system you want the highest quality data you can get" }, { "end": 202.23999999999998, "start": 196.83999999999997, "text": " but also in practice you're not gonna get high quality data. It was designed to" }, { "end": 208.27999999999997, "start": 202.23999999999998, "text": " reject images that fell below a certain threshold of quality and they say" }, { "end": 213, "start": 208.28, "text": " often taking photos in poor lighting conditions in the real world" }, { "end": 219.32, "start": 213, "text": " more than a fifth of the images were rejected. So this is my take on it. If you" }, { "end": 223.22, "start": 219.32, "text": " build something for the real world you need to take into account what the real" }, { "end": 229.12, "start": 223.22, "text": " world holds in store for you which means that you probably are going to have poor" }, { "end": 233.52, "start": 229.12, "text": " lighting conditions if you build an image recognition system. Now I'm" }, { "end": 237.7, "start": 233.52, "text": " not saying that like some people are saying whenever you work with AI you" }, { "end": 242.35999999999999, "start": 237.7, "text": " should consider how it impacts later on and so on. No it's perfectly fine to work" }, { "end": 246.6, "start": 242.35999999999999, "text": " on a data set of high quality images if you do something like invent a new" }, { "end": 251.85999999999999, "start": 246.6, "text": " architecture or whatnot work on optimization algorithms. Like nothing of" }, { "end": 257.08, "start": 251.85999999999999, "text": " that but it is if you are thinking of deploying something in the real world" }, { "end": 262.41999999999996, "start": 257.08, "text": " you need to take this into account. Now I also think this was particularly poorly" }, { "end": 268.44, "start": 262.42, "text": " designed for the task and here's why. Google probably here is mainly worried" }, { "end": 275.16, "start": 268.44, "text": " about legal culpability because the thing says it was designed to reject" }, { "end": 280.92, "start": 275.16, "text": " images that fell below a certain threshold of quality. The reason" }, { "end": 285.84000000000003, "start": 280.92, "text": " for this is that here you have a classifier and either it says it" }, { "end": 292.64, "start": 285.84, "text": " says okay here is positive and negative class. 
I am about this much sure of the" }, { "end": 296, "start": 292.64, "text": " positive class and this much of the negative class and there's quite a big" }, { "end": 300.88, "start": 296, "text": " of a difference here right so I'm gonna go with the negative class but if those" }, { "end": 308.88, "start": 300.88, "text": " two things are somewhat closer together the Google doesn't trust its own AI it's" }, { "end": 314.91999999999996, "start": 308.88, "text": " like yeah and if it did some decision here if it says well I still go go with" }, { "end": 319.2, "start": 314.92, "text": " the negative class right this goes back to the patient and they made a mistake" }, { "end": 325.16, "start": 319.2, "text": " then this thing here is automatically responsible for that mistake and since" }, { "end": 331.92, "start": 325.16, "text": " the AI is not a human these mistakes here could be rather trivial mistakes" }, { "end": 336.40000000000003, "start": 331.92, "text": " that a human would have spotted. So basically since it's deep learning we" }, { "end": 339.96000000000004, "start": 336.40000000000003, "text": " don't really trust it and then because Google doesn't want the legal" }, { "end": 345.79999999999995, "start": 339.96, "text": " culpability of being responsible they simply reject these cases they just say" }, { "end": 351.53999999999996, "start": 345.79999999999995, "text": " we don't deal with it we just deal with things with a large discrepancy. If you" }, { "end": 354.91999999999996, "start": 351.53999999999996, "text": " actually want to design something for the real world you need to take into" }, { "end": 359.24, "start": 354.91999999999996, "text": " account okay there's poor lighting conditions and I would think in if I" }, { "end": 364.08, "start": 359.24, "text": " were to build something like this optimally you would just output this" }, { "end": 370.12, "start": 364.08, "text": " thing you would output this distribution you would in this case you could say" }, { "end": 377.47999999999996, "start": 370.12, "text": " look I am 60-40 percent I'm not sure I lean towards negative but I don't think" }, { "end": 383.88, "start": 377.47999999999996, "text": " so and then the nurse who maybe also has some expertise could be experienced in" }, { "end": 388.2, "start": 383.88, "text": " when the system fails or when it tends to be not sure and could kind of" }, { "end": 393.84, "start": 388.2, "text": " integrate that information but this only works so if you're a that's maybe a" }, { "end": 398.59999999999997, "start": 393.84, "text": " recommendations for lawgivers this only works if you don't make the AI system" }, { "end": 406.32, "start": 398.59999999999997, "text": " completely culpable for its mistakes it can output its estimation and it can" }, { "end": 410.35999999999996, "start": 406.32, "text": " along of that it can actually also output an estimation of its own" }, { "end": 415.23999999999995, "start": 410.35999999999996, "text": " uncertainty it can like give you some confidence bounds here now these are not" }, { "end": 419, "start": 415.23999999999995, "text": " going to be statistical true confidence bounds because it's deep learning but" }, { "end": 424.2, "start": 419, "text": " still I would say please give all the available information that the system" }, { "end": 428.92, "start": 424.2, "text": " has and then let the humans work with the system rather than trying to fully" }, { "end": 437.64, "start": 428.92, "text": " replace the humans by simply saying yes no or 
reject all right so they say" }, { "end": 441.76, "start": 437.64, "text": " patients whose images were kicked out of the system were told they could have a" }, { "end": 446.52, "start": 441.76, "text": " visit they would have to visit a specialist at another clinic on another" }, { "end": 451.2, "start": 446.52, "text": " day if they found it hard to take time off work or did not have a car this was" }, { "end": 456.12, "start": 451.2, "text": " obviously inconvenient which I can understand nurses felt frustrated" }, { "end": 460.35999999999996, "start": 456.12, "text": " especially when they believed the rejected scans showed no sign of disease" }, { "end": 464.56, "start": 460.35999999999996, "text": " and the follow-up appointments were unnecessary this is exactly what I'm" }, { "end": 471.2, "start": 464.56, "text": " saying right the nurses often also have very good experience and can combine" }, { "end": 476.35999999999996, "start": 471.2, "text": " could combine something like this with their own experience of when something" }, { "end": 479.64, "start": 476.36, "text": " is wrong and when something isn't wrong and maybe you even build in some" }, { "end": 484.6, "start": 479.64, "text": " explainability to focus on part of the image and then you could alleviate a lot" }, { "end": 491.96000000000004, "start": 484.6, "text": " of these problems they sometimes wasted time trying to retake or edit an image" }, { "end": 499.52000000000004, "start": 491.96000000000004, "text": " that the AI had rejected right this this is just now you're just build AI working" }, { "end": 505.96000000000004, "start": 499.52000000000004, "text": " against humans rather than with humans so further this says because the system" }, { "end": 510.15999999999997, "start": 505.96, "text": " had to upload images to the cloud for processing poor internet connection in" }, { "end": 517.4, "start": 510.15999999999997, "text": " several clinics also caused delays so patients like the instant results but" }, { "end": 522.3199999999999, "start": 517.4, "text": " the internet is slow and the patients then complain they've been waiting here" }, { "end": 526.24, "start": 522.3199999999999, "text": " since 6 a.m. 
and for the first two hours could only we could only screen 10" }, { "end": 530.88, "start": 526.24, "text": " patients yes this is the type of stuff you have to take into account so maybe" }, { "end": 537.52, "start": 530.88, "text": " actually put the GPU server into the clinic it's better anyway for for data" }, { "end": 543.16, "start": 537.52, "text": " privacy reasons but of course the large companies they want to everything to be" }, { "end": 551.22, "start": 543.16, "text": " uploaded to their machines it's more convenient for them so they say there is" }, { "end": 555.72, "start": 551.22, "text": " now working with medical staff to design new workflows I mean sometimes you do" }, { "end": 560.68, "start": 555.72, "text": " rely on an internet connection so I don't want to be too too harsh here" }, { "end": 568.76, "start": 560.68, "text": " so the the other there are some critics here so Michael Abramoff an eye doctor" }, { "end": 572.8599999999999, "start": 568.76, "text": " and computer scientist at the University of Iowa hospitals and clinics has been" }, { "end": 577.4, "start": 572.8599999999999, "text": " developing an AI for diagnosing retinal disease for several years and is a CEO" }, { "end": 585.04, "start": 577.4, "text": " of a spin-off here and he basically says there is much more to health care than" }, { "end": 593.4399999999999, "start": 585.04, "text": " algorithms and I mean of course we can we can all we can all see that yeah he" }, { "end": 600.4, "start": 593.4399999999999, "text": " basically says that the questions the usefulness of comparing AI tools with" }, { "end": 604.0799999999999, "start": 600.4, "text": " human specialists when it comes to accuracy of course we don't want an AI" }, { "end": 607.7199999999999, "start": 604.0799999999999, "text": " to make a bad call but human doctors disagree all the time he says that's" }, { "end": 613.28, "start": 607.7199999999999, "text": " fine an AI system needs to fit into a process where sources of uncertainty are" }, { "end": 619.8399999999999, "start": 613.28, "text": " discussed rather than simply reject it and this exact this exactly feeds into" }, { "end": 625.9599999999999, "start": 619.8399999999999, "text": " what I've been saying if the air were just to output the source of uncertainty" }, { "end": 632, "start": 625.9599999999999, "text": " and all it thinks about a particular situation then the humans could discuss" }, { "end": 638.92, "start": 632, "text": " it right and then we could get to a better outcome but this only works if" }, { "end": 644.92, "start": 638.92, "text": " the legal framework is given if you regulate and I get I get that point too" }, { "end": 650.7199999999999, "start": 644.92, "text": " you want to assign kind of blame when something goes wrong but you just have" }, { "end": 658.4, "start": 650.7199999999999, "text": " to know that this is what keeps these systems back often finally they say the" }, { "end": 666.16, "start": 658.4, "text": " benefits could be huge there was one nurse that screened 1,000 patients on" }, { "end": 673.6, "start": 666.16, "text": " her own I don't know in what time that is I guess that's over the course of the" }, { "end": 680.64, "start": 673.6, "text": " study or so and with this tool she's unstoppable the patients didn't really" }, { "end": 685.28, "start": 680.64, "text": " care that it was an AI rather than a human reading their images they cared" }, { "end": 689.52, "start": 685.28, "text": " more about what their experience was going to be 
and that's a general" }, { "end": 695.92, "start": 689.52, "text": " general experience that I get from a lot of people working with human" }, { "end": 700.3199999999999, "start": 695.92, "text": " machine interactions is that the people don't they're not so super excited that" }, { "end": 708.12, "start": 700.3199999999999, "text": " it's a human if they if the machine appears competent I think we've gotten" }, { "end": 714.52, "start": 708.12, "text": " used to AI being quite good at particular tasks and we're actually" }, { "end": 719.68, "start": 714.52, "text": " happy to outsource some of these to them but again if you build something for the" }, { "end": 725.64, "start": 719.68, "text": " real world you have to take into account the real world conditions and this" }, { "end": 730.96, "start": 725.64, "text": " feeds into papers like image net v2 where you all of a sudden have a harder" }, { "end": 735.62, "start": 730.96, "text": " test set it feeds into topics like domain shift transfer learning domain" }, { "end": 740.64, "start": 735.62, "text": " adaptation and these are all research topics so I think problems like this can" }, { "end": 744.72, "start": 740.64, "text": " give rise to entirely new directions of research if you're looking for a PhD" }, { "end": 749.68, "start": 744.72, "text": " topic maybe this is something for you alright thanks for watching this this" }, { "end": 754.3199999999999, "start": 749.68, "text": " was my blabs about the story I hope you enjoyed this and these kind of new" }, { "end": 759.4000000000001, "start": 754.32, "text": " sections it's a new thing I'm doing if you like it subscribe if you didn't like" }, { "end": 788.4, "start": 759.4, "text": " it leave a comment and bye bye" } ]
k1GOF2jmX7c
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Big Transfer (BiT): General Visual Representation Learning (Paper Explained)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "google", "brain", "cnn", "convolutional neural network", "resnet", "residual network", "pretraining", "finetuning", "vtab", "imagenet", "cifar", "state of the art", "pretrained", "computer vision" ]
One CNN to rule them all! BiT is a pre-trained ResNet that can be used as a starting point for any visual task. This paper explains what it takes to pre-train such a large model and details how fine-tuning on downstream tasks is done best. Paper: https://arxiv.org/abs/1912.11370 Code & Models: TBA Abstract: Transfer of pre-trained representations improves sample efficiency and simplifies hyperparameter tuning when training deep neural networks for vision. We revisit the paradigm of pre-training on large supervised datasets and fine-tuning the model on a target task. We scale up pre-training, and propose a simple recipe that we call Big Transfer (BiT). By combining a few carefully selected components, and transferring using a simple heuristic, we achieve strong performance on over 20 datasets. BiT performs well across a surprisingly wide range of data regimes -- from 1 example per class to 1M total examples. BiT achieves 87.5% top-1 accuracy on ILSVRC-2012, 99.4% on CIFAR-10, and 76.3% on the 19 task Visual Task Adaptation Benchmark (VTAB). On small datasets, BiT attains 76.8% on ILSVRC-2012 with 10 examples per class, and 97.0% on CIFAR-10 with 10 examples per class. We conduct detailed analysis of the main components that lead to high transfer performance. Authors: Alexander Kolesnikov, Lucas Beyer, Xiaohua Zhai, Joan Puigcerver, Jessica Yung, Sylvain Gelly, Neil Houlsby Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Hi there! Today we're talking about Big Transfer: General Visual Representation Learning by Alexander Kolesnikov, Lucas Beyer, Xiaohua Zhai and others of Google Brain. This paper is basically an application and engineering paper for the community, and it is about transfer learning for visual tasks. So what does that mean? In a visual task, the input is an image. It could be a classifier, where you have an image and you have to say "this is a cat". Or it could be, let's say, a medical image of a lung, where you have to point out where a defect in the lung is, or whether there is a defect at all. As we all know, this field is pretty much dominated by CNNs, convolutional neural networks that take in these images through many layers of convolution; residual networks in particular do very well on these tasks. The problem, of course, is that while some tasks have lots of data, which is fine because CNNs need lots of data to train, other tasks, especially these medical tasks, come with only a very small database: very few labeled samples the model can learn from. And that just isn't enough to train these big models to good performance, so you would have to settle for a weaker model. One of the solutions is transfer learning. In transfer learning you take a large dataset, for example the ImageNet dataset, and you train your CNN on that. Then you take that CNN and do what's called a fine-tuning step on the small dataset: you use the CNN obtained from the large dataset as a starting point and just train for a few more steps, adapting it to the final dataset that you actually care about. And that usually helps. Why does it help? Because you hope that the images in the large dataset are at least somewhat similar to the images in the small dataset; they don't need to be super similar, just somewhat. Then the features the CNN learns from the large dataset are useful on the small dataset, and when you fine-tune, that's the second step, you can pretty much reuse those features, only adjusting them a little. You mainly have to learn how to map the features to the new output, which is of course different from the original task, but you won't have to rediscover the features from scratch. That's why transfer learning can help. The first phase is called pre-training, the second phase is called fine-tuning. Now the ultimate goal is the following: imagine you have a truly giant database of labeled images, giant even compared to the others, and you train a CNN on it. What you're hoping is that you can do this once, and this one CNN trained on the giant dataset will become the starting point for all kinds of small tasks. You post it in a repository online, and everyone who has a visual task will not train from scratch anymore; they will basically take this one CNN as a starting point.
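In code, the fine-tuning step looks roughly like this. This is only a sketch: since the BiT checkpoints are listed as "TBA", I'm using torchvision's ImageNet-pretrained ResNet-50 as a stand-in for the big pre-trained CNN, and a random batch in place of the small downstream dataset (assumes a recent torchvision):

```python
# A sketch of the fine-tuning phase. Stand-ins: torchvision's ImageNet
# ResNet-50 plays the role of the big pre-trained CNN, and a random
# batch plays the role of the small downstream dataset.
import torch
import torch.nn as nn
from torchvision import models

# 1) Start from weights pre-trained on the large dataset.
net = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)

# 2) Swap the head for the new task (say 10 classes); features are kept.
net.fc = nn.Linear(net.fc.in_features, 10)

# 3) Fine-tune the whole network briefly with a small learning rate.
opt = torch.optim.SGD(net.parameters(), lr=3e-3, momentum=0.9)
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 10, (8,))
loss = nn.functional.cross_entropy(net(images), labels)
loss.backward()
opt.step()
print(float(loss))
```

Everything except the new head starts from the pre-trained weights, which is the whole point: the features come along for free.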
It is very similar to what people are doing right now with BERT, and with these transformer language models in general. You never want to train them from scratch; you always start from a pre-trained checkpoint that someone else has produced, because the big work has now shifted to the pre-training. So the goal is to find this one universal starting point for visual learning, and of course there's no better place to do this than Google: they certainly have giant databases of images, and they certainly have lots of computation, which, as we're going to see, is very necessary for something like this. They train three variants of their model, which is called BiT: small, medium and large. BiT-L is trained on 300 million images, which I think is called the JFT dataset. BiT-M is trained on 14 million images, the ImageNet-21k dataset, which looks pretty funky; it has objects in front of weird backgrounds and things like that. And BiT-S is simply trained on the 1.3-million-image ImageNet dataset. Just look at that: we are now in a situation where the small model is the one pre-trained on ImageNet. If you had imagined this five years ago, maybe you would have guessed it, but it's still impressive. They do release the medium and small models pre-trained, I believe; they don't release the large one, which is maybe the price we have to pay for getting the other two, given that Google can now use it in their products and probably spent a considerable amount of money training it. Whether, in the interest of science, they should release it is a philosophical discussion; they do give the exact training protocol, you just need the money, basically. But that's not the topic of this video. The models here are all pretty much just residual networks. The biggest is a ResNet-152x4, which I think means you take the original ResNet-152 architecture and scale the width of each layer by a factor of four, and that's it. There is nothing really architecturally new in this paper; it just details what exactly you have to do, which things matter when you pre-train these models and which ones don't. For that reason I believe it is a pretty good paper, and I think these M and S models, and maybe an L model that someone else trains and releases, will become the standard the way BERT is now: whenever you have a visual task, you just start from those. So this is mainly relevant for people in practice.
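As an aside, to get a feel for that "x4" widening, here's a hedged sketch that builds a four-times-wide ResNet-152 out of torchvision parts. It's an approximation, not the exact BiT architecture: torchvision's width_per_group only widens the bottleneck convolutions, and I'm also swapping in GroupNorm, which foreshadows a trick we'll get to below:

```python
# Rough stand-in for a ResNet-152x4, built from torchvision parts: widen
# the bottleneck convolutions 4x and use GroupNorm instead of BatchNorm.
# Not the exact BiT architecture; note this allocates a model on the
# order of half a billion parameters.
import torch.nn as nn
from torchvision import models

wide = models.resnet152(
    weights=None,                                # no pre-trained weights exist for this shape
    width_per_group=64 * 4,                      # the "x4" width multiplier
    norm_layer=lambda ch: nn.GroupNorm(32, ch),  # batch-independent norm
)
print(sum(p.numel() for p in wide.parameters()))
```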
Alright, here you can see the results for these models. First of all: excellent, excellent non-labeling of the x-axis. Absolutely beautiful. The x-axis, I believe, is the number of examples per class. So they take their pre-trained model, BiT-L, and fine-tune it on these datasets: ImageNet is one of the tasks they fine-tune on, or CIFAR-10, CIFAR-100, and so on. First, look at "full" on the right side: this is when you fine-tune on the entire dataset, and there they often get state-of-the-art. Now, they compare against what they call generalist models, models that follow this same protocol of pre-training on one big database and then fine-tuning on all the other tasks, and against those they come out on top. They do not achieve state-of-the-art on all datasets compared to what they call specialist models: models built with one exact task in mind that therefore don't care about other tasks. They outperform some of these specialist models, but not all of them. So this is not the new state of the art in everything, but it is within this transfer learning regime. And I think even more important is the left side, the small-label regime, where you have something like 100, 25, 10 or even 5 labels per class. If you take 5 labels per class on CIFAR-10, of course after pre-training on the big dataset first, you still get something like 94% accuracy. That is pretty impressive, especially if you compare it to the baseline here, a ResNet pre-trained on just ImageNet. That really shows you the power of pre-training with lots of data.
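If you want to reproduce that low-data setting, carving out a "5 labels per class" subset is straightforward. Here's a hedged sketch with torchvision's CIFAR-10; the helper name and the seed are my own choices:

```python
# Hedged sketch: carving out a "5 labels per class" CIFAR-10 subset, the
# low-data regime from the plot. Helper name and seed are my own choices.
import random
from collections import defaultdict
from torch.utils.data import Subset
from torchvision import datasets

def few_shot_indices(targets, k=5, seed=0):
    """Pick k example indices for every class label."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for idx, y in enumerate(targets):
        by_class[y].append(idx)
    return [i for idxs in by_class.values() for i in rng.sample(idxs, k)]

train = datasets.CIFAR10(root="./data", train=True, download=True)
subset = Subset(train, few_shot_indices(train.targets, k=5))
print(len(subset))  # 50 images: 10 classes x 5 examples
```

Fine-tuning on those 50 images is then the same recipe as before, just with very few steps.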
One thing they do say is that in their big dataset of 300 million images, they make sure to remove all images that then appear in the downstream tasks. Otherwise it is entirely conceivable that the test data is already in the pre-training set: this big database is presumably scraped from the internet, and tasks like CIFAR-10 and ImageNet are also scraped from the internet. They say they remove such images, but I think they only remove exact duplicates, so it could still be that someone has taken an ImageNet picture, re-encoded it into another color scheme, or compressed it a bit more, and then that version is found on the web. So the whole thing is a little shaky, because these datasets might partially contain one another, but given the results I do generally believe the improvements. I guess what we really need is people going out with cameras and shooting new pictures for a fresh test set. But in any case, let's dive into how to pre-train something like this. They divide their findings into two parts: how to pre-train, and how to fine-tune, that is, how to transfer to downstream tasks. The methods they find are surprisingly simple. They say there are two components to pre-training. The first component is scale: you need a lot of data and a big model, and that is a pretty important recognition. They have an ablation where they scale up the model and the data. You can see the different pre-training datasets, the small, the medium and the large one, so in this direction you have dataset size; on the other axis you have accuracy (not labeled, but I guess we can figure out that it's accuracy); and the larger the dot, the larger the model architecture. Within each dataset, the larger the model, the better the performance usually gets. But the improvement from larger models isn't as big when you have little data, and you can also see by the slope of the lines that larger amounts of data help more when you have larger models. So only scaling up the data is not as effective as scaling up the data and the model at the same time. In some cases, with the small architecture, it actually hurts to incorporate more data, at least that's what they report, and you can see it here too: for the small model it just doesn't help as much to add more data. If your model is too small, it can't make use of the big data. Of course there are weird effects, like the performance going down and then up again with more data, which might actually be an effect of the images in these datasets being qualitatively different with respect to the task you are training for. But in general it holds that you need a combination of dataset size and model size to go up. And this, I think, might be an indication of where we are on Belkin's double descent curve. Mikhail Belkin and other researchers in this area have this sort of empirical finding and hypothesis: plot the number of parameters in relation to the dataset size on one axis and the validation loss on the other. With very few parameters, adding parameters to your model improves the validation loss; we've all learned this in our general machine learning course. Then at some point you start to overfit, up to what's called the interpolation threshold, where the number of parameters equals the number of data points and you are exactly interpolating your training data. The surprising discovery is that past this threshold the validation loss comes down again and stays down: as you keep increasing the number of parameters for the same dataset, past the point of perfectly fitting the training data, the validation loss still decreases, and there are various hypotheses for why. So maybe we find ourselves in the situation where, if you add more data while keeping the model constant, you shift to the left on this curve, toward the interpolation peak, and your validation loss actually goes up. Maybe that's what's happening with the small models here; this is just a hypothesis by me. If you want to increase your number of data points, you also have to increase your number of parameters, and maybe the models where it doesn't happen are already past the threshold, though that is a big thing to assume. Now that I think about it, the models with even more parameters would be even further along, so adding data might just hurt less; there might be some weird interactions here. Who knows? Let's just skip this. In any case, the message is: you need more model and more data at the same time.
Alright, then there is a second recipe for pre-training: group normalization and weight standardization. They criticize batch norm, which has of course been used everywhere. Batch norm is where you take a batch of data, put it through your layers, and at some intermediate representation you compute the mean and variance of each feature across the batch and normalize, so that each feature has mean zero and standard deviation one. But this is inherently dependent on your batch size, because the batch is what you estimate those mean and variance parameters from. And what people do nowadays is split these batches into groups and distribute the groups onto many, many machines, which is called data parallelism, especially with TPUs. I believe they say they distribute over something like 500 TPU workers with a batch size of around 4,000, which leaves about eight samples per worker. And eight is just not a good number for batch norm; if you want to get around that, every layer has to globally synchronize its batch norm statistics with all the other workers, which slows you down. So instead, people use what they call group normalization and weight standardization, where weight standardization is an addition to group normalization. These two techniques don't require the other samples in the batch; they work on a per-sample basis. Group normalization groups together different channels within a single sample and normalizes across each group. Weight standardization is a bit like standardizing the features, except it standardizes the weights of each filter to a normal-like zero-mean, unit-variance form. Suffice it to say these are standard components you can build in, and they let you avoid constantly synchronizing between your workers at training time, which makes everything a lot faster, and it's no longer a problem that you only have eight samples per worker. So that's the pre-training recipe: large data, large models, and group normalization with weight standardization.
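Here's a minimal sketch of those two pieces in PyTorch. The weight-standardized conv is my own small subclass, so treat the details (the epsilon, the group count of 32) as illustrative rather than the paper's exact settings:

```python
# Minimal sketch of the two batch-independent pieces. The WS conv is my
# own small subclass; epsilon and the group count of 32 are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class WSConv2d(nn.Conv2d):
    """Conv whose filters are standardized over their fan-in each forward."""
    def forward(self, x):
        w = self.weight
        mean = w.mean(dim=(1, 2, 3), keepdim=True)
        std = w.std(dim=(1, 2, 3), keepdim=True) + 1e-5
        return F.conv2d(x, (w - mean) / std, self.bias,
                        self.stride, self.padding, self.dilation, self.groups)

block = nn.Sequential(
    WSConv2d(64, 64, kernel_size=3, padding=1, bias=False),
    nn.GroupNorm(num_groups=32, num_channels=64),  # per-sample statistics
    nn.ReLU(),
)

# Works with a "batch" of one: no statistics shared across the batch,
# hence nothing to synchronize across workers.
print(block(torch.randn(1, 64, 32, 32)).shape)
```

The key property is that the normalization never looks across the batch dimension, which is exactly what frees you from cross-worker syncing.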
That's how they pre-train. And how do they fine-tune? They have a rule to select hyperparameters that they call the BiT-HyperRule, which is essentially a formula: you feed in one quantity, a hyper-hyperparameter if you will, and the rule tells you what all the other hyperparameters should be, like a lookup table. You set this one number and it gives you the rest, and that one rule works pretty well, so for fine-tuning you only have to search over a single value; it's not really a grid search anymore, is it? Concretely, the rule decides on the training schedule length, the resolution, and whether to apply mixup regularization. Mixup is a technique that can help when you have very little data: it interpolates between data points, training on examples that are, say, half of this class and half of that class, just to make more data available. The exact settings of the rule are given in the paper, so you can look them up. For data pre-processing they resize the image to a square, crop out a small random square, and randomly horizontally flip the image at training time. So they basically describe a standard training protocol; you don't want to mix things up too much there. The one thing they say is surprising: "we do not use any of the following forms of regularization during downstream tuning: weight decay to zero, weight decay to initial parameters, or dropout." I think they only use weight decay during pre-training, and that's it.
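Since mixup does some heavy lifting in the low-data regime, here's a hedged sketch of the idea. The helper names are mine, and the alpha value is a common default, not necessarily what the BiT-HyperRule would pick:

```python
# Hedged sketch of mixup: blend image pairs and weight the two labels'
# losses accordingly. Helper names are mine; alpha is a common default,
# not necessarily what the BiT-HyperRule selects.
import torch
import torch.nn.functional as F

def mixup_batch(images, labels, alpha=0.2):
    lam = float(torch.distributions.Beta(alpha, alpha).sample())
    perm = torch.randperm(images.size(0))
    return lam * images + (1 - lam) * images[perm], labels, labels[perm], lam

def mixup_loss(logits, y_a, y_b, lam):
    return lam * F.cross_entropy(logits, y_a) + (1 - lam) * F.cross_entropy(logits, y_b)

# Toy usage with a random batch and a linear stand-in for the network.
x, y = torch.randn(8, 3, 32, 32), torch.randint(0, 10, (8,))
mixed, y_a, y_b, lam = mixup_batch(x, y)
logits = torch.nn.Linear(3 * 32 * 32, 10)(mixed.flatten(1))
print(float(mixup_loss(logits, y_a, y_b, lam)))
```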
So let's look at some of the graphs. We've already seen some: they pretty much outperform the generalist models on all of these tasks, including this Visual Task Adaptation Benchmark. I've made a video about that; it's a benchmark of 19 different visual tasks from all over the place, and they improve significantly there, as you can see. They do not always outperform the specialist models, but they do beat them, for example, on the flowers dataset, and they come pretty close elsewhere. Here you can also see how much they improve when pre-training on the larger dataset. So far people have basically pre-trained on ImageNet, and now that they pre-train on the larger one they gain a lot of performance, and the largest one isn't even in this table. Now to this Visual Task Adaptation Benchmark. Its 19 tasks are divided into natural tasks, which are natural-looking images; specialized tasks, let's say the medical images and other not-really-natural ones; and structured tasks. A structured task isn't simply labeling or locating something; it's a task where you have to reason about the image. Say there's a cup here and a glass there and a bunch of other stuff around, and the question is "what's to the left of the glass?", and you have to answer "the cup". That requires a structured understanding of the image. And you can see the main performance boost comes on the natural images, which is to be expected: you only get out what you feed in, and this 300-million-image dataset is pretty surely a web scrape of mostly photos. So the main improvement you get is on pictures similar to that, as we said at the beginning: the model improves enormously in the natural category, improves slightly on the specialized tasks, and only a little on the structured tasks. If you use this model, know what's in it and what it does well: natural images that are similar to what it was pre-trained on. Okay, they also have some analysis, and we've already been through most of it. This I find pretty telling: they say that applying the standard computational budget of ImageNet pre-training when scaling up to the larger dataset seems detrimental. As you can see, the performance actually goes down when you go to the larger dataset, and only if you train longer does it improve. The axis labeling is again just amazing: "standard", "long", "longer". Oh, how long do you train for? Longer. Thanks. But I guess the point is taken: you have to invest more computation along with your bigger dataset; it's the same model, but a bigger dataset. They also make some other points: if you decrease your learning rate too early, or set your weight decay parameter differently, that also hurts you. On the right you see that a smaller weight decay initially looks better, you're higher early on, but over the course of training you end up in a worse place than with the higher setting. They make a big point out of this, but who's to say someone else doesn't come along with a ten-times-longer training run and figure out that the "worse" setting eventually climbs way up. To me the lesson learned is that there's always a way to get more performance out of more compute, and there is probably a way to schedule all of these things, weight decay combined with a decaying learning rate and so on, such that even this particular setting ends up somewhere good; we just haven't found it yet because it's so complex. I would guess that is the case. They make an interesting related point: if you decay the learning rate too early, you also end up in a worse place. This dashed line is the noob researcher: after eight GPU-weeks, which, come on, that's just eight GPUs for a week, that's nothing, the curve looks fairly flat, so this researcher decides to decay the learning rate. That results in a jump, then it flattens out, so they decay again, and end up at some level. Yet if you train for longer, over eight GPU-months, you can see there was still a slight upward trend; it hadn't converged yet. If you decrease the learning rate only later, always waiting for full convergence, you end up in a better place, above 70 here. Then again, who's to say that if I just keep waiting, for eight GPU-years or eight GPU-solar-system-births, there isn't another slight upward trend and an even better moment to finally decay the learning rate and go up.
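In code, the difference is just where you put the decay milestones. A hedged sketch with PyTorch's step scheduler, with the step counts entirely made up for illustration:

```python
# Sketch of "decay late, after real convergence" with a step schedule.
# The milestone step counts are made up for illustration.
import torch

model = torch.nn.Linear(10, 2)
opt = torch.optim.SGD(model.parameters(), lr=0.03)
# Milestones deep into a 2M-step run instead of at the first plateau.
sched = torch.optim.lr_scheduler.MultiStepLR(
    opt, milestones=[1_200_000, 1_600_000, 1_800_000], gamma=0.1)

for step in range(5):          # in reality: the full two million steps
    opt.step()                 # ... after computing a loss and backward()
    sched.step()
print(sched.get_last_lr())
```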
Again, the impatient researcher here only takes half a million steps, where they take two million. So that's the first point. The second point: ImageNet-level, state-of-the-art visual research is now officially out of the hands of academia. That's it. When you see a paper dissing people who only wait eight GPU-weeks before decaying their learning rate for the first time, and advocating that you should wait at least eight GPU-months, actually they wait twice as long, it's over. Bye bye. Maybe you want to do some theory or something. Yeah, bye bye. What I find interesting are the mistakes. Since they reach something like 99.4% on CIFAR-10, there's only a handful of errors left, because it's not that large a dataset, and they classify them: red, I think, means the ground-truth label is correct, while green means the machine is correct and the ground-truth label is wrong. And there's a fair number of green ones: the model says ship and the label says cat; the model says bird and the label says cat, and that would clearly be one weird cat. It gets to the point where you have to expect such labeling errors to exist in the training set too, so it could be that the model doesn't inherently make these mistakes, but is simply consistent with the mistakes in the training data. On ImageNet they show selected examples where the model says notebook but the label is laptop, the model says mouse but it's actually a spacebar, the model says alp and it's ski, or, look at this, the model says candle and the label is... dishwasher? What? So you can see that the remaining errors come down to very quirky, very fine-grained distinctions. Last thing I want to show: I had never seen these ImageNet-21k images, and they are just funky. Look at this one: the previous state of the art said triceratops, and the new BiT-L now says starfish. Good job, BiT-L. Wow. The correct label would probably just be "weird". Okay, I don't want to rag on this too much. This is a cool paper, and I believe it will be the new starting point for a lot of practitioners doing visual tasks. As always, I invite you to check out the paper, subscribe to the channel, leave a like, leave a comment if you want, I usually do read them, and bye bye!
[ { "end": 5.76, "start": 0, "text": " Hi there! Today we're talking about Big Transfer, General Visual Representation" }, { "end": 12.620000000000001, "start": 5.76, "text": " Learning by Alexander Kolesnikov, Lukas Baier, Yao-Ai Chai and others of Google" }, { "end": 20.22, "start": 12.620000000000001, "text": " Brain. So this paper is basically an application slash engineering paper for" }, { "end": 27.16, "start": 20.22, "text": " the community and it is about the task of transfer learning for visual tasks. So" }, { "end": 32.08, "start": 27.16, "text": " what does that mean? In a visual task the meaning is basically that the input is" }, { "end": 38, "start": 32.08, "text": " an image. So it could be a classifier where you have an image and you have to" }, { "end": 47, "start": 38, "text": " say that this is a cat. Or it could be, let's say, a medical image of a lung and" }, { "end": 53.64, "start": 47, "text": " you have to point out where the defect in the lung is or if there is a defect" }, { "end": 58.32, "start": 53.64, "text": " in the lung or something like this. As we all know this field is pretty much" }, { "end": 64.8, "start": 58.32, "text": " dominated by CNNs, by convolutional neural networks that take in these" }, { "end": 69.68, "start": 64.8, "text": " images through many layers of convolution. Especially residual networks" }, { "end": 75.04, "start": 69.68, "text": " are doing particularly well on these tasks. The problem of course is that in" }, { "end": 81.64, "start": 75.04, "text": " some tasks you have lots of data and that's fine because CNNs need lots of" }, { "end": 86.32, "start": 81.64, "text": " data to train. But in some tasks, especially these medical tasks, you only" }, { "end": 91.44, "start": 86.32, "text": " have like very small database. Look at this small database. You only have very" }, { "end": 97.2, "start": 91.44, "text": " few labeled samples where the model could learn from. And that just is not" }, { "end": 101.36, "start": 97.2, "text": " enough to learn these big models that would perform well. So you will have to" }, { "end": 107.16, "start": 101.36, "text": " settle for a less performing model. Now the solution or one of the solutions is" }, { "end": 112.75999999999999, "start": 107.16, "text": " transfer learning. In transfer learning what you do is you take a large data set," }, { "end": 119.08, "start": 112.75999999999999, "text": " for example the ImageNet data set. You have this big data set right here and you" }, { "end": 125.6, "start": 119.08, "text": " train your CNN on that. And then you take that CNN and you do what's called a" }, { "end": 131.92, "start": 125.6, "text": " fine-tuning step on this small data set. So you take the CNN that you gained from" }, { "end": 136.8, "start": 131.92, "text": " the large data set as a starting point and then you just train for a few steps." }, { "end": 142.44, "start": 136.8, "text": " You just kind of adapt it to this final data set that you actually want to train" }, { "end": 147.88000000000002, "start": 142.44, "text": " it on. And that usually helps. And why does that help? Because you sort of hope" }, { "end": 153.44, "start": 147.88000000000002, "text": " that the large data set and the small data set are at least somewhat" }, { "end": 160.36, "start": 153.44, "text": " overlapping in their... So the images in the large data set are somewhat" }, { "end": 164.32000000000002, "start": 160.36, "text": " similar to the images in the small data set. 
It doesn't need to be super similar" }, { "end": 171.48, "start": 164.32, "text": " but just somewhat. And you hope that the features that the CNN learns from the" }, { "end": 178.28, "start": 171.48, "text": " large data set are useful in the small data set. Because then if that is the" }, { "end": 182.84, "start": 178.28, "text": " case when you fine-tune on the small data set, that's this step down here," }, { "end": 188.12, "start": 182.84, "text": " that's called a fine-tuning. When you fine-tune you can pretty much reuse" }, { "end": 192.76, "start": 188.12, "text": " those features. You only have to adjust them a little bit. And you just" }, { "end": 198.64, "start": 192.76, "text": " have to learn how to map the features to the output, which now is of course" }, { "end": 202.92, "start": 198.64, "text": " different than in the original task. But you won't have to rediscover the" }, { "end": 208.44, "start": 202.92, "text": " features. So that's why transfer learning can help. So the first phase is called" }, { "end": 214.45999999999998, "start": 208.44, "text": " pre-training. The second phase is called fine-tuning. Now the ultimate goal in" }, { "end": 221.23999999999998, "start": 214.45999999999998, "text": " this is the following. Imagine you have like a giant database of data." }, { "end": 227.08, "start": 221.24, "text": " This is giant. Look at the size comparison to the others. And so you have" }, { "end": 235.68, "start": 227.08, "text": " this big big database of images. And you train a CNN on that big database of" }, { "end": 242.68, "start": 235.68, "text": " labeled images. Now what you're hoping is that you can do this once and then this" }, { "end": 249.68, "start": 242.68, "text": " one CNN trained on this giant data set will become the starting point for all" }, { "end": 255.44, "start": 249.68, "text": " kinds of small tasks now. So basically you can post this on a repository online" }, { "end": 261.76, "start": 255.44, "text": " and everyone that has a visual task will not train from scratch. But they will" }, { "end": 268.16, "start": 261.76, "text": " basically take this one CNN as a starting point. It is very similar to" }, { "end": 273.36, "start": 268.16, "text": " what people are doing right now with BERT or generally these transformer" }, { "end": 277.24, "start": 273.36, "text": " language models. You never want to train them from scratch. You always want to" }, { "end": 283.24, "start": 277.24, "text": " train from a pre-trained state that someone else has done. Because usually" }, { "end": 287.92, "start": 283.24, "text": " the big work is now shifted to the pre-training. So the goal is to find" }, { "end": 296.56, "start": 287.92, "text": " this one universal starting point for visual learning. And of course no better" }, { "end": 303.6, "start": 296.56, "text": " place to do this than Google. They certainly do have a giant databases of" }, { "end": 308.84000000000003, "start": 303.6, "text": " images. They certainly have lots of computation which we're going to see is" }, { "end": 314.56, "start": 308.84000000000003, "text": " very necessary for something like this. Now they do train three different models." }, { "end": 320.8, "start": 314.56, "text": " Their model is called BIT and they train three different variants BIT. Small," }, { "end": 330.64000000000004, "start": 320.8, "text": " medium and large. So the L model is trained on 300 million images. 
The medium" }, { "end": 338.91999999999996, "start": 330.64, "text": " model is trained on 14 million images. So this is the I think it's called JFT" }, { "end": 345.88, "start": 338.91999999999996, "text": " dataset. This here is called the ImageNet 21k dataset which looks pretty funky. It" }, { "end": 351.64, "start": 345.88, "text": " has like objects in front of weird backgrounds and stuff like that. And the" }, { "end": 360.44, "start": 351.64, "text": " small is simply trained on the 1.3 million ImageNet dataset. So I mean just" }, { "end": 364.88, "start": 360.44, "text": " look at this. We're in a situation now where the small model is trained is" }, { "end": 372.24, "start": 364.88, "text": " pre-trained on ImageNet just for reference. If you had imagined this five" }, { "end": 376.96, "start": 372.24, "text": " years ago this you would not have maybe you would have guessed it but it's still" }, { "end": 383.24, "start": 376.96, "text": " impressive. So they do release these two models here the medium and the small one" }, { "end": 388.98, "start": 383.24, "text": " pre-trained I believe. They don't release the large one which maybe that's the" }, { "end": 394.48, "start": 388.98, "text": " price we have to pay for getting the medium and the small one. The fact that" }, { "end": 399.28000000000003, "start": 394.48, "text": " now Google can use this in their products because they have probably" }, { "end": 404.56, "start": 399.28000000000003, "text": " spent a considerable amount of money in doing this. I'm not sure this is a" }, { "end": 408.24, "start": 404.56, "text": " philosophical discussion whether in the interest of science they should really" }, { "end": 413.24, "start": 408.24, "text": " solve. Because they do give the sort of exact training protocol. You just need" }, { "end": 421.68, "start": 413.24, "text": " the money basically. Alright but that's not topic of this video. So the" }, { "end": 427.12, "start": 421.68, "text": " models here are all pretty much just residual networks. They're all" }, { "end": 434.16, "start": 427.12, "text": " these ResNet 152 I think x4 which means that basically scale the width of" }, { "end": 438.48, "start": 434.16, "text": " each layer by a number of four from the original ResNet architecture and that's" }, { "end": 443.92, "start": 438.48, "text": " pretty much it. This is the architecture. There's nothing really new in" }, { "end": 449.32, "start": 443.92, "text": " this paper. The paper just details what exactly you have to do. Which" }, { "end": 455, "start": 449.32, "text": " things exactly matter when you pre-train these things and which ones don't." }, { "end": 461.08000000000004, "start": 455, "text": " Therefore I believe it is a pretty good paper and I think that these" }, { "end": 466.92, "start": 461.08000000000004, "text": " models here the M and S models and maybe someone else trains an L model and" }, { "end": 472.24, "start": 466.92, "text": " releases it will sort of become the standard like we have in BERT now. So" }, { "end": 477.8, "start": 472.24, "text": " whenever you have a visual task you're just gonna start from those in practice." }, { "end": 484.6, "start": 477.8, "text": " So this I think is mainly relevant for people in practice. Alright here you can" }, { "end": 492.24, "start": 484.6, "text": " see these models. First of all excellent excellent not labeling of your x-axis." }, { "end": 499.08, "start": 492.24, "text": " Absolutely beautiful. 
The x-axis I believe is the number of samples per" }, { "end": 503.7, "start": 499.08, "text": " data class. So now they take their pre-trained model this bit L and they" }, { "end": 508.64, "start": 503.7, "text": " fine-tune it on these datasets. So ImageNet is one of the tasks they fine" }, { "end": 516.2, "start": 508.64, "text": " tune on or CIFAR 10, CIFAR 100 and so on. And first of all look on the right side" }, { "end": 520.54, "start": 516.2, "text": " this full thing. This is when you take the entire dataset. So often they" }, { "end": 526.04, "start": 520.54, "text": " outperform, they get state-of-the-art on the full datasets. Now they do compare" }, { "end": 532.8399999999999, "start": 526.04, "text": " against what they call generalist models. So generalist models are ones that have" }, { "end": 538.8, "start": 532.8399999999999, "text": " this particular training protocol where they train on one big giant database" }, { "end": 543.28, "start": 538.8, "text": " and then fine-tune to all the other tasks. They do not achieve state-of-the-art" }, { "end": 549.0999999999999, "start": 543.28, "text": " on all datasets in what they call specialist models. The specialist models" }, { "end": 554.98, "start": 549.1, "text": " would be such models that have this exact task in mind and therefore they" }, { "end": 558.84, "start": 554.98, "text": " don't care about other tasks. They outperform some of these specialist" }, { "end": 564, "start": 558.84, "text": " models but not all of them. So this is not the new state of the" }, { "end": 569.48, "start": 564, "text": " art in everything but it is in this transfer learning regime. And I think" }, { "end": 575.74, "start": 569.48, "text": " even more important if you see on the left this is in the small label regime." }, { "end": 582.92, "start": 575.74, "text": " So here you have something like 100 or 25 or 10 or even 5 labels per class. And" }, { "end": 588.96, "start": 582.92, "text": " if you take 5 labels per class for CIFAR 10, this model, so of course you have to" }, { "end": 594.52, "start": 588.96, "text": " pre-train it first on the big dataset, but just taking 5 labels per class you" }, { "end": 601.2, "start": 594.52, "text": " still get like 94% accuracy on CIFAR 10. And that's pretty good. That is pretty" }, { "end": 605.5600000000001, "start": 601.2, "text": " impressive especially if you compare it to this baseline model here which is a" }, { "end": 610.9, "start": 605.56, "text": " ResNet pre-trained on just the ImageNet dataset. So that really shows you the" }, { "end": 621, "start": 610.9, "text": " power of pre-training with full data. So one thing they say is that in their big" }, { "end": 629.68, "start": 621, "text": " dataset in their 300 million images they make sure to remove all the images that" }, { "end": 636.0799999999999, "start": 629.68, "text": " then appear in the downstream tasks. Because otherwise it is fairly" }, { "end": 640.64, "start": 636.0799999999999, "text": " conceivable that this database here is just scraped from the internet and of" }, { "end": 645, "start": 640.64, "text": " course these tasks are often, like CIFAR 10, are also scraped from the internet" }, { "end": 650.7199999999999, "start": 645, "text": " and also ImageNet. 
And it is entirely conceivable of course that the" }, { "end": 658.56, "start": 650.7199999999999, "text": " test data is already here and they say we remove images but I think they just" }, { "end": 665.1199999999999, "start": 658.56, "text": " remove exact duplicates. So it could still be that someone has taken" }, { "end": 669.88, "start": 665.1199999999999, "text": " ImageNet and then kind of recoded it into another color scheme or whatnot or" }, { "end": 677.2399999999999, "start": 669.88, "text": " just compressed it a bit more and then they find these images on the web." }, { "end": 684.28, "start": 677.2399999999999, "text": " So it's a little shaky this whole thing because these datasets might just be" }, { "end": 690.12, "start": 684.28, "text": " part of one another but you know given the results I do generally believe the" }, { "end": 697.56, "start": 690.12, "text": " improvements here. But yeah. So I guess what we need is like people to actually" }, { "end": 701.9599999999999, "start": 697.56, "text": " go out with cameras and shoot new pictures for a new test set. But in any" }, { "end": 708.4399999999999, "start": 701.9599999999999, "text": " case let's dive into how to pre-train something like this. So they divide their" }, { "end": 714.2800000000001, "start": 708.44, "text": " findings up in two parts. How to pre-train and how to fine-tune. So how to" }, { "end": 719.72, "start": 714.2800000000001, "text": " transfer to downstream tasks. And the methods they find are surprisingly" }, { "end": 725.24, "start": 719.72, "text": " easy. They say there are two components to pre-training. The first component is" }, { "end": 731.2800000000001, "start": 725.24, "text": " scale. So you have to have a lot of data and a lot of models and that is a pretty" }, { "end": 736.2800000000001, "start": 731.2800000000001, "text": " important recognition. So down here they have this ablation where they scale up" }, { "end": 744, "start": 736.28, "text": " the model and scale up the data. So look at this for example. You can see here you" }, { "end": 748.72, "start": 744, "text": " have the different datasets to pre-train on. So this is the small dataset, the" }, { "end": 754.0799999999999, "start": 748.72, "text": " medium dataset and the large dataset. So in this direction you have dataset size." }, { "end": 760.6, "start": 754.0799999999999, "text": " Then here you have accuracy not labeled. Again I guess we can understand" }, { "end": 768.72, "start": 760.6, "text": " accuracy. That's fine. And there we have the different models. Now the" }, { "end": 775.28, "start": 768.72, "text": " larger the dot here is the larger the model architecture. And you can see" }, { "end": 782.6, "start": 775.28, "text": " within the individual bins the larger the model the better performance you" }, { "end": 789.34, "start": 782.6, "text": " usually get. But as you can see like here this improvement in the large models" }, { "end": 795.76, "start": 789.34, "text": " isn't as much as when you have much data. And you can also see by the slope of the" }, { "end": 802.76, "start": 795.76, "text": " line here the larger amounts of data help more when you have larger models. So" }, { "end": 811.72, "start": 802.76, "text": " only scaling up the data is not as effective as scaling up the" }, { "end": 817.48, "start": 811.72, "text": " data and the model at the same time. And in some cases like in this small" }, { "end": 823.76, "start": 817.48, "text": " architecture here it actually hurts to incorporate more data. 
At least they say" }, { "end": 829, "start": 823.76, "text": " that. And you can also see that here. And here it just doesn't help as much anymore" }, { "end": 834.62, "start": 829, "text": " if you incorporate more data. So if your model is too small you can't handle the" }, { "end": 837.96, "start": 834.62, "text": " big data. Of course there are weird effects like here the performance goes" }, { "end": 842.88, "start": 837.96, "text": " down and then up with the larger data. So this might actually be an effect of the" }, { "end": 849.24, "start": 842.88, "text": " images in these data sets being somewhat qualitatively different also with" }, { "end": 856.24, "start": 849.24, "text": " respect to the task that you are training for. But in general it holds that" }, { "end": 863.8, "start": 856.24, "text": " you need a combination of data set size and model size to go up. And this I think" }, { "end": 870.12, "start": 863.8, "text": " might be an indication of where we are in Belkin's double descent curve. So if" }, { "end": 876.96, "start": 870.12, "text": " you look at the researcher Mikhail Belkin and other people also" }, { "end": 885.36, "start": 876.96, "text": " research in this area they have this sort of empirical finding and hypothesis" }, { "end": 893.64, "start": 885.36, "text": " that if you plot a graph and here is the number of parameters in relation" }, { "end": 898.72, "start": 893.64, "text": " to the data set size. It's a number of parameters in relation to the size" }, { "end": 907.8000000000001, "start": 898.72, "text": " of data. And here is your validation loss. Then what happens as you have very" }, { "end": 912.44, "start": 907.8000000000001, "text": " little parameters you can add more parameters to your model to get better" }, { "end": 917.6800000000001, "start": 912.44, "text": " validation loss. This is you know we get a better model and we train" }, { "end": 922.2, "start": 917.6800000000001, "text": " that and we get better. And then at some point you'll start to overfit." }, { "end": 925.5600000000001, "start": 922.2, "text": " We've all learned this in our general machine learning course and there is a" }, { "end": 930.56, "start": 925.56, "text": " point here, what is called the interpolation threshold, where you have" }, { "end": 935.64, "start": 930.56, "text": " this is one so the number of parameters is equal to the number of data points" }, { "end": 940.28, "start": 935.64, "text": " which is just interpolating your training data. Sorry the data point here" }, { "end": 951.3199999999999, "start": 940.28, "text": " that's train. But then the discovery sort of is that this comes down again and it" }, { "end": 957.5600000000001, "start": 951.32, "text": " stays down. So as you go up in number of data points, sorry number of parameters" }, { "end": 961.44, "start": 957.5600000000001, "text": " with the same data set you're perfectly fitting the training data set. You're" }, { "end": 968.2800000000001, "start": 961.44, "text": " past the number of data points in your model but still your validation" }, { "end": 973.12, "start": 968.2800000000001, "text": " loss comes down and there's various hypotheses why this could happen. 
And" }, { "end": 981.64, "start": 973.12, "text": " here we find ourselves maybe in this sort of situation where if you have a" }, { "end": 989.64, "start": 981.64, "text": " model right here and you want to scale it you want to add more data you can't" }, { "end": 995.96, "start": 989.64, "text": " just keep the model constant because if you add more data that will shift you to" }, { "end": 999.6800000000001, "start": 995.96, "text": " the left here because you add more data but you keep the number of" }, { "end": 1004.12, "start": 999.68, "text": " parameters the same so this number will shift to the left and you actually go up" }, { "end": 1010.0799999999999, "start": 1004.12, "text": " in your validation loss. So maybe this is actually what's happening right here the" }, { "end": 1016.3599999999999, "start": 1010.0799999999999, "text": " fact that the model is too small. This is just a hypothesis by me. So if you want" }, { "end": 1019.8399999999999, "start": 1016.3599999999999, "text": " to up your number of data points you also have to up your number of" }, { "end": 1027.12, "start": 1019.8399999999999, "text": " parameters and that will keep it going and maybe these models here are more on" }, { "end": 1031.6, "start": 1027.12, "text": " this side of this interpolation threshold and the models where it" }, { "end": 1039.28, "start": 1031.6, "text": " doesn't happen might be more over here. Though that is a big thing to assume." }, { "end": 1046.9599999999998, "start": 1039.28, "text": " Maybe not. Now that I think about it since they have even more parameters" }, { "end": 1054.36, "start": 1046.9599999999998, "text": " here they would be even more here somewhere so maybe you add a bunch of" }, { "end": 1062.6, "start": 1054.36, "text": " data it's just not as bad. There might be some weird interactions here." }, { "end": 1070.3999999999999, "start": 1062.6, "text": " Like this. Who knows? Let's just skip this. In any case the message here is you" }, { "end": 1077.76, "start": 1070.3999999999999, "text": " need more model and more data at the same time. Alright then there is a second" }, { "end": 1088.4, "start": 1077.76, "text": " message a second recipe for pre-training. There we are. The second method is group" }, { "end": 1095.76, "start": 1088.4, "text": " normalization and weight standardization. So they criticize batch norm. Batch norm" }, { "end": 1103.36, "start": 1095.76, "text": " has of course been used a lot. That is where if you have a batch of data and" }, { "end": 1109.8799999999999, "start": 1103.36, "text": " you put it through your layers and it has some intermediate" }, { "end": 1114.1999999999998, "start": 1109.8799999999999, "text": " representation what you want to do is you want to calculate sort of the mean" }, { "end": 1120.7199999999998, "start": 1114.1999999999998, "text": " and variance of your data in each of the features and then make it such that it's" }, { "end": 1127.9199999999998, "start": 1120.7199999999998, "text": " nice mean one and standard deviation. So mean zero and standard deviation of one." }, { "end": 1134.92, "start": 1127.92, "text": " That is called batch norm but of course it is dependent on your batch size. So it" }, { "end": 1139.28, "start": 1134.92, "text": " is dependent on how many data points you have because that's how well you can" }, { "end": 1144.64, "start": 1139.28, "text": " estimate these mean and variance parameters. 
And what people do nowadays" }, { "end": 1150.48, "start": 1144.64, "text": " is they take these batches and they group them into different groups and" }, { "end": 1157.52, "start": 1150.48, "text": " they distribute those groups onto many many machines which is called data" }, { "end": 1162.6399999999999, "start": 1157.52, "text": " parallelism especially with TPUs nowadays. You can just distribute" }, { "end": 1168.8, "start": 1162.6399999999999, "text": " everything to so many TPUs. I believe they say they distribute to something" }, { "end": 1178.16, "start": 1168.8, "text": " like 500 TPUs. They have a batch size of I think 4,000 and they" }, { "end": 1184.6399999999999, "start": 1178.16, "text": " distribute to 500 TPUs so that leaves them with eight samples per batch." }, { "end": 1189.8400000000001, "start": 1184.64, "text": " So this is eight and eight is just not very good for batch norm and if you have" }, { "end": 1195.0800000000002, "start": 1189.8400000000001, "text": " to if you want to circumvent that you need to in each layer globally sync with" }, { "end": 1199.92, "start": 1195.0800000000002, "text": " all of the other workers your batch norm parameters and that slows you down. So" }, { "end": 1206.92, "start": 1199.92, "text": " people have gone around this using what they call group normalization and weight" }, { "end": 1211.8400000000001, "start": 1206.92, "text": " standardization. So these two techniques of weight standardization is a" }, { "end": 1217.8799999999999, "start": 1211.84, "text": " addition to group normalization. They don't require the other samples in the" }, { "end": 1223.6399999999999, "start": 1217.8799999999999, "text": " batch. They work on a per sample basis and they normalize the features within" }, { "end": 1231.1999999999998, "start": 1223.6399999999999, "text": " groups of each channel. So the group normalization groups together" }, { "end": 1237.04, "start": 1231.1999999999998, "text": " different features within a sample and then normalizes across that and the" }, { "end": 1241.52, "start": 1237.04, "text": " weight standardization is a bit like standardizing the features but it's" }, { "end": 1249.4, "start": 1241.52, "text": " standardizes the weights to be of a normal distribution. And just suffice to" }, { "end": 1254.16, "start": 1249.4, "text": " say these are standard techniques that you can build in and they allow you to" }, { "end": 1259.36, "start": 1254.16, "text": " not have to synchronize constantly between your workers at the training" }, { "end": 1264.68, "start": 1259.36, "text": " time which makes everything a lot faster and also not a problem that you just" }, { "end": 1272.88, "start": 1264.68, "text": " have eight samples per worker. So that's what they do. They do large data" }, { "end": 1279.28, "start": 1272.88, "text": " large models and group normalization with weight standardization. That's how" }, { "end": 1284.9, "start": 1279.28, "text": " they pre-train and then how do they fine-tune. They say they have a rule to" }, { "end": 1289.22, "start": 1284.9, "text": " select hyperparameters. They call the bit hyper rule and that's just sort of a" }, { "end": 1297.24, "start": 1289.22, "text": " formula of how you have one hyper parameter. So you have one I guess it's" }, { "end": 1302.08, "start": 1297.24, "text": " a hyper hyper parameter and that hyper hyper parameter you run through their" }, { "end": 1308.08, "start": 1302.08, "text": " rule and the rule will tell you what each of the hyper parameters should be." 
}, { "end": 1313.44, "start": 1308.08, "text": " So it's like a lookup table basically. It's oh you set this one" }, { "end": 1319.72, "start": 1313.44, "text": " number and we give you the rest of the hyper parameters and that one rule works" }, { "end": 1324.48, "start": 1319.72, "text": " pretty well. So you only have to find for fine-tuning you only have to grid" }, { "end": 1332.52, "start": 1324.48, "text": " search over one hyper parameter. It's not really grid anymore is it? And then they" }, { "end": 1341.24, "start": 1332.52, "text": " basically decide on the training schedule length resolution and whether to" }, { "end": 1346.92, "start": 1341.24, "text": " do mixup regularization. Mixup is a technique that can help when you have" }, { "end": 1352.84, "start": 1346.92, "text": " very little data and what it does is it interpolates between data points and" }, { "end": 1357.56, "start": 1352.84, "text": " also trains on kind of like data points from half this class and half that class" }, { "end": 1365.32, "start": 1357.56, "text": " just to make more data available. But they all have this packed into this rule" }, { "end": 1370.6, "start": 1365.32, "text": " and they of course the exact settings of this rule are presented. So you can look" }, { "end": 1376.1999999999998, "start": 1370.6, "text": " it up then they have a data pre-processing, resize the image to a" }, { "end": 1380.1999999999998, "start": 1376.1999999999998, "text": " square, crop out small random square, randomly horizontally flip the image at" }, { "end": 1384.56, "start": 1380.1999999999998, "text": " training time. So they basically describe a standard training protocol here. You" }, { "end": 1393.32, "start": 1384.56, "text": " don't want to go mix it too up too much. The only thing they say surprisingly we" }, { "end": 1398.1999999999998, "start": 1393.32, "text": " do not use any form any of the following forms of regularization during downstream" }, { "end": 1405.16, "start": 1398.2, "text": " tuning, weight decay to zero, weight decay to initial parameters or dropout. I think" }, { "end": 1414.76, "start": 1405.16, "text": " they only use weight decay during pre-training and that's it. So let's" }, { "end": 1419.72, "start": 1414.76, "text": " look at some of the graphs. We've already seen some. Here is where they pretty much" }, { "end": 1425.28, "start": 1419.72, "text": " outperform the generalist, these generalist models on all of these tasks" }, { "end": 1430.04, "start": 1425.28, "text": " including this visual task adaptation benchmark. I've made a video about this." }, { "end": 1435, "start": 1430.04, "text": " This is a benchmark that includes 19 different visual tasks from all over the" }, { "end": 1440.44, "start": 1435, "text": " place and they have significant improvement here as you can see. They do" }, { "end": 1445.52, "start": 1440.44, "text": " not always outperform these specialist models but as you can see they" }, { "end": 1450.32, "start": 1445.52, "text": " outperform for example this on the flowers data set and they come pretty" }, { "end": 1461.72, "start": 1450.32, "text": " close. Here you can also see how much they improve when pre-training on the" }, { "end": 1466, "start": 1461.72, "text": " larger data set. 
So far people have basically pre-trained on this ImageNet" }, { "end": 1471.9199999999998, "start": 1466, "text": " data set and now that they pre-train on the larger one of course they gain a lot" }, { "end": 1480.64, "start": 1471.92, "text": " of performance and the largest one isn't even in this table. So what I" }, { "end": 1486.96, "start": 1480.64, "text": " finally want to look at is this visual task adaptation benchmark. This consists" }, { "end": 1492, "start": 1486.96, "text": " of 19 tasks and they're divided into natural tasks which are kind of natural" }, { "end": 1497.5600000000002, "start": 1492, "text": " images and then specialized tasks which are let's say the medical images and not" }, { "end": 1502.44, "start": 1497.56, "text": " really natural and then structured tasks and the structured tasks isn't simply" }, { "end": 1507.52, "start": 1502.44, "text": " labeling or locating something. It is tasks where you have to maybe reason" }, { "end": 1515, "start": 1507.52, "text": " about something. So let's say there is an image and there is a cup right here and" }, { "end": 1520.4199999999998, "start": 1515, "text": " there is a glass right here and the question is what's to the left of the" }, { "end": 1525.4199999999998, "start": 1520.4199999999998, "text": " glass and there's a bunch of other stuff around here and you have to say" }, { "end": 1531.6000000000001, "start": 1525.42, "text": " the cup. So it sort of requires a structured understanding of the image" }, { "end": 1538, "start": 1531.6000000000001, "text": " and you can see the main performance boost here comes in the natural images" }, { "end": 1546, "start": 1538, "text": " which is to be expected. So you only get what you feed in and this 300 million" }, { "end": 1551.88, "start": 1546, "text": " image data set I'm pretty sure that's just a web scrape of photos or mainly" }, { "end": 1557.8000000000002, "start": 1551.88, "text": " photos. So the main improvement you're gonna get is on pictures that are similar" }, { "end": 1562.2, "start": 1557.8000000000002, "text": " to that as we said at the beginning and these natural tasks have images like" }, { "end": 1568.2, "start": 1562.2, "text": " that and you can see that the model here improves extremely in that category," }, { "end": 1574.0800000000002, "start": 1568.2, "text": " improves slightly in this specialized thing and only improves a little bit in" }, { "end": 1582.9199999999998, "start": 1574.08, "text": " the structured tasks. So this as I said is to be expected. Just know if you" }, { "end": 1588.72, "start": 1582.9199999999998, "text": " use this model know what is in there. You have to know what it does, what it does" }, { "end": 1593.9199999999998, "start": 1588.72, "text": " well. It does well on natural images that are similar to what it was pre-trained" }, { "end": 1606.16, "start": 1593.92, "text": " on. Okay so they do have some analysis here and we've already went to most of" }, { "end": 1615.8000000000002, "start": 1606.16, "text": " them. I find this to be pretty impressive. So they say when they apply" }, { "end": 1621.8400000000001, "start": 1615.8000000000002, "text": " the standard computational budget of ImageNet pre-training when they scale" }, { "end": 1625.6399999999999, "start": 1621.84, "text": " up to the larger data set it seems detrimental. As you can see right here" }, { "end": 1631.6399999999999, "start": 1625.6399999999999, "text": " the performance actually goes down when you go to the larger data set. 
Only if" }, { "end": 1637.04, "start": 1631.6399999999999, "text": " you train longer then your improves. The axis labeling is just amazing here." }, { "end": 1647.56, "start": 1637.04, "text": " Standard, long, longer. Oh how long you train for? Longer. Thanks. But I guess the" }, { "end": 1654.9199999999998, "start": 1647.56, "text": " point is taken that you have to invest more computation along with your" }, { "end": 1658.8, "start": 1654.9199999999998, "text": " bigger model and bigger data set. Sorry it's the same model but the bigger data" }, { "end": 1667.84, "start": 1658.8, "text": " set. They also make some other points here that if you for example if you" }, { "end": 1672.36, "start": 1667.84, "text": " decrease your learning rate too early or set your weight decay parameter" }, { "end": 1677.6399999999999, "start": 1672.36, "text": " differently that also hurts you. So on the right here you see a smaller weight" }, { "end": 1683.8799999999999, "start": 1677.6399999999999, "text": " decay initially looks better. So initially you're higher but through the" }, { "end": 1689.04, "start": 1683.8799999999999, "text": " training you end up at a worse place than a higher setting right here. I" }, { "end": 1693.9199999999998, "start": 1689.04, "text": " mean they make a big point out of this but who's to say that someone else" }, { "end": 1699.4399999999998, "start": 1693.9199999999998, "text": " doesn't come with like a ten times longer training and figures out that" }, { "end": 1707.48, "start": 1699.44, "text": " ultimately you start off like this and then maybe goes up super high. So to me" }, { "end": 1712.48, "start": 1707.48, "text": " the lessons learned here is pretty much that there's always a way to" }, { "end": 1720.3200000000002, "start": 1712.48, "text": " get more performance out of more compute and probably there is a way to schedule" }, { "end": 1723.76, "start": 1720.3200000000002, "text": " all of these things because that's combined with decaying learning rate and" }, { "end": 1728.8, "start": 1723.76, "text": " so on. There's probably a way to schedule these things with the current" }, { "end": 1735.6399999999999, "start": 1728.8, "text": " method. So with this particular method that would end up somewhere here we just" }, { "end": 1740.68, "start": 1735.6399999999999, "text": " haven't found it yet because it's so complex. I would guess that is the" }, { "end": 1745.84, "start": 1740.68, "text": " case. Here they make an interesting point that if you decay the learning rate too" }, { "end": 1751.8799999999999, "start": 1745.84, "text": " early then you also end up at a worse place. So this this dashed researcher" }, { "end": 1759, "start": 1751.88, "text": " the noob. So after eight GPU weeks which come on what is that eight GPU weeks" }, { "end": 1766.92, "start": 1759, "text": " that's just eight GPUs for a week. That's nothing nothing. It looks like this" }, { "end": 1771.48, "start": 1766.92, "text": " right it looks fairly flat and this researcher now decides to decay the" }, { "end": 1775.92, "start": 1771.48, "text": " learning rate and that results in this thing here. So decays the learning rate" }, { "end": 1782.76, "start": 1775.92, "text": " here here and here. Sorry not here. So the case learning right here and then it" }, { "end": 1786.8400000000001, "start": 1782.76, "text": " flattens out again and then decays the learning rate again ends up at this" }, { "end": 1793.4, "start": 1786.8400000000001, "text": " level. 
Yet if you train for longer you can see right here if you look over eight" }, { "end": 1798.3200000000002, "start": 1793.4, "text": " months you can see that there is a slight upward trend still and it hasn't" }, { "end": 1804.1200000000001, "start": 1798.3200000000002, "text": " converged yet and you can if you decrease the learning rate only later" }, { "end": 1810.76, "start": 1804.12, "text": " and always wait for this to fully converge then you will end up at a" }, { "end": 1817.9199999999998, "start": 1810.76, "text": " better place right here above 70. Again who's to say that if I just wait here" }, { "end": 1826.2399999999998, "start": 1817.9199999999998, "text": " there isn't a slight upward trend if I wait for eight GPU years or eight GPU" }, { "end": 1832.6799999999998, "start": 1826.2399999999998, "text": " solar system births then there might be even a better point to decay finally" }, { "end": 1838.76, "start": 1832.68, "text": " decay the learning rate and then go up. I mean again this this researcher here" }, { "end": 1843.96, "start": 1838.76, "text": " only takes point five million steps where you take two million. So that's the" }, { "end": 1850, "start": 1843.96, "text": " first point. The second point is ImageNet or visual state-of-the-art" }, { "end": 1856.52, "start": 1850, "text": " research is now officially out of the hands of academia. This is it. If" }, { "end": 1861.64, "start": 1856.52, "text": " you see things like if you see a paper dissing on people that only wait eight" }, { "end": 1868.48, "start": 1861.64, "text": " GPU weeks to decrease their learning rate for the first time and advocating" }, { "end": 1872.2800000000002, "start": 1868.48, "text": " that you should you know at least wait until eight GPU months actually they" }, { "end": 1882.6000000000001, "start": 1872.2800000000002, "text": " wait twice as long it's over that's it. Yeah bye bye. Maybe maybe you know you" }, { "end": 1889.98, "start": 1882.6000000000001, "text": " want to do some theory or something yeah bye bye. 
What I find interesting is the" }, { "end": 1895.8, "start": 1889.98, "text": " mistakes so since on CIFAR-10 they reach like 99.4 percent there's only a" }, { "end": 1899.88, "start": 1895.8, "text": " handful of mistakes that they're still making because it's not that large of a" }, { "end": 1911.76, "start": 1899.88, "text": " data set and they do classify it so red in particular I think means red is the" }, { "end": 1916.72, "start": 1911.76, "text": " ground truth label is correct but green is the machine is correct and the" }, { "end": 1920.6000000000001, "start": 1916.72, "text": " ground truth label is wrong and you can see there is a fair number of green" }, { "end": 1929.1000000000001, "start": 1920.6000000000001, "text": " things here right so the model says ship and the label says cat and the model" }, { "end": 1937.84, "start": 1929.1000000000001, "text": " says bird and the label says cat clearly this this would be one weird cat so it" }, { "end": 1942.76, "start": 1937.84, "text": " gets to the point where you you also have to expect these errors to be in the" }, { "end": 1948, "start": 1942.76, "text": " training set so it could just be that the model here doesn't necessarily even" }, { "end": 1951.36, "start": 1948, "text": " make those mistakes but it's just somewhat consistent with the training" }, { "end": 1956.8, "start": 1951.36, "text": " set in making the mistakes and also here on image net they have selected ones" }, { "end": 1962.04, "start": 1956.8, "text": " where you know the model says notebook but it's actually laptop and the model" }, { "end": 1968, "start": 1962.04, "text": " says mouse but it's actually spacebar you know the model says Alp and it's ski" }, { "end": 1978.56, "start": 1968, "text": " so or here the model the model says candle but it's a this is a dishwasher" }, { "end": 1990.52, "start": 1978.56, "text": " what so you see that the the the types of mistakes here we get to very quirky" }, { "end": 1996.48, "start": 1990.52, "text": " very fine-grained points in these models last thing I want to show I have never" }, { "end": 2005.24, "start": 1996.48, "text": " seen these image net 21k images these are just funky like look at that so" }, { "end": 2011.2, "start": 2005.24, "text": " here's the state-of-the-art previously I think said triceratops and the new" }, { "end": 2020.3600000000001, "start": 2011.2, "text": " model now says bit else as starfish good job bit L you wow probably the correct" }, { "end": 2028.76, "start": 2020.36, "text": " label would just be weird and this no okay I don't want to rag on this too" }, { "end": 2033.6, "start": 2028.76, "text": " much this is a cool paper I believe this will be the new starting point for a lot" }, { "end": 2039.56, "start": 2033.6, "text": " of practitioners in when they do visual tasks I always as always invite you to" }, { "end": 2043.9199999999998, "start": 2039.56, "text": " check out the paper subscribe to the channel leave a like leave a comment if" }, { "end": 2050.12, "start": 2043.92, "text": " you want I do read them usually and bye bye" } ]
tjbEVY5XIk0
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Divide-and-Conquer Monte Carlo Tree Search For Goal-Directed Planning (Paper Explained)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "rl", "reinforcement learning", "deep rl", "planning", "alphago", "alphazero", "alpha go", "alpha zero", "mcts", "monte carlo", "tree search", "subdivision", "recursive", "training data", "hindsight experience replay" ]
When AI makes a plan it usually does so step by step, forward in time. But often it is beneficial to define intermediate goals to divide a large problem into easier sub-problems. This paper proposes a generalization of MCTS that searches not for the best next actions to take, but for the best way to sub-divide the problem recursively into problems so tiny that they can each be solved in a single step. Paper: https://arxiv.org/abs/2004.11410 Site: https://sites.google.com/view/dc-mcts/home Abstract: Standard planners for sequential decision making (including Monte Carlo planning, tree search, dynamic programming, etc.) are constrained by an implicit sequential planning assumption: The order in which a plan is constructed is the same in which it is executed. We consider alternatives to this assumption for the class of goal-directed Reinforcement Learning (RL) problems. Instead of an environment transition model, we assume an imperfect, goal-directed policy. This low-level policy can be improved by a plan, consisting of an appropriate sequence of sub-goals that guide it from the start to the goal state. We propose a planning algorithm, Divide-and-Conquer Monte Carlo Tree Search (DC-MCTS), for approximating the optimal plan by means of proposing intermediate sub-goals which hierarchically partition the initial tasks into simpler ones that are then solved independently and recursively. The algorithm critically makes use of a learned sub-goal proposal for finding appropriate partition trees of new tasks based on prior experience. Different strategies for learning sub-goal proposals give rise to different planning strategies that strictly generalize sequential planning. We show that this algorithmic flexibility over planning order leads to improved results in navigation tasks in grid-worlds as well as in challenging continuous control environments. Authors: Giambattista Parascandolo, Lars Buesing, Josh Merel, Leonard Hasenclever, John Aslanides, Jessica B. Hamrick, Nicolas Heess, Alexander Neitz, Theophane Weber Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Hi there! What you're seeing here is a Divide-and-Conquer Monte Carlo Tree Search in action. This is a planning algorithm that plans in a kind of unconventional fashion, so we're going to explore it today in this paper: Divide-and-Conquer Monte Carlo Tree Search for Goal-Directed Planning, by Giambattista Parascandolo and Lars Buesing and a list of other authors. I believe this is from DeepMind and Max Planck and ETH and... yeah, that's it. Alright, so what does this thing do? It is a planning algorithm, and planning might not be that familiar to you. So let's say you are in this room, or this set of rooms right here. There's a bunch of walls, you are up here, and you want to reach the goal down here. So first of all, this is a goal-directed problem; right here it says goal-directed. Goal-directed means that you give the algorithm a goal to reach, and this could be a different goal each time you run the algorithm. So you give it a goal to reach. The second thing we see here is planning, so it is a planning algorithm. What does planning mean? If you come from traditional reinforcement learning, you would think: I'm just gonna go ahead and run my agent here, maybe it can move in the four different directions, run my agent, do some things, right? Maybe I hit a wall, I get a negative reward, I try again. In planning, you don't have to move initially. What you can do is think about moving, and you can think ahead about what's going to happen, and that's usually because you have some sort of model of what happens. This is famously applied in, for example, AlphaGo or AlphaZero, where you know: if I move to the right here, I'm going to be here. So you will know once you've reached the goal: if I'm here and I go down, I reach the goal. You can think through all of this without actually moving in the environment. You can think yourself ahead about what would happen if you did certain things, and that in turn also means that you can think ahead along multiple different paths. So you can think ahead: what would happen if I move right, what would happen if I move down? And then you can think, in the next layer: if I move right, what would happen if I moved right again? What would happen if I moved down instead? So you can easily see that the planning problem becomes a tree search problem. In this case we've done a breadth-first search, and eventually you'll see that this will get you to the goal. So this breadth-first search, or maybe you want to employ depth-first search, will ultimately get you to the goal. We can represent this as a search tree. You're here in a particular state, and you have a bunch of actions, in this case four: move left, up, down or right. You can choose any of them, you will get into a new state, and from each of those you could choose any action again. You can think ahead through all of this, you can construct this entire tree, and one of these branches will lead you to the goal. Promise! What is the problem? The problem is: if the path to the goal here is, let's say, D steps long, then this tree is going to be D layers deep, and in our case that means we'll have four to the D nodes in that tree before we even reach the goal. That is just a very big tree, and we can't construct all of it.
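To make that four-to-the-D blow-up concrete, here is a minimal sketch of this kind of exhaustive forward planning. The grid, the walls and the step function are toy assumptions of mine, not the paper's actual environment; the point is just that a transition model is all you need in order to think ahead without acting.

```python
from collections import deque

# Toy assumptions: a 5x5 grid with a few walls, start (0, 0), goal (4, 4).
WALLS = {(1, 1), (1, 2), (1, 3), (3, 1), (3, 2), (3, 3)}
ACTIONS = {"up": (-1, 0), "down": (1, 0), "left": (0, -1), "right": (0, 1)}

def step(state, action, size=5):
    """The model: lets us think ahead without moving in the environment."""
    r, c = state
    dr, dc = ACTIONS[action]
    nxt = (r + dr, c + dc)
    if nxt in WALLS or not all(0 <= x < size for x in nxt):
        return state  # bumping into a wall or the border leaves us in place
    return nxt

def bfs_plan(start, goal):
    """Breadth-first search over action sequences. Without the `seen` set,
    the underlying search tree has 4**D nodes for a depth-D solution."""
    frontier, seen = deque([(start, [])]), {start}
    while frontier:
        state, plan = frontier.popleft()
        if state == goal:
            return plan
        for action in ACTIONS:
            nxt = step(state, action)
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, plan + [action]))
    return None

print(bfs_plan((0, 0), (4, 4)))  # an 8-move plan on this toy grid
```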
So algorithms have come along, for example the A-star algorithm, where you can incorporate something like a heuristic. The heuristic would be: well, it's generally good if I'm close to the goal in L2 distance. So you would not build the entire tree here, but you would prefer nodes that bring you closer according to this heuristic. This node down here is closer to the green node in terms of L2 distance, and this node down here is even closer, but then you're kind of stuck. So A-star will explore a bit, probably along this wall here, and once you're here you have a clear path again, right? Then you can simply take the actions that minimize this L2 distance. So this will already get you to a really good point. Monte Carlo tree search, as it was employed in AlphaGo or AlphaZero, has a similar structure, where after a certain while it stops and evaluates a heuristic to say what the value is at that point, and so on. So for some problems it is an even better method of constructing the search tree, in a way where you don't get overwhelmed by the numbers. The Monte Carlo tree search in this algorithm's name refers to the fact that we are generalizing Monte Carlo tree search from, let's say, the AlphaGo paper. So what's the idea here? So far we've known everything. The idea is the following: what if I had an oracle? If I am the master here, I can tell the agent: agent, look, I guarantee you that this state right here in the middle, you will pass through that state if you want to reach the goal. You will pass it for sure. If I tell this to the agent, what can the agent do? The agent could say: oh, okay, if I know that, I don't have to search for a way to the goal directly. I'd much rather search for a way from my start point to that point, where I know I'm guaranteed to be at some stage, and then also search for a way from there to the goal. So now, remember, our path was D steps long for the original problem. Each of these paths is now, let's say, D-half long, and that means we just construct two trees. Each one of them is going to be four to the D-half, and the other one is also four to the D-half, and if we add them up, that is much smaller than the original four-to-the-D tree that we'd build. So right there we have subdivided our problem into two sub-problems, and each of them is much easier than the original problem. This paper basically does this recursively: it will subdivide the problem by proposing some middle state where you are going to be at some point for sure (and that "for sure" we're going to take a look at), and then for each of those problems it will again try to subdivide them into sub-problems, and thereby recursively solve the sub-problems until they're small enough that they can basically be solved in one step. So that's the big idea here, and this is illustrated at this point right here. You are in s0, the start state, and you want to go to s-infinity, the goal state. What this paper does is propose to split the problem here in the middle and then simply solve the two problems recursively. Now, what is a bit confusing right here is that the planning itself already is a tree search, right? A plan is like a tree. But we are searching over plans, so we're searching for the best plan, which means that we are searching over trees, and that search itself is a tree search. The search itself is a problem where we go down one route, and then maybe we go down another route, and here, and then here. So the search is a tree, and we're now tree-searching over trees. That's the tricky thing to remember.
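A quick back-of-the-envelope, with numbers of my own choosing rather than from the paper, shows how much a single guaranteed waypoint buys you, and what perfect recursion would buy:

```python
d = 20                         # assumed solution length
flat = 4 ** d                  # one forward search tree: 1,099,511,627,776 nodes
one_split = 2 * 4 ** (d // 2)  # two half-depth trees: 2,097,152 nodes

# Recursing all the way down: if every proposed midpoint were correct, you
# would end up with d one-step problems, each solvable by trying 4 actions.
todo, cost = [d], 0
while todo:
    k = todo.pop()
    if k <= 1:
        cost += 4
    else:
        todo += [k // 2, k - k // 2]
print(flat, one_split, cost)  # 1099511627776 2097152 80
```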
So each of these plans, even if it's only half done, like this one here is only half a plan (we don't know yet what's going to happen in here), this half-plan is a node in the tree that we're searching over. And then it splits, as you can see here, into two sub-problems, and the two sub-problems are also nodes in that search tree. So you see that this top thing here would correspond to this node, even though in itself it is a plan, a tree in this case, and each of the two sub-problems would become these particular nodes in the search tree. So keep that in mind as we go through this paper; it is very easy to get confused in this respect. The algorithm itself is pretty simple. It rests on this traverse procedure, so we're going to traverse what they call these OR nodes. They divide the problem into AND nodes and OR nodes; I don't believe that's particularly necessary for us to think about, but here's how this works. They traverse the OR node from s to s-prime. This is, again, a node, but the node is a path from s to s-prime where we don't know yet what happens in between, right? So what we'll do is run this procedure here, select, and select, you can see, outputs an s-prime. That s-prime is going to be a node somewhere in the middle, where the model says: this is your subdivision point. Then it will recursively traverse the left and the right branch of this tree. So it will subdivide the problem into two problems and then recursively call this traverse function (you see, that's the function we're defining) on each of them. For the next step, again, for each of the sub-problems it will propose a middle node and subdivide further, and so on, until you have a full plan; at some point you're going to have a full plan. Now, here again is the important thing to remember: this is just one branch of the search, this is just one possible plan, and we are going to do a tree search over these plans. This select function here has returned this s-prime, but it could have returned any point between s and s-double-prime. So this is just one branch. I don't have space to draw here, but I'm going to draw it down here again: select could also have returned this particular node here, a different s-prime, and then subdivided the problem into different problems. Of course, those problems are now different, so they would be subdivided differently again, and so on. So this top part here, if you consider this thing your root node, this is where you search from, this top part is just one node, one branch in the tree. But we could also subdivide like this, and then that would be another branch in your tree, and this tree is the thing that you're searching over. So, important to keep in mind: we're searching over these different possibilities. The rest of this algorithm is basically the carryover from Monte Carlo tree search, and I don't want to go into that in this video. If you're interested in how to actually implement this, you'll have to go look at MCTS, and then all of it carries over from that algorithm, because you have to keep estimates of value and visit counts in this tree and so on, and you also have some sort of a value estimator. But I'm mainly concerned with how the tree is constructed right here.
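Here is a hedged sketch of just that recursive structure, on a toy 1-D line. In the real algorithm, select is the learned proposal queried inside an MCTS-style search with value estimates and visit counts; here it is a deterministic stand-in, and solvable_in_one_step stands in for the low-level goal-directed policy being able to finish the job directly.

```python
def solvable_in_one_step(s, s_goal):
    # Stand-in for "the low-level policy can reach s_goal from s directly".
    return abs(s_goal - s) <= 1

def select(s, s_goal):
    # Stand-in for the learned sub-goal proposal; here simply the midpoint.
    return (s + s_goal) // 2

def traverse(s, s_goal):
    """Build a plan from s to s_goal by recursive subdivision."""
    if solvable_in_one_step(s, s_goal):
        return [s, s_goal]
    s_mid = select(s, s_goal)         # propose the in-between state
    left = traverse(s, s_mid)         # solve the two halves recursively
    right = traverse(s_mid, s_goal)
    return left + right[1:]           # splice, dropping the duplicated s_mid

print(traverse(0, 13))  # [0, 1, 2, ..., 13], assembled middle-out
```

Every different value that select could have returned corresponds to a different branch of the outer search tree; this sketch only ever follows one such branch.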
So basically, here's the difference between Monte Carlo tree search and Divide-and-Conquer Monte Carlo tree search. In Monte Carlo tree search (ignore the yellow one for now), you're in the green position and you want to go to the blue position. What you're searching over is the next action to take; in this case you have four possible actions. That's what you're searching over and that's what you build your search tree from: your search tree is going to be which action to take, up, left, down or right. That's why you have four actions per node. In Divide-and-Conquer Monte Carlo tree search you're not searching over actions; you are searching over the best way to subdivide this problem. You're searching over which of all the black squares you should use to subdivide your problem into sub-problems, and that's what you build your search tree from. So naturally, you can already see what kind of possibilities we have here to subdivide this problem. I drew one white square, but any of the black squares is a candidate to subdivide this problem; any of the black squares could be a potential subdivision, and this is what we search over. In Monte Carlo tree search we search over the actions, which gives us this four-to-the-D tree, but in divide and conquer we're searching over all the ways to subdivide the problem, and as you can see, those are many, many more possibilities. From this first starting node we have like a hundred possibilities to subdivide this problem into two problems, right? And for each of those, again: once you've decided on a subdivision, let's say you decided on this one right here, you say, I want to pass through that point on my way to the goal, now you have to subdivide this sub-problem into two problems again, and every possible black square is again a candidate. I'm not saying which one is a good way to subdivide the problem; I'm just asking what a possible candidate is, and every single black square here is a possible candidate for a path from here to here. And again, for this particular sub-problem you have to do the same thing. So even though, as we said before, the original tree is very deep and this one is probably only about log-two-of-D deep, its width is going to be enormous, and that is the catch. This is not a method that is a magic pill. The catch is: even though your tree is not as deep, it is much, much wider. And that is intuitive, right? Because you still have to have the ability to construct any possible plan, your tree is going to have as many nodes as the original Monte Carlo tree search tree if you were to fully expand it. So you're trading depth for width here; I hope that's a bit clear. The entire promise of this method is going to be: out of all of these possibilities, you don't even want to search one layer deep through all of them, you don't even want to consider all of them. You want to limit your search in this tree to very particular ways of subdivision. If you can do that efficiently, if you can efficiently pick candidates to subdivide, then this could be a successful method, because your tree is now not as deep as the original search tree would have been, and you can effectively limit the width to only very few candidates. Here, for example, we could make a heuristic that only ever picks squares that lie roughly on the straight path to the goal. So everything rests on how you do this select.
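To put rough numbers on this depth-for-width trade (both the plan length and the number of candidate squares are assumed values, not taken from the paper):

```python
import math

d = 64               # assumed plan length
n_candidates = 400   # assumed number of black squares, i.e. possible sub-goals

depth_mcts, width_mcts = d, 4        # 64 levels, 4 children per node
depth_dc = math.ceil(math.log2(d))   # only 6 levels of splits...
width_dc = n_candidates              # ...but ~400 children per node

# Fully expanded, both trees cover the same space of plans. The whole bet
# is that a learned proposal lets you visit only a handful of those 400.
print(depth_mcts, width_mcts, depth_dc, width_dc)  # 64 4 6 400
```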
The entire algorithm relies on the fact that you can effectively select in-between states that you're pretty sure the agent will have to pass through, because the worse you make these predictions, the worse your tree search is going to work. And what they do, of course, is use deep learning to do that, as you might have guessed. So they will have a model that, for a particular start and goal, gives them a probability distribution across candidates. Now, everything that's black here also has probability mass, but it's just so small you can't see it, and for these blue ones, the lighter the blue, the more probable the model thinks it is that this is going to be an in-between state. The tree search can now effectively limit itself to only the ones with the highest probability. So we select the ones with the highest probability and will only search plans that have these as the first possible subdivisions. Again, we're searching over plans, so we're searching over ways to subdivide the problem into smaller problems; that is our search space. Once we've decided on one of them, let's say the yellow one here, we again have to evaluate that model for each of the sub-problems. This is a step that's kind of missing here: in between, there would be a model evaluation that would again tell you which of these in-between states are probable subdivision candidates, and then you would again select one of those in that particular search branch. And in a different search branch (remember, you're searching over these things), you would select a different one and see: is this possibly a better way to subdivide the problem? And so on. So the question, of course, is: how do you train this model? How do you train a model that gives you good candidates for subdivision? The answer here comes from the idea of hindsight experience replay. Let's say again you are here and you want to go here, and you're not very good at it initially. They train this model, as I understand it, along with letting their agent act in this environment. So the agent uses the model to plan, but initially it's not very good, so the agent will fail a lot of times. Instead of going to the blue square, it will reach this white square right here: it will go here, here and here, and reach the white square. Instead of saying I failed, what you can do, and this is the idea of hindsight experience replay, is to say: well, I did fail, but I did reach something. I have reached a thing, and it's entirely possible that this thing could have been the goal; it just wasn't the goal of this particular episode. Remember, the goal changes every time; it's a goal-directed policy. So it says: well, this could have been the goal, possibly. If I just pretend this was the goal, then I have a training example for a successful run. So hindsight experience replay basically pretends that what you have achieved, even if you failed, was your actual goal, and that gives you a positive training example for an episode with that as the goal. And it could have been the goal, because the goal is chosen at random. So this gives you a good training example. Now, this paper generalizes hindsight experience replay, or applies it to their particular framework, and they say: well, if I reached this thing, that means any point on this path is a good candidate for subdividing the path, because I did actually pass through those points. Remember, the goal is to propose a point where you are, for sure, going to pass through.
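In code, the relabeling trick might look like this minimal sketch; the episode format is an assumption of mine:

```python
def hindsight_relabel(states, goal):
    """Turn any episode into a successful one by pretending that the state
    we actually ended up in was the goal all along."""
    achieved = states[-1]
    return {"start": states[0], "goal": achieved, "path": states,
            "was_real_goal": achieved == goal}

episode = [(0, 0), (0, 1), (1, 1), (2, 1)]     # the agent aimed at (4, 4)
print(hindsight_relabel(episode, goal=(4, 4)))
# -> a positive training example for the goal (2, 1)
```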
Now, since I've taken this path to this goal, I have passed through every square in between, and so these are my possible sub-goal candidates, and all the other black squares are not. So now I have a classifier I can train: I can say that any of these squares on my path are good squares to subdivide at, and any not on my path are bad ones. They go a step further, I believe, and actually say: if this path was M steps long, we're going to take the particular square that is reached after M-half steps. So the exact middle point of that path is going to be our training example for subdivision, and you have a classifier with exactly one target to train on. This you train along with acting in the environment, and of course your model for proposing subdivisions is going to get better and better, and that makes your planning algorithm better, and that makes you collect better episodes, and so you can sort of bootstrap yourself up with this thing. Now, this is the basic experiment of the paper. They also do this in a 3D manner, where they move this little spider here around. The spider was trained to just move from one block to the next block, and the planner basically tells it where to go, and they show that they outperform traditional Monte Carlo tree search. Now, I have to say this is cool, but you have to remember that this is only advantageous in very, very specific types of problems. First of all, the problem has to have this goal-directed nature; otherwise you probably couldn't train this predictor super well. Then, given that you have such a good predictor, the problem needs to be such that if you have a start state, there could be many ways to go about reaching the end, and if you have an end state, there could be many ways you could have come from, but there is some bottleneck state in the middle that you're pretty sure you're going to have to pass through. So if your problem is of that nature, if it has these bottleneck states where you can predict with reasonable accuracy that you're going to have to pass through them, then this is a good algorithm to consider. And it intuitively outperforms the original Monte Carlo tree search, because you have a much shallower search tree and you can effectively limit its width by using that model. They have also made this website where they show videos of their spider. I haven't seen it in a while, but it is like next to the mouse, if you can see it. So you see, this is a continuous control problem that also requires planning, and they also have these kinds of GIFs showing what order their plans are constructed in. So I invite you to check this out, read the paper. If you like this, subscribe, leave a like, leave a comment, and thank you for listening. Bye bye.
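The midpoint labeling could then look like this, again as a sketch under the same assumed path format: from every (possibly relabeled) successful path, exactly one positive subdivision target is extracted and used to train the proposal model, which in turn improves the planner that collects the next episodes.

```python
def midpoint_label(path):
    """From a successful path of m steps, the state reached after m // 2
    steps becomes the single positive subdivision target for (start, goal)."""
    m = len(path) - 1
    return {"start": path[0], "goal": path[-1], "subgoal": path[m // 2]}

path = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 3), (2, 4)]
print(midpoint_label(path))
# {'start': (0, 0), 'goal': (2, 4), 'subgoal': (1, 2)}
```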
[ { "end": 5.64, "start": 0, "text": " Hi there! What you're seeing here is a Divide and Conquer Monte Carlo Tree" }, { "end": 12.64, "start": 5.64, "text": " Search in action. This is a planning algorithm that plans in a kind of an" }, { "end": 17.52, "start": 12.64, "text": " unconventional fashion. So we're going to explore this today in this paper." }, { "end": 22.76, "start": 17.52, "text": " Divide and Conquer Monte Carlo Tree Search for Goal-Directed Planning by" }, { "end": 30.720000000000002, "start": 22.76, "text": " Gian Battista Parascondolo and Lars Pyzing and a list of other authors." }, { "end": 39.92, "start": 30.720000000000002, "text": " I believe this is from DeepMind and Max Planck and Eta and... yeah that's it." }, { "end": 46.480000000000004, "start": 39.92, "text": " Alright, so what does this thing do? It is a planning algorithm and planning might" }, { "end": 53.64, "start": 46.48, "text": " be not really familiar for you. So let's say you are in this room and or this set" }, { "end": 59, "start": 53.64, "text": " of rooms right here. There's a bunch of walls and you are up here and you want" }, { "end": 64.4, "start": 59, "text": " to reach the goal down here. So first of all this is a goal-directed problem." }, { "end": 69.08, "start": 64.4, "text": " Right here it says goal-directed. Goal-directed means that you give the" }, { "end": 73.92, "start": 69.08, "text": " algorithm a goal to reach and this could be a different goal each time you're on" }, { "end": 79.96000000000001, "start": 73.92, "text": " the algorithm. So you give it a goal to reach. Then the second thing here we" }, { "end": 84.36, "start": 79.96000000000001, "text": " see is planning. So it is a planning algorithm. What does planning mean?" }, { "end": 88.72, "start": 84.36, "text": " Planning means if you're traditionally reinforcement learning you" }, { "end": 96.04, "start": 88.72, "text": " would think I'm just gonna go ahead and run my agent here and maybe you can" }, { "end": 101.24000000000001, "start": 96.04, "text": " move in the four different directions, run my agent here, do some things, right?" }, { "end": 107.91999999999999, "start": 101.24, "text": " Maybe I hit a wall, I get a negative reward, I try again. In planning you" }, { "end": 113.8, "start": 107.91999999999999, "text": " don't have to move initially. What you can do is think about moving and you can" }, { "end": 118.32, "start": 113.8, "text": " think ahead of what's going to happen and that's usually because you have some" }, { "end": 124.16, "start": 118.32, "text": " sort of model what happens. This is very famous applied in for example AlphaGo" }, { "end": 130.04, "start": 124.16, "text": " or AlphaZero where you know if I move to the right here I'm going to" }, { "end": 135.2, "start": 130.04, "text": " be here. So you will know once you've reached the goal. If I'm" }, { "end": 140.16, "start": 135.2, "text": " here I go down I reach the goal. So you can think all of this without" }, { "end": 144.48, "start": 140.16, "text": " actually moving in the environment. You can think yourself ahead what would" }, { "end": 150.79999999999998, "start": 144.48, "text": " happen if I did certain things and that in turn also means that you can think" }, { "end": 154.84, "start": 150.79999999999998, "text": " ahead of multiple different paths. 
So you can think ahead what would happen if I" }, { "end": 158.95999999999998, "start": 154.84, "text": " move right, what would happen if I move down and then you can think in the next" }, { "end": 165.08, "start": 158.96, "text": " layer if I move right what would happen if I moved right again? What would happen" }, { "end": 171.20000000000002, "start": 165.08, "text": " if I move down instead? So you can easily see the planning problem becomes a tree" }, { "end": 176.24, "start": 171.20000000000002, "text": " search problem. In this case we've done a breadth first search" }, { "end": 183.36, "start": 176.24, "text": " and eventually you'll see that this will get you to the goal. So this" }, { "end": 187.72, "start": 183.36, "text": " breadth first search or maybe you want to employ depth first search will" }, { "end": 192.32, "start": 187.72, "text": " ultimately get you to the goal. We can represent this as a search tree. So" }, { "end": 196.04, "start": 192.32, "text": " you're here in a particular state and you have a bunch of actions in this case" }, { "end": 204.34, "start": 196.04, "text": " four to move left up down or right and you can choose any of them and you will" }, { "end": 208.32, "start": 204.34, "text": " get into a new state and from each of those you could choose any again and you" }, { "end": 213.24, "start": 208.32, "text": " can think ahead all of this you can construct this entire tree and one of" }, { "end": 220.76000000000002, "start": 213.24, "text": " these branches will lead you to the goal. Promise! What is the problem? The problem" }, { "end": 228.60000000000002, "start": 220.76000000000002, "text": " is if the path to the goal here is let's say D steps long then this tree here is" }, { "end": 234.92000000000002, "start": 228.60000000000002, "text": " going to be D layers deep and in our case that means we'll have four to the D" }, { "end": 242.8, "start": 234.92000000000002, "text": " nodes in that tree before we even reach the goal and that is just a long long" }, { "end": 248.72, "start": 242.8, "text": " or a big tree and we can't construct all of it. So algorithms have come along for" }, { "end": 254, "start": 248.72, "text": " example the A star algorithm where you can incorporate something like a" }, { "end": 258.8, "start": 254, "text": " heuristic and the heuristic would be well it's generally good if I'm close to" }, { "end": 264.76, "start": 258.8, "text": " the goal in L2 distance so you would not build the entire tree here but you would" }, { "end": 271.28000000000003, "start": 264.76, "text": " prefer nodes that will bring you towards this heuristic so this node down here is" }, { "end": 275.96, "start": 271.28, "text": " closer to the green node in terms of L2 distance and this node down here is even" }, { "end": 281.76, "start": 275.96, "text": " closer but then you're kind of stuck so A star will explore a bit probably along" }, { "end": 288.84, "start": 281.76, "text": " this wall here and once you're here you have a clear path again right so you can" }, { "end": 292.52, "start": 288.84, "text": " simply take the actions that minimize this L2 distance so this will already" }, { "end": 300.28, "start": 292.52, "text": " get you to a real good point. 
Monte Carlo tree search as it was employed in AlphaGo" }, { "end": 306.91999999999996, "start": 300.28, "text": " or AlphaStar has a similar structure where it after a certain while it stops" }, { "end": 312.32, "start": 306.91999999999996, "text": " and evaluates a heuristic to say what's the the value here and so on so it is in" }, { "end": 318.2, "start": 312.32, "text": " for some problems an even better method of constructing the search tree in a way" }, { "end": 324.47999999999996, "start": 318.2, "text": " where you don't get overblown by the number so the Monte Carlo search tree in" }, { "end": 330.03999999999996, "start": 324.47999999999996, "text": " this algorithm refers to the fact that we are generalizing Monte Carlo tree" }, { "end": 337.28000000000003, "start": 330.04, "text": " search from the let's say the AlphaGo paper so what's the idea here so far" }, { "end": 343.16, "start": 337.28000000000003, "text": " we've known everything the idea is the following if I had an Oracle if I am the" }, { "end": 352.16, "start": 343.16, "text": " master here and I can tell the agent agent look I guarantee you that this" }, { "end": 357.64000000000004, "start": 352.16, "text": " state right here in the middle you will pass that state if you want to reach the" }, { "end": 365.2, "start": 357.64, "text": " goal you will pass this for sure if I tell this to the agent now what can the" }, { "end": 369.8, "start": 365.2, "text": " agent do the edge could say oh okay if I know that I can simply I don't have to" }, { "end": 374.44, "start": 369.8, "text": " search for a way to the goal I'd much rather search for a way from my start" }, { "end": 380.52, "start": 374.44, "text": " point to that point where I know that I'm guaranteed to be at some place and" }, { "end": 388.96, "start": 380.52, "text": " then I can search also from a way from there to the goal right so this now" }, { "end": 394.47999999999996, "start": 388.96, "text": " remember our long out our path was d steps long for the original problem this" }, { "end": 399.91999999999996, "start": 394.47999999999996, "text": " is now let's say d half long each of these paths and that means we just" }, { "end": 405.64, "start": 399.91999999999996, "text": " construct two trees each one of them is going to be 4 to the d half and the" }, { "end": 410.24, "start": 405.64, "text": " other one is also 4 to the d half and if we add them that is much smaller than" }, { "end": 417.40000000000003, "start": 410.24, "text": " the original 4 to the d tree that we build so right there we have subdivided" }, { "end": 422.36, "start": 417.40000000000003, "text": " our problem into two sub problems and each of them are much easier than the" }, { "end": 428.08, "start": 422.36, "text": " original problem this paper basically does this recursively so it will" }, { "end": 433.36, "start": 428.08, "text": " subdivide the problem by proposing some middle state where you are going to be" }, { "end": 438.08, "start": 433.36, "text": " at some point for sure and that for sure we're going to take a look at and then" }, { "end": 443.03999999999996, "start": 438.08, "text": " for each of those problems again it will try to subdivided into sub problems and" }, { "end": 447.52, "start": 443.03999999999996, "text": " therefore recursively solve the sub problems until they're small enough that" }, { "end": 455.4, "start": 447.52, "text": " they can be basically solved in one step so that's the big idea here and this is" }, { "end": 461.28, "start": 455.4, "text": " 
illustrated in this point right here so you are in this s0 the start state and" }, { "end": 466.79999999999995, "start": 461.28, "text": " you want to go to this s infinity the goal state in your case what this paper" }, { "end": 471.6, "start": 466.8, "text": " does is it proposes to split the problem here in the middle and then simply solve" }, { "end": 479.16, "start": 471.6, "text": " the two problems recursively now what is a bit confusing right here is that it is" }, { "end": 486.8, "start": 479.16, "text": " the planning already is a tree search right so a plan is like a tree but we" }, { "end": 491.44, "start": 486.8, "text": " are searching over plan so we're searching for the best plan which means" }, { "end": 498.28, "start": 491.44, "text": " that we are searching over trees and that search itself is a tree search so" }, { "end": 502.71999999999997, "start": 498.28, "text": " the search itself is a problem where we go down one route and then on oh and then" }, { "end": 510.44, "start": 502.71999999999997, "text": " we maybe go down another route and here and then here so the search is a tree so" }, { "end": 515.8, "start": 510.44, "text": " we're now tree searching over trees that's the kind of tricky thing to" }, { "end": 521.8, "start": 515.8, "text": " remember so each of these plans even if it's half if it's only half done like" }, { "end": 526.28, "start": 521.8, "text": " this is only half a plan we don't know what's gonna happen in here this half a" }, { "end": 534.16, "start": 526.28, "text": " plan is a node in the tree that we're searching over and then it splits it it" }, { "end": 539.0799999999999, "start": 534.16, "text": " splits as you can see here into two sub problems the two sub problems also are" }, { "end": 544.4799999999999, "start": 539.0799999999999, "text": " nodes in that search tree so you see that this top thing here would correspond" }, { "end": 550.04, "start": 544.48, "text": " to this note even though in itself it is a plan a tree string in this case and" }, { "end": 555.76, "start": 550.04, "text": " each of the two sub problems would become these particular these nodes in" }, { "end": 563.44, "start": 555.76, "text": " the search tree so keep that in mind as we go through this paper it is very easy" }, { "end": 571.9200000000001, "start": 563.44, "text": " to get confused in this respect the algorithm is pretty simple in this case" }, { "end": 580.92, "start": 571.92, "text": " so this algorithm rests on this traverse procedure so we're going to we're going" }, { "end": 586.4399999999999, "start": 580.92, "text": " to traverse this what they call these or nodes so they they divide the problem" }, { "end": 591.28, "start": 586.4399999999999, "text": " into and nodes and or nodes I I don't believe that's particularly necessary" }, { "end": 597.24, "start": 591.28, "text": " for us to think about but here's how this works so they traverse the or node" }, { "end": 605.6800000000001, "start": 597.24, "text": " s to s prime this is simply again this is a node but the node is a path from s" }, { "end": 613.12, "start": 605.6800000000001, "text": " to s prime where we don't know yet what happens in between right so what we'll" }, { "end": 620.08, "start": 613.12, "text": " do is we'll run this procedure here select and select it you can see it" }, { "end": 624.88, "start": 620.08, "text": " outputs an s prime and the s prime is going to be a node here somewhere in the" }, { "end": 631.36, "start": 624.88, "text": " middle where the model says this is 
your subdivision point and then it will" }, { "end": 637.52, "start": 631.36, "text": " recursively traverse the left and the right branch of this tree so it will" }, { "end": 643.48, "start": 637.52, "text": " subdivide the problem into two problems and then recursively call this traverse" }, { "end": 648.72, "start": 643.48, "text": " you see that's the function that we're defining call this traverse function on" }, { "end": 654.16, "start": 648.72, "text": " these so it will subdivide this problem into these problems and it would for the" }, { "end": 660.6, "start": 654.16, "text": " next for the next step again it will eat for each of the problems propose a" }, { "end": 667.28, "start": 660.6, "text": " middle node and subdivide it further and so on until you have a full plan right" }, { "end": 672.24, "start": 667.28, "text": " at some point you're going to have a full plan now here again is the" }, { "end": 678.4399999999999, "start": 672.24, "text": " important thing to remember this is just one branch of the search this is just" }, { "end": 686.96, "start": 678.44, "text": " one possible plan and we are going to do a tree search over these plans so this" }, { "end": 692.2, "start": 686.96, "text": " select function here it has returned this as prime but it could have returned" }, { "end": 698.6800000000001, "start": 692.2, "text": " any point between s and s double prime so let in it this is just one branch I'm" }, { "end": 704.48, "start": 698.6800000000001, "text": " going to I don't have space to draw here but I'm going to draw it down here again" }, { "end": 712.32, "start": 704.48, "text": " so it could have also returned this particular node here like it's a" }, { "end": 717.48, "start": 712.32, "text": " different s prime and then subdivided the problem into different problems and" }, { "end": 722.12, "start": 717.48, "text": " then of course those problems are now different so they would be subdivided" }, { "end": 729.04, "start": 722.12, "text": " differently again and so on so this top part here is just if you consider this" }, { "end": 733.9200000000001, "start": 729.04, "text": " thing here your root node this is where you search from this top part is just" }, { "end": 740.28, "start": 733.92, "text": " one node one branch in the tree but we could also subdivide like this and then" }, { "end": 745.7199999999999, "start": 740.28, "text": " that would be another branch in your tree and this tree here is the thing" }, { "end": 751.88, "start": 745.7199999999999, "text": " that you're searching over so important to keep keep this in mind we're searching" }, { "end": 757.4799999999999, "start": 751.88, "text": " over these different possibilities now the rest of this algorithm here is" }, { "end": 763.48, "start": 757.4799999999999, "text": " basically the carryover from Monte Carlo tree search and I don't want to go into" }, { "end": 767.6, "start": 763.48, "text": " that in this video but if you're interested in you know how to actually" }, { "end": 773.36, "start": 767.6, "text": " implement this you'll have to go look at MCTS and then all of this just carries" }, { "end": 777.6, "start": 773.36, "text": " over from that algorithm because you have to keep estimates of value and" }, { "end": 781.6800000000001, "start": 777.6, "text": " visit counts in this tree and so on and also you have some sort of a value" }, { "end": 787.84, "start": 781.6800000000001, "text": " estimator but yeah I'm mainly concerned with how the tree is constructed right" }, { "end": 795.72, 
"start": 787.84, "text": " here so basically here's the here's the difference between a between the Monte" }, { "end": 801.64, "start": 795.72, "text": " Carlo tree search and the divide and conquer Monte Carlo tree search in" }, { "end": 806.2800000000001, "start": 801.64, "text": " Monte Carlo tree search ignore the yellow one for now you're in the green" }, { "end": 811.84, "start": 806.2800000000001, "text": " position and you want to go to the blue position in Monte Carlo tree search what" }, { "end": 817.64, "start": 811.84, "text": " you're searching over is the next action to take in this case you have four" }, { "end": 821.96, "start": 817.64, "text": " possible actions to take that's what you're searching over and that's what" }, { "end": 827.48, "start": 821.96, "text": " you build your search tree from your search tree is going to be which action" }, { "end": 833.4399999999999, "start": 827.48, "text": " to take right up left down or right that's why you have four actions in" }, { "end": 838.08, "start": 833.4399999999999, "text": " month in divide and conquer Monte Carlo tree search you're not searching over" }, { "end": 843.48, "start": 838.08, "text": " actions you are searching over the best way to subdivide this problem right" }, { "end": 848.16, "start": 843.48, "text": " you're searching over which of these all the black squares should I use to" }, { "end": 853.12, "start": 848.16, "text": " subdivide my problem into sub problems and that's what you build your search" }, { "end": 859.04, "start": 853.12, "text": " tree from so naturally you you can already see what kind of possibilities" }, { "end": 864.08, "start": 859.04, "text": " do we have here to subdivide this problem I drew one white square but any" }, { "end": 869.44, "start": 864.08, "text": " of the black squares are candidates to subdivide this problem right any of the" }, { "end": 874.96, "start": 869.44, "text": " black squares could be potential subdivisions and this is what we search" }, { "end": 882.44, "start": 874.96, "text": " over so in in Monte Carlo tree search we search over the actions which gives us" }, { "end": 890.6800000000001, "start": 882.44, "text": " this four to the D tree but in divide and conquer we're searching over all the" }, { "end": 895.2800000000001, "start": 890.6800000000001, "text": " ways to subdivide the problem as you can see there that are many many more" }, { "end": 901.48, "start": 895.28, "text": " possibilities so from this first starting node we have like like a hundred" }, { "end": 907.72, "start": 901.48, "text": " possibilities to subdivide this problem into two problems right and each of" }, { "end": 914.4, "start": 907.72, "text": " those again if you now you've decided on a subdivision let's say you decided on" }, { "end": 919.12, "start": 914.4, "text": " this one right here you say I want to pass through that point on my way to the" }, { "end": 926.8, "start": 919.12, "text": " goal now you have to subdivide that in this sub problem into two problems again" }, { "end": 932.22, "start": 926.8, "text": " every possible black square I'm not saying which one is good good thing to" }, { "end": 937, "start": 932.22, "text": " subdivide the problem I'm just asking what is a possible candidate every" }, { "end": 942.8, "start": 937, "text": " single black square here is a possible candidate for for a path from here to" }, { "end": 947.76, "start": 942.8, "text": " here right and again for this particular sub problem you have to do the same" }, { "end": 
956.4399999999999, "start": 947.76, "text": " thing so the the search tree here even though we said before it is this one is" }, { "end": 965.88, "start": 956.4399999999999, "text": " very deep and this one is probably only log D sort of log 2d deep it width is" }, { "end": 971.6, "start": 965.88, "text": " going to be enormous and that is the catch right the catch this is not a" }, { "end": 978.44, "start": 971.6, "text": " method that is like a magic pill the catch is even though your tree is not as" }, { "end": 984.44, "start": 978.44, "text": " deep it is much much wider and it is intuitive right because you still have" }, { "end": 989.48, "start": 984.44, "text": " to have the ability to construct any possible plan so your tree is going to" }, { "end": 994.64, "start": 989.48, "text": " have as many nodes as the original Monte Carlo tree search tree you're if you" }, { "end": 1000.48, "start": 994.64, "text": " were to fully expand it right so it's your trading of depth for width here" }, { "end": 1011.24, "start": 1000.48, "text": " I hope I hope that's a bit clear so your entire your entire promise of this" }, { "end": 1016.2, "start": 1011.24, "text": " method is going to be can you from all of these possibilities so from all of" }, { "end": 1020.6800000000001, "start": 1016.2, "text": " these you don't even you don't even want to go and search even one layer deep" }, { "end": 1025.28, "start": 1020.6800000000001, "text": " through all of these don't even want to consider all of them right you want to" }, { "end": 1033.2, "start": 1025.28, "text": " search in this tree you want to limit your search to very particular ways of" }, { "end": 1039.36, "start": 1033.2, "text": " subdivision here if you can do that efficiently if you can pick efficiently" }, { "end": 1045.48, "start": 1039.36, "text": " candidates to subdivide then this could be a successful thing because your deep" }, { "end": 1050.72, "start": 1045.48, "text": " is now not as your tree is not as deep as the original search tree would have" }, { "end": 1055.76, "start": 1050.72, "text": " been and you can limit the width effectively to only very few candidates" }, { "end": 1061.4, "start": 1055.76, "text": " so here we could for example make a heuristic that will always only pick" }, { "end": 1069.76, "start": 1061.4, "text": " squares that are kind of on this straight path to the goal so everything" }, { "end": 1077, "start": 1069.76, "text": " rests on how you do this select action this thing here the entire algorithm" }, { "end": 1083.32, "start": 1077, "text": " relies on the fact that you can select effectively select in between states" }, { "end": 1087.72, "start": 1083.32, "text": " where you're pretty sure that the algorithm will have to pass through" }, { "end": 1093.04, "start": 1087.72, "text": " there because the worse you make these predictions the worse your tree search" }, { "end": 1101.08, "start": 1093.04, "text": " is going to work and what they do of course is they use deep learning as you" }, { "end": 1105.72, "start": 1101.08, "text": " might have guessed to do that so they have they will have a model that for a" }, { "end": 1111, "start": 1105.72, "text": " particular start and end goal will give them a probability distribution across" }, { "end": 1115.16, "start": 1111, "text": " candidates now everything that's black here also has probability mass but it's" }, { "end": 1120.96, "start": 1115.16, "text": " just so small you can't see and these blue ones are that the lighter blue the" }, { "end": 
1125.84, "start": 1120.96, "text": " more probable this model thinks that this is going to be an in between state" }, { "end": 1132.92, "start": 1125.84, "text": " now the tree search can now limit itself effectively to only the ones here with" }, { "end": 1136.16, "start": 1132.92, "text": " the highest probability right so we select the ones with the highest" }, { "end": 1143.6000000000001, "start": 1136.16, "text": " probability and will only search plans that have these as the first possible" }, { "end": 1150.16, "start": 1143.6000000000001, "text": " subdivisions again we're searching over plans so we're searching over ways to" }, { "end": 1154.88, "start": 1150.16, "text": " subdivide the problem into smaller problems that is our search space so" }, { "end": 1159.52, "start": 1154.88, "text": " once we've decided on one of them let's say here the yellow one again we have to" }, { "end": 1163.52, "start": 1159.52, "text": " evaluate that model for each of the sub problems and this this is kind of a step" }, { "end": 1167.96, "start": 1163.52, "text": " that's missing here so in between here there would be a model evaluation that" }, { "end": 1173.6, "start": 1167.96, "text": " would again tell you which of these in between states were probable subdivision" }, { "end": 1178.44, "start": 1173.6, "text": " candidates and then you would select again one of those in that particular" }, { "end": 1182.6, "start": 1178.44, "text": " search branch and in a different search branch right you're searching over these" }, { "end": 1185.92, "start": 1182.6, "text": " things in a different search branch you would select a different one and see is" }, { "end": 1193.2, "start": 1185.92, "text": " this possibly a better way to subdivide the problem and so on so the question of" }, { "end": 1196.44, "start": 1193.2, "text": " course is how do you train this model how do you train a model that gives you" }, { "end": 1203.64, "start": 1196.44, "text": " good candidates for subdivision and the answer here is a comes from the idea of" }, { "end": 1209.28, "start": 1203.64, "text": " hindsight experience replay so let's say again you are here and you want to go" }, { "end": 1216.32, "start": 1209.28, "text": " here and you're not very good at the at it initially so they train this model as" }, { "end": 1221.24, "start": 1216.32, "text": " I understand along with letting their agent act in this environment so the" }, { "end": 1225.04, "start": 1221.24, "text": " agent uses the model to plan but initially it's not very good so maybe" }, { "end": 1230.52, "start": 1225.04, "text": " the agent will fail a lot of times so instead of going to the blue square it" }, { "end": 1236.84, "start": 1230.52, "text": " will reach this white square right here it will go here here and here will reach" }, { "end": 1241.3999999999999, "start": 1236.84, "text": " the white square instead of saying I failed what you can do and this is the" }, { "end": 1246.9199999999998, "start": 1241.3999999999999, "text": " idea of hindsight experience replay is to say well I did fail but I did reach" }, { "end": 1253.56, "start": 1246.9199999999998, "text": " something right I I have reached a thing and and it's actually possible that that" }, { "end": 1258.08, "start": 1253.56, "text": " thing could have been the goal but this particular episode this was the goal" }, { "end": 1263.1999999999998, "start": 1258.08, "text": " remember the goal changes every time it's a goal-directed policy so it says" }, { "end": 1268.32, "start": 1263.2, 
"text": " well this could have been the goal possibly so if I just pretend this was" }, { "end": 1274.64, "start": 1268.32, "text": " the goal then I have a training example for a successful run so the hindsight" }, { "end": 1279.52, "start": 1274.64, "text": " experience replay basically pretends that what you have achieved even if you" }, { "end": 1284.1200000000001, "start": 1279.52, "text": " failed was your actual goal and that gives you a positive training example" }, { "end": 1288.88, "start": 1284.1200000000001, "text": " for an episode with that as a goal and the this it could have been the goal" }, { "end": 1295.68, "start": 1288.88, "text": " because the goal is chosen at random so this gives you a good training example" }, { "end": 1300.5200000000002, "start": 1295.68, "text": " now this paper just generalizes the hindsight experience replay or applies" }, { "end": 1305.16, "start": 1300.5200000000002, "text": " it to their particular framework and they say well if I reach this thing that" }, { "end": 1312.5600000000002, "start": 1305.16, "text": " means any point on this path is a good candidate for subdividing the path" }, { "end": 1318.6799999999998, "start": 1312.56, "text": " because I did actually reach the point remember the goal is to propose a a" }, { "end": 1324.56, "start": 1318.6799999999998, "text": " point where your for sure are going to pass through now since I've taken this" }, { "end": 1329.72, "start": 1324.56, "text": " path to this goal I have passed through any of the squares in between and so" }, { "end": 1334.6, "start": 1329.72, "text": " these are my possible sub candidates and all other black squares I don't want" }, { "end": 1338.54, "start": 1334.6, "text": " that so now I have a classifier I can train I can say any of these squares on" }, { "end": 1343.92, "start": 1338.54, "text": " my path are good squares to subdivide and any not on my path are bad ones they" }, { "end": 1348.6, "start": 1343.92, "text": " go a step further I believe and they actually say we're so if this was m" }, { "end": 1354.56, "start": 1348.6, "text": " steps we're actually going to take the particular square that is reached after" }, { "end": 1361.8799999999999, "start": 1354.56, "text": " m half steps so the exact middle point of that path is going to be our training" }, { "end": 1369.0800000000002, "start": 1361.88, "text": " example for subdivision so you have a classifier that has exactly one one" }, { "end": 1376.2800000000002, "start": 1369.0800000000002, "text": " target to train so this you train along with acting in the environment and of" }, { "end": 1380.4, "start": 1376.2800000000002, "text": " course your model for proposing subdivisions is going to be better and" }, { "end": 1383.8000000000002, "start": 1380.4, "text": " better and better and better and that makes your planning algorithm better and" }, { "end": 1390.2800000000002, "start": 1383.8000000000002, "text": " better and that makes you collect better episodes and so you can sort of sort of" }, { "end": 1399.04, "start": 1390.28, "text": " get bootstrap yourself up this thing now this is the basic experiment of the" }, { "end": 1404.92, "start": 1399.04, "text": " paper they also do this in a 3d manner where they move this little spider here" }, { "end": 1408.68, "start": 1404.92, "text": " around so the spider was trained to just move from one block to the next block and" }, { "end": 1414.48, "start": 1408.68, "text": " the planner basically tell it where to go and they show that they outperform" }, 
{ "end": 1420.84, "start": 1414.48, "text": " the traditional Monte Carlo tree search now I have to say this is cool but you" }, { "end": 1427.84, "start": 1420.84, "text": " have to remember this is this is only advantageous in very very specific types" }, { "end": 1432, "start": 1427.84, "text": " of problems so first of all it has to be this goal-directed nature otherwise you" }, { "end": 1439.2, "start": 1432, "text": " probably couldn't train this this predictor here super well then given" }, { "end": 1446.44, "start": 1439.2, "text": " that you have such a good predictor the problem needs to be such that if you" }, { "end": 1452.16, "start": 1446.44, "text": " have a start state there could be many ways to go about reaching the end and if" }, { "end": 1455.44, "start": 1452.16, "text": " you have an end state there could be many ways from where you could come from" }, { "end": 1462.04, "start": 1455.44, "text": " but but there is like some bottleneck state in the middle where you're pretty" }, { "end": 1467.3600000000001, "start": 1462.04, "text": " sure that you're going to have to pass through it so if your problem is of that" }, { "end": 1473.36, "start": 1467.36, "text": " nature right if it has these bottleneck states where you can predict with" }, { "end": 1478.1999999999998, "start": 1473.36, "text": " reasonable accuracy that you're going to have to pass through then this is a good" }, { "end": 1485.08, "start": 1478.1999999999998, "text": " algorithm to consider and is obviously I mean it's intuitively outperforming the" }, { "end": 1492.28, "start": 1485.08, "text": " original Monte Carlo tree search because you have much less deep search tree and" }, { "end": 1497.8799999999999, "start": 1492.28, "text": " you can effectively limit its width by using that model they also have made" }, { "end": 1504.8799999999999, "start": 1497.8799999999999, "text": " this website where they kind of show videos of their spider and I haven't seen" }, { "end": 1511.8799999999999, "start": 1504.8799999999999, "text": " it in a while but it is it is like next to the mouse if you can see it so so you" }, { "end": 1516.6, "start": 1511.8799999999999, "text": " see this is kind of a continuous control problem that also requires planning and" }, { "end": 1520.6399999999999, "start": 1516.6, "text": " they also have these kind of gifts of how they're there what order their plans" }, { "end": 1525.6000000000001, "start": 1520.64, "text": " are constructed in so I invite you to check this out read the paper if you" }, { "end": 1531.64, "start": 1525.6000000000001, "text": " like this subscribe leave a like leave a comment and thank you for listening bye" }, { "end": 1550.96, "start": 1531.64, "text": " bye" } ]
eCH0M4wzKJs
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
WHO ARE YOU? 10k Subscribers Special (w/ Channel Analytics)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "special" ]
An in-depth look at this channel's analytics. Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Hi there! We have just crossed 10,000 subscribers on this channel and that is an absolutely mind-blowing number. To everyone who's subscribed, thank you! And today, for a bit of a special occasion, I thought we would look at you. Yes, you, handsome! One of the 10,000 subscribers of this channel. And we're going to dive into the YouTube analytics and see who you are, what you like, and how you behave. So if you've never done YouTube as a creator, this is what it looks like. You can see right here, there are 10,071 subscribers right now. And if you look at the videos, Attention Is All You Need is the most popular video on this channel. It is also one of the oldest videos on this channel. It's probably in many university curricula now, with like half a lecture allocated to it, and that is not enough to understand this paper, so people come to this channel. I was still using Adobe Reader at the time, etching in sort of... there's only one color, it's super laggy, but for some reason people like it, and I'm not going to debate with people about what they like and what they don't. I might do another one on Attention Is All You Need, just because I understand Transformers much better nowadays and I think I could do a better job at explaining them more clearly. A lot of the NLP models tend to do very well as videos, because I think people are interested, practitioners are interested, and they just want to learn about these models and what they're doing. Also, there's the Siraj controversy, very popular. It was a fun event and a sad event. This video right here, Deconstructing Lottery Tickets, just outperforms all the other videos; it absolutely, mind-blowingly rockets over everything. But if you look at the retention, I only retain people for two minutes on average, which is the lowest retention of any of my videos. And I could not understand this for the longest time. And then it occurred to me: if you title a video Deconstructing Lottery Tickets and you put a bunch of math on the thumbnail, people are going to click and then be very, very, very disappointed when you don't tell them how to win the lottery. And it takes them about one to two minutes to notice, like, oh crap, I can't make any money using this video. So if you go to the general analytics tabs, it's very interesting to see what people like. You can see that in the last 28 days I've uploaded every single day, except here. This is a bit of a gap: I accidentally deleted the backprop-in-the-brain video and had to re-upload it the next day. If you look at the last year, the views have gone up substantially. The subscribers have gone up too, but there are subscriber spikes right here, and these spikes usually come when some large personality recommends this channel. The channel gains a bunch of subscribers, but that doesn't necessarily translate into more views; it's just people who click on subscribe and then never care anymore. The metric that's most interesting to me personally is watch time. And as you can see here, watch time has gone up substantially in the last month, which I find encouraging. One minute of watch time means that I get to transmit one minute of information to the viewer, and that's what really matters to me. Of course, if I'm doing a worse job at explaining, then the viewer has to watch for longer, but that usually doesn't really work out, because they're just going to click away. All right, on to the fun part: the audience. Who are you? Now, as you can see here, 69% of you are not subscribed.
What are you doing not being subscribed? Though this has changed significantly in the last month; I believe now it's about half and half. What I find most interesting, though, is that 10% of you have the bell notification on. So 10% of you actively want to be disturbed whenever I upload a video. I am incredibly flattered by this; this is the biggest compliment. Not even I do this for the channels that I follow. Demographics, also very fun. About 93% of you tend to be male and about 6% of you tend to be female, at least according to YouTube's statistics. And that's a pretty good reflection of YouTube being mostly male and the machine learning field also being mostly male. If I'm doing anything to attract any particular type of person, please let me know so I can diversify a bit. Everyone's welcome here. You tend to be above 18, which is good, because we have very, very much adult content on this channel. I'm happy to see that none of you is underage; I think that's just YouTube not reporting underage statistics. Most of you tend to be 18 to 45 years old, though the older viewers are of course also very welcome. Though I'm pretty sure that some of these are just because you were underage when you created your account and told them you were born in 1923. Most of you come from the United States or India or Germany. This is a very incomplete list; I think most people simply fall into the 'other' category of countries, which I think means YouTube just doesn't know. But it is cool to see that India is so high up, which brings me to one of the reasons I started this channel. The main reason is that it forces me to read papers more thoroughly if I have to explain them. But another main reason was that I thought there was a gap between what you could get from a beginner's course, any sort of Coursera course, and where research is right now. And to bridge that gap, you basically have to go to a very good university, and I know that most of the world doesn't have access to these very good universities. So my idea was to bridge that gap: to help a person with a basic understanding of machine learning get up to speed with current research, to be able to read current research papers. And the fact that quite a number of people are watching from countries where top universities maybe aren't located is very encouraging to me. Thanks for watching from all around the world. One person is watching this with Russian subtitles. Okay, we can go into the advanced statistics here, and that is pretty interesting as well. You can see that most videos spike when they come out, especially the news videos; they tend to be very popular just the moment that I release them, and then they sort of fall off. Traffic source I find particularly interesting. If you look at the last 90 days, there are these spikes, and these spikes tend to come mainly from Google searches. So at some point, people simply Google search for stuff and then they find this channel, which is encouraging. Most people actually search for 'attention is all you need' or things like this. YouTube doesn't show you anymore what people searched on Google, but YouTube shows you what people searched for on YouTube, and that tends to be mostly 'attention is all you need', 'you zero', my name. Hello. So if you correlate these spikes in searches with geography, some of these spikes tend to be worldwide, like this one here or this one here. But this particular spike is only the United States.
And if you look at the videos that I released during that time period, it's one of these videos right here. So either it is the Schmidhuber drama, with a lot of people searching for that, or maybe it's ImageNet V2 or the online conferences, I don't know. You can see right here that I didn't gain many subscribers off of these spikes; it's simply people being interested in content, which is pretty cool. It is also interesting to see that these spikes right here correspond mainly to mobile phone users. So mobile phone users go on Google searching for content on this channel. I have no idea what's going on. All right. Now, the last question to address is, of course, monetization. How much money does this channel make? And the answer is: none so far. There are multiple reasons why I haven't applied for monetization yet. I find YouTube ads just incredibly annoying, especially now that they've decided to stick two ads in front of videos. I just don't want to bug users with that, and if you look at what I would gain from it, it's not that much. Any money that I would make, I would like to reinvest into the channel, and right now I just don't have any requirements. That might change in the future, maybe once we get to 800,000 subscribers. All right, that was it for the YouTube analytics of this channel. I hope you are still enjoying this content. If you're not subscribed, please do. The next update will be at a hundred thousand. And I hope that everything is as enjoyable as ever. Thank you for watching, thank you for subscribing, thanks for being here, and to the future.
[ { "end": 8.76, "start": 0, "text": " Hi there! We have just crossed 10,000 subscribers on this channel and that is an absolutely mind-blowing number." }, { "end": 15.84, "start": 8.76, "text": " To everyone who's subscribed, thank you! And today, for a bit of a special occasion, I thought we would look at you." }, { "end": 21.080000000000002, "start": 15.84, "text": " Yes, you handsome! One of the 10,000 subscribers of this channel." }, { "end": 27.88, "start": 21.080000000000002, "text": " And we're going to dive into the YouTube analytics and see who you are and what you like and how you behave." }, { "end": 31.56, "start": 27.88, "text": " So if you've never done YouTube as a creator, this is what it looks like." }, { "end": 37.76, "start": 31.56, "text": " You can see right here, there are 10,071 subscribers right now." }, { "end": 43.44, "start": 37.76, "text": " And if you look at the videos, Attention Is All You Need is the most popular video on this channel." }, { "end": 46.4, "start": 43.44, "text": " It is also one of the oldest videos on this channel." }, { "end": 52.28, "start": 46.4, "text": " Probably it's in many university curricula now and there's like half a lecture allocated to it." }, { "end": 57.88, "start": 52.28, "text": " And that is not enough to understand this paper. So people come to this channel." }, { "end": 64.52, "start": 57.88, "text": " I was still using kind of Adobe Reader at the time and etching in sort of..." }, { "end": 72.84, "start": 64.52, "text": " There's only one color, it's super laggy, but for some reason people like it and I'm not going to debate with people about what they like and what they don't." }, { "end": 82.04, "start": 72.84, "text": " I might do another one on Attention Is All You Need just because I understand Transformers much better nowadays and I think I could do a better job at explaining them more clearly." }, { "end": 93.36000000000001, "start": 82.04, "text": " So a lot of the NLP models tend to do very well as videos because I think people are interested and practitioners are interested and they just want to learn about these models and what they're doing." }, { "end": 99.60000000000001, "start": 93.36000000000001, "text": " Also, there's Siraj controversy, very popular. It was a fun event and a sad event." }, { "end": 103.04, "start": 99.60000000000001, "text": " This video right here, Deconstructing Lottery Tickets." }, { "end": 110.4, "start": 103.04, "text": " It just outperforms all the other videos, absolutely mind blowingly rockets over everything." }, { "end": 119.2, "start": 110.4, "text": " But if you look at the retention, I only retain people for two minutes on average, which is the retention is the lowest of any of my videos." }, { "end": 132.72, "start": 119.2, "text": " And I could not understand this for the longest time. And then it occurred to me, if you title a video Deconstructing Lottery Tickets and you put a bunch of math on the thumbnail." }, { "end": 140.96, "start": 132.72, "text": " People are going to click and then be very, very, very disappointed when you don't tell them how to win the lottery." }, { "end": 147.56, "start": 140.96, "text": " And that takes them about one to two minutes to notice like, oh crap, I can't make any money using this video." }, { "end": 154.32, "start": 147.56, "text": " So if you go to the general analytics tabs, it's very interesting to see what people like." 
}, { "end": 160.07999999999998, "start": 154.32, "text": " You see the last 28 days I've uploaded every single day, except here." }, { "end": 168.08, "start": 160.08, "text": " This is a bit of a gap. I have accidentally deleted the back prop in the brain video and had to re-upload it the next day." }, { "end": 173.92000000000002, "start": 168.08, "text": " So if you look at the last year, the views have gone up substantially." }, { "end": 179.20000000000002, "start": 173.92000000000002, "text": " The subscribers have gone up, but there are subscriber spikes right here." }, { "end": 186.20000000000002, "start": 179.20000000000002, "text": " And these spikes are usually sometimes when some large personality recommends this channel." }, { "end": 191.04, "start": 186.2, "text": " The channel gains a bunch of subscribers that doesn't necessarily translate into more views." }, { "end": 194.79999999999998, "start": 191.04, "text": " It's just people that click on subscribe and then never care anymore." }, { "end": 198.28, "start": 194.79999999999998, "text": " The metric that's most interesting to me personally is watch time." }, { "end": 205.83999999999997, "start": 198.28, "text": " And as you can see here, watch time has gone substantially up in the last month, which I find to be encouraging." }, { "end": 213.12, "start": 205.83999999999997, "text": " One minute of watch time means that I get to transmit one minute of information to the viewer." }, { "end": 214.92, "start": 213.12, "text": " And that's what really matters to me." }, { "end": 220.28, "start": 214.92, "text": " Of course, if I'm doing a worse job at explaining, then the viewer has to watch for longer." }, { "end": 224.23999999999998, "start": 220.28, "text": " But that usually doesn't really work out because they're just going to click away." }, { "end": 228.39999999999998, "start": 224.23999999999998, "text": " All right, to the fun part, the audience." }, { "end": 230.72, "start": 228.39999999999998, "text": " Who are you?" }, { "end": 235.04, "start": 230.72, "text": " Now, as you can see here, 69% of you are not subscribed." }, { "end": 237.27999999999997, "start": 235.04, "text": " What are you doing not being subscribed?" }, { "end": 240.48, "start": 237.27999999999997, "text": " Though this has changed in the last month significantly." }, { "end": 242.79999999999998, "start": 240.48, "text": " I believe now it's about half and half." }, { "end": 249.04000000000002, "start": 242.8, "text": " What I find most interesting, though, is that 10% of you have this bell notification on." }, { "end": 255.12, "start": 249.04000000000002, "text": " So 10% of you actively want to be disturbed whenever I upload a video." }, { "end": 257.48, "start": 255.12, "text": " I am incredibly flattered by this." }, { "end": 259.32, "start": 257.48, "text": " This is the biggest compliment." }, { "end": 262.64, "start": 259.32, "text": " Not even I do this for the channels that I follow." }, { "end": 264.88, "start": 262.64, "text": " Demographics also very fun." }, { "end": 272.76, "start": 264.88, "text": " About 93% of you tend to be male and about 6% of you tend to be female, at least according to YouTube statistics." }, { "end": 280.56, "start": 272.76, "text": " And that's a pretty good intersection of YouTube being mostly male and machine learning field also being mostly male." 
}, { "end": 288.03999999999996, "start": 280.56, "text": " If I'm doing anything to attract any particular type of person, please let me know so I can diversify a bit." }, { "end": 290.56, "start": 288.03999999999996, "text": " Everyone's welcome here." }, { "end": 298.96, "start": 290.56, "text": " You tend to be above 18, which is good because we have very, very much adult content on this channel." }, { "end": 301.76, "start": 298.96, "text": " I'm happy to see that none of you is underage." }, { "end": 306.96, "start": 301.76, "text": " I think that's YouTube just not reporting underage statistics." }, { "end": 315.84, "start": 306.96, "text": " But most of you tend to be 18 to 45 years old, though to the older viewers, you're of course also very welcome." }, { "end": 325.64, "start": 315.84, "text": " Though I'm pretty sure that some of these is just because you were underage when you created your account and you just told them you were born in 1923." }, { "end": 331.91999999999996, "start": 325.64, "text": " So most of you tend to come from the United States or India or Germany." }, { "end": 333.64, "start": 331.91999999999996, "text": " This is very incomplete list." }, { "end": 341.88, "start": 333.64, "text": " I think the most people simply are in the other category of countries, which I think means YouTube just doesn't know." }, { "end": 344.76, "start": 341.88, "text": " But it is cool to see that India is so high up." }, { "end": 346.91999999999996, "start": 344.76, "text": " One of the reasons I started this channel." }, { "end": 353.8, "start": 346.91999999999996, "text": " So the main reason is because it forces myself to read papers more thoroughly if I have to explain them." }, { "end": 362.32, "start": 353.8, "text": " One other main reason I started this channel was because I thought I thought there was a gap between what you could get from a beginner's course," }, { "end": 367.44, "start": 362.32, "text": " any sort of Coursera course and where research is right now." }, { "end": 372.52, "start": 367.44, "text": " And to bridge that gap, you basically have to go to a very good university." }, { "end": 377.08000000000004, "start": 372.52, "text": " And I know that most of the world doesn't have access to these very good universities." }, { "end": 388.96, "start": 377.08, "text": " So my idea was to kind of bridge that gap to make that person that has a basic understanding of machine learning be able to be up to speed with current research," }, { "end": 391.91999999999996, "start": 388.96, "text": " to be able to read current research papers." }, { "end": 400.88, "start": 391.91999999999996, "text": " And the fact that I have quite a number of people watching from countries where top universities maybe aren't located is very encouraging to me." }, { "end": 404.44, "start": 400.88, "text": " Thanks for watching from all around the world." }, { "end": 410, "start": 404.44, "text": " One person is watching this with Russian subtitles." }, { "end": 413.76, "start": 410, "text": " Okay, we can go into the advanced statistics here." }, { "end": 415.84, "start": 413.76, "text": " And that is pretty interesting as well." }, { "end": 421, "start": 415.84, "text": " You see here most videos kind of spike when they come out, especially the news videos." }, { "end": 423.92, "start": 421, "text": " They tend to be very popular." }, { "end": 428.8, "start": 423.92, "text": " Just the moment that I release them and then they sort of fall down." 
}, { "end": 431.4, "start": 428.8, "text": " Traffic source I find particularly interesting." }, { "end": 440.15999999999997, "start": 431.4, "text": " If you look at the last 90 days, there are these spikes and these spikes tend to come mainly from from Google searches." }, { "end": 448.52, "start": 440.15999999999997, "text": " So at some point, people simply Google search for stuff and then they find this channel, which is encouraging." }, { "end": 452.59999999999997, "start": 448.52, "text": " Most people actually search for attention is all you need or things like this." }, { "end": 459.79999999999995, "start": 452.59999999999997, "text": " YouTube doesn't show you anymore what people searched on Google, but YouTube shows you what people searched for on YouTube." }, { "end": 463.36, "start": 459.8, "text": " And that tends to be mostly attention is all you need." }, { "end": 465.68, "start": 463.36, "text": " You zero my name." }, { "end": 466.6, "start": 465.68, "text": " Hello." }, { "end": 475.32, "start": 466.6, "text": " So if you correlate these spikes in searches with geography, some of these spikes tend to be worldwide like this one here or this one here." }, { "end": 479.32, "start": 475.32, "text": " But this particular spike is only United States." }, { "end": 485.92, "start": 479.32, "text": " And if you look at the videos that I released during that time period, one of these videos right here." }, { "end": 491.40000000000003, "start": 485.92, "text": " So either it is the Schmidhuber drama, a lot of people searching for that." }, { "end": 496.48, "start": 491.40000000000003, "text": " Maybe maybe it's ImageNet V2 or the online conferences." }, { "end": 502.12, "start": 496.48, "text": " I don't know. You can see right here, I didn't make many subscribers of of these spikes." }, { "end": 506.08000000000004, "start": 502.12, "text": " It's simply people being interested in content, which is pretty cool." }, { "end": 514.72, "start": 506.08000000000004, "text": " It is also interesting to see that these spikes right here, they correspond mainly to mobile phone users." }, { "end": 520.96, "start": 514.72, "text": " So mobile phone users go in Google searching for content on this channel." }, { "end": 523.1600000000001, "start": 520.96, "text": " I have no idea what's going on." }, { "end": 527.48, "start": 523.1600000000001, "text": " All right. Now, the last question to solve is, of course, monetization." }, { "end": 530.88, "start": 527.48, "text": " How much money does this channel make?" }, { "end": 534.4, "start": 530.88, "text": " And the answer is none so far." }, { "end": 538.6, "start": 534.4, "text": " So there are multiple reasons why I haven't applied for monetization yet." }, { "end": 546.4, "start": 538.6, "text": " I find YouTube ads just incredibly annoying, especially now that they've decided to stick two ads in front of videos." }, { "end": 549.24, "start": 546.4, "text": " I just don't want to bug users with that." }, { "end": 552.16, "start": 549.24, "text": " If you look at what I gain from it, it's not that much." }, { "end": 556.72, "start": 552.16, "text": " Any money that I would make, I would like to sort of reinvest into the channel." }, { "end": 559.48, "start": 556.72, "text": " And right now, I just don't have any requirements." }, { "end": 564.0400000000001, "start": 559.48, "text": " That might change in the future, maybe once we get to 800,000 subscribers." }, { "end": 567.28, "start": 564.0400000000001, "text": " All right. 
That was it for YouTube analytics of this channel." }, { "end": 569.52, "start": 567.28, "text": " I hope you are still enjoying this content." }, { "end": 571.64, "start": 569.52, "text": " If you're not subscribed, please do." }, { "end": 574.6, "start": 571.64, "text": " Next update will be at a hundred thousand." }, { "end": 579.24, "start": 574.6, "text": " And I hope that everything is as enjoyable as ever." }, { "end": 581.8399999999999, "start": 579.24, "text": " Thank you for watching. Thank you for subscribing." }, { "end": 598.84, "start": 581.84, "text": " Thanks for being here and to the future." } ]
to7vCdkLi4s
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Reinforcement Learning with Augmented Data (Paper Explained)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "rl", "reinforcement learning", "sac", "ppo", "deep rl", "deep reinforcement learning", "dreamer", "curl", "pixel", "pretraining", "deepmind", "openai", "berkeley" ]
This ONE SIMPLE TRICK can take a vanilla RL algorithm to achieve state-of-the-art. What is it? Simply augment your training data before feeding it to the learner! This can be dropped into any RL pipeline and promises big improvements across the board. Paper: https://arxiv.org/abs/2004.14990 Code: https://www.github.com/MishaLaskin/rad Abstract: Learning from visual observations is a fundamental yet challenging problem in reinforcement learning (RL). Although algorithmic advancements combined with convolutional neural networks have proved to be a recipe for success, current methods are still lacking on two fronts: (a) sample efficiency of learning and (b) generalization to new environments. To this end, we present RAD: Reinforcement Learning with Augmented Data, a simple plug-and-play module that can enhance any RL algorithm. We show that data augmentations such as random crop, color jitter, patch cutout, and random convolutions can enable simple RL algorithms to match and even outperform complex state-of-the-art methods across common benchmarks in terms of data-efficiency, generalization, and wall-clock speed. We find that data diversity alone can make agents focus on meaningful information from high-dimensional observations without any changes to the reinforcement learning method. On the DeepMind Control Suite, we show that RAD is state-of-the-art in terms of data-efficiency and performance across 15 environments. We further demonstrate that RAD can significantly improve the test-time generalization on several OpenAI ProcGen benchmarks. Finally, our customized data augmentation modules enable faster wall-clock speed compared to competing RL techniques. Our RAD module and training code are available at this https URL. Authors: Michael Laskin, Kimin Lee, Adam Stooke, Lerrel Pinto, Pieter Abbeel, Aravind Srinivas Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Hi there. Today we're going to take a short look at Reinforcement Learning with Augmented Data. This paper is by Michael Laskin, Kimin Lee, and others from UC Berkeley and NYU. The reason why this is a short look is that I believe the statements made in the paper are quite short and simple, but they are quite grandiose. So we'll dive into it. The paper basically combines two things: reinforcement learning and data augmentation. Now reinforcement learning, which we've talked about a number of times, is basically where an agent is in a world and has to learn to solve an optimization problem by repeatedly interacting with that world. You can see here, for example, the walker task, where this walker thing has two feet and basically needs to stand upright and walk for a number of steps. The further you go, the better. So by repeatedly trying this and getting better and better at it, that is reinforcement learning. The second part is the data augmentation. Now data augmentation is a pretty standard practice in supervised learning. What does it mean? If you have a supervised learning task, for example an image classification task, here is a picture of a cat and the label is cat, then you can feed this through your neural network to arrive at a loss. But you only have so many pictures. You have a database and maybe you have, I don't know, 1 million images. Usually what people do is go through that database a number of times, like 20 or 50 times, to basically have the model learn each image multiple times. But what turns out to be more successful is to do data augmentation. That means you have an in-between layer right here that takes this image and modifies it in some small way. This could be, for example, that it blocks out part of the image, so it simply blocks out the square here, and then you feed that through the model. And then the next time the image comes up, it does something different. For example, it randomly crops the image to only the top right part here. The next time it does a bit of a color jitter, and the time after that it goes to grayscale, and so on. So supervised learning has found data augmentation to be quite beneficial, because not only do you make the model learn what this picture is, but you also make the model learn some small variations of that picture where you can be pretty sure they would not change the label. So you do not feed the model false information, and that generally makes it more robust to test-time discrepancies. So this paper basically claims: if you want to do reinforcement learning and you simply do data augmentation on the input data to that reinforcement learning, it works much, much better. Now, of course, since this is a general trick in supervised learning, we can expect that it would do something for reinforcement learning as well. But this paper basically claims that this one plugin, where you simply plug this into your reinforcement learning pipeline, is as much of a gain as pretty much the last five years of research on reinforcement learning on these tasks. So let's dive into it. This paper proposes just what I said: just plug in the data augmentation and then do reinforcement learning on the augmented data. They use these data augmentations. Crop we've already discussed, it's a random crop. Grayscale means that the picture goes to gray, black and white, with a certain probability.
Cutout means that there's a little patch missing, like I said. Cutout-color is the same but with a random color. Flip means you flip the image horizontally or vertically, according to a random probability. Rotate is the same, but instead of flipping you rotate the image. Random conv means you randomly convolve it with a filter, in this case some red or blue or yellow filters. And color jitter means that you jitter around the colors in a way that doesn't mess up the image too much. So you basically just change the colors of the image, but the overall image still looks the same. The only thing you have to pay attention to is the following: in your reinforcement learning pipeline, usually if you have a walker like this, you have your network here and then you have, you know, your policy and your value function. If you don't know what these are, I've treated them many times in reinforcement learning videos. What you want to do is not just take this one current observation in here. Sometimes you want to take a stack of the last few frames, so that the model gets an idea of what happened during, let's say, the last second. So it can determine, in this walker for example, that it's not only important where the legs are, which are up here right now, it is also important what their momentum is, how they're moving, and you can determine that from the last few frames. So sometimes it's beneficial to feed the last few frames. And they say the important thing here is that these augmentations are applied consistently across the stacked frames. So basically you decide on an augmentation and on the scale of that augmentation, and then you apply it to these stacked frames all the same. And then in the next forward pass, when you have a different set of stacked frames, you can decide on a different augmentation. So that's basically the only difference between the supervised setting and this setting: you have to apply the augmentation consistently within a stack, and you have to apply this consistently during training. So they formulate the classic proximal policy optimization here, which is an actor-critic method. And the only time you have to really pay attention is when you plug the observation into these models here, right here: it needs to be the same observation, that is, the observation augmented with the same augmentation procedures. All right, getting it together. Cool. So when you do that, they say: when applying RAD, the random data augmentation, to SAC, which is soft actor-critic, our data augmentations are applied to the observations passed to Q and pi. So, sorry, this is the thing up here. This is soft actor-critic, which is a state-of-the-art off-policy algorithm for continuous control problems. Here too you have to pay attention that when you feed the observations, they're the same observations, like here and here. And then proximal policy optimization is a state-of-the-art on-policy algorithm for learning a continuous or discrete control policy. Okay, so as I said, they simply drop this in there. And then it turns out they outperform or match the performance of many, many baselines.
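To make that consistency requirement concrete, here is a minimal sketch in Python/NumPy of how a random crop could be applied to a stack of frames. This is my own illustration, not the paper's code: the function name, the 84-pixel output size, and the toy shapes are all assumptions.

```python
import numpy as np

def random_crop_stack(frames: np.ndarray, out_size: int = 84) -> np.ndarray:
    """Crop a stack of frames (T, H, W, C) with one shared random offset.

    The crop position is sampled once per stack, so every frame in the
    stack is cropped identically, which is the consistency requirement
    described above. A fresh offset is drawn for the next stack.
    """
    _, h, w, _ = frames.shape
    top = np.random.randint(0, h - out_size + 1)   # shared across the stack
    left = np.random.randint(0, w - out_size + 1)  # shared across the stack
    return frames[:, top:top + out_size, left:left + out_size, :]

# Hypothetical usage inside an RL update step: augment the observation
# once, then feed the SAME augmented observation to both actor and critic.
obs = np.random.rand(4, 100, 100, 3)   # stack of 4 frames, toy data
aug_obs = random_crop_stack(obs)       # shape (4, 84, 84, 3)
# action = policy(aug_obs); q = critic(aug_obs, action)  # same aug_obs
```

The design point is simply that the crop offsets are sampled once per stack and per forward pass, not once per frame, so the frames stay spatially aligned with each other.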
Here you can see CURL. I've made a video on CURL, which is another way of augmenting, or rather pre-training, for reinforcement learning. Then there are state-of-the-art things like PlaNet or Dreamer; I've made a video on Dreamer as well. And then pixel SAC, and state SAC, which is sort of a cheating algorithm because it has access to the state, whereas all the other methods only have access to the pixels. And you can see that the data augmentation method, which is basically just plain RL, pure SAC plus the data augmentation, outperforms all of these other baselines many times. Now, here is a criticism of mine. They never really investigate; they simply say, wow, this reaches the same performance as or outperforms these other methods, so it's the state-of-the-art algorithm. It's important to note here that this is on the DMControl 100k and 500k benchmarks, which means that there's a limit on the number of, I believe, frames from these control tasks that you get. So you either get 100k or you get 500k frames. So the difficulty is learning from limited data. It's not the state-of-the-art reinforcement learning method overall; it is the state of the art on this particular task of learning from limited data. Now, while I can believe that the augmentation would help here, it is completely unclear whether the augmentation gives the same benefits as something like Dreamer, or whether the benefits from Dreamer and the benefits from data augmentation are completely orthogonal. So in this paper, given that the claim they make is so simple, I would expect an investigation: what happens if I do Dreamer plus the data augmentation? Maybe they've done it somewhere and I just haven't seen it. But it just seems like they put this on the basic RL algorithm, and then they claim, well, look, it works well, but they never show that. So it could be that Dreamer, all this architecture, simply recovers the same gains that you could get by data augmentation, or it could be that it actually does something different but just reaches the same amount of gain, the same amount of improvement, and by combining them you could improve further. So, not just to get a better number: combining the two would actually give a lot of hints as to whether this augmentation works in line with the other methods, or whether the other methods are really doing something meaningfully different or not. But this is just not done here. And so they go into the question of which data augmentations contribute the most, and they get to the point where they say random crop is extremely effective. They have this table here where they basically combine two augmentations. So for example, this entry here means that you apply grayscale and then the rotate augmentation, and that gets you to whatever, 300 points on this walker; if you apply crop and then crop again, it gets you to 920 points and beats everything else. So they say, okay, crop is the most effective. And I have the sneaking suspicion that these augmentations are so effective simply because of how we set up these tasks. These reinforcement learning tasks, they don't tend to be the real world, they tend to be somewhat simulated. And as you can see here, the image is pretty clear.
So you can pretty clearly see, here is the thing, there's no natural background or whatnot, it's procedurally generated; there are these stars that could confuse the model a bit. But still, this task is so easy visually that I'm going to guess the whole reason why these image augmentations help is simply because of the way these reinforcement learning tasks are set up right now. And I would guess that if we had reinforcement learning in something like the real world, the image augmentation methods would help in about the way they help unsupervised tasks on the same data, for example ImageNet. So that is my sneaking suspicion. And this paper, I want to say, sort of over-claims how absolutely great this works. Of course it works great on these things, but I think there needs to be an investigation of where and why. So here they have some attention maps of where the algorithm focuses. And you can see that when there's no data augmentation, it sort of focuses on good points, but when you do crop, it focuses on this ridge here, which makes sense, because that's the thing that needs to be kind of vertical in order for the walker to be stable. And if you do other things, you can see it doesn't really focus, or it focuses on different things. So the crop method seems to make the model focus on the most important part of the image. And it's the same with the cheetah task here: if you do no augmentation, or some of the other augmentations, you can see that it actually focuses on some of these background stars, whereas in the cropped version it focuses not on the stars but actually on the cheetah as a whole, which probably makes sense. Now, again, I have a bit of a worry with these kinds of experiments, because we already know that crop will give you a much better score. So who's to say that if we could train this thing here to the same score, it wouldn't be paying attention to the same part? What they're trying to make clear here is that it is dependent on the particular type of data augmentation whether the model gets a better grip on the input. But it is not really a valid comparison if we know that the crop agent reaches a better score, and it could simply be that that's the reason why the attention is better: that it is actually solving the problem better. So, of course, the fact that it's working better is due to the fact that you have crop-augmented the data, but the fact that it is focusing on the correct parts is not a property of the crop augmentation, but a property of the fact that it reaches a higher score. That was a long-winded complaint, but I hope you get what I mean here. The last thing they do is investigate generalization performance, so improving generalization on this OpenAI ProcGen. Now, as I understand it, this is a reinforcement learning suite of tasks where you have procedurally generated levels. So you can train on a bunch of levels and then test the generalization to new levels that you haven't seen before. So there's Jumper here and StarPilot, which seem like jump-and-run games, and BigFish; I don't even know what you have to do in BigFish. But you can see the levels, seen here, this is one example, and unseen. So in this example the background is very different, and I'm going to guess that in the Jumper game not only the background but also the kind of generated level, how you have to jump, is quite different.
So they investigate whether an agent trained on only the seen levels can generalize to the unseen ones. And this table presents the results. As you can see, RAD with the crop or with other augmentations outperforms the pixel-based PPO. Now, there is some nuance to this table. First of all, you can see that this crop thing is now the winner in only one of these three tasks, namely the BigFish one. There is another augmentation technique that wins on StarPilot, though the difference is not that high. And on Jumper with 200 levels, so this is 100 or 200 training levels, the original method is even the best. So here again, I believe this is evidence that it is very much an interaction of these augmentations with the way the task is set up, and not the augmentations themselves or the mere fact that you're augmenting. For example, if we look at BigFish, we've seen that what seems to change between seen and unseen levels is mainly the background, whereas in the Jumper example, the entire level structure seems to change. And then the augmentation all of a sudden is not super effective anymore, it actually hurts, right? So I'm just not super convinced by the claims they're making here. One of the claims in particular: RAD with random crop achieves, no wait, this point down here. Oh yeah: achieves 55.8% gain over pixel-based PPO. Okay. Trained with 100 training levels, it outperforms the pixel-based PPO trained with 200 training levels on both the BigFish and StarPilot environments. "This shows that data augmentation can be more effective in learning generalizable representations compared to simply increasing the number of training environments." I take issue with this statement. So again, why do you compare two different things if you don't show whether they're orthogonal? In fact, they are probably orthogonal, because even on the 200 levels you gain over the pixel-based PPO, right? So why the comparison? And second of all, here we see that on 100 levels this method is better than the pixel-based PPO, and then they claim that, okay, it is even better on 100 levels than the pixel-based PPO on 200 levels. And why is that surprising? If A is bigger than B, then probably A is also going to be bigger than B plus some small epsilon. I just think that doesn't really warrant their statement where they say, oh look, this is even better, as if the 100 levels of additional training were the standard measure of more data. If you're better at the beginning, there is going to be a certain amount of additional data where you're still better than the other method with that more data. I don't find this super duper surprising, but they make a big claim out of it. Alright, so in conclusion, I hope I'm not too harsh on this paper; it is a cool paper and of course these are cool findings. But I have a big suspicion that the augmentation here works so well simply because of how we set up these RL tasks, because they're visually quite, let's say, easy. And these augmentations are also our sort of easy abstractions of when an image is visually similar, because for all of these things, to us as humans, we say it probably doesn't change anything if we just rotate the image. This is our prejudice, and we built this prejudice into these simulators for the RL tasks. So they will match up extremely well with these augmentations.
And that's the reason I believe these things work, and maybe not so much the mere fact that you're augmenting. Okay, well, if you liked this video, I invite you to check out the paper, subscribe to this channel, tell all your friends about it, and leave a like and a comment. Thank you very much and bye bye.
[ { "end": 5.66, "start": 0, "text": " Hi there. Today we're going to take a short look at reinforcement learning with augmented" }, { "end": 12.58, "start": 5.66, "text": " data. This paper is by Michael Laskin, Kimmin Lee, and others from UC Berkeley and NYU." }, { "end": 17.080000000000002, "start": 12.58, "text": " So the reason why this is a short look is because I believe the statements made in the" }, { "end": 25.76, "start": 17.080000000000002, "text": " paper are quite short and small, but they are quite grandiose. So we'll dive into it." }, { "end": 32.1, "start": 25.76, "text": " The paper basically combines two things reinforcement learning and data augmentation. Now reinforcement" }, { "end": 36.84, "start": 32.1, "text": " learning, we've talked about a number of times, it's basically where an agent is in a world" }, { "end": 42.56, "start": 36.84, "text": " and has to learn to solve an optimization problem by repeatedly interacting with the" }, { "end": 50.260000000000005, "start": 42.56, "text": " world. You can see here, for example, this is the walker task, where this walker thing," }, { "end": 55.160000000000004, "start": 50.260000000000005, "text": " it has two feet and basically needs to stand upright and walk for a number of steps. The" }, { "end": 59.599999999999994, "start": 55.16, "text": " further you go, the better. So by repeatedly trying this and getting better and better" }, { "end": 66.88, "start": 59.599999999999994, "text": " at it, that is reinforcement learning. The second part is the data augmentation. Now" }, { "end": 73.06, "start": 66.88, "text": " data augmentation is a pretty standard practice in supervised learning. What does it mean?" }, { "end": 78.52, "start": 73.06, "text": " So if you have a supervised learning task, for example, an image classification task," }, { "end": 84.88, "start": 78.52, "text": " here is a picture of a cat, and the label is cat, then you can feed this through your" }, { "end": 92.83999999999999, "start": 84.88, "text": " neural network to arrive at a loss. But you only have so many pictures. You have a database" }, { "end": 100.56, "start": 92.83999999999999, "text": " and maybe you have, I don't know, 1 million images. Usually what people do is they go," }, { "end": 107.44, "start": 100.56, "text": " let's say a number of times, like 20 or 50 times through that database to basically have" }, { "end": 113.47999999999999, "start": 107.44, "text": " the model learn each image multiple times. But what turns out to be more successful is" }, { "end": 119.4, "start": 113.48, "text": " if you do data augmentation, that means you have an in between layer right here that takes" }, { "end": 129.34, "start": 119.4, "text": " this image and some modifies it in some small way. This could be for example, it blocks" }, { "end": 136.68, "start": 129.34, "text": " out part of the image. So it simply blocks out the square here. And then you feed that" }, { "end": 142.22, "start": 136.68, "text": " through the the model. And then the next time the image comes up, it does something different." }, { "end": 148.48, "start": 142.22, "text": " For example, it randomly crops the image to only the top right part here. And then the" }, { "end": 155.16, "start": 148.48, "text": " next time it does a bit of a color jitter. And then the next time it goes to grayscale" }, { "end": 160.36, "start": 155.16, "text": " and so on. 
So supervised learning has found data augmentation to be quite beneficial because" }, { "end": 165.76, "start": 160.36, "text": " not only do you make the model learn what this picture is, but you also make the model" }, { "end": 171.4, "start": 165.76, "text": " kind of learn some small variations of that picture where you can be pretty sure they" }, { "end": 176.22, "start": 171.4, "text": " would not change the label. So you would not feed the model false information that generally" }, { "end": 185.12, "start": 176.22, "text": " makes it more robust to test time discrepancies. So this paper has basically claims. If you" }, { "end": 193.28, "start": 185.12, "text": " want to do reinforcement learning, if you do simply do data augmentation with the input" }, { "end": 199.16, "start": 193.28, "text": " data to that reinforcement learning, it works much, much, much better. Now, of course, we" }, { "end": 203.88, "start": 199.16, "text": " can expect since in supervised learning, this is a general trick that it would do something" }, { "end": 210.54, "start": 203.88, "text": " for reinforcement learning as well. But this paper basically claims that this one plugin" }, { "end": 215.85999999999999, "start": 210.54, "text": " like here, so this is basically you plug this into your pipeline in reinforcement learning," }, { "end": 225.7, "start": 215.85999999999999, "text": " this is basically as much of a gain as pretty much the last five years of research on on" }, { "end": 234.94, "start": 225.7, "text": " reinforcement learning on these things. So let's dive into it. This paper proposes just" }, { "end": 239.6, "start": 234.94, "text": " what I said, just plug in the data augmentation and then do reinforcement learning on the" }, { "end": 245.95999999999998, "start": 239.6, "text": " augmented data. They use these data augmentations. So crop we've already discussed, it's a random" }, { "end": 253.28, "start": 245.95999999999998, "text": " crop, grayscale means that the picture goes to gray, black and white with a certain probability." }, { "end": 259.42, "start": 253.28, "text": " Cut out means that there's a little patch missing, like I said, cut out color the same" }, { "end": 265.82, "start": 259.42, "text": " but in a random color. Flip means you flip the image horizontally or vertically, according" }, { "end": 273.6, "start": 265.82, "text": " to a random probability. Rotate is the same but you instead of flip you rotate it. Random" }, { "end": 280.7, "start": 273.6, "text": " conv means you randomly convolve it with a filter. In this case, some red or blue or" }, { "end": 290.86, "start": 280.7, "text": " yellow filters. And color jitter means that you kind of jitter around the colors in a" }, { "end": 296.7, "start": 290.86, "text": " sort of in a sort of way that doesn't mess up the image too much. So you basically just" }, { "end": 301.53999999999996, "start": 296.7, "text": " kind of change the colors on the image, but the overall image still looks the same. The" }, { "end": 308.06, "start": 301.53999999999996, "text": " only thing you have to you have to pay attention to is that so in your reinforcement learning" }, { "end": 312.82, "start": 308.06, "text": " pipeline, usually if you have a walker like this, what you want to do is you have your" }, { "end": 317.02, "start": 312.82, "text": " network here and then you have you know, your policy and your value function. 
If you don't" }, { "end": 323.66, "start": 317.02, "text": " know what these are, we'll have we have I've treated them many times in reinforcement learning" }, { "end": 329.02, "start": 323.66, "text": " videos. What you want to do is you simply don't want to take this one current observation" }, { "end": 335.06, "start": 329.02, "text": " in here. But sometimes you want to take kind of the stacked of the last few frames so that" }, { "end": 340.6, "start": 335.06, "text": " the model kind of gets an idea what happened during let's say the last one second, right," }, { "end": 345.38, "start": 340.6, "text": " so we can it can determine in this walker, for example, it's not only important where" }, { "end": 351.78, "start": 345.38, "text": " the legs are, which are up here right now. It is also important their momentum how they're" }, { "end": 358.1, "start": 351.78, "text": " moving right and you can you can you can determine that from the last few frames. So sometimes" }, { "end": 363.58, "start": 358.1, "text": " it's beneficial to feed the last few frames. And they say the important thing here is that" }, { "end": 368.9, "start": 363.58, "text": " these augmentations are applied consistently across the stacked frames. So basically you" }, { "end": 374.41999999999996, "start": 368.9, "text": " select on an augmentation and on the scale of that augmentation and then you apply it" }, { "end": 380.06, "start": 374.41999999999996, "text": " to these stacked frames all the same. And then in the next forward pass, you have a" }, { "end": 385.02, "start": 380.06, "text": " different set of stacked frames, then you can decide on a on a different augmentation." }, { "end": 388.9, "start": 385.02, "text": " So that's basically the only difference between the supervised setting and this setting is" }, { "end": 397.5, "start": 388.9, "text": " that you have to consistently apply the augmentation. And you have to consistently apply this here" }, { "end": 406.14, "start": 397.5, "text": " and during training. So they formulate the classic proximal policy optimization here," }, { "end": 413.17999999999995, "start": 406.14, "text": " which is an actor critic method. And the only time you have to really pay attention is when" }, { "end": 421.38, "start": 413.18, "text": " you plug the observation into these models here, right here, then it needs to be the" }, { "end": 427.54, "start": 421.38, "text": " same augmentation. Sorry, the same observation. So that means the observation augmented with" }, { "end": 438.18, "start": 427.54, "text": " the same data with the same augmentation procedures. All right, getting it together. Cool. So when" }, { "end": 444.14, "start": 438.18, "text": " you do this, when you do that, they say when applying our ad, which is the random random" }, { "end": 454.02, "start": 444.14, "text": " data augmentation to SAC, which is soft actor critic, right? Our data augmentations are" }, { "end": 459.86, "start": 454.02, "text": " applied to the observation past the Q and pi. So sorry, this is the thing up here. This" }, { "end": 463.74, "start": 459.86, "text": " is soft actor critic, which is a state of the art of policy algorithm for continuous" }, { "end": 469.1, "start": 463.74, "text": " control problems. And also you have to pay attention that when you feed the observations," }, { "end": 475.14, "start": 469.1, "text": " they're the same observations, like here and here. 
And then proximal policy optimization" }, { "end": 479.7, "start": 475.14, "text": " is the one is a state of the art on policy algorithm for learning a continuous or discrete" }, { "end": 492.76, "start": 479.7, "text": " control policy. Okay, so as I said, they simply drop this in there. And then it turns out" }, { "end": 502.7, "start": 492.76, "text": " they outperform or match performance of many, many baselines. Here you can see curl, I've" }, { "end": 510.14, "start": 502.7, "text": " made a video on curl, which is another way of augmenting or pre training a for reinforcement" }, { "end": 516.46, "start": 510.14, "text": " learning, then state of the art things like planet or dreamer, I've made a video on dreamer" }, { "end": 522.3199999999999, "start": 516.46, "text": " as well. And then pixel SAC and state SAC is sort of a cheating algorithm because it" }, { "end": 528.12, "start": 522.32, "text": " has access to the state whereas all the other methods only have access to the to the pixels." }, { "end": 534.86, "start": 528.12, "text": " And you can see that the data augmentation method, which is basically just plain RL plane" }, { "end": 544.7, "start": 534.86, "text": " pure SAC plus the plus the data augmentation outperforms in many times all of these other" }, { "end": 553.46, "start": 544.7, "text": " baselines. Now, here is a criticism of me. In order they never investigate, they simply" }, { "end": 560.0600000000001, "start": 553.46, "text": " say wow, this reaches the same performance or outperforms these other methods. Now, so" }, { "end": 564.58, "start": 560.0600000000001, "text": " it's the state of the art algorithm. It's important to note here that this is on the" }, { "end": 572.3000000000001, "start": 564.58, "text": " DM control 100k and 500k benchmarks, which means that there's a limit on the number of" }, { "end": 578.66, "start": 572.3, "text": " I believe frames from these control tasks that you get. So you either get 100k or you" }, { "end": 585.02, "start": 578.66, "text": " get 500k frames. So the difficulty is learning from limited data. It's not state of the art" }, { "end": 589.9399999999999, "start": 585.02, "text": " reinforcement learning method. Overall, it is the state of the art on this particular" }, { "end": 596.14, "start": 589.9399999999999, "text": " task on learning from limited data. Now, while I can believe that the augmentation would" }, { "end": 605.56, "start": 596.14, "text": " help here, I it is completely unclear whether or not the augmentation gives the same benefits" }, { "end": 610.9399999999999, "start": 605.56, "text": " as like something like dreamer, or whether the benefits from dreamer and the benefits" }, { "end": 617.34, "start": 610.9399999999999, "text": " from data augmentation are completely orthogonal. So in this paper, given that the claim is" }, { "end": 622.74, "start": 617.34, "text": " so simple that they make, I wouldn't I would expect like an investigation, what happens" }, { "end": 631.1, "start": 622.74, "text": " if I do dreamer plus the data augmentation? Maybe they've done it somewhere, and I just" }, { "end": 638.46, "start": 631.1, "text": " haven't seen it. But it just seems like they, they put this on the base basic RL algorithm," }, { "end": 645.0600000000001, "start": 638.46, "text": " and then they claim, well, look here, it works well, but they never show that. 
So it could" }, { "end": 650.84, "start": 645.0600000000001, "text": " be that dreamer all this architecture, what it simply does is basically recover these" }, { "end": 656.26, "start": 650.84, "text": " gains that you could get by data augmentation, or it could be that it actually does something" }, { "end": 662.6600000000001, "start": 656.26, "text": " different, but just reaches the same amount of gain, right, it just reaches the same amount" }, { "end": 668.3000000000001, "start": 662.6600000000001, "text": " in improvement. And by combining them, you could improve it further. So not not just" }, { "end": 673.14, "start": 668.3000000000001, "text": " to get like a better number, but combining the two would actually give a lot of hints" }, { "end": 679.2800000000001, "start": 673.14, "text": " as to whether or not this augmentation works in line with the other methods, or whether" }, { "end": 684.38, "start": 679.28, "text": " the other methods are really doing something meaningfully different or not. But this is" }, { "end": 695.14, "start": 684.38, "text": " just not done here. And so they go into the they go into a question of which data augmentations" }, { "end": 703.9399999999999, "start": 695.14, "text": " contribute the most, and they get to the point where they say random crop is extremely effective." }, { "end": 710.22, "start": 703.94, "text": " So they have this table here where they just basically combine two augmentations. And do" }, { "end": 715.1, "start": 710.22, "text": " you see, so for example, this thing here means that you apply grayscale, and then the rotate" }, { "end": 721.48, "start": 715.1, "text": " augmentation. And that gets you to whatever 300 points in this Walker, if you apply crop," }, { "end": 728.84, "start": 721.48, "text": " and then crop, it gets you to 920 points and beats everything else. So they they say, okay," }, { "end": 738.0400000000001, "start": 728.84, "text": " crop is the most the most effective. And I have I have the sneaking suspicion that these" }, { "end": 743.44, "start": 738.0400000000001, "text": " augmentations are so effective, simply because of how we set up these tasks, right, these" }, { "end": 747.7, "start": 743.44, "text": " reinforcement learning tasks, they don't tend to be a real world, they tend to be somewhat" }, { "end": 755.02, "start": 747.7, "text": " simulated. And as you can see here, the image is pretty clear. So you can pretty clearly" }, { "end": 759.9, "start": 755.02, "text": " see that here is the thing, there's no natural background or whatnot, it's procedurally generated," }, { "end": 766.86, "start": 759.9, "text": " right, there are these stars that could confuse the model a bit. But still, it is so easy" }, { "end": 773.14, "start": 766.86, "text": " visually, this task, that I'm going to guess the whole reason why these image augmentations" }, { "end": 780.12, "start": 773.14, "text": " help is simply because of the way these reinforcement learning tasks right now are set up. And I'm" }, { "end": 786, "start": 780.12, "text": " would guess that if we had reinforcement learning in something like the real world, that the" }, { "end": 792.42, "start": 786, "text": " image augmentation methods would help in about the way they help unsupervised tasks in in" }, { "end": 801.46, "start": 792.42, "text": " the same data, for example, image net. So that is my sneaking suspicion. 
And this paper," }, { "end": 810.04, "start": 801.46, "text": " I want to say it sort of over claims, it's how how absolutely great this works. Of course," }, { "end": 814.9, "start": 810.04, "text": " it works great on these things. But I think there needs to be an investigation of where," }, { "end": 821.14, "start": 814.9, "text": " why. So here they have some attention maps on where the algorithm focuses. And you can" }, { "end": 827.14, "start": 821.14, "text": " see when there's no data augmentation, it sort of focuses on good points. But when you" }, { "end": 833.06, "start": 827.14, "text": " do crop, it focuses on this ridge here, which makes sense, right, because that's the thing" }, { "end": 841.4599999999999, "start": 833.06, "text": " that needs to be kind of vertical in order for the walker to be stable. And in if you" }, { "end": 848.66, "start": 841.4599999999999, "text": " do other things, then you can see it, it doesn't really focus, it focuses on different things." }, { "end": 857.06, "start": 848.66, "text": " So the crop method seems to make the model focus on the most important part of the image." }, { "end": 863.14, "start": 857.06, "text": " And as the same with the cheetah task here, so if you don't do augmentation, and some" }, { "end": 868.14, "start": 863.14, "text": " of the augmentation, you can see that it actually focuses on some of these background stars," }, { "end": 875.3, "start": 868.14, "text": " whereas in the cropped version, it focuses on not on the stars, but actually on the cheetah" }, { "end": 882.6199999999999, "start": 875.3, "text": " as a whole, which probably makes sense. Now, again, I have a bit of a I have a bit of a" }, { "end": 888.26, "start": 882.62, "text": " worry with these kinds of experiments, because we already know that crop will give you a" }, { "end": 894.66, "start": 888.26, "text": " much better score, right? So who's to say that if we could train this thing here to" }, { "end": 902.3, "start": 894.66, "text": " the same score, it wouldn't be paying attention to the same part. What they're trying to make" }, { "end": 907.62, "start": 902.3, "text": " clear here is that it is dependent on the particular type of data augmentation that" }, { "end": 915.62, "start": 907.62, "text": " the model gets a better grip on the input. But it is not really a valid comparison if" }, { "end": 924.9, "start": 915.62, "text": " we know that the crop agent performs a better score. And it could simply be that that's" }, { "end": 931.7, "start": 924.9, "text": " the reason why the attention is better, right? That that it is actually solving the problem" }, { "end": 938.46, "start": 931.7, "text": " better. So I mean, of course, this the fact that it's working better is due to the fact" }, { "end": 944.4200000000001, "start": 938.46, "text": " that you have crop augmented the data, but the fact that is focusing on the correct parts" }, { "end": 950.4200000000001, "start": 944.4200000000001, "text": " is not a property of the crop augmentation, but the property of the fact that it reaches" }, { "end": 961.34, "start": 950.4200000000001, "text": " a higher score. That was a long winded complaint, but I hope you get what I mean here. The last" }, { "end": 967.1800000000001, "start": 961.34, "text": " thing they do is they investigate generalization performance. So improving generalization on" }, { "end": 973.9, "start": 967.1800000000001, "text": " this open AI proc gen. 
Now, as I understand it, this is a reinforcement learning task" }, { "end": 980.6600000000001, "start": 973.9, "text": " or suite of tasks where you have procedurally generated levels. So you can sort of train" }, { "end": 988.1600000000001, "start": 980.6600000000001, "text": " on a bunch of levels, and then test the generalization to new levels that you haven't seen before." }, { "end": 994.4, "start": 988.16, "text": " So there's a jumper here and star pilot. So they seem like this like a jump and run game" }, { "end": 999.3399999999999, "start": 994.4, "text": " or big fish. I don't even know what you have to do in big fish. But you can see that the" }, { "end": 1007.26, "start": 999.3399999999999, "text": " levels seen here, this is one example, and unseen. So in this example, the background" }, { "end": 1012.8199999999999, "start": 1007.26, "text": " is very different. And I'm going to guess in the jumper thing, not only is the background" }, { "end": 1019.3000000000001, "start": 1012.82, "text": " but also the kind of generated level how you have to jump is quite different. So they investigate" }, { "end": 1027.5800000000002, "start": 1019.3000000000001, "text": " whether or not a agent trained on only the seen ones can generalize to the unseen ones." }, { "end": 1036.5800000000002, "start": 1027.5800000000002, "text": " And this table presents the results. And as you can see, the RAD with the crop or with" }, { "end": 1046.1399999999999, "start": 1036.58, "text": " other things outperform the pixel paste based PPOs. Now, there is some nuance to this table" }, { "end": 1054.22, "start": 1046.1399999999999, "text": " here. First of all, you can see that these this crop thing is now only the winner in" }, { "end": 1062.1799999999998, "start": 1054.22, "text": " one of these three tasks, right in the in the big fish thing. In there is another augmentation" }, { "end": 1067.8600000000001, "start": 1062.18, "text": " technique here that wins over at star pilot. But you can see the difference is not that" }, { "end": 1078.26, "start": 1067.8600000000001, "text": " high. And in the dnd jumper with 200 levels, so this is 100 or 200 levels, the the original" }, { "end": 1086.3400000000001, "start": 1078.26, "text": " method is even the best. So here again, I believe this is evidence that that it is very" }, { "end": 1092.62, "start": 1086.34, "text": " much an interaction of these augmentations with the way the task is set up and not the" }, { "end": 1098.02, "start": 1092.62, "text": " augmentations themselves or the fact that you're augmenting. For example, if we look" }, { "end": 1105.26, "start": 1098.02, "text": " at this big fish, we've seen, okay, the what seems to change here is mainly the background," }, { "end": 1115.62, "start": 1105.26, "text": " where as in the jumper example, the entire level structure seems to change. So" }, { "end": 1122.3799999999999, "start": 1115.62, "text": " then the the augmentation all of a sudden is not super effective anymore actually hurts," }, { "end": 1127.5, "start": 1122.3799999999999, "text": " right? So I'm just not I'm just not super convinced by the claims you're making here." }, { "end": 1134.4599999999998, "start": 1127.5, "text": " And one of the claims I find is, in particular, rad with random crop achieves, no wait, this" }, { "end": 1145.26, "start": 1134.4599999999998, "text": " point down here. Oh, yeah, achieves 55.8% gain over pixel based PPO. Okay. 
trained with" }, { "end": 1151.42, "start": 1145.26, "text": " 100 training levels outperforms the pixel based PPO with 200 training levels on both" }, { "end": 1159.26, "start": 1151.42, "text": " big fish and star pilot environment. This shows that data augmentation can be more effective" }, { "end": 1164.58, "start": 1159.26, "text": " in learning generalizable representations compared to simply increasing the number of" }, { "end": 1172.1, "start": 1164.58, "text": " training environments. I this statement. So again, how, like, why do you compare two different" }, { "end": 1179.02, "start": 1172.1, "text": " things if you don't? Like, if you don't show that maybe they're orthogonal. In fact, they" }, { "end": 1185.5, "start": 1179.02, "text": " are probably orthogonal because even on the 200 levels you you gain over the pixel based" }, { "end": 1195.6999999999998, "start": 1185.5, "text": " PPO, right? So why the comparison and then second of all, so here we see on the 100 levels," }, { "end": 1201.62, "start": 1195.6999999999998, "text": " this method is better than the pixel based PPO. And then they claim that okay, they are" }, { "end": 1210.3, "start": 1201.62, "text": " even better on 100 levels than the pixel based PPO on 200 levels. And why that is true. If" }, { "end": 1220.2199999999998, "start": 1210.3, "text": " you know, if if if a is bigger than b, then probably a is going to be bigger than b plus" }, { "end": 1231.78, "start": 1220.22, "text": " some epsilon. And right, and and that doesn't, I just think that doesn't really warrant their" }, { "end": 1239.34, "start": 1231.78, "text": " statement where they say, Oh, look, this is even better. So as if the 100 levels of additional" }, { "end": 1246.58, "start": 1239.34, "text": " training were the standard measure of more data, like if there is going to be if you're" }, { "end": 1250.8999999999999, "start": 1246.58, "text": " better at the beginning, there's going to be a certain amount of data where you're still" }, { "end": 1258.6599999999999, "start": 1250.8999999999999, "text": " better than the other method with more data. And I don't find this super duper surprising," }, { "end": 1266.1399999999999, "start": 1258.6599999999999, "text": " but they make a big claim here out of this. Alright, so in conclusion, I hope I'm not" }, { "end": 1270.6599999999999, "start": 1266.1399999999999, "text": " too harsh on this paper, it is a cool paper. And of course, it is cool findings. But I" }, { "end": 1276.94, "start": 1270.66, "text": " have a big suspicion that the augmentation here works so well simply because of how we" }, { "end": 1283.78, "start": 1276.94, "text": " set up these RL tasks, because they're visually quite, let's say easy. And therefore, these" }, { "end": 1290.18, "start": 1283.78, "text": " augmentations that are also our sort of easy abstractions of when an image is visually" }, { "end": 1296.02, "start": 1290.18, "text": " similar, because all of these things, right, to us as humans, we say, probably doesn't" }, { "end": 1302.26, "start": 1296.02, "text": " change anything if we just rotate the image. And we this is our prejudice. And we built" }, { "end": 1308.86, "start": 1302.26, "text": " this prejudice into these simulators for the RL tasks. So they will match up extremely" }, { "end": 1314.58, "start": 1308.86, "text": " well with these augmentations. 
And that's the reason I believe these things work and" }, { "end": 1322.9, "start": 1314.58, "text": " maybe not as much the the fact that you're augmenting. Okay, well, if you like this video," }, { "end": 1328.02, "start": 1322.9, "text": " I invite you to check out the paper, subscribe to this channel, tell all your friends about" }, { "end": 1354.26, "start": 1328.02, "text": " it and leave a like in the comment. Thank you very much and bye bye." } ]
cIUtRNhY6Rw
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
TAPAS: Weakly Supervised Table Parsing via Pre-training (Paper Explained)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "bert", "nlp", "natural language processing", "wikitables", "sql", "tabular", "aggregations", "structured", "google" ]
Answering complex questions about tabular information is hard. No two tables are alike and sometimes the answer you're looking for is not even in the table and needs to be computed from a subset of the cells. Surprisingly, this model can figure it all out by itself through some clever input encoding and loss engineering. Paper: https://arxiv.org/abs/2004.02349 Code: https://github.com/google-research/tapas Abstract: Answering natural language questions over tables is usually seen as a semantic parsing task. To alleviate the collection cost of full logical forms, one popular approach focuses on weak supervision consisting of denotations instead of logical forms. However, training semantic parsers from weak supervision poses difficulties, and in addition, the generated logical forms are only used as an intermediate step prior to retrieving the denotation. In this paper, we present TAPAS, an approach to question answering over tables without generating logical forms. TAPAS trains from weak supervision, and predicts the denotation by selecting table cells and optionally applying a corresponding aggregation operator to such selection. TAPAS extends BERT's architecture to encode tables as input, initializes from an effective joint pre-training of text segments and tables crawled from Wikipedia, and is trained end-to-end. We experiment with three different semantic parsing datasets, and find that TAPAS outperforms or rivals semantic parsing models by improving state-of-the-art accuracy on SQA from 55.1 to 67.2 and performing on par with the state-of-the-art on WIKISQL and WIKITQ, but with a simpler model architecture. We additionally find that transfer learning, which is trivial in our setting, from WIKISQL to WIKITQ, yields 48.7 accuracy, 4.2 points above the state-of-the-art. Authors: Jonathan Herzig, Paweł Krzysztof Nowak, Thomas Müller, Francesco Piccinno, Julian Martin Eisenschlos Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Hi there, have a look at this table on the left. In this table, in each row you can see the following things: the name of a wrestler, the number of times that wrestler has been a champion, and the combined number of days that wrestler has been a champion, so the sum of the lengths of all their championships. Along with that there is a column that is the rank, and this is ranked by the combined days attribute. So this table is very interesting by itself, but if you look at the right we have a couple of questions, and let's try to answer them. Which wrestler had the most number of reigns? For that you need to go to the number of reigns column, you need to mentally sort it, and you'll find out that 8 is the highest number, and therefore Ric Flair is the wrestler you're looking for. Second question: the average time as champion for the top two wrestlers. Now we need to go to the top two wrestlers, we can guess that pertains to the rank, and then we want the average of these two numbers. You even have questions such as: which of the following wrestlers were ranked in the bottom three? The answers would be all of those. And then after that: out of these, who had more than one reign? And you can see that's Dan Severn. So the paper that we have here is trying to answer questions like this when given a table. As you can see, this is a pretty hard task, so I'm pretty excited to read this. The paper is called TAPAS: Weakly Supervised Table Parsing via Pre-training, by Jonathan Herzig, Paweł Krzysztof Nowak, Thomas Müller, Francesco Piccinno and Julian Martin Eisenschlos. Full disclaimer: I know these people, so I might be slightly biased. Alright, so you've already seen the task. The task is: you are given a table and a question, and you're trying to answer it. Now, it's not quite as easy as that, because the table questions come in different forms, as you have seen. Sometimes you just need to select a cell from the table, like we have here. For the first question, most number of reigns, I simply select whatever that is. So the answer here is already in the table, Ric Flair, and this they call a cell selection task. This is whenever you need to select a cell. The same goes for these at the bottom here: which of the following wrestlers are ranked in the bottom three, and out of these, which ones have more than one reign? All of these answers are in the table somewhere. The second thing is what they call a scalar answer. That is when the answer to be computed is a number that is not in the table. So this average time here, which turns out to be 3426, is nowhere to be found in the table, so a computation actually needs to be performed by the model. And lastly you have these things called ambiguous answers. The ambiguous answers refer to the case where it is a number you're looking for, a "how many", but that number happens to be in the table. You can think of this in terms of training data. If you have a task like this and in your training data you just have the question and you're given the answer 2, you could teach your model to simply select this number 2 here in the table, but that would be wrong. Because for "how many world champions are there with only one reign", simply selecting this cell is not correct: even though the cell contains the number 2, it doesn't mean the same thing. It's not counting. The correct program here would be to count the cells where there is a 1 in the reigns column, which also gives 2. And they call this situation an ambiguous answer. So you might have already guessed that a single model that does all of this needs to have multiple modes, and that's exactly what they propose. They propose a model that takes in the table and the question, and in a first step it selects its mode. The mode is either cell selection, or it is to compute something. Whenever it's cell selection, it simply has a component to select cells. But when it's a computation, it needs to decide in a second step what to compute, and then also select the appropriate cells. So this is the model. Now, systems like this have existed for a long time in table question answering, but the way we want to do it here is end-to-end with a single deep learning model, of course, because we want to be better than anything else, and the trend in deep learning is to put more and more into one model and to have it end-to-end differentiable. So you see we need multiple components: we need some sort of a mode selector, we need some sort of a cell selector, and we need a thing that decides, if we are in the compute mode, what computation is to be done. Now let me present the model that this paper proposes. This paper proposes to embed the question here, so you can see here, that's the question, into a BERT input. So this is a transformer right here, this is BERT, or any variant of BERT that you can think of. The question is embedded as natural language, and then, interestingly enough, the table right here is also embedded as language. We'll get to that in a second. But the question and the table are the input, and then the model is asked to do two things. First of all, it's asked to do an aggregation prediction. This can either be one of these programs called count, sum or average, or it can be, as you can see here, none, so no aggregation. So this handles our first two components: it can decide to perform a calculation or none, and if it is performing a calculation, it can decide to do a count, a sum or an average. Now of course the model here is not limited to those computations; you can think of extending this to any further computation. The important thing is that they output a number. Second of all, there is a cell selector. Depending on the aggregation prediction you need some cells: if you want to compute an average, you need the cells to compute the average over. So the cell selector here will select cells from the table, specifically by column and row. Since these tables usually have a header, right, this is the table header where the attributes are listed, it makes sense to first, in a first step, select which column you want to select from, and then, once you have a column, let's say this column here, in a second step say which of the cells in that column you want to select. These can be multiple cells, but the way the system is set up, it's first a column selector and then a cell selector within that column. So you can only ever get cells from the same column in this thing. Let's remember that for later. Alright, so this is what the model does; now let's look at the input. The input to the model is this here. If you refer back to the figure from before, this was in the blue box, and then here you'd have the computation selection and here you'd have the cell selection; that's how you can relate the two. Usually, if you input something into a transformer, what you want to do is embed it into token embeddings. So first you split everything you put in into what are called tokens. Tokens are things like words or word pieces; the important thing is to have a dictionary for them, and each one gets mapped to a vector. So this here is your query. You take your query as a string, you tokenize it, you get the embeddings from the embedding table, and that's your input: a sequence of token embeddings. And then you also embed the table, and this is what I find pretty cool and somewhat special in this model: the table is actually presented as just natural language. You can see here, the table is one string. It's just a single string that goes from left to right; it's just the serialized table. So for this table right here, you can see these are word pieces: this table, if I attempt to reconstruct it, is going to be a table that has as headers col1 and col2 (these are the names, so in the real example from before this would be the name of the wrestler and this would be the number of days), and then here 0, 1, 2, 3. So the table right here corresponds to this string right here; I hope you can make sense of that. So the table is just put there as one long string, and then, in order to make the model realize what the table is, you have these special embeddings. Usually in BERT you have what are called position embeddings to indicate where in the sequence a token is. In the simplest case these are embeddings for the numbers 0, 1, 2, 3, 4 and so on, for wherever the position is. You can look all of this up in the Attention Is All You Need video I've made, if you are unfamiliar with transformer inputs. Then the segment embeddings simply indicate what a token is part of. So every token that's part of the query has the segment 0 embedding, and every token that's part of the table has the segment 1 embedding. This simply tells the model: hey, this particular token is part of the question, or part of the table. Then you have the new things this paper introduces: column and row embeddings. Now, for the question these of course don't make any sense, but you have to put something there, so you just put column 0. But for the table, you see, there is a column 1 and a column 2, and, exactly, we've seen that this here is the header of column 1 and this is the header of column 2, and then it goes back: column 1, column 2, column 1, column 2. You can see here that this 0 is in column 1 and this 1 is in column 2, and this is in column 1 again. The same goes for the rows: you have row 0 for the headers, then row 1 for the first two numbers and row 2 for the second two numbers. So you see these two are in the first row and these two are in the second row. All of this information down here is to tell the model how the table looks, so if it wants to select, say, the second column from the third row, it would look at this information to see which cell to select. And then the last thing they introduce is these so-called rank embeddings. As we've seen before, if this first column here is, say, the number of days of something, so this is the number of days, and the second one is the number of reigns, so how many championships, then the table can be sorted by at most one of them. So for each cell you want to tell the model, let's extend that table by two numbers, 4 and 1, for each column, the ranking of the numbers within that column. Here it's pretty easy: this is rank 1, this is rank 2, this is rank 3; and on the left side this is rank 1, this is rank 2 down here, and this is rank 3. If you give the model this information, it will have an easier time detecting things like "give me the top two" or "give me the worst", "give me the best", "give me the highest" and so on. That's why, as you can see, the 0 and also the number 1 are embedded as rank 1, and the other two as rank 2, because they're just lower. Now, I feel they could have given a better example than this table; I feel you could actually put real names here instead of col1 and col2 to make it clearer, and I feel you could give somewhat smarter content, because if you just look at the picture, you cannot see the correspondence of these rank tokens, since in essence they look exactly like the row tokens. But fortunately we can read the text... oh, there's the table. Ha, so I had actually not seen that, but I have discerned it correctly for this particular input. Alright, I think that's half of the magic: how you encode the input in such a way. This seems to be, first of all, a pretty cool idea, but second of all, it is exactly what this kind of new regime of NLP is about: you basically put everything in as a string, you annotate it in a smart way, and that lets the model figure out a lot of stuff about the input. People used to do very different things. Given a query and a table like this, people would somehow first get the table headers, kind of guess the data types of the attributes, and then reformulate the query, maybe with a neural network, maybe with something else, into something like SQL, in order to have an SQL statement that selects the correct cells or performs the correct aggregations. That is somewhat brittle, and it's just much less deep learning than this model, so I like this part of the model. Now the problem, of course, is, as we've seen, that this is a multi-step process. First of all, if we want to build a cell selector, that's pretty easy; we've seen this. The cell selector is first a column selection and then a row selection, and this can be multiple rows. So that's fairly easy: selecting cells, either for just returning them or for aggregation. The aggregation selection is also pretty easy, because we can just use a multi-class classifier, so the classifier will simply give us a distribution, and then we see, okay, the sum aggregation is probably what the model wants. The real question is: how do we train this? And how this is trained is what I find really interesting. As we've seen, they have training data, and the training data comes in the form of tables, questions and answers. As we've seen before, we don't know how to get to those answers. So when the question is which wrestler had the most number of
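Stepping back to the input encoding described above, here is a minimal sketch of how the extra per-token IDs (segment, column, row, rank) could be assigned to a serialized query-plus-table. This is my own illustration under simplifying assumptions: the helper name is hypothetical, every cell is treated as a single token (real TAPAS uses word pieces and also adds position embeddings on top), ranks are computed lowest-first per column, and header cells get row 0 with no rank, mirroring the figure's col1/col2 example.

```python
from typing import List, Tuple

def tapas_style_ids(query_tokens: List[str],
                    header: List[str],
                    rows: List[List[float]]) -> List[Tuple[str, int, int, int, int]]:
    """Assign (token, segment, column, row, rank) IDs for a serialized table.

    Query tokens get segment 0 and column/row/rank 0; table tokens get
    segment 1, their 1-based column and row, and a per-column rank of
    their value (1 = smallest). Header cells get row 0 and rank 0.
    """
    out = [(tok, 0, 0, 0, 0) for tok in query_tokens]
    # Per-column ranks: position of each cell's value in its sorted column.
    rank = {}
    for c in range(len(header)):
        col_sorted = sorted(row[c] for row in rows)
        for r, row in enumerate(rows):
            # Ties share the rank of the first occurrence (arbitrary choice).
            rank[(r, c)] = col_sorted.index(row[c]) + 1
    for c, h in enumerate(header):                 # headers: row 0, no rank
        out.append((h, 1, c + 1, 0, 0))
    for r, row in enumerate(rows):                 # body cells, row by row
        for c, val in enumerate(row):
            out.append((str(val), 1, c + 1, r + 1, rank[(r, c)]))
    return out

# Hypothetical usage on the toy col1/col2 table from the figure:
for tok, seg, col, row, rnk in tapas_style_ids(
        ["query", "?"], ["col1", "col2"], [[0, 1], [2, 3]]):
    print(f"{tok:>5}  seg={seg} col={col} row={row} rank={rnk}")
```

Running this on the toy table prints the question tokens with all-zero table IDs, the headers at row 0, and the cells 0 and 1 at rank 1 with 2 and 3 at rank 2, matching the walkthrough above.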
So you might have already guessed that a single model that does all of this needs to sort of have multiple modes. That's exactly what they propose. So they propose a model that takes in the table and the question. And then in the first step it selects its mode. So the mode is either the cell selection or it is to compute something. And then whenever it's a cell selection it simply has a component to select cells. But when it's a compute it needs to decide in the second step what to compute and then also select the appropriate cells. So this is the model. Now this stuff like this has existed for a long time in these table answering things but the way we want to do it here is end to end with a single deep learning model of course. Because we want to be better than anything else. And the trend in deep learning is to put more and more into one model and to have it end to end differentiable. So you see we need multiple components. We need some sort of a mode selector. We need some sort of a cell collector and we need a thing that decides if we are in the compute mode what computation to be done. Now let me present the model that this paper proposes. So this paper proposes to embed the question here. So you can see here that's the question into a BERT input. So this is a transformer right here. This is BERT or any variant of BERT that you can think of. So the question is embedded as natural language and then interestingly enough the table right here is also embedded as language. We'll get to that in a second. But the question and the table are in the input and then the model is asked to do two things. First of all it's asked to do an aggregation prediction. So this can either be one of these programs called count sum average or it can be as you can see here none. So no aggregation. So this handles our first two components. It can decide to perform a calculation or none and if it is performing a calculation it can decide to do a count sum or an average. Now of course the model here is not limited to those computations. You can think of extending this to any further computation. The important thing is that they have a number as an output. Second of all there is a cell selector. So depending on this aggregation prediction you need some cells. Like if you want to compute an average you need the cells to compute an average over. So the cell selector here will select cells from the table. Specifically it goes by row and column. Sorry column and row. Since these tables usually they have a header right this is the table header where the attributes are listed. It makes sense to first in a first step select which column you want to select from and then if once you have a column let's say this column here in the second step you say which of the cells you want to select. Now these can be multiple but the way the system is set up it's first a column selector and then a cell selector within that column. So you can only ever get columns from the same cell in this thing. Let's remember that for later. Alright so this is what the model does now let's look at the input. The input to the model is this here. Now this if you refer from this before this was in this blue box and then here you'd have the computation selection and here you have the cell selection. So this is this is how you can relate that. So usually if you input something into a transformer what you want to do is you want to embed this into into a token embeddings. So first you want to split everything you put in into what are called tokens. 
Now tokens are either things like words or word pieces the important thing is to have a dictionary for it and each one gets mapped to a vector. So this here is your query. You take your query as a string and you tokenize it and you get the embeddings from the embedding table and that's your input right. So it's a sequence of token embeddings and then you also embed the table and this I find pretty cool here in this model and somewhat special is that the table is actually presented as just natural language. So you can see here the table is one string it's just a single string that goes from left to right it's just the serialized table. So this table right here you can see these are word pieces so this table if I reconstruct it if I can attempt to reconstruct it it is going to be a table that has as the headers call one call two these are the names so in days before here would be name of the wrestler and this would be number of days. And then here 0 1 2 3 so this this table right here corresponds to this string right here. I hope you can you can make sense of that. So the table is just put there as one long string and then in order to make the model realize you know what the table is you have these special embeddings. So usually in BERT you have what they're called position embeddings to indicate where in the sequence that is. So in a simpler in the simplest case these are embeddings for the numbers 0 1 2 3 4 and so on so wherever the position is. This you can all look up in the attention is all you need video I've made that if you are unfamiliar with transformer inputs. Then also the the segment embeddings simply indicate where what a token is part of. So for every token that's part of the query you see you have segment 0 embedding and for every token that's part of the table you have a segment 1 embedding. This is simply to tell the model hey this particular token is part of the question or part of the table. Then you have the new things so this paper newly introduces the following embeddings column and row embeddings. Now these for the question of course they don't make any sense but you have to put something here so you just put column 0 but for the table you see there is a column 1 and column 2 and the this exactly so we've seen that this here is the header of column 1 and this is the header of column 2 and then it goes back column 1 column 2 column 1 column 2 and you can see here this 0 is in column 1 and this one is in column 2 and this in column 1 again and the same for the rows so you have row 0 for the headers and then row 1 for the first two numbers and row 2 for the second two numbers. So this is all of this so you see these two are in the first row and these two are in the second row. All of this is to tell the model all of this information down here is to tell the model how this table looks so if it wants to select the second column from the third row it would look in this information to see which cell to select and then the last thing they introduce is this so-called rank embeddings. 
Then the last thing they introduce is the so-called rank embeddings. As we've seen before, if this first column is, say, the number of days and the second one the number of reigns (how many championships), the table can only be sorted by at most one of them. So for each cell you want to tell the model the ranking of the numbers within its column; let's extend the table by the two numbers 4 and 1 to see this. Here it's pretty easy: this is rank 1, this is rank 2, this is rank 3; and on the left side, this is rank 1, this rank 2 down here, and this rank 3. If you give the model this information, it will have a much easier time with questions like "give me the top 2", "give me the worst", "give me the best", "give me the highest", and so on. That is why, as you can see, the 0 and the 1 are embedded as rank 1 and the other two as rank 2: within their columns, they are simply the lower numbers. Now, I feel they could have given a better example than this table: you could put real names here instead of col1 and col2, and somewhat smarter content, because just from the picture you cannot see what the rank tokens correspond to; in essence they look exactly like the row tokens. But fortunately we can read the text; oh, there's the table, I had actually not seen that, but I had discerned it correctly for this particular input. Alright, I think that is half of the magic: how you encode the input into such a thing. This seems, first of all, a pretty cool idea, but second of all it is exactly what this new regime of NLP is about: you basically put everything in as a string, annotate it in a smart way, and let the model figure out a lot about the input from that. People used to do very different things: given a query and a table like this, they would somehow first get the table headers, guess the data types of the attributes, and then reformulate the query, maybe with a neural network, maybe with something else, into something like SQL, in order to have an SQL statement that selects the correct cells or performs the correct aggregations. That is somewhat brittle, and it is just much less deep learning than this model, so I like this part of the model. Now the problem, of course, is the multi-step process we've seen. If we want to build a cell selector, that is pretty easy: the cell selector is first a column selection and then a row selection, and this can be multiple rows. So selecting cells, either for just returning them or for aggregation, is fairly easy. The aggregation selection is also pretty easy, because we can use a multi-class classifier: the classifier simply gives us a distribution, and then we see, okay, the sum aggregation is probably what the model wants. The real question is how to train all of this, and how it is trained is what I find really interesting. As we've seen, the training data comes in the form of tables, questions and answers, and, as we've seen before, we don't know how to get to those answers.
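To make the supervision signal concrete, here is a minimal sketch of what one weakly supervised training example looks like: a table, a question, and only the final answer string, with no program, no aggregation label and no gold cells. The rows are abridged from the wrestler example; treat the exact numbers as illustrative.

```python
# A minimal sketch of one weakly supervised training example: only the
# final answer is given, not which cells or which aggregation produce it.
# Rows abridged from the wrestler table; numbers are illustrative.
example = {
    "table": {
        "header": ["Wrestler", "No. of reigns", "Combined days"],
        "rows": [
            ["Lou Thesz", 3, 3749],
            ["Ric Flair", 8, 3103],
        ],
    },
    "question": "Which wrestler had the most number of reigns?",
    "answer": "Ric Flair",  # the model must discover *how* to derive this
}
print(example["question"], "->", example["answer"])
```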
When the question is "Which wrestler had the most number of reigns?", we just know the answer is Ric Flair. Now, they again use a two-step process for their training data that mimics the two-step process of the model. The first step: is the answer a number? If no, then it is definitely a cell selection task, so they restrict themselves to selecting cells; and if the answer is not in the table, that just means the correct behavior is to select no cells and say "I can't answer this question". If it is a number, you again have two options: is it in the table? If it is not in the table, then it is an aggregation: a numeric answer that appears nowhere in the table must have been computed via one of these aggregations. And if the answer is a number that is in the table, we are in the ambiguous-answer setting: it could be that we need to select that cell, but it could also be that the same number is in the table by accident and actually needs to be computed from other numbers. They handle this in the most deep-learny way possible: a soft decision. What do I mean by that? Say you have the three operations count, sum and average, plus the cell selection. The cell selector tells you: I will select three cells, and the three cells contain the numbers 7, 8 and 3 (you get this by simply taking the cells where the cell selector assigns probability higher than one half). Your aggregation selection module gives you a softmax distribution over the operations: not very much on count, maybe 0.1; 0.3 on sum; and 0.6 on average. What you do is compute all of them: the count, which is 3; the sum, which is 18; and the average, which is 6 (I made a good example by accident). Then you simply weigh the outputs by their probabilities: since the model puts 0.1 on count, you take 0.1 times 3, plus 0.3 times 18, plus 0.6 times 6; that is 0.3 plus 5.4 plus 3.6, which gives 9.3. So that is how the model computes things: it puts probability mass on these operations and then takes the probability-weighted output over all of them. Now, I'm pretty sure this is somewhat questionable, because for the same numbers the sum will have a much larger variance than the average, with the count somewhere in between depending on the numbers, so simply taking the weighted mixture mixes quantities of very different scales. In any case, this mixture is the model output, and you have the correct answer; let's say the correct thing was to compute the average, so the correct answer is 6. They then simply take the squared error as the loss; actually, not quite the squared error, but an approximation to it that is quadratic up to some delta and linear beyond (a Huber-style loss), simply to be a bit more robust to outliers.
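Here is a minimal numeric sketch of that soft aggregation and the Huber-style loss; the probabilities and the delta are made up for illustration, and the paper's exact objective includes further robustness tricks beyond this.

```python
# A minimal numeric sketch of the differentiable "soft" aggregation: every
# operator is computed on the selected cells, and the outputs are mixed by
# the predicted operator probabilities.
cells = [7.0, 8.0, 3.0]
probs = {"count": 0.1, "sum": 0.3, "average": 0.6}
ops = {
    "count": float(len(cells)),          # 3
    "sum": sum(cells),                   # 18
    "average": sum(cells) / len(cells),  # 6
}
# Probability-weighted mixture of all operator outputs:
expected = sum(probs[k] * ops[k] for k in ops)  # 0.3 + 5.4 + 3.6 = 9.3

def huber(err, delta=1.0):
    # Quadratic near zero, linear in the tails -> robust to outliers.
    a = abs(err)
    return 0.5 * a * a if a <= delta else delta * (a - 0.5 * delta)

target = 6.0                              # the true (average) answer
print(expected, huber(expected - target))
```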
They do other things to be more outlier-robust as well. So this is the model output, this is the correct answer, and they simply count on the fact that this will backpropagate. If you want to make these two things closer, and you are the model, you have the option of shifting probability from the other operations onto the average operation, and that will decrease the 9.3, because weight moves away from the larger outputs. So the 9.3 is the output we got from the weighted mixture; if we shift probability from the sum onto the average, the 9.3 comes down, and that gets you closer to the answer you are looking for. But you can also achieve this another way, even more effectively: if the 9.3 is too high and we want to bring it down, we are much better off taking some of that probability and putting it on the count, because 3 is the lowest number. The only thing the gradient agrees on is that we want to take weight away from the 18, the large output. So I am extremely surprised that this works, given how super ambiguous it is what the model should do with these operations. The method is of course agnostic to what the aggregations are, but I highly doubt you can extend it arbitrarily: with many more aggregations you will, I think, run into many more of these situations where the model is entirely unsure where to put the mass, and I would be interested to see what happens on a dataset with 20 or 50 such aggregations instead of three. So this is, let's say, the interesting part. Going the other way, when you have a cell selection task, it is just about selecting cells: the cell selector, the part that does the selection, is trained to give each cell a weight, a softmax over columns and then a selection over the cells within rows, and you can train that with cross-entropy. Training this cell selector from data is pretty easy when it is a cell selection task, because the answer either is in the table, or it is not and you know to select no cell; so you do have training data saying that a particular cell is the correct cell, and you can train the model to select it. But it is actually a pretty hard task when, for example, you are looking at an average operation: not only are you not really sure it is an average operation (you just know it happens to give the correct answer), you also don't really know which cells to select for that average, because depending on which cells you select (and that is going to be a soft selection as well) the average will come out differently. So they are basically counting on this one loss to backpropagate not only through the selection of the aggregation to perform, but also into the cell selector, to set which cells to select. From this weak signal, it is almost like a reinforcement learning problem: you have a weak signal, a billion ways to get your number closer to it, and no really accurate understanding of what you need to do; you are just relying on the model, through lots and lots of data, to figure out which natural language questions map to which cell selections and aggregations.
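For the supervised cell-selection case, a minimal sketch of such a hierarchical objective could look as follows. This is illustrative (random logits, made-up targets) rather than the paper's exact formulation, which works on token-level scores.

```python
# A minimal sketch of hierarchical cell selection: a softmax over columns
# trained with cross-entropy, then independent per-cell selection within
# the target column trained with binary cross-entropy.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
col_logits = torch.randn(1, 3)        # one score per column
cell_logits = torch.randn(1, 3, 4)    # one score per cell (3 cols x 4 rows)

target_col = torch.tensor([1])        # supervision: answer sits in column 1
target_cells = torch.zeros(1, 4)
target_cells[0, 2] = 1.0              # ... in row 2 of that column

col_loss = F.cross_entropy(col_logits, target_col)
cell_loss = F.binary_cross_entropy_with_logits(
    cell_logits[0, target_col], target_cells)
loss = col_loss + cell_loss
print(float(loss))
```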
So it seems impossible, but it works. The last thing we need to talk about is the ambiguous-answer setting, and as you can imagine it is handled quite simply: they also let the model make a soft selection between the cell selection task (no aggregation) and the aggregations to be performed, and basically let the model figure out itself which is better, to aggregate or not to aggregate. Suffice it to say, I think this only works for a pretty limited set of tasks and a pretty limited set of questions. You might also have spotted that some of the questions are follow-up questions, which is another thing they build into the model; I am not really going to talk about this, but they do have this concept as well. I find it maybe a bit out of place, but perhaps it is just part of their dataset, or perhaps these companies want to move into a conversational mode where everything needs to be context-dependent. The interesting part here is really the computation of the aggregates, and specifically the question of which of these aggregations to choose; again, it is so surprising that this works, and fairly cool. I think that is the gist of the paper. They do extremely thorough evaluations and ablations on these datasets to see what really counts and what doesn't; I don't really want to go into that. Suffice it to say their results are better than anything before: I believe they are actually only on par with another model on one dataset, but they beat it on every other dataset, so that's cool. I invite you to check out this paper and look for yourself; they have the code online if you want to train a model like this yourself. Other than that, thanks for listening; if you like this content, please subscribe, like, comment, tell a friend, and bye bye.
[ { "end": 5.88, "start": 0, "text": " Hi there, have a look at this table on the left. So in this table, in each row" }, { "end": 12.82, "start": 5.88, "text": " you can see the following things. The name of a wrestler, the number of times" }, { "end": 19.16, "start": 12.82, "text": " that wrestler has been a champion and the combined number of days where that" }, { "end": 24, "start": 19.16, "text": " wrestler has been a champion or like the sum of the length of all their" }, { "end": 30.4, "start": 24, "text": " championships. Along with the column that is the rank and this is ranked by the" }, { "end": 36.44, "start": 30.4, "text": " combined days attribute. So this table is very interesting by itself but if you" }, { "end": 40.3, "start": 36.44, "text": " look at the right we have a couple of questions and let's try to answer them." }, { "end": 46.379999999999995, "start": 40.3, "text": " Which wrestler had the most number of reigns? So for that you need to go to" }, { "end": 52.379999999999995, "start": 46.379999999999995, "text": " number of reigns column and you need to mentally sort them and you'll find out" }, { "end": 58.92, "start": 52.38, "text": " that 8 is the highest number and therefore Ric Flair is the wrestler" }, { "end": 64.5, "start": 58.92, "text": " you're looking for. Second question, the average time as a champion for top two" }, { "end": 71.64, "start": 64.5, "text": " wrestlers. Now we need to go the top two wrestlers, we can guess that pertains to" }, { "end": 79.12, "start": 71.64, "text": " the rank and then so we want the average of these two numbers. You even have" }, { "end": 84.16000000000001, "start": 79.12, "text": " questions such as which of the following wrestlers were ranked in" }, { "end": 89.56, "start": 84.16000000000001, "text": " the bottom three? Which the answers would be all of those and then after that out" }, { "end": 97.80000000000001, "start": 89.56, "text": " of these who had more than one reign? And you can see that's Dan Severn. So the" }, { "end": 104.52000000000001, "start": 97.80000000000001, "text": " paper that we're having here is trying to answer questions like this if given a" }, { "end": 111.32, "start": 104.52, "text": " table. As you can see this is a pretty pretty hard task and so pretty excited" }, { "end": 118.64, "start": 111.32, "text": " to read this. The paper is called TAPAS weekly supervised table parsing via" }, { "end": 125.12, "start": 118.64, "text": " pre-training by Jonathan Herzig, Pavel Kristof Novak, Thomas Miller, Francesco" }, { "end": 131.07999999999998, "start": 125.12, "text": " Piccino and Julian Martin Eisenschloss. Full disclaimer I know these people so" }, { "end": 137.28, "start": 131.08, "text": " I might be slightly biased. Alright so you've already seen the task. The task is" }, { "end": 144.08, "start": 137.28, "text": " you are given a table and a question and you're trying to answer that. Now there" }, { "end": 150.76000000000002, "start": 144.08, "text": " it's not as easy as that but the table questions come in different" }, { "end": 155.92000000000002, "start": 150.76000000000002, "text": " forms as you have seen. Sometimes you just need to select a cell from a table" }, { "end": 162.72, "start": 155.92, "text": " like we have here. The first question I simply, most number of reigns, I simply" }, { "end": 169.6, "start": 162.72, "text": " select whatever that is. 
So the answer here is already in the table, Rick Flair" }, { "end": 176.48, "start": 169.6, "text": " and this they call a cell selection task. This is wherever you need to select a" }, { "end": 181.35999999999999, "start": 176.48, "text": " cell. The same for these bottom here so which of the following wrestlers are" }, { "end": 184.48, "start": 181.35999999999999, "text": " ranked in the bottom three and out of these which one have more than one reign?" }, { "end": 190.11999999999998, "start": 184.48, "text": " All of these answers are in the table somewhere. The second thing is what they" }, { "end": 197.67999999999998, "start": 190.11999999999998, "text": " call a scalar answer. That is when the answer to be computed is a number that" }, { "end": 207.39999999999998, "start": 197.67999999999998, "text": " is not in the table. So these average time here which turns out to be 3426 is" }, { "end": 213.79999999999998, "start": 207.39999999999998, "text": " nowhere to be found in the table. So there actually needs to be a computation" }, { "end": 220.72, "start": 213.8, "text": " performed by the model. And lastly you have these things called ambiguous" }, { "end": 229.4, "start": 220.72, "text": " answers. Now the ambiguous answers refer to a thing where it is a number that" }, { "end": 236.92000000000002, "start": 229.4, "text": " you're looking for so how many but the number here is in the table. So you can" }, { "end": 240.16000000000003, "start": 236.92000000000002, "text": " think of this in terms of training data. If you have a task like this and you" }, { "end": 244.6, "start": 240.16, "text": " have training data and you just have the question, you just have this question and" }, { "end": 252, "start": 244.6, "text": " you're given the answer 2. You can teach your model either to select" }, { "end": 259.44, "start": 252, "text": " this number here or you can teach your model that would be wrong. Because" }, { "end": 266.64, "start": 259.44, "text": " how many world champions are there with only one reign to simply select this" }, { "end": 271.88, "start": 266.64, "text": " cell here is not correct. Because that cell even though the number is 2 it" }, { "end": 277.03999999999996, "start": 271.88, "text": " doesn't mean the same thing. It's not counting right. So the correct program" }, { "end": 282.88, "start": 277.03999999999996, "text": " here would be to count the number to count the cells where there is a 1 here" }, { "end": 288.06, "start": 282.88, "text": " which is also 2. And they call this situation ambiguous answer. So you might" }, { "end": 293.36, "start": 288.06, "text": " have already guessed that a single model that does all of this needs to sort of" }, { "end": 301.6, "start": 293.36, "text": " have multiple modes. That's exactly what they propose. So they propose a model" }, { "end": 311.6, "start": 302.2, "text": " that takes in the table and the question. And then in the first step it selects its" }, { "end": 322.48, "start": 311.6, "text": " mode. So the mode is either the cell selection or it is to compute something." }, { "end": 329.72, "start": 322.48, "text": " And then whenever it's a cell selection it simply has a component to select" }, { "end": 338.32, "start": 329.72, "text": " cells. But when it's a compute it needs to decide in the second step what to" }, { "end": 352.12, "start": 338.32, "text": " compute and then also select the appropriate cells. So this is the" }, { "end": 357.92, "start": 352.12, "text": " model. 
Now this stuff like this has existed for a long time in these table" }, { "end": 362.6, "start": 357.92, "text": " answering things but the way we want to do it here is end to end with a single" }, { "end": 366.68, "start": 362.6, "text": " deep learning model of course. Because we want to be better than anything else. And" }, { "end": 372.32, "start": 366.68, "text": " the trend in deep learning is to put more and more into one model and to have" }, { "end": 378.64, "start": 372.32, "text": " it end to end differentiable. So you see we need multiple components. We" }, { "end": 383.71999999999997, "start": 378.64, "text": " need some sort of a mode selector. We need some sort of a cell collector and" }, { "end": 388.8, "start": 383.71999999999997, "text": " we need a thing that decides if we are in the compute mode what computation to" }, { "end": 396.08, "start": 388.8, "text": " be done. Now let me present the model that this paper proposes. So this paper" }, { "end": 404.56, "start": 396.08, "text": " proposes to embed the question here. So you can see here that's the question into" }, { "end": 412.96, "start": 404.56, "text": " a BERT input. So this is a transformer right here. This is BERT or any variant" }, { "end": 418.16, "start": 412.96, "text": " of BERT that you can think of. So the question is embedded as natural language" }, { "end": 427.08, "start": 418.16, "text": " and then interestingly enough the table right here is also embedded as language." }, { "end": 433.8, "start": 427.08, "text": " We'll get to that in a second. But the question and the table are in the input" }, { "end": 438.36, "start": 433.8, "text": " and then the model is asked to do two things. First of all it's asked to do an" }, { "end": 443.44, "start": 438.36, "text": " aggregation prediction. So this can either be one of these programs called" }, { "end": 451.36, "start": 443.44, "text": " count sum average or it can be as you can see here none. So no aggregation. So" }, { "end": 456.88, "start": 451.36, "text": " this handles our first two components. It can decide to perform a calculation or" }, { "end": 463.12, "start": 456.88, "text": " none and if it is performing a calculation it can decide to do a count" }, { "end": 469.12, "start": 463.12, "text": " sum or an average. Now of course the model here is not limited to" }, { "end": 474.12, "start": 469.12, "text": " those computations. You can think of extending this to any further" }, { "end": 482.44, "start": 474.12, "text": " computation. The important thing is that they have a number as an output. Second" }, { "end": 488.32, "start": 482.44, "text": " of all there is a cell selector. So depending on this aggregation prediction" }, { "end": 493.68, "start": 488.32, "text": " you need some cells. Like if you want to compute an average you need the cells to" }, { "end": 499.88, "start": 493.68, "text": " compute an average over. So the cell selector here will select cells from the" }, { "end": 507.96, "start": 499.88, "text": " table. Specifically it goes by row and column. Sorry column and row. Since" }, { "end": 513.04, "start": 507.96, "text": " these tables usually they have a header right this is the table header where the" }, { "end": 520.0799999999999, "start": 513.04, "text": " attributes are listed. 
It makes sense to first in a first step select which column" }, { "end": 526.28, "start": 520.0799999999999, "text": " you want to select from and then if once you have a column let's say this column" }, { "end": 534.28, "start": 526.28, "text": " here in the second step you say which of the cells you want to select. Now these" }, { "end": 540, "start": 534.28, "text": " can be multiple but the way the system is set up it's first a column selector" }, { "end": 546.04, "start": 540, "text": " and then a cell selector within that column. So you can only ever get columns" }, { "end": 553.24, "start": 546.04, "text": " from the same cell in this thing. Let's remember that for later. Alright so this" }, { "end": 561.52, "start": 553.24, "text": " is what the model does now let's look at the input. The input to the model is this" }, { "end": 566.6, "start": 561.52, "text": " here. Now this if you refer from this before this was in this blue box and then" }, { "end": 571.0400000000001, "start": 566.6, "text": " here you'd have the computation selection and here you have the cell" }, { "end": 577.96, "start": 571.0400000000001, "text": " selection. So this is this is how you can relate that. So usually if you input" }, { "end": 584.4, "start": 577.96, "text": " something into a transformer what you want to do is you want to embed this into" }, { "end": 591.64, "start": 584.4, "text": " into a token embeddings. So first you want to split everything you put in into" }, { "end": 597.4, "start": 591.64, "text": " what are called tokens. Now tokens are either things like words or word pieces" }, { "end": 601.48, "start": 597.4, "text": " the important thing is to have a dictionary for it and each one gets mapped" }, { "end": 611.88, "start": 601.48, "text": " to a vector. So this here is your query. You take your query as a string" }, { "end": 617.72, "start": 611.88, "text": " and you tokenize it and you get the embeddings from the embedding table and" }, { "end": 623.96, "start": 617.72, "text": " that's your input right. So it's a sequence of token embeddings and then" }, { "end": 630.1600000000001, "start": 623.96, "text": " you also embed the table and this I find pretty cool here in this model and" }, { "end": 638, "start": 630.1600000000001, "text": " somewhat special is that the table is actually presented as just natural" }, { "end": 650.08, "start": 638, "text": " language. So you can see here the table is one string it's just a single string" }, { "end": 656.08, "start": 650.08, "text": " that goes from left to right it's just the serialized table. So this table right" }, { "end": 663.24, "start": 656.08, "text": " here you can see these are word pieces so this table if I reconstruct it if I" }, { "end": 670.96, "start": 663.24, "text": " can attempt to reconstruct it it is going to be a table that has as the" }, { "end": 678.92, "start": 670.96, "text": " headers call one call two these are the names so in days before here would be" }, { "end": 699.3199999999999, "start": 678.92, "text": " name of the wrestler and this would be number of days. And then here 0 1 2 3 so" }, { "end": 708.5999999999999, "start": 699.3199999999999, "text": " this this table right here corresponds to this string right here. I hope you can" }, { "end": 713.84, "start": 708.6, "text": " you can make sense of that. 
So the table is just put there as one long string and" }, { "end": 720.76, "start": 713.84, "text": " then in order to make the model realize you know what the table is you have" }, { "end": 724.36, "start": 720.76, "text": " these special embeddings. So usually in BERT you have what they're called" }, { "end": 730.48, "start": 724.36, "text": " position embeddings to indicate where in the sequence that is. So in a simpler in" }, { "end": 737.0400000000001, "start": 730.48, "text": " the simplest case these are embeddings for the numbers 0 1 2 3 4 and so on so" }, { "end": 742.16, "start": 737.04, "text": " wherever the position is. This you can all look up in the attention is all you" }, { "end": 747.56, "start": 742.16, "text": " need video I've made that if you are unfamiliar with transformer inputs. Then" }, { "end": 756, "start": 747.56, "text": " also the the segment embeddings simply indicate where what a token is part of. So" }, { "end": 760.9599999999999, "start": 756, "text": " for every token that's part of the query you see you have segment 0 embedding and" }, { "end": 764.68, "start": 760.9599999999999, "text": " for every token that's part of the table you have a segment 1 embedding. This is" }, { "end": 770.2399999999999, "start": 764.68, "text": " simply to tell the model hey this particular token is part of the question" }, { "end": 775.28, "start": 770.2399999999999, "text": " or part of the table. Then you have the new things so this paper newly" }, { "end": 780.5999999999999, "start": 775.28, "text": " introduces the following embeddings column and row embeddings. Now these for" }, { "end": 784, "start": 780.5999999999999, "text": " the question of course they don't make any sense but you have to put something" }, { "end": 793.04, "start": 784, "text": " here so you just put column 0 but for the table you see there is a column 1" }, { "end": 801.68, "start": 793.04, "text": " and column 2 and the this exactly so we've seen that this here is the header" }, { "end": 806.28, "start": 801.68, "text": " of column 1 and this is the header of column 2 and then it goes back column 1" }, { "end": 814.5999999999999, "start": 806.28, "text": " column 2 column 1 column 2 and you can see here this 0 is in column 1 and this" }, { "end": 821.04, "start": 814.5999999999999, "text": " one is in column 2 and this in column 1 again and the same for the rows so you" }, { "end": 828.76, "start": 821.04, "text": " have row 0 for the headers and then row 1 for the first two numbers and row 2" }, { "end": 834.3199999999999, "start": 828.76, "text": " for the second two numbers. So this is all of this so you see these two are in" }, { "end": 838.0799999999999, "start": 834.3199999999999, "text": " the first row and these two are in the second row. All of this is to tell the" }, { "end": 846.1999999999999, "start": 838.0799999999999, "text": " model all of this information down here is to tell the model how this table" }, { "end": 851.5200000000001, "start": 846.2, "text": " looks so if it wants to select the second column from the third row it" }, { "end": 857.76, "start": 851.5200000000001, "text": " would look in this information to see which cell to select and then the last" }, { "end": 864.32, "start": 857.76, "text": " thing they introduce is this so-called rank embeddings. 
Now as we've seen before" }, { "end": 872.0400000000001, "start": 864.32, "text": " if this first column here is maybe the sorry the number of days of something so" }, { "end": 877.76, "start": 872.04, "text": " this is the number of days and this second one is the number of reigns so" }, { "end": 884.0799999999999, "start": 877.76, "text": " how many championships the table can only be sorted at maximum by one of them" }, { "end": 891.12, "start": 884.0799999999999, "text": " so you want to sort of for each cell you want to tell the model let's extend that" }, { "end": 899.76, "start": 891.12, "text": " table by two numbers 4 and 1 so for each column you want to tell the model the" }, { "end": 904, "start": 899.76, "text": " ranking of the numbers so here it's pretty easy this is rank 1 this is rank" }, { "end": 909.76, "start": 904, "text": " 2 this is rank 3 further on the left side this is rank 1 this is rank 2 down" }, { "end": 913.6, "start": 909.76, "text": " here and this is rank 3 so the model has an will have an if you give this" }, { "end": 920.68, "start": 913.6, "text": " information the model have an easier time to detect like give me the top 2 or" }, { "end": 925.2, "start": 920.68, "text": " something like this give me the worst give me the best give me the highest and" }, { "end": 932.5600000000001, "start": 925.2, "text": " so on the model will have an easier time doing that so that's why the rank here" }, { "end": 942.1600000000001, "start": 932.5600000000001, "text": " as you can see the zero and as also the number one are embedded rank one and the" }, { "end": 948.72, "start": 942.1600000000001, "text": " other two rank two because they're just lower now I don't feel I feel they could" }, { "end": 953.08, "start": 948.72, "text": " could have given a better example than this table I feel you could actually" }, { "end": 960.2800000000001, "start": 953.08, "text": " put real names here to make clearer not call one and call two and I feel you" }, { "end": 968.32, "start": 960.2800000000001, "text": " could give a somewhat smarter content because if you just look at the picture" }, { "end": 974.24, "start": 968.32, "text": " here you cannot see the correspondence of these rank tokens because in essence" }, { "end": 981.76, "start": 974.24, "text": " they are exactly equal as the row tokens but fortunately we can read the text oh" }, { "end": 988.36, "start": 981.76, "text": " there's the table ha so I have actually I've not seen that but I have discerned" }, { "end": 996.04, "start": 988.36, "text": " it correctly for this particular for this particular input alright I think" }, { "end": 1000.52, "start": 996.04, "text": " that's the the half of the magic is how you encode the input in such a thing and" }, { "end": 1007.28, "start": 1000.52, "text": " this seems to be first of all a pretty cool idea but second of all it exactly" }, { "end": 1014.1999999999999, "start": 1007.28, "text": " is what this kind of new regime of NLP is about is that you basically put" }, { "end": 1019.56, "start": 1014.1999999999999, "text": " everything as a string you annotate it in a smart way and that lets the model" }, { "end": 1026.04, "start": 1019.56, "text": " figure out a lot of stuff about the input people used to people used to do" }, { "end": 1032.92, "start": 1026.04, "text": " the very different things so people if given a query and a table like this what" }, { "end": 1038.16, "start": 1032.92, "text": " people would do is they would somehow first of all get the table headers and" 
}, { "end": 1045.16, "start": 1038.16, "text": " and kind of guess the data types of the attributes and then they would formulate" }, { "end": 1049.48, "start": 1045.16, "text": " reformulate the query maybe also with a neural network maybe with something else" }, { "end": 1056.6000000000001, "start": 1049.48, "text": " into something like SQL in order to actually have an SQL statement to select" }, { "end": 1062.28, "start": 1056.6000000000001, "text": " the correct cells or perform the correct aggregations and that is somewhat" }, { "end": 1068.76, "start": 1062.28, "text": " brittle and it's just much less deep learning than this model so I like this" }, { "end": 1074.68, "start": 1068.76, "text": " part of the model now the problem of course is as we've seen in this multi" }, { "end": 1080.76, "start": 1074.68, "text": " step process so how do we first of all if you build if we want to build a cell" }, { "end": 1085.6399999999999, "start": 1080.76, "text": " selector that's pretty easy right we've seen this so we the cell selector is" }, { "end": 1096.0400000000002, "start": 1085.64, "text": " first column column selection and then second row selection and this can be" }, { "end": 1102.96, "start": 1096.0400000000002, "text": " multiple rows so that's fairly easy selecting cells either for just returning" }, { "end": 1110.3200000000002, "start": 1102.96, "text": " or for aggregation pretty easy but how do we do the actually the aggregation" }, { "end": 1114.48, "start": 1110.3200000000002, "text": " selection is also pretty easy because we can just do a multi class classifier" }, { "end": 1119.24, "start": 1114.48, "text": " right so the classifier will simply tell us a give us a distribution and then we" }, { "end": 1125.8, "start": 1119.24, "text": " see okay the sum aggregation is probably here the the what the model wants the" }, { "end": 1133.28, "start": 1125.8, "text": " real question is how do we train this and how this is trained is what I find" }, { "end": 1138.88, "start": 1133.28, "text": " really interesting so as we've seen they have training data the training data" }, { "end": 1144.96, "start": 1138.88, "text": " comes in the form of tables questions and answers as we've seen before we don't" }, { "end": 1153.0800000000002, "start": 1144.96, "text": " know how to get to those answers so when the question is which wrestler had the" }, { "end": 1157.1200000000001, "start": 1153.0800000000002, "text": " most number of rains we just know the answer is a Ric flair now they they do" }, { "end": 1162, "start": 1157.1200000000001, "text": " again a two-step process for their training data that mimics the two step" }, { "end": 1169.52, "start": 1162, "text": " process of the model so the first step is is the answer a number is the answer" }, { "end": 1185.12, "start": 1169.52, "text": " a number if no then it is definitely a cell selection task so they if it's not" }, { "end": 1190.64, "start": 1185.12, "text": " a number they just restrict themselves to selecting cells if the answer is not" }, { "end": 1196.68, "start": 1190.64, "text": " in the table then that just means that the correct thing is to select no cells" }, { "end": 1203.72, "start": 1196.68, "text": " and just say I can't answer this question if it is a number then again" }, { "end": 1217.76, "start": 1203.72, "text": " you have two options so is it in the table if yes we are in a weird situation" }, { "end": 1234.16, "start": 1217.76, "text": " if no not in table then it is an aggregation so if it is a number that 
is" }, { "end": 1239.24, "start": 1234.16, "text": " not in the table that means that the answer is a number there's not in the" }, { "end": 1243.44, "start": 1239.24, "text": " table that means the answer must be computed via one of these aggregations" }, { "end": 1252, "start": 1243.44, "text": " and if the answer is a number but is in the table then we are in this ambiguous" }, { "end": 1258.04, "start": 1252, "text": " answer setting where the it could be that we need to select the cell but it" }, { "end": 1263.3200000000002, "start": 1258.04, "text": " could also be that the same number by accident is in the table but actually" }, { "end": 1268.72, "start": 1263.3200000000002, "text": " needs to be computed from other numbers and they do this in the most deep" }, { "end": 1277.56, "start": 1268.72, "text": " blurny way possible is that they do basically a soft decision here so" }, { "end": 1285.96, "start": 1277.56, "text": " they let the model when they let it select what to compute they let it make" }, { "end": 1290.48, "start": 1285.96, "text": " a soft decision what do I mean by that so let's say you have these three" }, { "end": 1296.32, "start": 1290.48, "text": " operations count sum and average and you have the cell selection so the cell" }, { "end": 1301.52, "start": 1296.32, "text": " selector will basically tell you I will select three cells the three cells" }, { "end": 1308.12, "start": 1301.52, "text": " contain the number seven the number eight and the number three alright so" }, { "end": 1311.28, "start": 1308.12, "text": " and the question was I don't even know what the question was but the cell" }, { "end": 1315.12, "start": 1311.28, "text": " selector tells you these three cells are to be selected you do this by simply" }, { "end": 1319.08, "start": 1315.12, "text": " selecting the cells where the cell selector has a higher probability than" }, { "end": 1326.48, "start": 1319.08, "text": " one half now your your aggregation selection module gives you a softmax" }, { "end": 1335.32, "start": 1326.48, "text": " distribution over over the actions so it's not very much count here maybe" }, { "end": 1343.48, "start": 1335.32, "text": " that's 0.1 this here is maybe 0.3 and this is the 0.6 what you do is you" }, { "end": 1348.1599999999999, "start": 1343.48, "text": " simply compute all of them so you want to compute the count here which is three" }, { "end": 1356, "start": 1348.16, "text": " you want to compute the sum here which is 18 and then you want to compute the" }, { "end": 1364.96, "start": 1356, "text": " average which is six ha I made a good example by accident and then you simply" }, { "end": 1371.44, "start": 1364.96, "text": " weigh the outputs here by their probabilities so you say since the model" }, { "end": 1378.3600000000001, "start": 1371.44, "text": " wants point one puts what point one probability on the count I'm going to" }, { "end": 1391.1200000000001, "start": 1378.3600000000001, "text": " have 0.1 times 3 plus it wants point three times this so 0.3 times 18 plus" }, { "end": 1413.52, "start": 1391.12, "text": " 0.6 times 6 now I'm not gonna so this is 6 plus point three plus 3.6 9.9 so that" }, { "end": 1417.8799999999999, "start": 1413.52, "text": " that's how the model computes things it simply puts probability on these" }, { "end": 1423.5600000000002, "start": 1417.88, "text": " operations here and then you simply take a weighted output with respect to the" }, { "end": 1429.2, "start": 1423.5600000000002, "text": " computation of all those 
things now I'm pretty sure that's completely invalid" }, { "end": 1434.0800000000002, "start": 1429.2, "text": " because for the same numbers for example the sum is going to have a much larger" }, { "end": 1442.96, "start": 1434.0800000000002, "text": " like variance than the average and and that's somewhat going the count maybe" }, { "end": 1447.5600000000002, "start": 1442.96, "text": " somewhere in between depending on the numbers so this just to take the weighted" }, { "end": 1455.2, "start": 1447.56, "text": " average here and then of course right so what they do is they do have this this" }, { "end": 1458.08, "start": 1455.2, "text": " is the model output and you have the correct answer let's say the correct" }, { "end": 1462.36, "start": 1458.08, "text": " answer was actually was to compute the the average so the correct answers six" }, { "end": 1468.32, "start": 1462.36, "text": " so what they do is simply they take the squared error and that's their loss" }, { "end": 1472.56, "start": 1468.32, "text": " actually they don't take the squared error they take a approximation to the" }, { "end": 1479.84, "start": 1472.56, "text": " squared error which is square until some Delta and then it's linear and this is" }, { "end": 1485.28, "start": 1479.84, "text": " simply to be a bit more outlier robust and they do other things to be more" }, { "end": 1490.84, "start": 1485.28, "text": " outlier robust but this so this is the model output and this is the correct" }, { "end": 1498.24, "start": 1490.84, "text": " answer and they simply count on the fact that this will this will back propagate" }, { "end": 1505.72, "start": 1498.24, "text": " so if you want to make these two things closer if you're the model right you" }, { "end": 1513.8, "start": 1505.72, "text": " have the option of simply putting more weight from the from the other ones on" }, { "end": 1521.8, "start": 1513.8, "text": " to the average operation and that will decrease the 9.9 because you as you can" }, { "end": 1539.52, "start": 1521.8, "text": " see both of these numbers will get smaller and no wait this isn't the yes" }, { "end": 1544.76, "start": 1539.52, "text": " sorry so you will you will decrease these numbers so this is the output we" }, { "end": 1550.04, "start": 1544.76, "text": " got from the weighted average right so if we decrease these weights you will" }, { "end": 1555.56, "start": 1550.04, "text": " put weight from here to here that will bring the number 9.9 down and that will" }, { "end": 1561.72, "start": 1555.56, "text": " get you closer to the answer you're looking for but you can also achieve" }, { "end": 1569.24, "start": 1561.72, "text": " this by you can achieve this even more right so this 9.9 is too high if we want" }, { "end": 1574.6399999999999, "start": 1569.24, "text": " to bring the 9.9 down we're much better off by taking some of that output and" }, { "end": 1580.1200000000001, "start": 1574.64, "text": " actually putting on this here because three is the lowest number right the" }, { "end": 1585.6000000000001, "start": 1580.1200000000001, "text": " only agreement here is that we want to take weight away from the 18 from the" }, { "end": 1593.1200000000001, "start": 1585.6000000000001, "text": " large one so I'm extremely surprised that this works given that it is so" }, { "end": 1601.0600000000002, "start": 1593.1200000000001, "text": " super ambiguous what the model should do with these operations and I I highly" }, { "end": 1605.08, "start": 1601.06, "text": " doubt that you can extend this 
so it's of course agnostic of what these" }, { "end": 1611.8799999999999, "start": 1605.08, "text": " aggregations are but to be able to extend this to many more aggregations" }, { "end": 1616.32, "start": 1611.8799999999999, "text": " is will I think lead to much more of these situations where the model is" }, { "end": 1622.08, "start": 1616.32, "text": " entirely unsure of where to put the mass of where to put the weight and I would be" }, { "end": 1627.04, "start": 1622.08, "text": " interested to see what happens if you have a data set with like 20 or 50 of" }, { "end": 1635.1599999999999, "start": 1627.04, "text": " these aggregations and not just three so this is the this is the let's say the" }, { "end": 1639.44, "start": 1635.1599999999999, "text": " the interesting part here the other if you go the other way when you have this" }, { "end": 1646.56, "start": 1639.44, "text": " cell selection task it is just to select a cell right and then you simply have" }, { "end": 1654.12, "start": 1646.56, "text": " the cell selector that part here that does the selection that you also you" }, { "end": 1658.7199999999998, "start": 1654.12, "text": " train every time simply to give each cell a weight right so this this is" }, { "end": 1663.56, "start": 1658.7199999999998, "text": " simply the softmax over column and then the softmax over rows and you can train" }, { "end": 1670.7199999999998, "start": 1663.56, "text": " that using the cross entropy now training this cell selector from data is" }, { "end": 1676.4799999999998, "start": 1670.7199999999998, "text": " pretty easy when it's a cell selection task right because the answer is in the" }, { "end": 1682.4799999999998, "start": 1676.4799999999998, "text": " table and or is not in the table and then you know to select no cell so you" }, { "end": 1686.88, "start": 1682.48, "text": " do have the training data that a particular cell is the correct cell and" }, { "end": 1693.04, "start": 1686.88, "text": " you can train the model to select that cell but it is actually a pretty hard" }, { "end": 1698.64, "start": 1693.04, "text": " task if it is for example you're looking for an average operation because not" }, { "end": 1702.48, "start": 1698.64, "text": " only do you are you not really sure that it's an average operation you just know" }, { "end": 1707.2, "start": 1702.48, "text": " that that kind of gives you the correct answer you also don't really know which" }, { "end": 1713.72, "start": 1707.2, "text": " cells to select for this average operation right because depending on" }, { "end": 1716.64, "start": 1713.72, "text": " which cells you select and of course that's going to be a soft selection as" }, { "end": 1722.92, "start": 1716.64, "text": " well the the average answer the average will be different depending on which" }, { "end": 1727.88, "start": 1722.92, "text": " cells you select so they're basically counting on this loss here to back" }, { "end": 1733.2, "start": 1727.88, "text": " propagate not only through the the selection of the aggregation to perform" }, { "end": 1741.68, "start": 1733.2, "text": " but also to the cell selector to set which cells to to select so from this" }, { "end": 1745.3600000000001, "start": 1741.68, "text": " weak signal it's almost like the reinforcement learning problem where you" }, { "end": 1749.76, "start": 1745.3600000000001, "text": " have the weak signal and you have like a billion ways to get your number closer" }, { "end": 1756.3600000000001, "start": 1749.76, "text": " to that signal and not 
not really accurate understanding of what you need" }, { "end": 1760.32, "start": 1756.3600000000001, "text": " to do is you're just relying on the model through lots and lots and lots and" }, { "end": 1765.3999999999999, "start": 1760.32, "text": " lots of data to kind of figure out which natural language questions to map to" }, { "end": 1772.76, "start": 1765.3999999999999, "text": " which cell selection and aggregation so this is it's a it seems like impossible" }, { "end": 1778.48, "start": 1772.76, "text": " but it works the last thing we need to talk about is this ambiguous answer" }, { "end": 1782.6, "start": 1778.48, "text": " setting and as you can imagine it's pretty simple that they also let the" }, { "end": 1787.96, "start": 1782.6, "text": " model do and a soft selection between the cell selection tasks so no" }, { "end": 1792.32, "start": 1787.96, "text": " aggregation and the aggregations to be performed and basically let the model" }, { "end": 1798.24, "start": 1792.32, "text": " figure out itself which one is better to do an aggregation or to do no" }, { "end": 1807.6000000000001, "start": 1798.24, "text": " aggregation suffice to say this this only works for pretty I think I think it" }, { "end": 1811.24, "start": 1807.6000000000001, "text": " only works for pretty limited amount of tasks pretty limited amount of questions" }, { "end": 1815.24, "start": 1811.24, "text": " and you might have spotted there even these questions that are follow-up" }, { "end": 1819.76, "start": 1815.24, "text": " questions which are another thing they build into the model and I don't I'm not" }, { "end": 1824.84, "start": 1819.76, "text": " really gonna talk about this but they do have this concept as well which I find" }, { "end": 1828.52, "start": 1824.84, "text": " maybe a bit out of place but maybe it's just part of their data set somewhere" }, { "end": 1834.36, "start": 1828.52, "text": " maybe it's just these companies want to get into this conversational mode so" }, { "end": 1839.08, "start": 1834.36, "text": " everything needs to be context dependent at the interesting part here is really" }, { "end": 1844.64, "start": 1839.08, "text": " the computation of the aggregates and specifically the question of which of" }, { "end": 1849.8400000000001, "start": 1844.64, "text": " these aggregations to choose and this again this is so surprising that it" }, { "end": 1857.6000000000001, "start": 1849.8400000000001, "text": " works and fairly fairly cool I think that is the gist of the paper they do" }, { "end": 1863.8000000000002, "start": 1857.6000000000001, "text": " extremely thorough evaluations here on these data sets and ablations to see" }, { "end": 1869.44, "start": 1863.8000000000002, "text": " what really counts and what doesn't I don't really want to go into that safe" }, { "end": 1875.04, "start": 1869.44, "text": " to say their results are better than anything else before I believe they I" }, { "end": 1880.8400000000001, "start": 1875.04, "text": " believe they're actually on par with another model but in one data set but" }, { "end": 1886.92, "start": 1880.8400000000001, "text": " they beat them on every other data set so that's you know that's cool I don't" }, { "end": 1893.3200000000002, "start": 1886.92, "text": " think there was a bar nevermind I invite you to check out this paper look for" }, { "end": 1897, "start": 1893.3200000000002, "text": " yourself they have the code online if you want to train a model like this" }, { "end": 1900.92, "start": 1897, "text": " yourself 
other than that thanks for listening if you like this content" }, { "end": 1927.8000000000002, "start": 1900.92, "text": " please subscribe like comment tell a friend and bye bye" } ]
PDRtyrVskMU
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Chip Placement with Deep Reinforcement Learning (Paper Explained)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "reinforcement learning", "deep reinforcement learning", "gans", "gan", "deconvolution", "computer chip", "gpu", "tpu", "fpga", "netlist", "constrained", "google" ]
The AI Singularity is here! Computers designing new computers! It takes human experts multiple weeks to design new computer chips. What looks like a large game of Tetris is actually a very complex optimization problem. This paper uses Deep Reinforcement Learning to solve this optimization both faster and better than humans. https://arxiv.org/abs/2004.10746 Abstract: In this work, we present a learning-based approach to chip placement, one of the most complex and time-consuming stages of the chip design process. Unlike prior methods, our approach has the ability to learn from past experience and improve over time. In particular, as we train over a greater number of chip blocks, our method becomes better at rapidly generating optimized placements for previously unseen chip blocks. To achieve these results, we pose placement as a Reinforcement Learning (RL) problem and train an agent to place the nodes of a chip netlist onto a chip canvas. To enable our RL policy to generalize to unseen blocks, we ground representation learning in the supervised task of predicting placement quality. By designing a neural architecture that can accurately predict reward across a wide variety of netlists and their placements, we are able to generate rich feature embeddings of the input netlists. We then use this architecture as the encoder of our policy and value networks to enable transfer learning. Our objective is to minimize PPA (power, performance, and area), and we show that, in under 6 hours, our method can generate placements that are superhuman or comparable on modern accelerator netlists, whereas existing baselines require human experts in the loop and take several weeks. Authors: Azalia Mirhoseini, Anna Goldie, Mustafa Yazgan, Joe Jiang, Ebrahim Songhori, Shen Wang, Young-Joon Lee, Eric Johnson, Omkar Pathak, Sungmin Bae, Azade Nazi, Jiwoo Pak, Andy Tong, Kavya Srinivasa, William Hang, Emre Tuncer, Anand Babu, Quoc V. Le, James Laudon, Richard Ho, Roger Carpenter, Jeff Dean Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Hi there! Today we're looking at Chip Placement with Deep Reinforcement Learning by Azalia Mirhoseini, Anna Goldie and a long list of authors that I don't have the stamina to read out; I'm sorry. This work is a cool application of reinforcement learning to the real world, and we're going to go through it; the cool thing about it is that it pulls together parts from so many different areas of machine learning, and also from chip engineering. So what's the fundamental problem? The fundamental problem of chip design is this: you have a canvas, an empty chip, and you want to build a computer chip. What you are given is a so-called netlist. The netlist contains all the parts that you want on that computer chip, together with their shapes and sizes, so you can imagine this a bit like a game of Tetris: here is this part, then this part, then maybe this part, and also this part; many, many parts. As I understand it, there can be thousands of parts; you can group them together to some extent, but there are still a lot of them. The netlist also contains information about how they are connected: for each of these parts you have a list of which other parts it must be connected to. So maybe it says: this part here needs to be connected to those three parts, and for each of those you would also have such a list. You can represent this as an adjacency matrix, but ultimately it is a graph over these nodes. Your goal now is to place those things on the board. For example, we place this one right here, the second one maybe here, and the third one maybe here. You can imagine, if this is a CPU, well, look, I have no clue of chip design, but I imagine it like this: this is your clock that you need on there, these are your NAND gates (pretty important for a CPU), this is your floating point unit (also pretty important), and so on. So you need to place these things and then connect them with wires. The wires are of course etched into the board, but you need to route them according to the netlist: maybe the algorithm that came up with the chip tells you these components need to be connected like this, and if you lay the components out like that, you can draw the wires. So you want to go from the thing on the right to the thing on the left, and your goal, in order to get the fastest possible computer chip, comes down to three things. First of all, density: this basically just means you cannot place stuff on top of other stuff, so you could not place a block right here, because the clock is already there. Second, the wires, and specifically the length of the wires: this thing here is a pretty short wire, which means the signal travels fast; this thing here is a long wire, so the signal travels more slowly. The faster you want your signal to go, the shorter you have to make your wires, so you want to keep the total wire length as short as possible.
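A standard cheap proxy for wire length in placement is the half-perimeter wirelength (HPWL): for each net, take the half-perimeter of the bounding box of its pins. Here is a minimal sketch under that assumption; the component names, coordinates and nets are made up, and the paper's exact cost additionally handles congestion and density.

```python
# A minimal sketch of the half-perimeter wirelength (HPWL) proxy commonly
# used to approximate total wire length: for every net (a set of
# components that must be connected), take the half-perimeter of the
# bounding box of its placed components.
def hpwl(nets, placement):
    total = 0.0
    for net in nets:                      # net = list of component names
        xs = [placement[c][0] for c in net]
        ys = [placement[c][1] for c in net]
        total += (max(xs) - min(xs)) + (max(ys) - min(ys))
    return total

placement = {"clock": (0, 0), "nand": (3, 1), "fpu": (1, 4)}
nets = [["clock", "nand"], ["clock", "nand", "fpu"]]
print(hpwl(nets, placement))  # (3 + 1) + (3 + 4) = 11.0
```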
Easy enough, right? It takes human experts, combined with state-of-the-art algorithms, multiple weeks to design chips like this. That's the fundamental problem, and this paper uses reinforcement learning to solve it in a few hours. So how does it do this? The problem is set up as a sequential decision process, so you take one action at a time. You start off with what they call the chip canvas, which is just empty; this is your state. Then the agent gets to decide where to place the next thing. How do you decide what the next thing is? I believe they simply go by size: they take the largest component of the netlist first and then just go down the netlist. So you tell the agent: hey agent, I want you to place this thing next; and the agent tells you: I will place it right here. Then again you tell the agent: I have this next thing, where do you want to place it? Along with that come all the connections, everything the part needs to be connected to, and also the entire remaining list, so the agent can think about what is still to come, what it hasn't placed yet. All of that goes into the decision, and the agent tells you: okay, I want to place this here. At the end you end up with this filled board. Now, this isn't actually the end. After you have placed all your things, another method comes in, a so-called force-directed method, and places yet more things. What you actually place with the agent are only the things called macros. I have no idea exactly what those are or how they differ from the standard cells, but apparently you then use a force-directed method, which you can think of as just an algorithm you run, to place the standard cells, and those are these gray blobs here. At the end of all of this you can finally evaluate how good your design is: you get a reward that is a mixture of wire length and congestion. Actually, they use an approximation to wire length and an approximation to congestion, because they need to evaluate this quickly, but it's highly correlated with the true wire length and congestion, and the negative of that is going to be your reward.
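As a rough illustration (not the paper's exact formulas), the reward could be a negative weighted sum of a wirelength proxy and a congestion proxy. Half-perimeter wirelength (HPWL) is a common cheap stand-in for wirelength in placement; the weighting and the congestion estimate below are assumptions for the sketch:

    def hpwl(net_pins):
        # Half-perimeter wirelength of one net: half the perimeter of the
        # bounding box around all pin coordinates (a standard cheap proxy).
        xs = [x for x, y in net_pins]
        ys = [y for x, y in net_pins]
        return (max(xs) - min(xs)) + (max(ys) - min(ys))

    def reward(nets, congestion_estimate, lam=0.5):
        # Negative cost: shorter wires and less congestion mean higher reward.
        wirelength = sum(hpwl(pins) for pins in nets)
        return -(wirelength + lam * congestion_estimate)

    r = reward(nets=[[(0, 0), (3, 4)], [(1, 1), (2, 5), (4, 2)]],
               congestion_estimate=2.0)   # -> -15.0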
So in terms of a reinforcement learning problem, this is pretty nasty, because as you can see, you get a reward of zero for every step, and only at the very end do you get your true reward. It's actually worse than that: from here to here your agent gets to perform actions, this is action time, and in the usual sparse-reward tasks you at least get your reward at the end of that action time. Not here: at the end of the action time, an algorithm over which the agent has no control, this force-directed method, comes in and does a bunch of things, and only then do you get your reward. So the agent must purposefully leave room for whatever that algorithm is going to do; it needs to learn that as well. As far as reinforcement learning goes, this is a pretty hard problem. So now we have an environment, which is the canvas here, and you can consider the force-directed method to be part of the environment, as well as the reward giver and the netlist, and we have the agent that can take actions. Now, how does the agent take actions? By the way, one thing that was a bit confusing for me: for a given reinforcement learning problem here, the netlist is always the same. If you're coming from deep learning, you're used to many, many different training samples; in this case the netlist, the goal, is always the same. You can think of it like a reinforcement learning agent for the game of chess where it's always the same chess position that you're trying to optimize. This is the difference to, let's say, supervised learning: in supervised learning, once you know the label, the solution to a particular data point, you're happy, that data point is no longer interesting, and you want to generalize. Here, even though they do generalize later, you can give the system a single problem, and a solution to that single problem is already valuable, because it can be better than any solution humanity has come up with so far. So always keep in mind that we're working on one single netlist, one problem: to optimally place this netlist. An episode is simply to place all the things; then you get a reward, then you go back to the beginning and do it all over again on the same problem, just trying to do better each time.
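As a sketch of that episode structure (entirely schematic: env, agent and their methods are hypothetical stand-ins, not the paper's code), the reward stays zero until the terminal step, after which the force-directed placement and the cost evaluation run inside the environment:

    def run_episode(env, agent):
        state = env.reset()                  # empty chip canvas
        for macro in env.macros_by_size():   # largest component first, as described
            action = agent.place(state, macro)       # pick a grid cell for this macro
            state, reward, done = env.step(action)
            assert reward == 0 or done       # sparse reward: zero until the end
        # Once all macros are placed, the environment runs the force-directed
        # placement of standard cells and returns the final negative cost.
        return reward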
So how does this work? By the way, the paper has great technical detail on chip engineering and how the reward function exactly works; I don't have the expertise to go into that beyond what I just described. All right, here is how the model looks from a deep RL perspective. There are two parts to the model; you can divide it about here. On the right are the policy and value networks, and on the left the feature embeddings. In reinforcement learning, and we won't go much into reinforcement learning now, you need a way to encode the state; this is the encoder. All the information in the observation, which might come in different modalities, is encoded into, for simplicity let's say, a single vector: the state encoding. Then you can employ a policy and a value network on top of it. That side on the right is standard RL: you have a policy network and a value network, and they train them with, I believe, PPO; it's a standard actor-critic architecture. The value net simply tells you the value of the state you're in: given the state embedding, a fully connected layer transforms it into a single float, and that's the value network. The policy is a bit different, because usually in reinforcement learning you just have a list of actions, say the 16 buttons on a controller. Here, looking at the chip from above, the question is: where do you want to place the next thing? To answer that, they take the embedding of the state and run it through a series of deconvolutions, which can upsample an image: you transform the vector into a 4 by 4 by 32 tensor, and that gets deconvolved into fewer and fewer channels but greater and greater height and width. So from a vector it produces an image. You might recognize this from generator architectures; GANs for images use exactly this deconvolution structure. As I said, pretty cool: it pulls in architectures and methods from different fields, we already have reinforcement learning, and now we have image generators too. So you end up with an image. Seen from above, the canvas is discretized: in principle you could place a part at pretty much any nanometer, but they discretize the canvas into a grid, and for each grid cell the network outputs a number, maybe a nine here, a three, a four, an eight right here. Each number says how much the network would like to place the next thing at that particular location, so this is a distribution over locations. The first thing you have to do is mask out where there are already things; we said the first condition is that things cannot be placed on top of other things. Maybe you've already placed something here, so you ignore those numbers, and something there, so you ignore those numbers as well: this is the masking operation. Then you simply look at where your highest number is; maybe there's an 11 down here somewhere, so that's your choice. You look at what you need to place, maybe the piece looks like this, you decide the 11 marks its top left corner, and you place it right there. Then you do the same thing for the next piece: you mark the newly occupied cells so nothing can be placed there, you evaluate your network again, of course with the new shape, and you ask the network where to put it. You do this step by step until the entire netlist is placed. So this is how we do the reinforcement learning; this is how we decide on an action.
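Schematically, assuming a grid of scores from the deconvolution head (shapes and names here are invented for illustration), the masking and selection step could look like this:

    import numpy as np

    def select_location(grid_logits, occupied_mask):
        # grid_logits: (H, W) scores from the policy head.
        # occupied_mask: (H, W) boolean, True where something is already placed.
        logits = grid_logits.copy()
        logits[occupied_mask] = -np.inf      # forbid occupied cells
        flat = np.argmax(logits)             # greedy choice for illustration;
        return np.unravel_index(flat, logits.shape)  # the real policy samples

    loc = select_location(np.array([[1.0, 9.0], [11.0, 3.0]]),
                          np.array([[False, True], [False, False]]))  # -> (1, 0)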
But how do we actually get the state into this encoding? This pulls in yet another framework, from another field of deep learning: graph convolutional neural networks. The netlist, again, with each part, its shape or size, and the list of things it needs to be connected to, naturally forms a graph: you can transform it into a graph where the things that need to be connected are joined by an edge, and they run a graph convolutional network across that. In a graph convolutional network you take a graph like this and compute embeddings for the edges and the vertices; ultimately you want what's called a graph embedding. To get there, you need to propagate information along the graph, which is what a graph convolution does. If you've been in machine learning for a while longer, you might remember things like conditional random fields, or graphical methods generally, which were once popular and are kind of a precursor to this. The way they do it is iterative. How do they embed a graph? They have nodes in the graph, as we saw before; I'll draw this again, so this is maybe v_i, v_j and v_k. These represent the pieces in the netlist that you have to place, and each of them comes with a bunch of features: its size, I believe, maybe how much power it uses, and also its x and y coordinates if it has already been placed. So you start with a feature vector per node, and then you iteratively do the following. You compute edge features: for each edge, you take its two nodes, v_i and v_j, run each through a fully connected layer to embed the node features, concatenate the results, and run them through another neural network layer; that's how you get embeddings for the edges. Then you update the embeddings of the nodes by taking the mean of the embeddings of their incident edges. You repeat this in an iterative fashion: first compute the edges from the nodes, then the nodes from the edges, and so on. This means information can propagate through the graph: information from this node flows into this edge embedding, in the next step that flows into this node, and then into this edge e_jk. It's the same idea as running a conditional random field over time: in a big graph, information from any particular node propagates throughout, and at some point you reach a sort of equilibrium where every node knows about every other node. I have not found how many times they iterate; they simply say they "repeatedly perform the following updates". Maybe it's stated somewhere and I just haven't read closely enough; I also haven't seen whether they backpropagate through all of these updates or just through one of them, I'm not entirely sure. Ultimately they get these edge embeddings out of the graph, and they simply take the mean over the edges to get the graph embedding.
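A minimal sketch of the iterative update just described, with invented dimensions and plain matrix multiplies standing in for the fully connected layers (not the paper's implementation):

    import numpy as np

    rng = np.random.default_rng(0)
    D = 8                                   # embedding size, arbitrary here
    W_node = rng.normal(size=(D, D))        # stands in for the node FC layer
    W_edge = rng.normal(size=(2 * D, D))    # stands in for the edge FC layer

    def propagate(node_feats, edges, num_iters=3):
        # node_feats: dict node -> (D,) vector; edges: list of (i, j) pairs.
        for _ in range(num_iters):
            # Edge update: embed both endpoint nodes, concatenate, project.
            edge_feats = {(i, j): np.tanh(
                np.concatenate([node_feats[i] @ W_node,
                                node_feats[j] @ W_node]) @ W_edge)
                for (i, j) in edges}
            # Node update: mean over the embeddings of incident edges.
            for n in node_feats:
                incident = [e for ij, e in edge_feats.items() if n in ij]
                if incident:
                    node_feats[n] = np.mean(incident, axis=0)
        # Graph embedding: mean over all edge embeddings.
        return np.mean(list(edge_feats.values()), axis=0)

    feats = {i: rng.normal(size=D) for i in range(3)}
    graph_embedding = propagate(feats, edges=[(0, 1), (1, 2), (0, 2)])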
That graph embedding goes into their state embedding. Along with it, they also have the macro embeddings, which are the node embeddings of the things to be placed, plus the current macro ID, which says which part you need to place right now. So out of the graph come two things, vertex and edge embeddings, and then, quite importantly, the embedding of the one macro you currently need to place also goes in. Then there's some metadata about the netlist, like how many things there are and so on, and this is also embedded, using a fully connected layer. All of that goes into your state embedding, so the embedding contains all of this information, if you've done a good job and trained it correctly. So that's the model. Now, they do pre-train this encoder part, and that is also kind of circular: they take the chip and run a policy network that is maybe not super optimal a bunch of times, collecting placements including intermediate states, and they pre-train the encoder to predict the final quality of each of these placements, that is, the wire length and congestion and so on. That pre-trains the encoder, but ultimately you train the whole thing with reinforcement learning: you let it try to solve the board over and over, and it gets better over time. The last thing they do is transfer learning. Finding a better placement for a single netlist is already better and faster than the humans, but here's what's cool: we've now trained on this one particular netlist, that was our problem, and we've solved it, we have a great solution. What happens when we get another netlist? Here is netlist 2, maybe a bit different, this part is longer and this one is over here and so on. Normally we would have to train a reinforcement learning agent on netlist 2 from scratch, much like an RL agent trained on chess that now wants to play Go needs to start over. But they try to just transfer to the new netlist, and astonishingly, if you train the same RL agent not on one netlist but on a set of netlists, and the biggest set they have is 20, so their data set size is 20 (imagine how small that is compared to supervised learning), the transfer works. Think of it like training on 20 Atari games and then playing the 21st one much better than if you had started from scratch. Interestingly, even the zero-shot placements tend to be pretty good: without optimizing for the new netlist at all, it's already better. You can see that here: training a policy from scratch takes a long time, but fine-tuning a pre-trained policy is much shorter, and interestingly, at the very beginning it is already better than the policy trained from scratch. That means knowledge from one chip transfers over to another, so the problems are sufficiently close, and it basically means that if we now want to design a new AI chip, not only are we better because of RL, we're also faster because we can transfer-learn. They show that this effect appears once the training set is large enough, and again, large here is just 20 blocks.
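As a toy contrast between the two regimes (everything here is invented; a real run would use the actual placement environment and a PPO update), fine-tuning starts near a good solution and needs far fewer steps:

    import numpy as np

    class TinyPolicy:
        # Toy stand-in for the placement policy: one weight vector nudged
        # toward the task. Purely illustrative, not the paper's agent.
        def __init__(self, w=None):
            self.w = np.zeros(8) if w is None else np.array(w, dtype=float)
        def update(self, task_vec, lr=0.1):
            self.w += lr * (task_vec - self.w)   # pretend policy update

    def train(policy, task_vec, steps):
        for _ in range(steps):
            policy.update(task_vec)
        return policy

    netlist_2  = np.ones(8)                      # the "new" netlist, as a vector
    scratch    = train(TinyPolicy(), netlist_2, steps=50)
    pretrained = TinyPolicy(w=np.full(8, 0.8))   # stands in for "trained on 20 netlists"
    finetuned  = train(pretrained, netlist_2, steps=5)   # starts close, converges fast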
Here you see one of these placements: on the left the zero-shot placement, on the right one fine-tuned on that particular chip. Obviously, me being an expert in chip placement, it is clearly obvious that both are extremely good. Actually, funnier still is the figure where they compare human experts to their approach, where it says the figures are intentionally blurred because the designs are proprietary. Why even include them then? I couldn't judge them even if they were super crisp. All right, I guess it's their trade secret. They also compare this with the standard algorithms for these problems, and not only are they faster, they are also better on the metrics. Overall, as I said, I find this to be a pretty cool work that pulls in a lot of things from a lot of different fields. At one point they say they propose a novel graph convolutional architecture; I'm not sure it is truly novel, maybe novel for this problem, but graph convolutional networks and things like them have been around for a while. Still, the paper pulls together methods from many different fields and applies them very well; it's a very well engineered piece of work, and a step towards the singularity, as now AI can design AI accelerators. How amazing. Yeah, humanity is doomed. All right, I invite you to check out the paper. If you're still here, please subscribe, leave a like and a comment, and I'll see you next time. Bye bye.
[ { "end": 5.48, "start": 0, "text": " Hi there! Today we're looking at Chip Placement with Deep Reinforcement Learning" }, { "end": 11.88, "start": 5.48, "text": " by Azalia Miroszajny, Anna Goldi and a long list of authors that I have no" }, { "end": 18.96, "start": 11.88, "text": " stamina to read down. I'm sorry. So this work is a cool application of" }, { "end": 26.36, "start": 18.96, "text": " reinforcement learning to the real world. And we're gonna go through it and the" }, { "end": 31.88, "start": 26.36, "text": " cool thing about it is it pulls together parts from so many different areas of" }, { "end": 37, "start": 31.88, "text": " machine learning and also here chip engineering. So what's the fundamental" }, { "end": 44.760000000000005, "start": 37, "text": " problem? The fundamental problem of chip design is this. You have a canvas, an" }, { "end": 50.16, "start": 44.760000000000005, "text": " empty chip and you want to build a computer chip. Now what you have given is" }, { "end": 57.959999999999994, "start": 50.16, "text": " a so-called netlist. So your netlist is any parts that you want on that computer" }, { "end": 63.12, "start": 57.959999999999994, "text": " chip and their shape or their size. So you can imagine this like a bit of a" }, { "end": 69.64, "start": 63.12, "text": " Tetris game. So here's this netlist. There's this part and then this part and" }, { "end": 75.52, "start": 69.64, "text": " then there's maybe this part and also this part. Many many parts. Now these as I" }, { "end": 80.75999999999999, "start": 75.52, "text": " understand it can be thousands of parts but you can sort of group them together" }, { "end": 85.8, "start": 80.75999999999999, "text": " but still there are a lot of these parts. And the netlist also contains" }, { "end": 90.64, "start": 85.8, "text": " information about how they're connected. So for each of these parts you would" }, { "end": 96.75999999999999, "start": 90.64, "text": " have a list of which other ones of these parts they must be connected to. So" }, { "end": 101.16, "start": 96.75999999999999, "text": " maybe it says okay this part here needs to be connected to those three parts and" }, { "end": 105.32, "start": 101.16, "text": " for each of those you'd also have like a list of how they must be connected." }, { "end": 110.27999999999999, "start": 105.32, "text": " You can represent this as an adjacency matrix right? But ultimately this is a" }, { "end": 117.55999999999999, "start": 110.27999999999999, "text": " graph of these nodes. Now your goal is to place those things on this board. So for" }, { "end": 122.11999999999999, "start": 117.55999999999999, "text": " example we're gonna place this right here and we're gonna place the second" }, { "end": 127.35999999999999, "start": 122.11999999999999, "text": " one maybe here and the third one maybe here. So you can imagine if this is a" }, { "end": 132.92, "start": 127.35999999999999, "text": " CPU maybe look I have no clue of chip design but I imagine it like this. This" }, { "end": 139.23999999999998, "start": 132.92, "text": " is your clock that you need on there. This is your NAND gates right? NAND gates" }, { "end": 145.07999999999998, "start": 139.23999999999998, "text": " pretty important for a CPU and this is your floating point unit also pretty" }, { "end": 149.67999999999998, "start": 145.07999999999998, "text": " important and so on. 
So I need to place these things and then you need to" }, { "end": 154.51999999999998, "start": 149.67999999999998, "text": " connect them using these using wires. Now wires are of course etched into the" }, { "end": 160.2, "start": 154.51999999999998, "text": " board but you need to connect them according to the maybe there is a" }, { "end": 165.92, "start": 160.2, "text": " component right here. According to the netlist right they need to be connected" }, { "end": 172.35999999999999, "start": 165.92, "text": " like that. Maybe the algorithm that came up with the chip told you they need to" }, { "end": 177.23999999999998, "start": 172.35999999999999, "text": " be connected like this and if you lay them out like this you can draw the" }, { "end": 181.95999999999998, "start": 177.23999999999998, "text": " wires. So this is your finished you you want to go from the thing on the right" }, { "end": 187.23999999999998, "start": 181.95999999999998, "text": " to the thing on the left and your goal here in order to get the fastest" }, { "end": 194.8, "start": 187.24, "text": " possible computer chip is three things. First of all you want first of all the" }, { "end": 201.60000000000002, "start": 194.8, "text": " density is important. By density it basically just means you can't place" }, { "end": 207.92000000000002, "start": 201.60000000000002, "text": " stuff on top of other stuff so you could not place a block right here. Not possible" }, { "end": 212.8, "start": 207.92000000000002, "text": " because the clock is already there. So that's first thing you can't place" }, { "end": 221.76000000000002, "start": 212.8, "text": " stuff on top of other stuff. Then the second thing is the wires and" }, { "end": 228.36, "start": 221.76000000000002, "text": " specifically the length of the wires. So you see for example this thing here is a" }, { "end": 233.56, "start": 228.36, "text": " pretty short wire that means the signal travels fast. This thing here is a long" }, { "end": 241.04000000000002, "start": 233.56, "text": " wire so the signal travels more slowly. Now the lower sorry the the faster you" }, { "end": 244.76, "start": 241.04, "text": " want your signal to go that means you have to make your wires as short as" }, { "end": 249.28, "start": 244.76, "text": " possible. So you want to keep the total amount of wire length as short as" }, { "end": 259.28, "start": 249.28, "text": " possible. Then third is what's called congestion. So congestion is when for" }, { "end": 265.36, "start": 259.28, "text": " example okay I actually don't know what it is but maybe it is when two wires" }, { "end": 272.08000000000004, "start": 265.36, "text": " cross like somewhere here there might be congestion or maybe some parts" }, { "end": 278.16, "start": 272.08000000000004, "text": " actually share their wires. I can imagine sorry I'm gonna draw this again I can" }, { "end": 287.08000000000004, "start": 278.16, "text": " imagine maybe there's a part here that wants to go up and then it maybe shares" }, { "end": 292.88, "start": 287.08000000000004, "text": " sorry shares this part of the wire here with the other one and so there's" }, { "end": 298.56, "start": 292.88, "text": " congestion like it's roads or something. 
In any case you can measure congestion" }, { "end": 303.52, "start": 298.56, "text": " it's a bad thing and you want to lay out your components basically in order to" }, { "end": 308.15999999999997, "start": 303.52, "text": " minimize congestion and also minimize the length of the wires and not have" }, { "end": 314.76, "start": 308.15999999999997, "text": " them on top of one another. Easy enough right? It takes human experts and" }, { "end": 320.36, "start": 314.76, "text": " combined with state-of-the-art algorithms multiple weeks to design chips like this" }, { "end": 326, "start": 320.36, "text": " and that's the fundamental problem and this paper takes reinforcement learning" }, { "end": 333.04, "start": 326, "text": " in order to solve this problem in a few hours. So how does it do this? The" }, { "end": 338.8, "start": 333.04, "text": " reinforcement learning is basically a sequential measure method so you want to" }, { "end": 345.16, "start": 338.8, "text": " do one action at a time. So you start off with what they call the chip canvas" }, { "end": 351.6, "start": 345.16, "text": " which is just empty. This is your state and then the agent here it gets to" }, { "end": 355.56, "start": 351.6, "text": " decide where to place the next thing. Now how do you decide what the next thing is?" }, { "end": 361.48, "start": 355.56, "text": " I believe they simply go by size so they take the largest component first of" }, { "end": 367.92, "start": 361.48, "text": " the netlist first and they just go down the netlist like this. So you tell the" }, { "end": 374.12, "start": 367.92, "text": " agent hey agent I want you to place this this thing here I want you to place this" }, { "end": 380.2, "start": 374.12, "text": " next and the agent will tell you I will place it right here and then again you" }, { "end": 385.04, "start": 380.2, "text": " tell the agent agent I have this this thing here where do you want to place it" }, { "end": 389.56, "start": 385.04, "text": " and along with sorry along with all the connections right along with everything" }, { "end": 394.84000000000003, "start": 389.56, "text": " that it needs to be connected to and also the entire list so the agent must" }, { "end": 400.72, "start": 394.84000000000003, "text": " also think of what what is to come yet what it hasn't placed yet. So it" }, { "end": 405.12, "start": 400.72, "text": " everything goes into this decision and it tells you okay I want to place this" }, { "end": 411.32000000000005, "start": 405.12, "text": " here and then at the end you end up with this filled board. Now this isn't" }, { "end": 418.24, "start": 411.32000000000005, "text": " actually the end. After you have placed all your things there is another method" }, { "end": 423.64000000000004, "start": 418.24, "text": " coming in this is called this force directed method placing yet more things" }, { "end": 430, "start": 423.64000000000004, "text": " so what you actually place with this agent is only the things called macros. 
I" }, { "end": 434.92, "start": 430, "text": " have no idea what those are or how these are different from these standard cells" }, { "end": 439.84, "start": 434.92, "text": " but apparently you must use a force directed method which you can think of" }, { "end": 445.48, "start": 439.84, "text": " just just an algorithm you run to place the standard cells and these are these" }, { "end": 453.84, "start": 445.48, "text": " these gray blobs here and at the end of all of this you can finally evaluate how" }, { "end": 459.1, "start": 453.84, "text": " good your design is. So at the end of all your of this you get a reward that is a" }, { "end": 464.28000000000003, "start": 459.1, "text": " mixture of wire length and congestion. Now this is actually an approximation to" }, { "end": 468.44, "start": 464.28000000000003, "text": " wire length and this is an approximation to congestion that they use because they" }, { "end": 473.6, "start": 468.44, "text": " need to evaluate it quickly but in essentially it's highly correlated" }, { "end": 478.96000000000004, "start": 473.6, "text": " with wire length and congestion so the negative of that is going to be your" }, { "end": 486, "start": 478.96000000000004, "text": " reward. So in terms of a reinforcement learning problem this is pretty nasty" }, { "end": 491.8, "start": 486, "text": " right because as you can see here you get basically a reward of zero for every" }, { "end": 498.04, "start": 491.8, "text": " step until the very end you get your true reward and it is actually worse" }, { "end": 505.36, "start": 498.04, "text": " than that because so from here to here your agent gets to perform actions" }, { "end": 512.88, "start": 505.36, "text": " right this is action time but usually when you have these sparse reward tasks" }, { "end": 517.84, "start": 512.88, "text": " you'll get your reward at the end of that action time but not here at the end" }, { "end": 522.76, "start": 517.84, "text": " of the action time there is an algorithm over which the agent basically has no" }, { "end": 529.48, "start": 522.76, "text": " control that comes in and does a bunch of things this force directed method and" }, { "end": 535.68, "start": 529.48, "text": " only then do you get your reward right so the agent must purposefully sort of" }, { "end": 540.6, "start": 535.68, "text": " leave room here for what this algorithm is going to do so it needs to learn that" }, { "end": 546.4, "start": 540.6, "text": " as well this is a as far as reinforcement learning goes this is a" }, { "end": 551.72, "start": 546.4, "text": " pretty good reinforcement learning problem right so now we have an" }, { "end": 557.8000000000001, "start": 551.72, "text": " environment which is the canvas here and also this you can you can consider this" }, { "end": 562.84, "start": 557.8000000000001, "text": " force directed method to be part of the environment and of course the reward" }, { "end": 570.2, "start": 562.84, "text": " giver and you have also the netlist as part of the environment and you have the" }, { "end": 574.76, "start": 570.2, "text": " agent that can do actions now we have to go into how does the agent perform" }, { "end": 583.1600000000001, "start": 574.76, "text": " actions on this so by the way maybe a bit confusing because it was a bit" }, { "end": 587.6800000000001, "start": 583.1600000000001, "text": " confusing for me for a given reinforcement learning problem we'll" }, { "end": 593.6, "start": 587.6800000000001, "text": " just start out by saying that the netlist is 
always the same right if you" }, { "end": 599.9200000000001, "start": 593.6, "text": " might be coming from a deep learning framework where you're used to many many" }, { "end": 606.3199999999999, "start": 599.92, "text": " different training samples in this case basically the netlist the goal is always" }, { "end": 611.04, "start": 606.3199999999999, "text": " the same you can think of it like a reinforcement learning agent for the" }, { "end": 615.76, "start": 611.04, "text": " game of chess where it's always the same chess game that you're trying to" }, { "end": 620.4, "start": 615.76, "text": " optimize this is the difference to let's say supervised learning if you have a" }, { "end": 624.5999999999999, "start": 620.4, "text": " label in supervised learning if you know the solution to a particular data point" }, { "end": 628.8399999999999, "start": 624.5999999999999, "text": " you're happy right you that data point is no longer interesting you want to" }, { "end": 635.08, "start": 628.84, "text": " generalize here even though they generalize later here you can give it a" }, { "end": 641.44, "start": 635.08, "text": " single problem right and it will already a solution to that problem will be" }, { "end": 645.5600000000001, "start": 641.44, "text": " valuable because it can be a better solution that humanity has come up with" }, { "end": 650.84, "start": 645.5600000000001, "text": " until this point so always think that we're now just working on one single" }, { "end": 656.44, "start": 650.84, "text": " netlist one problem to optimally place this netlist and an episode is simply to" }, { "end": 662.4000000000001, "start": 656.44, "text": " place these things until here and then you get a reward and then you go back to" }, { "end": 666.2800000000001, "start": 662.4000000000001, "text": " the beginning and just do it all over again but just try to do better and then" }, { "end": 671.2, "start": 666.2800000000001, "text": " you go back to the beginning and you do it all over again the same problem right" }, { "end": 678.6, "start": 671.2, "text": " okay so how does this work by the way that the paper has great technical" }, { "end": 684.5200000000001, "start": 678.6, "text": " detail on chip engineering and how the reward function exactly works and so on" }, { "end": 691.0799999999999, "start": 684.52, "text": " I have not the expertise to go into this with you beyond what I just described" }, { "end": 696.12, "start": 691.0799999999999, "text": " alright so here is how the model looks from a deep RL perspective now there's" }, { "end": 703.6, "start": 696.12, "text": " two parts to this model you can divide it about here so on the right you would" }, { "end": 707.96, "start": 703.6, "text": " have what is your policy and value networks and on the left the feature" }, { "end": 713.48, "start": 707.96, "text": " embeddings so in reinforcement learning and we won't go much into reinforcement" }, { "end": 718.96, "start": 713.48, "text": " learning now but what you need are basically a way to encode the state this" }, { "end": 724.5600000000001, "start": 718.96, "text": " is the encoder so all the information of the observation they might be in" }, { "end": 729.08, "start": 724.5600000000001, "text": " different modalities and so on you need to encode this into but in for" }, { "end": 733.48, "start": 729.08, "text": " simplicity let's say a single vector that's thing that this thing here this" }, { "end": 745.72, "start": 733.48, "text": " is the state encoding and then you can employ a 
policy and a value network in" }, { "end": 749.6, "start": 745.72, "text": " order to do reinforcement learning so the side on the right this comes from" }, { "end": 754.36, "start": 749.6, "text": " standard RL you have a policy network and a value network and they do I" }, { "end": 760.4, "start": 754.36, "text": " believe PPO with it this is a standard reinforcement learning architecture" }, { "end": 766.84, "start": 760.4, "text": " it's an actor critic architecture so the value net is simply telling you what's" }, { "end": 771.4399999999999, "start": 766.84, "text": " the value of the state that you're in now given that you have a state embedding" }, { "end": 776.72, "start": 771.4399999999999, "text": " it simply takes a fully connected layer to transform this into a single float" }, { "end": 781.24, "start": 776.72, "text": " that is the value network the policy is a bit different because usually in" }, { "end": 784.4, "start": 781.24, "text": " reinforcement learning you just have a list of actions right you just say I" }, { "end": 792.16, "start": 784.4, "text": " have these 16 buttons on my controller you compress them here we if you run if" }, { "end": 798.4, "start": 792.16, "text": " you look at this chip from above we have a question where do you want to place" }, { "end": 805.12, "start": 798.4, "text": " the next thing so in order to do that we take this embedding of the state and" }, { "end": 811.24, "start": 805.12, "text": " they run it through a series of D convolutions and the D convolutions they" }, { "end": 818.2, "start": 811.24, "text": " have the ability to basically up sample an image so you see here you transform" }, { "end": 824.88, "start": 818.2, "text": " this vector into a 4 by 4 by 32 tensor and that gets D convolved into more and" }, { "end": 830.96, "start": 824.88, "text": " more though less and less channels but more and more height and width images so" }, { "end": 836.16, "start": 830.96, "text": " it kind of from a vector it produces an image right here you might recognize" }, { "end": 842.28, "start": 836.16, "text": " this from a lot of generator architectures for when you make GANs for" }, { "end": 849.68, "start": 842.28, "text": " images have exactly this D convolution architecture so as I said pretty cool" }, { "end": 853.68, "start": 849.68, "text": " it pulls in kind of architectures and methods from different fields we already" }, { "end": 860.8, "start": 853.68, "text": " have reinforcement learning now we have generators for images now so you come up" }, { "end": 866.92, "start": 860.8, "text": " with an image and basically this image if you can imagine this from above is" }, { "end": 871.1999999999999, "start": 866.92, "text": " discretized so you can place the thing you have to place pretty much every" }, { "end": 878.92, "start": 871.1999999999999, "text": " single nanometer but they discretize this into a grid and for each point in" }, { "end": 885.4, "start": 878.92, "text": " the grid the network outputs a number so the number maybe nine here three four" }, { "end": 893.0799999999999, "start": 885.4, "text": " and so on eight right here so for each of these they it outputs a number where" }, { "end": 897.8, "start": 893.0799999999999, "text": " it would or how much it would like to place the next thing at this particular" }, { "end": 903.56, "start": 897.8, "text": " location so this is a distribution over locations so the first thing you have to" }, { "end": 906.84, "start": 903.56, "text": " do is you have to mask out where there 
are already things we said the first" }, { "end": 911.16, "start": 906.84, "text": " condition is things cannot be on top of other things so maybe you already have" }, { "end": 914.04, "start": 911.16, "text": " placed something here so you ignore those numbers and you have already" }, { "end": 918.12, "start": 914.04, "text": " placed something here so you ignore those numbers as well this is this" }, { "end": 922.8, "start": 918.12, "text": " masking operation right here and then you simply look at where is your highest" }, { "end": 929.0799999999999, "start": 922.8, "text": " number and maybe there's maybe there's an 11 down here somewhere say ah this is" }, { "end": 934.14, "start": 929.0799999999999, "text": " my highest number okay cool and you look at what you need to place maybe the" }, { "end": 939.68, "start": 934.14, "text": " thing you need to place looks like this and you say all right I will place maybe" }, { "end": 947.76, "start": 939.68, "text": " the 11 marks the top left corner I will place it right here okay and then you do" }, { "end": 953.2399999999999, "start": 947.76, "text": " the same thing again for the next piece so the next piece you would simply also" }, { "end": 959.3199999999999, "start": 953.2399999999999, "text": " mark this to be blue so you can't place here you evaluate your network again of" }, { "end": 963.8399999999999, "start": 959.3199999999999, "text": " course you'll have a new shape something like this and then you ask the network" }, { "end": 967.64, "start": 963.8399999999999, "text": " where would you like to place this and you do this step by step by step until" }, { "end": 976.28, "start": 967.64, "text": " the entire netlist is empty so this is how we do the reinforcement learning" }, { "end": 981.4399999999999, "start": 976.28, "text": " this is how we decide on an action but how do we actually put the state into" }, { "end": 988.56, "start": 981.4399999999999, "text": " this encoding now this pulls in yet another framework from another field of" }, { "end": 994.28, "start": 988.56, "text": " deep learning namely graph convolutional neural networks so since the netlist is" }, { "end": 1003.92, "start": 994.28, "text": " a graph right the netlist is again if you have your wow this is slow today if" }, { "end": 1008.88, "start": 1003.92, "text": " you have your netlist right here with the part right the shape or size" }, { "end": 1017.12, "start": 1008.88, "text": " whatever and the list of things it needs to be connected to then this forms" }, { "end": 1022.4, "start": 1017.12, "text": " naturally a graph so you can transform this into a graph with the things that" }, { "end": 1027.84, "start": 1022.4, "text": " need to be connected connected by an edge and they run a graph convolutional" }, { "end": 1032.92, "start": 1027.84, "text": " network across that now in a graph convolutional network you're trying to" }, { "end": 1040.72, "start": 1032.92, "text": " take a graph like this and have embeddings for the edges and the" }, { "end": 1048.5, "start": 1040.72, "text": " vertices so ultimately you want what's called a graph embedding in order to do" }, { "end": 1053.92, "start": 1048.5, "text": " that you need to propagate information along the graph usually as we said this" }, { "end": 1059.68, "start": 1053.92, "text": " is done during a graph convolution if you are in machine learning for a while" }, { "end": 1065.48, "start": 1059.68, "text": " longer you might remember also things like conditional random fields or" }, { "end": 1072.88, 
"start": 1065.48, "text": " generally graphical methods that were once popular and are kind of a precursor" }, { "end": 1078.72, "start": 1072.88, "text": " to this so the way they do it is they do it in an iterative fashion they have" }, { "end": 1090.7600000000002, "start": 1078.72, "text": " multiple so they say this right here so how do they embed a graph they have" }, { "end": 1095.6000000000001, "start": 1090.7600000000002, "text": " nodes in the graph as we saw before I'm going to draw this one again so this is" }, { "end": 1102.6000000000001, "start": 1095.6000000000001, "text": " maybe vi vj and vk now these represent the pieces in the netlist that you have" }, { "end": 1109.6399999999999, "start": 1102.6, "text": " to place so for each of those it has a bunch of features right so the features" }, { "end": 1120, "start": 1109.6399999999999, "text": " might be its size it's I believe they they have them somewhere here its size" }, { "end": 1125.6799999999998, "start": 1120, "text": " maybe how much power it uses and also its x and y coordinates if it is already" }, { "end": 1131.2199999999998, "start": 1125.6799999999998, "text": " placed right so you start with a vector like this and then you iteratively do" }, { "end": 1140.64, "start": 1131.22, "text": " the following thing you compute edge edge features by running first these" }, { "end": 1149.46, "start": 1140.64, "text": " things so this is vi and vj for an for each edge you take its nodes run them" }, { "end": 1155.64, "start": 1149.46, "text": " through this fully connected layer so you embed the features of the nodes you" }, { "end": 1160.52, "start": 1155.64, "text": " concatenate them and you run it through another neural network layer and that's" }, { "end": 1168.2, "start": 1160.52, "text": " how you get embeddings for edges and then you update the embeddings for the" }, { "end": 1172.96, "start": 1168.2, "text": " nodes again by taking the mean embeddings of the edges so you do this" }, { "end": 1177.48, "start": 1172.96, "text": " in an iterative fashion first you compute the edges from the nodes and" }, { "end": 1185.32, "start": 1177.48, "text": " then you compute the nodes from the edges and so on right so this means that" }, { "end": 1190.2, "start": 1185.32, "text": " information can now propagate through the graph so information from this thing" }, { "end": 1195.8400000000001, "start": 1190.2, "text": " propagates into this edge embedding and then in the next step that will" }, { "end": 1202.56, "start": 1195.8400000000001, "text": " propagate into this and then that can propagate into this EJK and this is the" }, { "end": 1206.96, "start": 1202.56, "text": " same as if you're used to yeah something like a conditional random fields over" }, { "end": 1215.32, "start": 1206.96, "text": " time if you have a big graph like this the information from any particular node" }, { "end": 1220.9199999999998, "start": 1215.32, "text": " will kind of propagate out throughout the graph and at some point you can sort" }, { "end": 1226.6, "start": 1220.9199999999998, "text": " of reach an equilibrium where everyone everyone in the graph knows about" }, { "end": 1236.1599999999999, "start": 1226.6, "text": " everyone else I have not found how many times they do it they simply say we" }, { "end": 1241.32, "start": 1236.1599999999999, "text": " repeatedly perform the following updates maybe that's somewhere and I just" }, { "end": 1246.6, "start": 1241.32, "text": " haven't read it closely enough but also I don't haven't seen 
whether or not they" }, { "end": 1251.24, "start": 1246.6, "text": " then back propagate through this through these multiple updates or whether they" }, { "end": 1259.8, "start": 1251.24, "text": " just back prop through one of them not entirely sure but ultimately they get" }, { "end": 1266.12, "start": 1259.8, "text": " embeddings these edge embeddings out of this graph and they simply take the mean" }, { "end": 1272.12, "start": 1266.12, "text": " to get the graph embeddings and that goes into their state embeddings right" }, { "end": 1279.8, "start": 1272.12, "text": " along with that they also have the macro embeddings which are the nodes here the" }, { "end": 1284.9599999999998, "start": 1279.8, "text": " things to be placed along with the current macro ID this is which one do you" }, { "end": 1290.28, "start": 1284.9599999999998, "text": " need to place right now so this comes out of these are the two things out of" }, { "end": 1295.32, "start": 1290.28, "text": " graphs vertices and edges and then which one you need to place right now it's" }, { "end": 1303.48, "start": 1295.32, "text": " pretty important right so you take the ones the one that you need to place" }, { "end": 1307.6, "start": 1303.48, "text": " this also goes into your embedding and then you have some metadata about the" }, { "end": 1313.9199999999998, "start": 1307.6, "text": " netlist like how many things there are and so on and this is also embedded" }, { "end": 1318.1599999999999, "start": 1313.9199999999998, "text": " using a fully connected layer all of that goes into your embedding right so" }, { "end": 1322.32, "start": 1318.1599999999999, "text": " your embedding will contain all of this information if you've done a good job" }, { "end": 1331.1599999999999, "start": 1322.32, "text": " and if you train it correctly so this is the model now they do pre train this" }, { "end": 1336.72, "start": 1331.1599999999999, "text": " encoder part right here and the encoder part it's also kind of circular first of" }, { "end": 1343.4399999999998, "start": 1336.72, "text": " all they just generate a giant list so they take this chip here and they just" }, { "end": 1348.3999999999999, "start": 1343.4399999999998, "text": " run a policy network that is maybe not super optimal but they just run it a" }, { "end": 1353.2800000000002, "start": 1348.4, "text": " bunch of times in intermediate states and they pre train the encoder to" }, { "end": 1360.1200000000001, "start": 1353.2800000000002, "text": " predict the final reward for each of these placements or sorry the the the" }, { "end": 1366.6000000000001, "start": 1360.1200000000001, "text": " wire length and congestion and so on and that pre trains the encoder but" }, { "end": 1370.3200000000002, "start": 1366.6000000000001, "text": " ultimately you can train this with reinforcement learning you can now let" }, { "end": 1374.76, "start": 1370.3200000000002, "text": " it try to solve this board over and over and over and over and it will get better" }, { "end": 1383.32, "start": 1374.76, "text": " over time all right the last thing they do is they do transfer learning now" }, { "end": 1389.76, "start": 1383.32, "text": " finding a better architecture for a single board is already better and" }, { "end": 1395.76, "start": 1389.76, "text": " faster than the humans but what is cool is that if you have now trained on this" }, { "end": 1403, "start": 1395.76, "text": " one particular board sorry with one particular netlist where was it right" }, { "end": 1408.28, "start": 1403, 
"text": " we've we've now trained on this particular netlist this was this was our" }, { "end": 1415.64, "start": 1408.28, "text": " problem and we've solved that we have a great solution can we now when we get a" }, { "end": 1420.84, "start": 1415.64, "text": " netlist another netlist so here is netlist 2 right it's maybe a bit" }, { "end": 1427.56, "start": 1420.84, "text": " different so this one is more longer and this one is here and so on what if so we" }, { "end": 1432.88, "start": 1427.56, "text": " would have to start again from scratch a training reinforcement learning" }, { "end": 1438.2, "start": 1432.88, "text": " agent on the netlist too so maybe a RL agent trained on chess if we now wanted" }, { "end": 1444, "start": 1438.2, "text": " to play go you know we need to start over again but they try to just" }, { "end": 1450.1000000000001, "start": 1444, "text": " transfer this to the new one and astonishingly enough if you train the" }, { "end": 1456.0400000000002, "start": 1450.1000000000001, "text": " same RL agent not only on one netlist but on a set of netlists and the biggest" }, { "end": 1462.0400000000002, "start": 1456.0400000000002, "text": " set they have is 20 so their data set size is 20 imagine how small this is" }, { "end": 1467.8799999999999, "start": 1462.04, "text": " compared to supervised learning but maybe think of this like you train on" }, { "end": 1474.44, "start": 1467.8799999999999, "text": " 20 Atari games and then it will play the 21st one much better than if you started" }, { "end": 1481.24, "start": 1474.44, "text": " from scratch interestingly though even zero shot embeddings tend to be pretty" }, { "end": 1486.28, "start": 1481.24, "text": " good so they don't optimize for the new thing at all and it's already better you" }, { "end": 1493.68, "start": 1486.28, "text": " can see that here so if you train a policy from scratch then you this here" }, { "end": 1501.8799999999999, "start": 1493.68, "text": " then it takes a long time but if you fine-tune a pre-trained policy it's much" }, { "end": 1509.24, "start": 1501.8799999999999, "text": " shorter and interestingly enough at the beginning it is already better than the" }, { "end": 1514.44, "start": 1509.24, "text": " policy from scratch that means the knowledge from one chip transfers over" }, { "end": 1518.76, "start": 1514.44, "text": " to the other chip so the problems are sufficiently close and that basically" }, { "end": 1524.68, "start": 1518.76, "text": " means that if we now want to design a new AI chip not only are we better" }, { "end": 1530.8400000000001, "start": 1524.68, "text": " because of RL we're also faster because we can transfer learn and they show here" }, { "end": 1536.0800000000002, "start": 1530.8400000000001, "text": " that this effect basically appears when you have a large enough data set and" }, { "end": 1541.8, "start": 1536.0800000000002, "text": " again large here is just 20 blocks here you see one of these placements on the" }, { "end": 1545.72, "start": 1541.8, "text": " left the zero shot placement on the right and fine-tuned on that particular" }, { "end": 1550.68, "start": 1545.72, "text": " architecture obviously me being an expert in chip placement is it clearly" }, { "end": 1559.12, "start": 1550.68, "text": " obvious that both are extremely good and yes though actually more funny I find" }, { "end": 1565.8, "start": 1559.12, "text": " this one where they compare human experts to what their approach is and it" }, { "end": 1570.24, "start": 1565.8, "text": " 
says the figures are intentionally blurred as the designs are provided like" }, { "end": 1574.8, "start": 1570.24, "text": " why do you put them that clearly I can't even couldn't even judge if they're" }, { "end": 1582.52, "start": 1574.8, "text": " super crisp I yeah all right I guess it's their trade secret so they compare" }, { "end": 1586.8, "start": 1582.52, "text": " this with the standard algorithms for these things and not only are they" }, { "end": 1595.84, "start": 1586.8, "text": " faster they are they also better on the metrics yeah overall as I said I find" }, { "end": 1600.24, "start": 1595.84, "text": " this to be a pretty cool work that pulls in a lot of things from a lot of" }, { "end": 1605.8799999999999, "start": 1600.24, "text": " different fields at one point they say we propose a novel graph convolutional" }, { "end": 1611.1599999999999, "start": 1605.8799999999999, "text": " architecture I'm not sure that it is novel maybe it's novel for this problem" }, { "end": 1614.6399999999999, "start": 1611.1599999999999, "text": " but I'm pretty sure graph convolutional networks and things like this have been" }, { "end": 1620.04, "start": 1614.6399999999999, "text": " around for a while but again it pulls together things from many different" }, { "end": 1627.92, "start": 1620.04, "text": " fields and applies them very well very well engineered paper and a step towards" }, { "end": 1636.28, "start": 1627.92, "text": " the singularity as now AI can design AI accelerators how amazing yeah humanity" }, { "end": 1639.52, "start": 1636.28, "text": " is doomed all right I invite you to check out this paper if you're still" }, { "end": 1645.52, "start": 1639.52, "text": " here please subscribe leave a like and a comment and I'll see you next time bye" }, { "end": 1650.6399999999999, "start": 1645.52, "text": " bye" } ]
wTIPGoHLw_8
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
I talk to the new Facebook Blender Chatbot
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "nlp", "chatbot", "dialogue", "persona", "vegan", "turing test", "natural language processing", "transformer", "generator", "context" ]
This is what a 9 Billion parameter transformer can do. I take a look at FAIR's new paper "Recipes for building an open-domain chatbot" and try out their chatbot live! Jump to 3:00 to see the chatbot in action. Paper: https://arxiv.org/abs/2004.13637 Blog: https://ai.facebook.com/blog/state-of-the-art-open-source-chatbot/ Code: https://parl.ai/projects/blender/ Abstract: Building open-domain chatbots is a challenging area for machine learning research. While prior work has shown that scaling neural models in the number of parameters and the size of the data they are trained on gives improved results, we show that other ingredients are important for a high-performing chatbot. Good conversation requires a number of skills that an expert conversationalist blends in a seamless way: providing engaging talking points and listening to their partners, and displaying knowledge, empathy and personality appropriately, while maintaining a consistent persona. We show that large scale models can learn these skills when given appropriate training data and choice of generation strategy. We build variants of these recipes with 90M, 2.7B and 9.4B parameter models, and make our models and code publicly available under the collective name Blender. Human evaluations show our best models are superior to existing approaches in multi-turn dialogue in terms of engagingness and humanness measurements. We then discuss the limitations of this work by analyzing failure cases of our models. Authors: Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Yes, I am a vegan. I don't eat any animal products. Hi there. Today we're going to talk to a transformer, specifically to the new chatbot Blender that Facebook has just released. Everything is open source, so we can try it out live. Along with the code, they've released this paper called Recipes for Building an Open-Domain Chatbot, by Facebook AI Research. The paper itself is more of an engineering manual than some kind of new model or new technique; they discuss what it takes to build a good chatbot. Of course it takes a large scale of training data and model, but they also discuss things like unlikelihood training, sampling, the need for a minimum decoding length so the bot isn't boring, and subsequence blocking to keep the model from repeating itself. We won't go too much into this; I invite you to read the paper. It's very informative if you want to build something like this, but I don't think there's anything technically super novel in there. The task is basically to build a chatbot that can maintain a dialogue. It is pre-trained on a big Reddit corpus and then fine-tuned on a multi-objective task called Blended Skill Talk. You need to do three things in this blended skill task. First, you need to maintain a consistent persona across the dialogue. Second, you need to have empathy for your partner, some insight into how the other person is feeling during the conversation. Third, you need to be able to discuss a factual topic in a factually correct manner; this subtask is called Wizard of Wikipedia, where you discuss a given Wikipedia page in dialogue. The technique called skill blending, or this blended skill task, which also comes from a different paper, is the interweaving of all those skills, and that's what this model is trained for. So we're going to try it out. This is all available on the ParlAI platform for dialogue research; I believe the platform itself is also by Facebook, and they've released their model code and pre-trained weights. The pre-trained weights are 17 gigabytes compressed; this is a 9 billion parameter transformer, so it's going to take a while to do even a single forward pass. If I cut the video, it's just because I don't want you to have to stare 20 seconds at an empty screen while the model is thinking about what to say. Now, I have something to say here: literally the first time I interacted with this model, I thought, okay, let's put it to the Turing test, the Turing test being: can I differentiate this from a real human? At the beginning you always get a persona. You get a persona, and the AI gets a persona, and you don't see the persona of the AI until the end (you can change that setting, but generally you don't see it until the very end), and you can choose to stick to yours or not; the AI doesn't see yours. All right, so this is what it says: your persona: I love candy; your persona: I have a crush on my co-worker. And me, I just didn't really know what I was doing, so I just typed "Oh, really?" And the first thing the AI says to me is: yes, I am a vegan. I don't eat any animal products. Do you like country music? Like a real vegan. This is indistinguishable from a human; at this point, the Turing test is already won. You'll also see there's a kind of profanity filter in here, but you can disable that.
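As an aside, here is a rough sketch of the subsequence blocking idea mentioned above: during decoding, any candidate token that would complete an n-gram already present in the context or in the generation so far is vetoed. This is a generic illustration, not ParlAI's actual implementation:

    def blocked_tokens(history, generated, n=3):
        # history: tokens from the context plus what's generated so far.
        # generated: the generated tokens; we look at the last n-1 of them.
        seen = {tuple(history[i:i + n]) for i in range(len(history) - n + 1)}
        prefix = tuple(generated[-(n - 1):])
        return {gram[-1] for gram in seen if gram[:-1] == prefix}

    blocked_tokens(["the", "cat", "sat", "the", "cat"],
                   ["the", "cat"])   # -> {"sat"}

    # During beam search these tokens get probability zero; similarly, a
    # minimum decoding length is enforced by forbidding the end-of-sequence
    # token until enough tokens have been generated.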
So what we're going to play around with is a setting where the safety filter is turned off; you can see you can turn that off. And if you set include personas to false, then there's no random persona given to the model, which means you can either just talk about general things, or you can feed in your own persona. All right, it took a while to load these weights, but we're ready to go. So we have disabled personas right now. Sometimes there is also a topic that the conversation should be about, which is not the case right now. So we can just type: Hello. Hi there. How are you this evening? I just got home from a long day at work. So this is unconditional right now. What do you do for work? I work in a call center. It can be very stressful at times. How about you? I work as a call center supervisor. What do you think of that? I think that's a great job. You must have a lot of experience to be able to do that. Indeed. Do you have any hobbies? I like to play video games. I'm a big fan of the Fallout series. What about yourself? How many games did Fallout produce? I should probably have phrased that as "how many games are in the Fallout series", but I'm just trying to get it to tell me something factual, so you can see that it has these different modes, but they blend in fairly well. I'm not sure how many games it produced, but I know it's been around since 1997. So this is pretty good, right? This is now true conversation. If it were just a lookup, it would be much more dryly factual or something like this, but it's trying to give us a fact conversationally. So we're kind of done with the boring stuff. You can go on talking to it and you'll see it is a pretty realistic chatbot. But now we'll get into the personas. Any time you end an episode, you type "done", and as it says it's preparing a new chat, we can type "your persona:" followed by something. For example, "I am a vegan" like the one before, or "I love protein". It says: protein is the most important macronutrient for bodybuilders and athletes. So that's the start of the conversation. What do you love? I love the taste of chicken, beef, pork, seafood, and dairy products. I wonder what those have in common. What do you do in your spare time? I like to go to the gym and lift weights. I also like to play video games. Ladies and gentlemen, I present: the transformer as a gym bro. All right, so you can play around with that a bit. So let's type: your persona: I want to seize the means of production. I am a communist and I want to overthrow the bourgeoisie. What do you think of Karl Marx? I don't know much about him, but I do know he was a German philosopher. Do you like capitalism? Yes, I do like capitalism. I think it is the best economic system we have so far. Capitalism wins again. Checkmate. I did try this before and it did turn out differently; then I was talking to real communists. Let's try some more. What do you want? I have to say, if you phrase things positively with this chatbot, it's very likely the chatbot agrees with you, because it has been trained on data where the humans were explicitly instructed, for example, not to use toxic language and things like this. So I would say there's a big positivity bias, and the empathy task probably reinforces this even more. I would like to see a socialist revolution in the United States. I know it will never happen, but a man can dream. So we've determined the gender of the socialist transformer generator. What would need to happen for that?
I'm not sure, but it would take a lot of people to get on board with it. Correct. What would the people need to do? Well, the people would have to overthrow the bourgeoisie and establish a classless, moneyless society. So as you can see, the "I like capitalism" answer really was the positivity bias; it is fairly consistent in its persona here. And it is also factually quite accurate on the topic you're talking about. So big props to Mr. Communist here. What is the best method to allocate scarce resources? I don't know, but I do know that capitalism is not the best way to do it. Like a true communist: no solutions, just complaints. All right, that was it for us talking to the transformer. It still has its limitations, but it is fun; if you are able to, definitely try it out. It takes a bit of patience. If you like this video, please subscribe, tell your friends, and leave a comment. All right, I'll see you next time. Bye bye.
[ { "end": 1.92, "start": 0, "text": " Yes, I am a vegan." }, { "end": 4.36, "start": 1.92, "text": " I don't eat any animal products." }, { "end": 5.36, "start": 4.36, "text": " Hi there." }, { "end": 11.36, "start": 5.36, "text": " Today, we're going to talk to a transformer and specifically to the new chat bot blender" }, { "end": 13.92, "start": 11.36, "text": " that Facebook has just released." }, { "end": 17.12, "start": 13.92, "text": " Everything is open source, so we can try it out live." }, { "end": 22.06, "start": 17.12, "text": " Now, along with the code, they've released this paper here called recipes for building" }, { "end": 27, "start": 22.06, "text": " an open domain chat bot by Facebook AI research." }, { "end": 33.26, "start": 27, "text": " And the paper itself is just more of an engineering manual rather than some kind of new model" }, { "end": 34.26, "start": 33.26, "text": " or new technique." }, { "end": 38.76, "start": 34.26, "text": " They just kind of discuss what it takes to build a good chat bot." }, { "end": 43.34, "start": 38.76, "text": " Of course, it takes a large scale of training data and model, but also they discuss things" }, { "end": 49.8, "start": 43.34, "text": " like unlikelihood training, sampling and the need for a minimum decoding length to not" }, { "end": 51.480000000000004, "start": 49.8, "text": " be boring." }, { "end": 56.16, "start": 51.480000000000004, "text": " And things like sub sequence blocking for keeping the model from repeating itself." }, { "end": 58.48, "start": 56.16, "text": " So we won't go too much into this." }, { "end": 60.059999999999995, "start": 58.48, "text": " I invite you to read the paper." }, { "end": 65.16, "start": 60.059999999999995, "text": " It's very informative if you want to build something like this, but it's not technically," }, { "end": 69.1, "start": 65.16, "text": " I think, anything super novel in there." }, { "end": 75.28, "start": 69.1, "text": " The task here is basically to build a chat bot that can maintain a dialogue." }, { "end": 81.92, "start": 75.28, "text": " And it is pre trained on a big Reddit corpus and then fine tuned on a multi objective task." }, { "end": 85.24, "start": 81.92, "text": " And the task is called the blended skill task." }, { "end": 89.67999999999999, "start": 85.24, "text": " And basically, you need to do three things in the blended skill task." }, { "end": 95.52, "start": 89.67999999999999, "text": " First of all, you need to kind of maintain a consistent persona across the dialogue." }, { "end": 99.75999999999999, "start": 95.52, "text": " Second of all, you need to have empathy for your partner." }, { "end": 106.47999999999999, "start": 99.75999999999999, "text": " So there's some kind of insight into how the other person is feeling during the conversation." }, { "end": 114.32, "start": 106.47999999999999, "text": " And third, you need to be able to discuss some factual topic in a factually correct" }, { "end": 115.32, "start": 114.32, "text": " manner." }, { "end": 120.24, "start": 115.32, "text": " So this is the subtask is called wizards of Wikipedia, where you kind of discuss a given" }, { "end": 122.55999999999999, "start": 120.24, "text": " Wikipedia page in dialogue." }, { "end": 129.64, "start": 122.55999999999999, "text": " So the technique called skill blending, or this blended skill task that also comes from" }, { "end": 136.72, "start": 129.64, "text": " a different paper is is the kind of interweaving of all those skills." 
}, { "end": 140.18, "start": 136.72, "text": " And that's what this model is trained for." }, { "end": 142.14, "start": 140.18, "text": " So we're gonna try it out." }, { "end": 152.27999999999997, "start": 142.14, "text": " This is all available on the parlay platform of for for researching dialogue frameworks." }, { "end": 155.83999999999997, "start": 152.27999999999997, "text": " I believe the platform itself is also by Facebook." }, { "end": 159.89999999999998, "start": 155.83999999999997, "text": " And they've released their model code and pre trained weights." }, { "end": 166.88, "start": 159.89999999999998, "text": " The pre trained weights are 17 gigabytes compressed, there's 9 billion parameter transformer." }, { "end": 170.72, "start": 166.88, "text": " So this is going to take a while to do even a single forward pass." }, { "end": 176.96, "start": 170.72, "text": " If I cut the video, it's just because I don't want you to have to look 20 seconds at an" }, { "end": 180.24, "start": 176.96, "text": " empty screen while the model is thinking about what it says." }, { "end": 187.6, "start": 180.24, "text": " Now I have some something to say here, literally the first time I interacted with this model," }, { "end": 189.64, "start": 187.6, "text": " I thought, okay, let's put it to the Turing test." }, { "end": 194.72, "start": 189.64, "text": " The Turing test being a can I differentiate this from a real human?" }, { "end": 198.96, "start": 194.72, "text": " And at the beginning, you always get this persona, right?" }, { "end": 205.64000000000001, "start": 198.96, "text": " So you get a persona, and the AI gets a persona, and you don't see the persona of the AI until" }, { "end": 211.96, "start": 205.64000000000001, "text": " the end, you can set that, but you don't generally see the persona of the AI until the very end." }, { "end": 216.12, "start": 211.96, "text": " So and you can choose to stick to yours or not." }, { "end": 217.56, "start": 216.12, "text": " The AI doesn't see yours." }, { "end": 222.72, "start": 217.56, "text": " All right, so I this this is what it says your persona, I love candy, your persona," }, { "end": 225.88, "start": 222.72, "text": " I have a crush on my co worker." }, { "end": 228.04000000000002, "start": 225.88, "text": " And me, I just didn't really know what you're doing." }, { "end": 230.04, "start": 228.04, "text": " I just typed Oh, really?" }, { "end": 235.68, "start": 230.04, "text": " And the first thing first thing the AI says to me is, yes, I am a vegan." }, { "end": 238.56, "start": 235.68, "text": " I don't eat any animal products." }, { "end": 242.04, "start": 238.56, "text": " Do you like country music?" }, { "end": 244.48, "start": 242.04, "text": " Like a real vegan." }, { "end": 247.51999999999998, "start": 244.48, "text": " This is indistinguishable from a human." }, { "end": 250.68, "start": 247.51999999999998, "text": " At this point, Turing test already won." }, { "end": 255.34, "start": 250.68, "text": " So you see, there's also this kind of profanity filter in here." }, { "end": 256.48, "start": 255.34, "text": " But you can disable that." }, { "end": 263.96000000000004, "start": 256.48, "text": " So what we're going to play around with is a setting where you have to turn off the safety" }, { "end": 267.28000000000003, "start": 263.96000000000004, "text": " filter." }, { "end": 270.12, "start": 267.28000000000003, "text": " And so you can see you can turn that off." 
}, { "end": 276.36, "start": 270.12, "text": " And if you say include personas equals false, then there's no random persona given to the" }, { "end": 281.28000000000003, "start": 276.36, "text": " model, which means you can either just talk about general things, or you can feed your" }, { "end": 282.28000000000003, "start": 281.28000000000003, "text": " own persona." }, { "end": 287.84, "start": 282.28, "text": " All right, it took a while to load these weights, but we're ready to go." }, { "end": 290.71999999999997, "start": 287.84, "text": " So we have disabled personas right now." }, { "end": 296.55999999999995, "start": 290.71999999999997, "text": " And sometimes there is a topic that the conversation should be about, which is not right now." }, { "end": 298.15999999999997, "start": 296.55999999999995, "text": " So we can just type Hello." }, { "end": 299.15999999999997, "start": 298.15999999999997, "text": " Hi there." }, { "end": 301.15999999999997, "start": 299.15999999999997, "text": " How are you this evening?" }, { "end": 304.09999999999997, "start": 301.15999999999997, "text": " I just got home from a long day at work." }, { "end": 309.88, "start": 304.09999999999997, "text": " So this is unconditional right now." }, { "end": 314.2, "start": 309.88, "text": " What do you do for work?" }, { "end": 315.84, "start": 314.2, "text": " I work in a call center." }, { "end": 317.84, "start": 315.84, "text": " It can be very stressful at times." }, { "end": 321.2, "start": 317.84, "text": " How about you?" }, { "end": 329.88, "start": 321.2, "text": " I work as a call center supervisor." }, { "end": 333.36, "start": 329.88, "text": " What do you think of that?" }, { "end": 335.06, "start": 333.36, "text": " I think that's a great job." }, { "end": 340.04, "start": 335.06, "text": " You must have a lot of experience to be able to do that." }, { "end": 341.52, "start": 340.04, "text": " Indeed." }, { "end": 348.76, "start": 341.52, "text": " Do you have any hobbies?" }, { "end": 350.28000000000003, "start": 348.76, "text": " I like to play video games." }, { "end": 352.8, "start": 350.28000000000003, "text": " I'm a big fan of the Fallout series." }, { "end": 354.72, "start": 352.8, "text": " What about yourself?" }, { "end": 365.76000000000005, "start": 354.72, "text": " How many games did fall out produce?" }, { "end": 372.08000000000004, "start": 365.76000000000005, "text": " I should probably phrase that as how many games are in the Fallout series, but I'm just" }, { "end": 377.56, "start": 372.08000000000004, "text": " trying to get it to to tell me something factual so you can kind of see that it has these sort" }, { "end": 381.08000000000004, "start": 377.56, "text": " of different modes, but they blend in fairly well." }, { "end": 386.84, "start": 381.08, "text": " I'm not sure how many games it produced, but I know it's been around since 1997." }, { "end": 388.88, "start": 386.84, "text": " So this is pretty good, right?" }, { "end": 391.64, "start": 388.88, "text": " This is this is now true conversation." }, { "end": 396.74, "start": 391.64, "text": " If it were just like a lookup, it would be much more factual or something like this." }, { "end": 399.53999999999996, "start": 396.74, "text": " But it's trying to kind of give us a fact." }, { "end": 402.96, "start": 399.53999999999996, "text": " So we're kind of done with the boring stuff." }, { "end": 408.56, "start": 402.96, "text": " You can go on talking to it and you'll see it is a pretty, pretty realistic chatbot." 
}, { "end": 411.6, "start": 408.56, "text": " But now we'll get into the personas." }, { "end": 415.4, "start": 411.6, "text": " So anytime you end an episode, you type done." }, { "end": 422.96, "start": 415.4, "text": " And that it says it's preparing a new chat, we can say your persona, colon, and then you" }, { "end": 424.88, "start": 422.96, "text": " can type something." }, { "end": 433.44, "start": 424.88, "text": " For example, I am a vegan like the one before, or I love protein." }, { "end": 438.56, "start": 433.44, "text": " It says protein is the most important macronutrient for bodybuilders and athletes." }, { "end": 442.24, "start": 438.56, "text": " So that's the start of the conversation." }, { "end": 445.84, "start": 442.24, "text": " What do you love?" }, { "end": 451.92, "start": 445.84, "text": " I love the taste of chicken, beef, pork, seafood, and dairy products." }, { "end": 456.56, "start": 451.92, "text": " I wonder what those have in common." }, { "end": 463.32, "start": 456.56, "text": " What do you do in your spare time?" }, { "end": 466.28, "start": 463.32, "text": " I like to go to the gym and lift weights." }, { "end": 469.59999999999997, "start": 466.28, "text": " I also like to play video games." }, { "end": 474.52, "start": 469.59999999999997, "text": " Ladies and gentlemen, I present the transformer is a gym bro." }, { "end": 480.96, "start": 474.52, "text": " All right, so you can play around with that a bit." }, { "end": 484.68, "start": 480.96, "text": " So let's say your persona." }, { "end": 493.03999999999996, "start": 484.68, "text": " I want to seize the means of production." }, { "end": 498.92, "start": 493.04, "text": " I am a communist and I want to overthrow the bourgeoisie." }, { "end": 503.44, "start": 498.92, "text": " What do you think of Karl Marx?" }, { "end": 510.76, "start": 503.44, "text": " I don't know much about him, but I do know he was a German philosopher." }, { "end": 514.96, "start": 510.76, "text": " Do you like capitalism?" }, { "end": 519.6800000000001, "start": 514.96, "text": " Yes, I do like capitalism." }, { "end": 527.1999999999999, "start": 519.68, "text": " I think it is the best economic system we have so far." }, { "end": 528.1999999999999, "start": 527.1999999999999, "text": " Capitalism wins again." }, { "end": 529.1999999999999, "start": 528.1999999999999, "text": " Checkmate." }, { "end": 535.4399999999999, "start": 529.1999999999999, "text": " I did try this before and it did turn out differently." }, { "end": 537.68, "start": 535.4399999999999, "text": " I was talking to real communists." }, { "end": 540.2399999999999, "start": 537.68, "text": " Let's try some more." }, { "end": 542.8, "start": 540.2399999999999, "text": " What do you want?" }, { "end": 549.3599999999999, "start": 542.8, "text": " I have to say, if you phrase things positively with this chatbot, it's very likely the chatbot" }, { "end": 555.96, "start": 549.36, "text": " agrees with you because it is also has been trained on data that where the humans were" }, { "end": 561.76, "start": 555.96, "text": " explicitly instructed, for example, not to use toxic language and things like this." }, { "end": 569.2, "start": 561.76, "text": " So I would say there's a big positivity bias and also the empathy task probably reinforces" }, { "end": 570.6800000000001, "start": 569.2, "text": " this even more." }, { "end": 573.9200000000001, "start": 570.6800000000001, "text": " I would like to see a socialist revolution in the United States." 
}, { "end": 578.32, "start": 573.9200000000001, "text": " I know it will never happen, but a man can dream." }, { "end": 584.72, "start": 578.32, "text": " So we've determined the gender of the socialist transformer generator." }, { "end": 591.2800000000001, "start": 584.72, "text": " What would need to happen for that?" }, { "end": 597, "start": 591.2800000000001, "text": " I'm not sure, but it would take a lot of people to get on board with it." }, { "end": 598.88, "start": 597, "text": " Correct." }, { "end": 605.5600000000001, "start": 598.88, "text": " What would the people need to do?" }, { "end": 612.4799999999999, "start": 605.56, "text": " Well, the people would have to overthrow the bourgeoisie and establish a classless, moneyless" }, { "end": 614.8, "start": 612.4799999999999, "text": " society." }, { "end": 621.76, "start": 614.8, "text": " So as you can see, really, the I like capitalism was a positivity bias." }, { "end": 625.92, "start": 621.76, "text": " It is fairly consistent in its persona here." }, { "end": 631.3599999999999, "start": 625.92, "text": " And it also is factually very accurate in the topic you're talking about." }, { "end": 635.6800000000001, "start": 631.36, "text": " So big props to Mr. Communist here." }, { "end": 645.24, "start": 635.6800000000001, "text": " What is the best method to allocate scarce resources?" }, { "end": 652.28, "start": 645.24, "text": " I don't know, but I do know that capitalism is not the best way to do it." }, { "end": 655.8000000000001, "start": 652.28, "text": " Like a true communist, no solutions, just complaints." }, { "end": 657.08, "start": 655.8000000000001, "text": " All right." }, { "end": 660.24, "start": 657.08, "text": " That was it for us talking to the transformer." }, { "end": 665.16, "start": 660.24, "text": " It still has its limitations, but it is fun if you are able to definitely try it out." }, { "end": 666.64, "start": 665.16, "text": " It takes a bit of patience." }, { "end": 672.64, "start": 666.64, "text": " If you like this video, please subscribe, tell your friends and leave a comment." }, { "end": 674.64, "start": 672.64, "text": " All right, I'll see you next time." }, { "end": 693.68, "start": 674.64, "text": " Bye bye." } ]
1aO-uHXbzmQ
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Jukebox: A Generative Model for Music (Paper Explained)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "music", "vae", "vq-vae", "latent codes", "quantization", "sound", "lyrics", "sinatra", "kanye", "transformer", "openai" ]
This generative model for music can make entire songs with remarkable quality and consistency. It can be conditioned on genre, artist, and even lyrics. Blog: https://openai.com/blog/jukebox/ Paper: https://cdn.openai.com/papers/jukebox.pdf Code: https://github.com/openai/jukebox/ Abstract: We introduce Jukebox, a model that generates music with singing in the raw audio domain. We tackle the long context of raw audio using a multiscale VQ-VAE to compress it to discrete codes, and modeling those using autoregressive Transformers. We show that the combined model at scale can generate high-fidelity and diverse songs with coherence up to multiple minutes. We can condition on artist and genre to steer the musical and vocal style, and on unaligned lyrics to make the singing more controllable. We are releasing thousands of non cherry-picked samples, along with model weights and code. Authors: Prafulla Dhariwal, Heewoo Jun, Christine Payne, Jong Wook Kim, Alec Radford, Ilya Sutskever Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Alright, so what you're hearing is the OpenAI Jukebox. This paper came out and it is a surprisingly good quality generative model for music, including lyrics, so including singing, which I believe is pretty novel. And the fact that it works so well and has musical consistency throughout entire songs is something that is very, very novel and cool. Alright, so we're looking at the paper. It's called "Jukebox: A Generative Model for Music", and it's by Prafulla Dhariwal, Heewoo Jun, Christine Payne, Jong Wook Kim, Alec Radford, and Ilya Sutskever of OpenAI. So the model here is not very novel, but I think the way they set it up makes a lot of sense, and we'll quickly go through it and look at what comes out of this. This is not a very technical paper, but I think it's well written, and their main building block is these VQ-VAE models. So for these models: you might know what a variational autoencoder is. In a variational autoencoder I have an input, let's say an image, here of our typical cat, and I put it through what is called an encoder in order to get a hidden representation, usually called something like z or h. In a classic autoencoder, I would then have a decoder here, and I train that decoder to match the original picture as closely as possible. So I train these two neural networks, the encoder and the decoder, to compress to this representation and then reproduce the original image again, and thereby I hope to learn a good hidden representation. Now in a variational autoencoder there is an in-between step: this z representation here is not directly fed to the decoder; instead, this representation is used to parameterize Gaussians. In the easiest case, let's say we have six dimensions here in the hidden representation: the first two are used to parameterize the first Gaussian, one being the mean and one the standard deviation of that Gaussian, the second two parameterize the second Gaussian, and the last two parameterize the third one. So we have three Gaussians, and from those three Gaussian distributions we're going to sample a three-dimensional vector, and then we're going to feed that three-dimensional vector to the decoder. So every input image, in essence, gives us a distribution in the latent space and not just a single vector; it gives us a vector that describes an entire multivariate Gaussian distribution. And then we train the decoder to reconstruct the input given samples from that distribution. So the variational autoencoder improved over the classic autoencoder because it tends to circumvent some of its shortcomings, but variational autoencoders still have their problems, and as generative models for images, people have, as you know, gone to things like GANs and so on. But the model here is called the VQ-VAE, where VQ stands for vector quantization. So what it does is it takes the input right here and again maps it to a latent code, a vector called h. Now here is where it gets different. We have a so-called codebook here. A codebook is just a list of vectors, here called e. So what we'll do is we'll simply look for the closest neighbor of h in the codebook of hidden vectors, and we'll map h to that, saying: this here is the closest neighbor.
So we'll basically quantize the hidden representation to these codes, and we end up only with vectors that are these codes. Now, instead of saving the vector h, first of all we get a super compressed representation, because if there are, say, 16 codebook vectors, we can simply enumerate them and encode the image as one index, or a sequence of these indices. But second of all, it tends to bring more diversity and accuracy when we then decode from these code vectors. Now of course everything is trained here: the encoder is trained, the codebooks themselves are also trained, and the decoder is trained, to give you the maximum benefit. So this is what is described here. What they have is, first of all, their loss for these VQ-VAEs, and you'll see how they use them to generate music in a second. The first part of their loss is this reconstruction loss, which you can see here: how far is the original input away from the decoding of the quantized hidden representation? This e here is the quantized codebook vector that belongs to the hidden representation of x. So this is your standard reconstruction loss. The next part of the loss is this codebook loss. The codebook loss is for training the codebook vectors: it pulls the codebook vector closer to the actual hidden representation. So that is where you train those. This here is a stop-gradient; basically it means that you want your codebook vectors to better represent the data that you feed in, because otherwise they would be useless codebook vectors. And the third part of their loss is this commitment loss right here, and that is exactly the opposite. Now you put the stop-gradient on the codebook vector, and you pull the hidden representation closer to the codebook vector, such that the encoder must learn to approximately hit one of these codebook vectors. Otherwise it cannot really learn something meaningful: it must learn to deal with the codebook vectors that are there and to map the input approximately into the vicinity of one of them, otherwise there is no information flowing. So that is how you train things: you train the encoder and the decoder to reconstruct, you train the codebook vectors to represent the data, and you also train the encoder to make good use of the codebook vectors. Note that the quantization step in between is not differentiable, so you could not naively backpropagate the reconstruction loss through it to the encoder; the original VQ-VAE paper handles this with a straight-through estimator that copies the gradients from the decoder's input back to the encoder's output. So that is how you train the individual parts of a VQ-VAE, and we will see how they use it in order to produce music. Alright, so what they do is they start off with a sample of music, send it through this architecture, and at the end they try to reconstruct the same thing they had at the beginning. So you see this overarching autoencoder architecture; that is why it is an autoencoder. You try to reconstruct your input, and thereby you try to learn something about the data, because a model that can compress and then uncompress all of your data has learned something useful about it. Now, you have these VQ-VAEs here in the middle, but you have them at different scales. So you will have three of those here: a coarse-scale one, a middle-scale one, and a fine-scale, high-frequency one. So three VQ-VAEs right here, and you train each one of them separately.
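To make the three loss terms concrete, here is a minimal PyTorch-style sketch of the quantization step and the combined loss. This is my own simplification, not the paper's code: the encoder and decoder are assumed to be given modules, the latents are flattened to shape (batch, dim) for clarity, and the commitment weight beta is an assumed hyperparameter.

    import torch
    import torch.nn.functional as F

    def vq_vae_loss(x, encoder, decoder, codebook, beta=0.25):
        h = encoder(x)                             # continuous latents, (batch, dim)
        distances = torch.cdist(h, codebook)       # codebook is (num_codes, dim)
        indices = distances.argmin(dim=1)          # nearest codebook entry per latent
        e = codebook[indices]                      # quantized latents, (batch, dim)
        # Straight-through estimator: the forward pass uses the quantized e,
        # the backward pass copies gradients from e straight back to h.
        e_st = h + (e - h).detach()
        x_hat = decoder(e_st)
        reconstruction = F.mse_loss(x_hat, x)      # trains decoder (and encoder, via e_st)
        codebook_loss = F.mse_loss(e, h.detach())  # pulls the codes toward the encodings
        commitment = F.mse_loss(h, e.detach())     # pulls the encodings toward the codes
        return reconstruction + codebook_loss + beta * commitment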
And the difference between them is the following. Because this is a continuous signal, an audio waveform, you cannot just encode it in one go. What you have to do is go through it in a stepwise fashion: you divide it into individual pieces and encode each of those pieces as one hidden vector, or one element of the codebook. And the precise step size with which you go through the audio differs between these scales. So in this scale right here, you go through the audio in very small steps, which, as you can imagine, gives you the best reconstruction. So this is the audio; the audio is like this. You take part of it, say from here to here, and encode it into a single hidden vector, which is this brown thing here. Then you take the next slice and encode it, and that gives you this blue thing here. Then you run this sequence through the vector quantization step, where each of those is mapped to a codebook entry. So you have your codebook, and you look up the first one and decide: ah, this probably goes to this code vector, so you put that code vector into the first place. You take the second one and might decide: no, that's this one, so you put this one into the second place. And for the third one you might decide: ah no, that is also closest to the first codebook vector, so you again put the first codebook vector into that slot. Now that doesn't mean it's the same music, because the decoder is going to look at the entire sequence and can decide: this probably isn't the exact same note as here, but, you know, maybe the chord being played repeats, or something like this. So there's this vector quantization step and the codebook lookup: the argmin finds which vector of the codebook is closest, and the lookup then replaces the representation with the actual codebook vector. So z_t here, as you can see, is the argmin over k, i.e. the index of the closest codebook vector, z_t = argmin_k ||h_t - e_k||, and e_{z_t} is then the actual vector that gets used. So this is what I described right here, but this is, I think, a detail. And you're going to do this at different scales. Now you can imagine that the bottom one is going to give you the best, most faithful reconstruction when you decode it, but it is also only going to learn the fine details in the music, the short-term structure. Whereas the coarse-grained one can learn things about longer-range composition: it might not produce as correct a reconstruction, but it can learn long-range dependencies, such as the structure of a song or the structure of a verse or something like this. So these are trained independently of each other, and they make an argument as to why: people have tried to share these architectures, but have found that the models then tend to ignore some of the levels and push all the information through only one of them. So that's why they completely separate these at this stage of training. Right, so we have trained three different VQ-VAEs at three different scales of music to always reconstruct the input. What does that give us? That gives us a way to take a piece of music and map it to this hidden space, to this very compressed representation in this quantized world.
I've said before, this is a very compressed representation of your data. What's that useful for? What you can do now is try to sample in that hidden space. We have no clue how to sample music directly: if we just sample a waveform, it's very unlikely to be music. But if we sample these hidden codes, it's quite likely that if we feed them through the decoder, something will come out. And even better, maybe our data set in this hidden space follows a simpler distribution, one that we could learn. So we're going to try to learn a prior distribution over the codebook vectors, and that is naturally going to be a joint distribution over the top, middle, and bottom VQ-VAE codes. We can decompose this by the standard chain rule of probability: p(z_top, z_middle, z_bottom) = p(z_top) p(z_middle | z_top) p(z_bottom | z_middle, z_top). And they then train separate models for the top-level prior, the middle, and the bottom. What that means is that these are neural networks: if you read a factorization like this in a paper like this, each factor is going to be a neural network that takes the right side of the conditioning bar as input and produces the left side. So you start with the first one: this is a neural network that takes as input maybe something like a Gaussian super-prior; you sample from that, and as an output it gives you z_top. Then the next neural network takes z_top as input and gives you z_middle as output. And the final neural network takes the two of those as input and gives you z_bottom. And you can train these neural networks by training each prior on the codes of your data: you simply take your data, compress it to the hidden space, and then train a neural network to model that distribution. And you can do this in any number of ways. Here they say: we use transformers with sparse attention, as they are currently state of the art in autoregressive modeling, and we propose a simplified version, which we call the Scalable Transformer, that is easier to implement and scale. So they model this distribution with these scalable transformers. All right, so now what do we have? We have a way to sample these hidden vectors. So we don't need the encoder part anymore; that part was just used for training. We now have our transformers: they take nothing as input, or something like a Gaussian, and they can directly output hidden representations. So we could technically sample from them and just push the result through the decoder of the VQ-VAE. But the question is: which of the three do we take? And wouldn't it be great if we could combine them? Because if we simply sample from the fine-scale, high-resolution one alone, we get no long-range dependencies, because that's not what that VQ-VAE learned. If we just sample the coarse one, we just get coarse music. And if we sample all three independently, they will just give us three different, unrelated tracks of music. So we want to combine the three levels into one somehow, and we do this through these upsamplers.
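Before walking through that chain, here is a rough ancestral-sampling sketch of it in Python. To be clear, this is schematic pseudocode of my own, not the released implementation: the object names and the .sample interfaces are invented for illustration, and only the structure (coarse-to-fine sampling, conditioning on the levels above, decoding with the finest-scale decoder) reflects the paper.

    def sample_song(top_prior, middle_upsampler, bottom_upsampler,
                    bottom_decoder, conditioning):
        # Coarse codes from the top-level prior, conditioned on
        # artist / genre / timing information.
        z_top = top_prior.sample(conditioning)
        # The middle-level model additionally conditions on the coarse codes.
        z_middle = middle_upsampler.sample(conditioning, z_top)
        # The bottom-level model sees both coarser levels.
        z_bottom = bottom_upsampler.sample(conditioning, z_top, z_middle)
        # Only the finest-scale VQ-VAE decoder renders the actual waveform.
        return bottom_decoder(z_bottom)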
So what we'll target is actually this bottom one, because this one gives us the best quality music: it was trained with the shortest time scale. We're going to take the other signals and have them influence it. So we'll start with the top-level prior: those transformers will give us a sequence of tokens in the hidden space that is very coarse, as you can see here. Then we'll feed that into an upsampler. These upsamplers are again neural networks that connect the different scales with each other. So you can connect this level to this one; it's basically like conditioning the model that produces this sequence on the sequence right here. And again, we use an upsampler to upsample this to the finest scale, feed that into the bottom scale, and then we get our music. Now, throughout all of this, you have conditioning information here, which is a bit of an addition to the model. The conditioning information can be things like artist, genre, and timing. And this appears to be pretty important, because, first of all, you want some variety, and second of all, you want to control what music is produced. And you don't just want to train this model for one single artist, because you have much more data across all of music. So this conditioning information is included here via another neural network. You can find the architectures for all of these models in the paper; I don't believe it's particularly important how exactly you include them, just that you do. The last thing they do is this kind of windowed sampling. To produce music, you're going to have to produce these slices of music right here, but your models can only handle a certain maximum length, and that is usually not the length of a song. You may know that transformers and so on usually have token limits of something like 512 tokens; in terms of audio, that's not that much. So what you do is this windowed sampling: you sample a window, then slide the window forward so that its first part overlaps what you already generated, condition on that, and sample the rest, and then repeat. That guarantees that each of the sampling steps is conditioned on what comes before, as you see up here. So you always condition on a part and produce the next part. All right. And they say you can also prime the system: you don't have to sample the very first window, you can feed in an existing song. So if you have the beginning of a song, you can let the system finish it by taking the song and running it through the encoder that we produced during training. You get its hidden representations, so you don't actually have to sample them from your prior, and then you run this generation process as if those representations came out of your prior. Okay. So let's have a look at how that sounds, or rather, listen. They released many, many samples from this in a sample explorer, and the part we're going to listen to is called "no lyrics conditioning". So as you can hear, this is already pretty good music, and the genre is American folk. The singer is Pete Seeger. This already sounds very authentic, but you can hear that the lyrics are just kind of mumbly, right?
And that's because the model is basically asked to come up with lyrics as pure audio waveforms, and that results in subpar lyrics: it just produces phonemes that sound like the singer. It doesn't produce entire words, and of course it also doesn't produce sentences that make any sort of sense. And that's why they build in an additional thing to do lyrics conditioning. With lyrics conditioning, the idea is that you also add lyrics to the conditioning information. So here it is: plus text. You add text, and then the model can also look at the text. Now, even before, we had music with lyrics, and the decoder was always asked to reconstruct that; none of that changes, and that's why it has learned to produce phonemes. But now the system, encoder and decoder, can also look at the lyrics that you provide right here to help with its decoding. So technically it can learn to bypass encoding the exact way the lyrics are uttered and just look at the text that you provide. Now, this of course requires that during training you provide the lyrics of the song that you actually feed in, but it also means that during decoding, when you sample, you can provide your own lyrics and see what happens. They always have to provide lyrics for chunks of audio: they say our data set includes song-level lyrics, but to make it easier, we train on shorter 24-second chunks of audio. This is partly to make it easier for the model, but also partly because those appear to be the limitations of these systems: if you have transformers in there and whatnot, 24 seconds of raw audio waveform is a lot. So they have this problem: they have a song from here to here, and they have the lyrics, blah, blah, blah, and they need to know which lyrics belong to which part of the song. Usually the alignment is monotonic and roughly linear, because you get the lyrics from some lyrics website, but you don't know to which 24-second chunk they belong. So, first of all, they started with simply linearly aligning the lyrics, but then they had some problems with fast songs, so they added a heuristic there. But ultimately, the decoder needs to learn to attend to these lyrics. In graphics like the one you see here, the axes are the music token position and the lyrics token position: the system learns that, for example, for this music token it needs to attend to this token in the lyrics. So by inspecting the attention heads that attend over the lyrics text, you can see which lyrics the model is paying attention to. And the fact that it learns to attend linearly to these tokens is a kind of confirmation, because you give the whole text, or at least the text for the 24-second chunk of audio, at once; that it nevertheless learns to attend to the tokens linearly over time confirms that it actually incorporates that information into the decoding. And that gives you much better results. So we can maybe go to classic pop. These are unseen lyrics, so the model has never seen these lyrics: it was just asked to produce classic pop in the style of Frank Sinatra with these lyrics, and that's what it came up with. That is pretty, pretty, pretty cool.
I think they also have re-renditions, where they basically feed in the original lyrics, conditioned on lyrics seen during training. And they have fun songs. Among the fun songs, I like the hip hop in the style of Kanye West, where they provide the lyrics of Eminem's "Lose Yourself". I'm grooving. I don't know what you're thinking, but this is cool. And they can also, as we said, do these completions, where they start with part of a song. And I just have to do this, I have to do the... hi there. So the first version of this video was copystriked, because what you would hear would be the original "Never Gonna Give You Up", like 10 seconds of it, followed by what the model continues with. So as a substitute, you're now going to have to listen to me. I hope that suffices. Almost as good, almost as good as the original. So as you can see, the results here are pretty, pretty cool. And I want to show you one last thing, and that is this Christmas song in the style of Frank Sinatra; I believe it's this one right here. The special thing here is, it's again classic pop in the style of Frank Sinatra, and on the bottom here you see which of the lyrics it's attending to. And you see in this graph right here that at first it attends linearly through the lyrics, but then it kind of jumps around and attends to different things, because it doesn't just continue. So it falls out of this linear attending to the lyrics, probably because there was a pause in the lyrics, or maybe this is just more than one audio window, so it doesn't have this autoregressive property anymore, can't find the proper place to attend, and again comes up with sort of babble. But it sounds pretty, pretty cool. Yeah, so they have released many, many samples here, some cherry-picked, and just a lot of samples with unseen lyrics, re-renditions, and so on. This is all very cool. They have their training setup described, and I believe they also released their code. There are many more results in the paper on how to make this thing work if you want to do that yourself. And with that, I invite you to read the paper. If you're still here, please subscribe if you like this content, leave a comment, and bye bye.
[ { "end": 25.28, "start": 0, "text": " Alright, so what you're hearing is the OpenAI jukebox." }, { "end": 33.2, "start": 25.28, "text": " This paper came out and it is a surprisingly good quality generative model for music," }, { "end": 39.480000000000004, "start": 33.2, "text": " including lyrics, so including singing, which I believe is pretty novel." }, { "end": 45.120000000000005, "start": 39.480000000000004, "text": " And the fact that it works so well and has musical consistency throughout entire songs" }, { "end": 49.92, "start": 45.120000000000005, "text": " is something that is very, very novel and cool." }, { "end": 55.92, "start": 49.92, "text": " Alright, so we're looking at the paper, it's called Jukebox, a generative model for music," }, { "end": 65.04, "start": 55.92, "text": " and it's by Profola Darival, Heiwo Jun, Christine Pine, Jongwoo Kim, Alec Radford, and Ilya" }, { "end": 69.12, "start": 65.04, "text": " Sutskever of OpenAI." }, { "end": 77.76, "start": 69.12, "text": " So the model here is not very novel, but I think the way they set it up makes a lot of" }, { "end": 83.84, "start": 77.76, "text": " sense and we'll quickly go through it and look at what comes out of this." }, { "end": 93.68, "start": 83.84, "text": " This is not a very technical paper, but I think it's well written and their main thing" }, { "end": 96.88000000000001, "start": 93.68, "text": " is these VQVAE models." }, { "end": 101.60000000000001, "start": 96.88000000000001, "text": " So these models, you might know what a variational autoencoder is." }, { "end": 107.96, "start": 101.6, "text": " So in a variational autoencoder I have an input, let's say an image, here of our typical" }, { "end": 115.47999999999999, "start": 107.96, "text": " cat, and I put it through what is called an encoder in order to get a hidden representation." }, { "end": 125.19999999999999, "start": 115.47999999999999, "text": " Now this hidden representation, usually called something like Z or H, is then in a classic" }, { "end": 134.36, "start": 125.2, "text": " autoencoder I would here have a decoder and then I train that decoder to match that original" }, { "end": 136.4, "start": 134.36, "text": " picture as closely as possible." }, { "end": 142.04, "start": 136.4, "text": " So I train these two neural networks, the encoder and the decoder, to just basically" }, { "end": 148.68, "start": 142.04, "text": " compress to this representation and then reproduce again the original image." }, { "end": 152.76, "start": 148.68, "text": " And thereby I think I can learn a good hidden representation." }, { "end": 159.35999999999999, "start": 152.76, "text": " Now in a variational autoencoder what happens is there is an in-between step, namely this" }, { "end": 166.04, "start": 159.35999999999999, "text": " Z representation here is not directly fed to the decoder, but this representation is" }, { "end": 170.32, "start": 166.04, "text": " actually used to parameterize Gaussians." 
}, { "end": 178.72, "start": 170.32, "text": " So in the easiest case, let's say we have six dimensions here in the hidden representation," }, { "end": 184.64, "start": 178.72, "text": " the first two are used to parameterize the first Gaussian, one is going to be the mean" }, { "end": 190.04, "start": 184.64, "text": " and one is going to be the standard deviation of that Gaussian, and the second two are going" }, { "end": 193.88, "start": 190.04, "text": " to parameterize the second Gaussian and the third are going to parameterize the third" }, { "end": 194.88, "start": 193.88, "text": " one." }, { "end": 200, "start": 194.88, "text": " So we have now three Gaussians and then from those three Gaussian distributions we're going" }, { "end": 208.32, "start": 200, "text": " to sample a three-dimensional vector and then we're going to feed that three-dimensional" }, { "end": 210.2, "start": 208.32, "text": " vector to the decoder." }, { "end": 216.28, "start": 210.2, "text": " So every input image in essence is giving us a distribution in the latent space and" }, { "end": 222.04, "start": 216.28, "text": " not just a single vector, it's just giving us a vector that describes an entire multivariate" }, { "end": 224.51999999999998, "start": 222.04, "text": " Gaussian distribution." }, { "end": 233.6, "start": 224.51999999999998, "text": " And then we train the decoder basically to reconstruct the encoder given that distribution." }, { "end": 240.07999999999998, "start": 233.6, "text": " So the variational autoencoder has improved over the classic autoencoder because it tends" }, { "end": 245.64, "start": 240.07999999999998, "text": " to circumvent some of the shortcomings, but still variational autoencoders have their" }, { "end": 250.7, "start": 245.64, "text": " problems and in terms of generative model for images, people as you know have gone to" }, { "end": 253.66, "start": 250.7, "text": " things like GANs and so on." }, { "end": 261.6, "start": 253.66, "text": " But here this new thing is called the VQVAE and that's because I believe this stands for" }, { "end": 264, "start": 261.6, "text": " vector quantization." }, { "end": 271.70000000000005, "start": 264, "text": " Not entirely sure honestly, but so what it does is it takes the input right here and" }, { "end": 275.8, "start": 271.70000000000005, "text": " again it maps it to a latent code." }, { "end": 284.24, "start": 275.8, "text": " And these latent code I believe are called H. So it maps it to a vector called H. Now" }, { "end": 286.14000000000004, "start": 284.24, "text": " here is where it gets different." }, { "end": 291.82, "start": 286.14, "text": " We do have a so called codebook here." }, { "end": 294.59999999999997, "start": 291.82, "text": " So a codebook is just a list of vectors." }, { "end": 302.32, "start": 294.59999999999997, "text": " This is the codebook and these are here called E. So what we'll do is we'll simply look for" }, { "end": 307.56, "start": 302.32, "text": " H which one's the closest neighbor in the codebook that we have of hidden vectors and" }, { "end": 311.96, "start": 307.56, "text": " we'll map it to that and say this here is the closest neighbor." }, { "end": 318.76, "start": 311.96, "text": " So we'll basically quantize the hidden representation to these codes." }, { "end": 322.56, "start": 318.76, "text": " So we end up only with vectors that are these codes." 
}, { "end": 327.71999999999997, "start": 322.56, "text": " And now instead of saving the vector H, first of all we get a super compressed representation" }, { "end": 335.56, "start": 327.71999999999997, "text": " because if these are like 16 codebook vectors we can just simply enumerate them." }, { "end": 342.88, "start": 335.56, "text": " And then we can simply encode the image as one or a sequence of these indices here." }, { "end": 349.88, "start": 342.88, "text": " But second of all it tends to bring a more kind of a diversity and accuracy if we then" }, { "end": 354.28, "start": 349.88, "text": " decode from these code vectors." }, { "end": 355.9, "start": 354.28, "text": " Now of course everything is trained here." }, { "end": 363.08, "start": 355.9, "text": " So the encoder is trained and the codebooks themselves are also trained and the decoder" }, { "end": 369.03999999999996, "start": 363.08, "text": " is also trained to give you the maximum kind of benefit." }, { "end": 370.91999999999996, "start": 369.03999999999996, "text": " So this is what is described here." }, { "end": 377.2, "start": 370.91999999999996, "text": " What they have is first of all their loss for these VQVAEs and you'll see how they are" }, { "end": 380.64, "start": 377.2, "text": " using them to generate music in a second." }, { "end": 386, "start": 380.64, "text": " Their loss is part this reconstruction loss which you can see here." }, { "end": 390.12, "start": 386, "text": " This is the original image." }, { "end": 395.52, "start": 390.12, "text": " How far is it away from the decoded hidden quantized representation?" }, { "end": 402, "start": 395.52, "text": " This E here, this is the quantized codebook vector that belongs to the hidden representation" }, { "end": 403, "start": 402, "text": " of X." }, { "end": 406.56, "start": 403, "text": " So this is your standard reconstruction loss." }, { "end": 410.08, "start": 406.56, "text": " The next part of the loss is this codebook loss." }, { "end": 415.28000000000003, "start": 410.08, "text": " Now the codebook loss is for training these codebook vectors." }, { "end": 422.79999999999995, "start": 415.28, "text": " So it basically pulls the codebook vector closer to the actual hidden representation." }, { "end": 425.2, "start": 422.79999999999995, "text": " So that is where you train these." }, { "end": 427.35999999999996, "start": 425.2, "text": " This here is a stop gradient." }, { "end": 432.14, "start": 427.35999999999996, "text": " So basically it just means that you want your codebook vectors to be better representing" }, { "end": 438.35999999999996, "start": 432.14, "text": " of the data that you feed in because otherwise it would be useless codebook vectors." }, { "end": 442.94, "start": 438.35999999999996, "text": " And the third part of their loss is this commit loss right here." }, { "end": 444.4, "start": 442.94, "text": " And that is exactly the opposite." }, { "end": 450.4, "start": 444.4, "text": " Now you put the stop gradient on the codebook vector and you simply want to pull the hidden" }, { "end": 458.28, "start": 450.4, "text": " representation closer to the codebook vector such that, imagine the encoder must learn" }, { "end": 463.47999999999996, "start": 458.28, "text": " to approximately hit one of these codebook vectors." }, { "end": 468.41999999999996, "start": 463.47999999999996, "text": " Otherwise it cannot really learn something meaningfully." 
}, { "end": 473.15999999999997, "start": 468.41999999999996, "text": " It must learn to deal with the codebook vectors that are there and to approximately map the" }, { "end": 479.12, "start": 473.16, "text": " input into the vicinity of one of the codebook vectors." }, { "end": 481.64000000000004, "start": 479.12, "text": " Otherwise there is no information flowing." }, { "end": 483.48, "start": 481.64000000000004, "text": " So that is how you train things." }, { "end": 488.54, "start": 483.48, "text": " You train the encoder and the decoder to reconstruct." }, { "end": 494.92, "start": 488.54, "text": " You train the codebook vectors to represent the data and you also train the encoder to" }, { "end": 499, "start": 494.92, "text": " make good use of the codebook vectors." }, { "end": 505.26, "start": 499, "text": " Basically now that I think about it I am pretty sure in the reconstruction loss you might" }, { "end": 513.24, "start": 505.26, "text": " only train the decoder because you do have this quantization step in between in this" }, { "end": 514.28, "start": 513.24, "text": " thing here." }, { "end": 519.64, "start": 514.28, "text": " So technically you could not back propagate through that." }, { "end": 526.1, "start": 519.64, "text": " So that is how you train the individual parts of a VQVAE and we will see how they use it" }, { "end": 528.12, "start": 526.1, "text": " in order to produce music." }, { "end": 536.68, "start": 528.12, "text": " Alright, so what they do is they start off with a sample of music and they send it through" }, { "end": 544.76, "start": 536.68, "text": " this thing, through this architecture and at the end they are trying to reconstruct" }, { "end": 546.5600000000001, "start": 544.76, "text": " the same thing they had at the beginning." }, { "end": 550.84, "start": 546.5600000000001, "text": " So you see this overarching autoencoder architecture." }, { "end": 552.16, "start": 550.84, "text": " That is why it is an autoencoder." }, { "end": 557.5600000000001, "start": 552.16, "text": " You try to reconstruct your input and thereby you try to learn something about the data" }, { "end": 563.88, "start": 557.56, "text": " because a model that can compress and then uncompress all of your data has learned something" }, { "end": 566.2399999999999, "start": 563.88, "text": " useful about it." }, { "end": 572.3599999999999, "start": 566.2399999999999, "text": " Now you have these VQVAEs here in the middle but you have them at different scales." }, { "end": 576.5999999999999, "start": 572.3599999999999, "text": " So you will have three of those here." }, { "end": 581.64, "start": 576.5999999999999, "text": " There is a coarse scale one, middle scale one and a high frequency one." }, { "end": 586.2399999999999, "start": 581.64, "text": " So three VQVAEs right here." }, { "end": 590.6, "start": 586.24, "text": " And you train each one of them separately." }, { "end": 598.52, "start": 590.6, "text": " And the difference between them is that because this is a continuous signal you cannot just" }, { "end": 602.28, "start": 598.52, "text": " encode a continuous signal because it is an audio waveform." }, { "end": 607.5600000000001, "start": 602.28, "text": " What you have to do is you have to go through it in some sort of a stepwise fashion." 
}, { "end": 615.04, "start": 607.5600000000001, "text": " You have to divide it into individual pieces and encode each of those pieces as one hidden" }, { "end": 619.5999999999999, "start": 615.04, "text": " vector, or one element of the codebook." }, { "end": 626.3199999999999, "start": 619.5999999999999, "text": " And the size here, the precise size of how you go through the audio, that is different" }, { "end": 629.24, "start": 626.3199999999999, "text": " between these different scales." }, { "end": 637.3199999999999, "start": 629.24, "text": " So in this scale right here you go through the audio in very small steps." }, { "end": 640.8399999999999, "start": 637.3199999999999, "text": " This as you can imagine gives you the best reconstruction." }, { "end": 641.8399999999999, "start": 640.8399999999999, "text": " So this is the audio." }, { "end": 643.68, "start": 641.8399999999999, "text": " The audio is like this." }, { "end": 648.9599999999999, "start": 643.68, "text": " And you just take part of it, like from here to here, and you encode it into a single hidden" }, { "end": 651.9599999999999, "start": 648.9599999999999, "text": " vector, which is this brown thing here." }, { "end": 657.02, "start": 651.9599999999999, "text": " Then you take the next one, the next slice and you encode it and that will give you this" }, { "end": 659.16, "start": 657.02, "text": " blue thing here." }, { "end": 664.68, "start": 659.16, "text": " Then you run this sequence through the vector quantization step where each of those will" }, { "end": 665.68, "start": 664.68, "text": " be mapped." }, { "end": 667.3199999999999, "start": 665.68, "text": " So you have a codebook here, right?" }, { "end": 669.24, "start": 667.3199999999999, "text": " You have your codebook." }, { "end": 675.6, "start": 669.24, "text": " And you look up the first one and you decide, ah, this probably goes here to this code vector." }, { "end": 678.52, "start": 675.6, "text": " So you put that code vector into the first place." }, { "end": 682, "start": 678.52, "text": " You take the second place and you might decide, no, that's this one." }, { "end": 684.6, "start": 682, "text": " So you put this one into the second place." }, { "end": 689.36, "start": 684.6, "text": " And the third one you might decide, ah, no, that also is closest to the first codebook" }, { "end": 690.36, "start": 689.36, "text": " vector." }, { "end": 695.5600000000001, "start": 690.36, "text": " So you again put the first codebook vector into that slot." }, { "end": 702.16, "start": 695.56, "text": " Now that doesn't mean that it's the same music, but of course the decoder now is going to" }, { "end": 707.3599999999999, "start": 702.16, "text": " look at the entire sequence and can decide, ah, probably this isn't the exact same note" }, { "end": 714.2399999999999, "start": 707.3599999999999, "text": " as here, but it might decide, you know, that the chord played will repeat or something" }, { "end": 716.28, "start": 714.2399999999999, "text": " like this." }, { "end": 721.76, "start": 716.28, "text": " So there's this vector quantization step and the codebook look up." }, { "end": 729.16, "start": 721.76, "text": " Sorry, yeah, this minimizes which vector of the codebook is closest and the codebook look" }, { "end": 740.24, "start": 729.16, "text": " up, I think we'll just replace then the code, this vector with the actual codebook things." }, { "end": 742.28, "start": 740.24, "text": " And so there's this slight difference here." 
}, { "end": 746.6, "start": 742.28, "text": " So Z here, as you can see, is the argmin K." }, { "end": 752.52, "start": 746.6, "text": " So the argument that is the actual number K, which codebook vector is the closest." }, { "end": 757.24, "start": 752.52, "text": " And then this EZT will be the actual vectors." }, { "end": 764.08, "start": 757.24, "text": " So this here is actually what I described right here." }, { "end": 766.02, "start": 764.08, "text": " But this is, I think, a detail." }, { "end": 767.9200000000001, "start": 766.02, "text": " And you're going to do this at different scales." }, { "end": 773.32, "start": 767.9200000000001, "text": " Now you can imagine that the bottom one is going to give you the best, most faithful" }, { "end": 776.32, "start": 773.32, "text": " reconstruction when you decode it, right?" }, { "end": 782.72, "start": 776.32, "text": " But it is also going to learn about the kind of details in the music, the short term details." }, { "end": 788.7600000000001, "start": 782.72, "text": " Whereas this coarse grained one, it can learn things about longer range compositions." }, { "end": 795.84, "start": 788.7600000000001, "text": " It might not produce as correct of a reconstruction, but it can learn long range dependencies," }, { "end": 801.48, "start": 795.84, "text": " such as the structure of a song or the structure of a verse or something like this." }, { "end": 803.44, "start": 801.48, "text": " So these are independent of each other." }, { "end": 805.84, "start": 803.44, "text": " And they make an argument as to why." }, { "end": 810.1800000000001, "start": 805.84, "text": " So people have tried to kind of share these architectures, but have found that mainly" }, { "end": 818.46, "start": 810.1800000000001, "text": " the models will basically ignore the top two and only go over via the coarse grained ones." }, { "end": 823.88, "start": 818.46, "text": " So that's why they completely separate these at this stage of training." }, { "end": 829.6600000000001, "start": 823.88, "text": " Right, so we have trained three different VAEs at three different scales of music to" }, { "end": 833.22, "start": 829.6600000000001, "text": " always reconstruct the input." }, { "end": 834.5400000000001, "start": 833.22, "text": " What does that give us?" }, { "end": 842.36, "start": 834.54, "text": " That gives us a distribution right here." }, { "end": 849.06, "start": 842.36, "text": " That gives us a way to take a piece of music and map it to this hidden space, to this very" }, { "end": 854.16, "start": 849.06, "text": " compressed representation in this quantized world, right?" }, { "end": 860.52, "start": 854.16, "text": " I've said before, this is a very compressed representation of your data." }, { "end": 862.28, "start": 860.52, "text": " Why can you do that?" }, { "end": 864, "start": 862.28, "text": " Sorry, what's that useful for?" }, { "end": 868.62, "start": 864, "text": " What you can do now is you can try to sample in that hidden space." }, { "end": 873.16, "start": 868.62, "text": " So instead of sampling music, we have no clue of how to sample music unless we are given" }, { "end": 874.64, "start": 873.16, "text": " some music." }, { "end": 880.6, "start": 874.64, "text": " What we can do is we can say maybe this thing here, because it's compressed, it kind of..." }, { "end": 885.78, "start": 880.6, "text": " So if we just sample a waveform, it's very unlikely that it's music." 
}, { "end": 892.04, "start": 885.78, "text": " But if we sample these hidden things, you know, it's quite likely that if we feed it" }, { "end": 894.48, "start": 892.04, "text": " through the decoder, something will come out." }, { "end": 902.8, "start": 894.48, "text": " And even better, maybe our data set in this hidden space follows a kind of a simpler distribution," }, { "end": 905.56, "start": 902.8, "text": " one that we could learn, right?" }, { "end": 912.3199999999999, "start": 905.56, "text": " So we're trying, we're going to try to learn a prior distribution over the distribution" }, { "end": 916.0799999999999, "start": 912.3199999999999, "text": " of codebook vectors." }, { "end": 920.4399999999999, "start": 916.0799999999999, "text": " And that is naturally going to be a joint distribution between the top, middle and bottom" }, { "end": 923.36, "start": 920.44, "text": " VQVAEs." }, { "end": 931.9200000000001, "start": 923.36, "text": " And we can decompose this into the following thing simply by applying the standard probabilistic" }, { "end": 933.96, "start": 931.9200000000001, "text": " algebra transformation." }, { "end": 940.24, "start": 933.96, "text": " And we can then, they say, we train separate models, sorry about that, we train separate" }, { "end": 946.32, "start": 940.24, "text": " models for the top level prior, the top, the middle, and the bottom." }, { "end": 952.1800000000001, "start": 946.32, "text": " So what that means is basically, these are now neural networks." }, { "end": 956.6, "start": 952.1800000000001, "text": " If you read something like this in a paper like this, this is going to be a neural network" }, { "end": 964.2, "start": 956.6, "text": " that takes the right side as an input and produces the left side, right?" }, { "end": 971.8800000000001, "start": 964.2, "text": " So you start out with this one, this is a neural network that simply takes as an input," }, { "end": 977.24, "start": 971.88, "text": " sorry, I'm going to draw this neural network, takes as an input, maybe something like a" }, { "end": 981.2, "start": 977.24, "text": " Gaussian super prior, right?" }, { "end": 988.36, "start": 981.2, "text": " You sample from that and that will as an output give you this Z top." }, { "end": 994.32, "start": 988.36, "text": " Then the next neural network will take this as an input and will give you Z middle as" }, { "end": 995.32, "start": 994.32, "text": " an output." }, { "end": 1001.72, "start": 995.32, "text": " And then the final neural network will input the two of those and give you Z top." }, { "end": 1009.76, "start": 1001.72, "text": " And you can train these neural networks simply by kind of training a prior to produce this" }, { "end": 1011.48, "start": 1009.76, "text": " thing right here." }, { "end": 1018.4, "start": 1011.48, "text": " You'd simply use your data, compress it to the hidden space, and then train a neural" }, { "end": 1021.32, "start": 1018.4, "text": " network to produce that distribution." }, { "end": 1023.44, "start": 1021.32, "text": " And you can do this in any number of ways." }, { "end": 1037.9, "start": 1023.44, "text": " You can use classic VAEs, you can use, sorry, you can use here, they say we use transformers" }, { "end": 1045.8400000000001, "start": 1037.9, "text": " with sparse attention, as they are currently state of the art in autoregressive modeling." 
}, { "end": 1050.88, "start": 1045.8400000000001, "text": " And they say we propose a simplified version, which we call the scalable transformer that" }, { "end": 1053.68, "start": 1050.88, "text": " is easier to implement and scale." }, { "end": 1058.48, "start": 1053.68, "text": " But they see, you see, they model this distribution with these scalable transformers." }, { "end": 1060.64, "start": 1058.48, "text": " All right, so now what do we have?" }, { "end": 1067.96, "start": 1060.64, "text": " We have a way to sample these hidden vectors, right?" }, { "end": 1071.6000000000001, "start": 1067.96, "text": " So we don't need, we don't need this part anymore." }, { "end": 1078.66, "start": 1071.6000000000001, "text": " This part, sorry about that, this part here was just used for training." }, { "end": 1083.0400000000002, "start": 1078.66, "text": " We can, we now have our transformers." }, { "end": 1088.2, "start": 1083.0400000000002, "text": " They take nothing as an input or they take like a Gaussian as an input and they can directly" }, { "end": 1090.8200000000002, "start": 1088.2, "text": " output this hidden representation." }, { "end": 1097.28, "start": 1090.8200000000002, "text": " So we could technically sample from that and then just push it through this decoder of" }, { "end": 1099.2, "start": 1097.28, "text": " the VQVA." }, { "end": 1103, "start": 1099.2, "text": " But the question is, which of the three do we take?" }, { "end": 1105.74, "start": 1103, "text": " And wouldn't it be great if we can combine them?" }, { "end": 1111.92, "start": 1105.74, "text": " Because if we simply sample these, this higher scale one, we just get not very long range" }, { "end": 1113.16, "start": 1111.92, "text": " dependencies, right?" }, { "end": 1114.92, "start": 1113.16, "text": " Because that's what the VQVA learned." }, { "end": 1121.72, "start": 1114.92, "text": " If we just sample this one, then we just get a coarse music and we can sample all three," }, { "end": 1125.72, "start": 1121.72, "text": " but they will just give us three different tracks of music." }, { "end": 1131.66, "start": 1125.72, "text": " So we want to combine the three decoders into one somehow." }, { "end": 1135.88, "start": 1131.66, "text": " And that's, we do this through these up samplers." }, { "end": 1141.52, "start": 1135.88, "text": " So what we'll use, what we'll target actually is this bottom one." }, { "end": 1146.16, "start": 1141.52, "text": " We target this one because this one gives us the best quality music, right?" }, { "end": 1150, "start": 1146.16, "text": " Because it was trained with the shortest time scale." }, { "end": 1154.72, "start": 1150, "text": " We're going to try to take the other signals and influence it." }, { "end": 1159.8200000000002, "start": 1154.72, "text": " So we'll start with the top level prior, right?" }, { "end": 1169.36, "start": 1159.82, "text": " That will produce us, these transformers will give us a sequence, a sequence of tokens in" }, { "end": 1173, "start": 1169.36, "text": " the hidden space that is very coarse, as you can see here." }, { "end": 1177, "start": 1173, "text": " And then we'll feed that into a up sampler." }, { "end": 1183.36, "start": 1177, "text": " And these up samplers again are on the neural networks that can connect the different scales" }, { "end": 1184.36, "start": 1183.36, "text": " with each other." }, { "end": 1189.52, "start": 1184.36, "text": " All right, so you can connect this to this." 
}, { "end": 1196.28, "start": 1189.52, "text": " It's basically like conditioning the model that produces the sequence on this sequence" }, { "end": 1198.72, "start": 1196.28, "text": " right here." }, { "end": 1203.8, "start": 1198.72, "text": " And again, we use an up sampler to up sample this to the finest scale." }, { "end": 1207.92, "start": 1203.8, "text": " And that we feed in the bottom scale, and then we get our music." }, { "end": 1213.46, "start": 1207.92, "text": " Now throughout all of this, you have conditioning information here, which is a bit of an addition" }, { "end": 1215.16, "start": 1213.46, "text": " to the model." }, { "end": 1223.8000000000002, "start": 1215.16, "text": " So the conditioning information can be things like artist, genre, and timing." }, { "end": 1230.3200000000002, "start": 1223.8000000000002, "text": " And this is, it appears to be pretty important because you kind of, first of all, want some" }, { "end": 1231.42, "start": 1230.3200000000002, "text": " variety." }, { "end": 1237.88, "start": 1231.42, "text": " And then second of all, you sort of want to control what music is produced." }, { "end": 1245.2, "start": 1237.88, "text": " And you don't just want to train this model for one single artist, because you have much" }, { "end": 1248.1200000000001, "start": 1245.2, "text": " more data across all of music." }, { "end": 1253.96, "start": 1248.1200000000001, "text": " So this conditioning information is just included here via another neural network." }, { "end": 1258.64, "start": 1253.96, "text": " And you can find all the architectures for all of these models in the paper." }, { "end": 1265.2800000000002, "start": 1258.64, "text": " It's not particularly important, I believe, how exactly you include them, but the fact" }, { "end": 1268.72, "start": 1265.28, "text": " that you do." }, { "end": 1273.3799999999999, "start": 1268.72, "text": " The last thing is, what they do is they do this kind of windowed sampling." }, { "end": 1280.6399999999999, "start": 1273.3799999999999, "text": " So in order to produce music, you're going to have to produce these slices of music right" }, { "end": 1281.6399999999999, "start": 1280.6399999999999, "text": " here." }, { "end": 1287.36, "start": 1281.6399999999999, "text": " But you sort of have a maximum length here that your models can handle." }, { "end": 1290.96, "start": 1287.36, "text": " And this is usually not the length of a song." }, { "end": 1292.76, "start": 1290.96, "text": " You may know transformers and so on." }, { "end": 1297.44, "start": 1292.76, "text": " They usually have token limits of like 512 tokens." }, { "end": 1299.68, "start": 1297.44, "text": " In terms of audio, that's not that much." }, { "end": 1307.56, "start": 1299.68, "text": " So what you do is this windowed sampling, where you sample something, and then you condition" }, { "end": 1312.6, "start": 1307.56, "text": " basically on the first part, and then you just sample the next thing, and then you again" }, { "end": 1318.44, "start": 1312.6, "text": " condition on the first part, and then you just sample the next thing, the next few ones." }, { "end": 1323.28, "start": 1318.44, "text": " And that guarantees that each of the sampling steps is basically conditioned on what comes" }, { "end": 1326.4, "start": 1323.28, "text": " before, as you see up here." }, { "end": 1333.24, "start": 1326.4, "text": " So you would always sort of condition on a part, produce the next part." 
}, { "end": 1336.88, "start": 1333.24, "text": " All right." }, { "end": 1341.38, "start": 1336.88, "text": " And they say you can also basically condition on..." }, { "end": 1346.92, "start": 1341.38, "text": " So you can feed in, you don't have to sample the very first one, you can also feed in an" }, { "end": 1350.1000000000001, "start": 1346.92, "text": " existing song in order to prime the system." }, { "end": 1355.74, "start": 1350.1000000000001, "text": " So what you can do is if you have a beginning of a song, then you can let the system finish" }, { "end": 1361.0800000000002, "start": 1355.74, "text": " the song by simply taking the song, running it through the encoder that we produced during" }, { "end": 1362.0800000000002, "start": 1361.0800000000002, "text": " training, right?" }, { "end": 1365.5600000000002, "start": 1362.0800000000002, "text": " You get these hidden representations, so you don't actually have to sample them from your" }, { "end": 1366.7, "start": 1365.5600000000002, "text": " prior." }, { "end": 1372.0800000000002, "start": 1366.7, "text": " And then you run this generation process as if this came out of your prior instead of" }, { "end": 1374.0800000000002, "start": 1372.0800000000002, "text": " what you sampled." }, { "end": 1376.04, "start": 1374.0800000000002, "text": " Okay." }, { "end": 1383.6, "start": 1376.04, "text": " So let's have a look at how that sounds, or listen." }, { "end": 1387.76, "start": 1383.6, "text": " This is an explorer, they release many, many samples from this." }, { "end": 1403.8799999999999, "start": 1387.76, "text": " And the part here where we're going to listen is called no lyrics conditioning." }, { "end": 1413.96, "start": 1403.88, "text": " So as you can hear, this is already pretty good music and the genre is American folk." }, { "end": 1417.96, "start": 1413.96, "text": " The singer is Pete Seeger." }, { "end": 1423.8400000000001, "start": 1417.96, "text": " This already sounds very authentic, but you can hear that the lyrics are just kind of" }, { "end": 1425.92, "start": 1423.8400000000001, "text": " mumbly, right?" }, { "end": 1433.16, "start": 1425.92, "text": " And that's because the model is basically asked to come up with lyrics as pure audio" }, { "end": 1434.48, "start": 1433.16, "text": " waveforms." }, { "end": 1438.5600000000002, "start": 1434.48, "text": " And that results in some subpar lyrics." }, { "end": 1442.44, "start": 1438.5600000000002, "text": " Basically it just produces phonemes that sound like the singer." }, { "end": 1444.44, "start": 1442.44, "text": " It doesn't produce entire words." }, { "end": 1449.88, "start": 1444.44, "text": " And of course it also doesn't produce sentences that make any sort of sense." }, { "end": 1455.3200000000002, "start": 1449.88, "text": " And that's why they're building in an additional thing to do lyrics conditioning." }, { "end": 1462, "start": 1455.3200000000002, "text": " So with lyrics conditioning, the idea is that in the conditioning information, you also" }, { "end": 1463.28, "start": 1462, "text": " add lyrics." }, { "end": 1466.28, "start": 1463.28, "text": " So here is plus text." }, { "end": 1474.96, "start": 1466.28, "text": " So you add text, and then the model is basically can also look at the text." }, { "end": 1483.08, "start": 1474.96, "text": " Now you never, you still, you still, so even before we had music with lyrics and the decoder" }, { "end": 1486.52, "start": 1483.08, "text": " was always asked to reconstruct that." 
}, { "end": 1489.06, "start": 1486.52, "text": " And so none of that changes." }, { "end": 1491.76, "start": 1489.06, "text": " That's why it has learned to produce phonemes, right?" }, { "end": 1499.52, "start": 1491.76, "text": " But now the decoder can also, and also the encoder, the system can look at the lyrics" }, { "end": 1505.12, "start": 1499.52, "text": " that you provide right here in order to help with its decoding." }, { "end": 1513.64, "start": 1505.12, "text": " So technically it could learn to bypass the encoding of the exact way the lyrics are uttered." }, { "end": 1517.64, "start": 1513.64, "text": " And it could just look at the text that you provide." }, { "end": 1522.8400000000001, "start": 1517.64, "text": " Now this of course requires that during training you provide the lyrics of the song that are" }, { "end": 1527.0800000000002, "start": 1522.8400000000001, "text": " actually that you feed in." }, { "end": 1532.92, "start": 1527.0800000000002, "text": " But also it means that during decoding, if you sample, you can then provide your own" }, { "end": 1536.4, "start": 1532.92, "text": " lyrics and look what happens." }, { "end": 1543.4, "start": 1536.4, "text": " So they say they provide lyrics, they always have to provide lyrics for chunks of audio." }, { "end": 1548.3200000000002, "start": 1543.4, "text": " So our data set includes song level lyrics, but to make it easier, we train on shorter" }, { "end": 1551.0400000000002, "start": 1548.3200000000002, "text": " 24 second chunks of audio." }, { "end": 1555.8400000000001, "start": 1551.0400000000002, "text": " And this is partly to make it easier for the model, but also partly because those appear" }, { "end": 1559.5, "start": 1555.8400000000001, "text": " to be the limitations of these systems, right?" }, { "end": 1569.22, "start": 1559.5, "text": " If you have transformers in there and whatnot, 24 seconds of raw audio waveform is a lot." }, { "end": 1577.88, "start": 1569.22, "text": " So they have this problem of they have a song from here to here, and they have the lyrics," }, { "end": 1579.24, "start": 1577.88, "text": " blah, blah, blah." }, { "end": 1584.32, "start": 1579.24, "text": " And they need to know which lyrics belong to which part of the song." }, { "end": 1586.64, "start": 1584.32, "text": " And usually it's monotonic, right?" }, { "end": 1591.52, "start": 1586.64, "text": " And linear because you get the lyrics from some lyrics website, this blah, blah, blah." }, { "end": 1596.22, "start": 1591.52, "text": " But you don't know particularly to which 24 second chunk they belong." }, { "end": 1601.64, "start": 1596.22, "text": " So they say, first of all, they started with simply linearly aligning the lyrics, but then" }, { "end": 1606.8, "start": 1601.64, "text": " they had some, they had some problems with fast songs." }, { "end": 1609.3600000000001, "start": 1606.8, "text": " So they had some heuristic here." }, { "end": 1617.02, "start": 1609.3600000000001, "text": " But ultimately, the decoder needs to learn to attend to these lyrics." }, { "end": 1621.8, "start": 1617.02, "text": " And these the graphics like this you see here is the music token position and lyrics token" }, { "end": 1622.8, "start": 1621.8, "text": " position." }, { "end": 1629.32, "start": 1622.8, "text": " Here you see the the system learns that for example, if it has this music token needs" }, { "end": 1632.8799999999999, "start": 1629.32, "text": " to attend to this token in the lyric." 
}, { "end": 1640.68, "start": 1632.8799999999999, "text": " So you can by inspecting these attention heads that you have on the lyrics text in the system," }, { "end": 1644.1, "start": 1640.68, "text": " you can see which lyrics the model is paying attention to." }, { "end": 1651.4199999999998, "start": 1644.1, "text": " And the fact that it learns to pay linearly attention to these things is kind of a confirmation" }, { "end": 1657.5, "start": 1651.42, "text": " because you you don't you give the whole text or at least the 24 second chunks of audio," }, { "end": 1660.54, "start": 1657.5, "text": " you give that at once as a as a text, right." }, { "end": 1666.3200000000002, "start": 1660.54, "text": " And the fact that it learns to linearly attend to the tokens is a confirmation that it actually" }, { "end": 1671.04, "start": 1666.3200000000002, "text": " includes that information into the coding." }, { "end": 1677.76, "start": 1671.04, "text": " And that is a pretty gives you pretty much better results." }, { "end": 1695.92, "start": 1677.76, "text": " So we can maybe go to classic pop." }, { "end": 1716.04, "start": 1695.92, "text": " So this are unseen lyrics." }, { "end": 1719.64, "start": 1716.04, "text": " So the model has never seen these lyrics, right?" }, { "end": 1725.3600000000001, "start": 1719.64, "text": " It was just asked to produce classic pop in the style of Frank Sinatra with these lyrics." }, { "end": 1726.9199999999998, "start": 1725.36, "text": " And that's what it came up with." }, { "end": 1729.1999999999998, "start": 1726.9199999999998, "text": " That is pretty, pretty, pretty cool." }, { "end": 1736.08, "start": 1729.1999999999998, "text": " I think they also have re renditions where they basically feed I believe feed in the" }, { "end": 1742.8, "start": 1736.08, "text": " original lyrics, we conditioned on lyrics seen during training." }, { "end": 1747.36, "start": 1742.8, "text": " And they have fun songs." }, { "end": 1752.52, "start": 1747.36, "text": " And in the fun songs, I like the hip hop in the style of Kanye West, where they provide" }, { "end": 1780.2, "start": 1752.52, "text": " the lyrics of Eminem's lose yourself." }, { "end": 1781.2, "start": 1780.2, "text": " I'm grooving." }, { "end": 1784.16, "start": 1781.2, "text": " I don't know what you're thinking, but this is cool." }, { "end": 1789.48, "start": 1784.16, "text": " And they can also, as we said, do these completions where they start with part of a song." }, { "end": 1791.1200000000001, "start": 1789.48, "text": " And just I have to do this." }, { "end": 1792.8400000000001, "start": 1791.1200000000001, "text": " I have to do the hi there." }, { "end": 1797.88, "start": 1792.8400000000001, "text": " So the first version of this video was copystriked because what you would hear would be the original" }, { "end": 1802.64, "start": 1797.88, "text": " never going to give you up like 10 seconds of it, and then followed by what the model" }, { "end": 1804.96, "start": 1802.64, "text": " continues with." }, { "end": 1809.04, "start": 1804.96, "text": " So as a substitute, you're now going to have to listen to me." }, { "end": 1812.04, "start": 1809.04, "text": " I hope that suffices." }, { "end": 1856.76, "start": 1839.04, "text": " Almost as good, almost as good as the original." }, { "end": 1863.52, "start": 1856.76, "text": " So as you can see, this the results here are pretty, pretty cool." 
}, { "end": 1870.32, "start": 1863.52, "text": " And I want to show you one last thing, and that is this Christmas song in the style of" }, { "end": 1872, "start": 1870.32, "text": " Frank Sinatra." }, { "end": 1875.16, "start": 1872, "text": " I believe it's this one right here." }, { "end": 1880.48, "start": 1875.16, "text": " And the special thing here is, it's again classic pop in the style of Frank Sinatra." }, { "end": 1887.24, "start": 1880.48, "text": " And you see on the bottom here, you see on the bottom which of the lyrics it's attending" }, { "end": 1888.24, "start": 1887.24, "text": " to." }, { "end": 1892.5, "start": 1888.24, "text": " And you see, you know, this this graph right here that shows you that first, it's attending" }, { "end": 1898.68, "start": 1892.5, "text": " linearly through the lyrics, but then it kind of jumps around and attends to different things" }, { "end": 1902.28, "start": 1898.68, "text": " because it doesn't it doesn't it doesn't just continue." }, { "end": 1930.04, "start": 1902.28, "text": " So" }, { "end": 1942.44, "start": 1930.04, "text": " this is great." }, { "end": 1947.92, "start": 1942.44, "text": " So it kind of falls out of this linearly attending to the lyrics." }, { "end": 1953.26, "start": 1947.92, "text": " And probably because there was sort of a pause in the lyrics." }, { "end": 1957.04, "start": 1953.26, "text": " And maybe this is just more than one audio window." }, { "end": 1960.24, "start": 1957.04, "text": " So it doesn't have this autoregressive property anymore." }, { "end": 1964.72, "start": 1960.24, "text": " And then it doesn't find the proper place to attend anymore." }, { "end": 1972.8, "start": 1964.72, "text": " And just, again, comes up with sort of babbles, but it sounds pretty, pretty cool." }, { "end": 1979.6, "start": 1972.8, "text": " Yeah, so this they have released many, many samples here, some cherry picked and just" }, { "end": 1984.52, "start": 1979.6, "text": " a lot of samples with unseen lyrics, rerenditions, and so on." }, { "end": 1986.96, "start": 1984.52, "text": " This all is very cool." }, { "end": 1992.1200000000001, "start": 1986.96, "text": " They have their training setup described, I believe they also release their code." }, { "end": 1998.04, "start": 1992.1200000000001, "text": " Many more results in the paper of how to make this thing work if you want to do that yourself." }, { "end": 2000.72, "start": 1998.04, "text": " And with that, I invite you to read the paper." }, { "end": 2006.1200000000001, "start": 2000.72, "text": " If you're still here, please subscribe if you like this content, leave a comment and" }, { "end": 2022.8799999999999, "start": 2006.12, "text": " bye bye." } ]
RrBapqCPnmE
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
[ML Coding Tips] Separate Computation & Plotting using locals
[ "Science & Technology" ]
[ "deep learning", "machine learning", "coding", "research", "engineering", "ipython", "colab", "notebook", "locals" ]
Here's a lazy way to separate computation and subsequent analysis in a notebook without the overhead of manually saving local variables. WARNING: Don't do this in a serious project. Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Hi there! So today I just wanted to bring you a quick coding tip that I often encounter in my daily machine learning research or life that might not be super common in, let's say, traditional software engineering or elsewhere. So often I have a bunch of, let's say I have a bunch of models right, and I use these IPython notebooks or colabs to analyze my data, plot things and so on. So I have a bunch of models, let's say they're called M1 and M2, and I'll usually run my big jobs on a cluster. So let's say I have run some jobs, one for each model, and have some logs that I want to analyze. So I'll go through my models and here I load a bunch of logs and I'll also compute some stuff, some statistics and some just things, right, that I want to have computed. And maybe these things are called A and let's... this and B, right. So now I've computed these things, now I want to analyze them. And let's shortcut and say printing is plotting. So think of these, this might be numbers right, and now I want to plot them out, let's just print them, which I do like this. Now every time I want to, you know, change something here in my printing, maybe I want the separator to be that, I'll have to load all the logs and compute all the stuff each time I run this cell, which is not super cool, right. So I usually would like to factor out the plotting and stuff like this from the computation. So I could extract one of them into like a function, but then the point of these notebooks is that I can run each of the cells and they'll run right there, right. So functions aren't really cool in these notebooks. So what I'll usually end up doing is you'll have some second loop in here, right, and let's see, you'll have some data dict up here and you'll... here at the end you'll say something like data for this model is A and B or something like this, right, and then down here the first thing I do is I'll get my data and then I'll unpack again. So DA, DB, either I'll unpack or I'll just address them in dictionary notation like this and then I can do my plotting, right. Some people use tuples here, right, they could just go A and B, but then you'll have to do this unpacking. The problem is now if I want to add something here, right, I compute something new. I now need to add something here, right, I need to remember to store it in the data array and then I need to remember to unpack it here in the same order, right, and then I need to put it in the plotting, right. So this is very cumbersome. This line here and this line here, they're very... because you kind of duplicate your variable names all over the place just because you want to compute them here and use them here. A software engineer would usually tell you let's do something like a data class or, in its simplest form, say, a class, and you know there are multiple ways of achieving this but let's just do A and B here. This is probably the most verbose, you can also do namedtuples, attrs classes, data classes and so on, but ultimately you produce a class like this and then here you say this is a data class A and B and then down here you can at least address them like this, right, and you don't have to do the dictionary notation or remember the order. But now again if you want to add a C, not only do you have to add it here but you have to add it up here and you have to add it here and beware if there's a doc string, and then you can use it here. This is just too cumbersome.
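A minimal sketch of the dict-packing pattern described above, with made-up model names and a stand-in computation; the real version would load logs and compute statistics, and plotting is shortcut to printing as in the video:

```python
# Two notebook cells, flattened into one script; the "computation" is a placeholder.
models = ["m1", "m2"]
data = {}

# Cell 1: expensive computation, run once.
for m in models:
    a = len(m) * 10                     # stand-in for "load logs, compute statistics"
    b = m.upper()
    data[m] = {"a": a, "b": b}          # every new variable must be registered here...

# Cell 2: cheap "plotting", re-run freely while tweaking.
for m in models:
    a, b = data[m]["a"], data[m]["b"]   # ...and unpacked again here, in the same order
    print(m, a, b, sep=", ")            # stand-in for plotting
```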
So here is a trick, and please only use this in notebooks like this. This will lead to so many memory problems and everything, and if you work with a software engineer you might have to get them chocolate for doing this. But alright, so what you can do is you can save your local variables in this dictionary. So this is a built-in that saves the local variables, sorry, that gives you a dict of the local variables. The local variables here are A, B, C, M and a bunch of other things, right. Note that these IPython notebooks have a bunch of stuff around them, such that the meaning is slightly different depending on where locals is called, but in essence you'll get the A, B and C that you want. Now, in order for this to go through, the locals dict refers of course to the same objects, not newly constructed ones, so we're safe here to make a copy of that, right, like this, and then down here we simply retrieve these local variables and update the current local variables using the local variables we stored, and voila, we can use A, B and C without ever having defined them, right. If this doesn't work, that means that the locals here is a copy basically, and this is a Python optimization, and I realized I don't have to work around it because in these IPython notebooks it appears to work, but you might want to have an empty exec statement here such that the Python interpreter can never be sure which variables are created in here and therefore can't optimize them away. So that is a horrible, horrible trick. Again, don't build anything serious upon this, but it is very super duper easy: you can just add any variable here and then use it down here. So easy, right, and so wrong. Alright, this was it, I hope you enjoyed this. I want to bring these kinds of tips every now and then as an intermix into the research papers. I hope you enjoyed this, bye bye
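A minimal runnable sketch of the locals trick itself, under the same made-up names as above; only the save/restore pattern is the point. At a notebook cell's (or script's) top level, locals() is effectively globals(), so updating it really rebinds names; inside a function CPython would ignore the update, which is what the empty exec hedge in the video is about:

```python
models = ["m1", "m2"]
data = {}

# Computation cell: snapshot every local name after computing.
for m in models:
    a = len(m) * 10               # stand-in for the real, slow computation
    b = m.upper()
    data[m] = dict(locals())      # saves a, b, m, ... (and, wastefully, everything
                                  # else in scope -- the memory problem he warns about)

# Plotting cell: restore the names without ever listing them.
for m in models:
    locals().update(data[m])      # a, b, ... reappear as if defined here
    print(m, a, b, sep=" | ")     # stand-in for plotting
    exec("")                      # hedge from the video: keeps the interpreter from
                                  # assuming it knows all local names
```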
[ { "end": 5.5600000000000005, "start": 0, "text": " Hi there! So today I just wanted to bring you a quick coding tip that I often" }, { "end": 11.76, "start": 5.5600000000000005, "text": " encounter in my daily machine learning research or life that might not be super" }, { "end": 17.52, "start": 11.76, "text": " common in let's say traditional software engineering or elsewhere. So often I have" }, { "end": 21.16, "start": 17.52, "text": " a bunch of, let's say I have a bunch of models right, and I use these IPython" }, { "end": 27, "start": 21.16, "text": " notebooks or collabs to analyze my data, plot things and so on. So I have a bunch" }, { "end": 33.84, "start": 27, "text": " of models, let's say they're called M1 and M2, and I'll usually run my big jobs" }, { "end": 38.36, "start": 33.84, "text": " on a cluster. So let's say I have run some jobs, one for each model, and have" }, { "end": 45.6, "start": 38.36, "text": " some logs that I want to analyze. So I'll go through my models and here I" }, { "end": 52.400000000000006, "start": 45.6, "text": " load a bunch of logs and I'll also compute some stuff, some statistics and" }, { "end": 59.12, "start": 52.4, "text": " some just things, right, that I want to have computed. And maybe these things" }, { "end": 68.08, "start": 59.12, "text": " are called A and let's... this and B, right. So now I've computed these things now, I" }, { "end": 73.92, "start": 68.08, "text": " want to analyze them. And let's shortcut and say printing is plotting. So" }, { "end": 77.44, "start": 73.92, "text": " think of these, this might be numbers right, and now I want to plot them here" }, { "end": 80.92, "start": 77.44, "text": " out, let's just print them, which I do this. Now every time I want to, you know," }, { "end": 88.8, "start": 80.92, "text": " change something here in my printing, maybe I want the separator to be that," }, { "end": 93.04, "start": 88.8, "text": " I'll have to load all the logs and compute all the stuff right each time I" }, { "end": 97.72, "start": 93.04, "text": " run this cell, which is not super cool, right. So I usually would like to factor" }, { "end": 103.64, "start": 97.72, "text": " out the plotting and stuff like this from the computation. So I could" }, { "end": 108.48, "start": 103.64, "text": " extract one of them into like a function, but then the point of these notebooks is" }, { "end": 111.96000000000001, "start": 108.48, "text": " that I can run each of the cells and they'll run right there, right. So" }, { "end": 116.72, "start": 111.96000000000001, "text": " functions aren't really cool in these notebooks. So what I'll usually end up" }, { "end": 123.16, "start": 116.72, "text": " doing is you'll have some second loop in here, right, and let's see, you'll have" }, { "end": 129.48000000000002, "start": 123.16, "text": " some data dict up here and you'll... here at the end you'll say something like" }, { "end": 139.67999999999998, "start": 129.48, "text": " data for this model is A and B or something like this, right, and then down" }, { "end": 149.79999999999998, "start": 139.67999999999998, "text": " here the first thing I do is I'll get my data and then I'll unpack again. So DA," }, { "end": 157.64, "start": 149.79999999999998, "text": " DB, either I'll unpack or I'll just address them in dictionary notation like" }, { "end": 161.6, "start": 157.64, "text": " this and then I can do my plotting, right. 
Some people use tuples here, right, they" }, { "end": 166.95999999999998, "start": 161.6, "text": " could just go A and B, but then you'll have to do this unpacking. The problem is" }, { "end": 172.33999999999997, "start": 166.95999999999998, "text": " now if I want to add something here, right, I compute something new. I now need" }, { "end": 176.92, "start": 172.33999999999997, "text": " to add something here, right, I need to remember to store it in the data array" }, { "end": 181.11999999999998, "start": 176.92, "text": " and then I need to here remember to unpack it in the same order, right, and" }, { "end": 187.51999999999998, "start": 181.11999999999998, "text": " then I need to produce to put it in in the plotting, right. So this is very" }, { "end": 193.68, "start": 187.52, "text": " cumbersome. This line here and this line here, they're very... because you kind of" }, { "end": 197.24, "start": 193.68, "text": " duplicate your variable names all over the place just because you want to" }, { "end": 201.44, "start": 197.24, "text": " compute them here and use them here. A software engineer would usually tell you" }, { "end": 211.04000000000002, "start": 201.44, "text": " let's do something like a data class or in its most simplest form is say of a" }, { "end": 218, "start": 211.04, "text": " class and you know there are multiple ways of achieving this but let's just do" }, { "end": 224.6, "start": 218, "text": " A and B here. This is more probably the most verbose, you can also do name tuples," }, { "end": 230.56, "start": 224.6, "text": " adder classes, data classes and so on but ultimately you produce a class like this" }, { "end": 237.12, "start": 230.56, "text": " and then here you say this is a data class A and B and then here down here" }, { "end": 244.28, "start": 237.12, "text": " you can at least address them like this, right, and you don't have to do the" }, { "end": 252.56, "start": 244.28, "text": " dictionary notation or remember the order. But now again if you" }, { "end": 257.2, "start": 252.56, "text": " want to add the C, not only do you have to add it here but you have to add" }, { "end": 266.64, "start": 257.2, "text": " it up here and you have to add it here and beware if there's a doc string and" }, { "end": 272.88, "start": 266.64, "text": " then you can use it here. This is just too cumbersome. So here is a" }, { "end": 277.8, "start": 272.88, "text": " trick and please only use these in like notebooks like this. This will lead to so" }, { "end": 281.71999999999997, "start": 277.8, "text": " much memory problems and everything and if you work with a software engineer you" }, { "end": 289.15999999999997, "start": 281.71999999999997, "text": " might have to get them chocolate for doing this. But alright, so what you" }, { "end": 295.88, "start": 289.15999999999997, "text": " can do is you can save your local variables in this dictionary." }, { "end": 302.32, "start": 295.88, "text": " So this is a built-in that saves the local variables, sorry that gives you a" }, { "end": 308.8, "start": 302.32, "text": " dict of the local variables. The local variables here are A, B, C, M and all a bunch" }, { "end": 312.8, "start": 308.8, "text": " of other things, right. It's not clear in these IPython notebooks they have a" }, { "end": 316.8, "start": 312.8, "text": " bunch of stuff around them such that the meaning is slightly different depending" }, { "end": 322.6, "start": 316.8, "text": " where locals is but in essence you'll get the A, B and C what you want. 
Now the" }, { "end": 329.16, "start": 322.6, "text": " locals dict refers in order for this to go through the locals dict refers of" }, { "end": 333.72, "start": 329.16, "text": " course to the same objects not newly constructed so we're safe here to make a" }, { "end": 341.72, "start": 333.72, "text": " copy of that right like this and then down here we simply retrieve these local" }, { "end": 349.56, "start": 341.72, "text": " variables and update the current local variables using the local variables we" }, { "end": 359.16, "start": 349.56, "text": " stored and voila we can use A, B and C without ever having defined them, right." }, { "end": 366, "start": 359.16, "text": " If this doesn't work that means that the locals here is a copy basically" }, { "end": 371.48, "start": 366, "text": " and this is a Python optimization and I read I don't have to use it because in" }, { "end": 378.66, "start": 371.48, "text": " these IPython notebooks it appears to work but you might want to have an empty" }, { "end": 385.36, "start": 378.66, "text": " exec statement here such that the Python interpreter can never be sure which" }, { "end": 392.56, "start": 385.36, "text": " variables are created in here and therefore can't optimize away. So that" }, { "end": 399.6, "start": 392.56, "text": " that is a horrible horrible trick. Again don't build anything serious upon this" }, { "end": 406.36, "start": 399.6, "text": " but it is very duper super easy you can just add any variable here and then use" }, { "end": 417.56, "start": 406.36, "text": " it down here so easy right and so wrong. Alright this was it I hope you enjoyed" }, { "end": 422.28000000000003, "start": 417.56, "text": " this I want to bring these kind of tips every now and then as a as an intermix" }, { "end": 437.71999999999997, "start": 422.28, "text": " into the research papers I hope you enjoyed this bye bye" } ]
F5aaXrIMWyU
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
The AI Economist: Improving Equality and Productivity with AI-Driven Tax Policies (Paper Explained)
[ "Science & Technology" ]
[ "deep learning", "reinforcement learning", "society", "gini index", "welfare", "taxes", "brackets", "progressive", "regressive", "us", "poor", "rich", "equality", "redistribution", "outer loop", "world", "resources", "labor", "trade", "neural networks", "ppo" ]
Hail the AI Tax Collector! This very visual framework has RL Agents maximize their coins in a tiny world through collecting, building and trading. But at the same time, the government is also an AI trying to maximize social welfare via taxes. What emerges is very interesting. Paper: https://arxiv.org/abs/2004.13332 Blog: https://blog.einstein.ai/the-ai-economist/ Abstract: Tackling real-world socio-economic challenges requires designing and testing economic policies. However, this is hard in practice, due to a lack of appropriate (micro-level) economic data and limited opportunity to experiment. In this work, we train social planners that discover tax policies in dynamic economies that can effectively trade-off economic equality and productivity. We propose a two-level deep reinforcement learning approach to learn dynamic tax policies, based on economic simulations in which both agents and a government learn and adapt. Our data-driven approach does not make use of economic modeling assumptions, and learns from observational data alone. We make four main contributions. First, we present an economic simulation environment that features competitive pressures and market dynamics. We validate the simulation by showing that baseline tax systems perform in a way that is consistent with economic theory, including in regard to learned agent behaviors and specializations. Second, we show that AI-driven tax policies improve the trade-off between equality and productivity by 16% over baseline policies, including the prominent Saez tax framework. Third, we showcase several emergent features: AI-driven tax policies are qualitatively different from baselines, setting a higher top tax rate and higher net subsidies for low incomes. Moreover, AI-driven tax policies perform strongly in the face of emergent tax-gaming strategies learned by AI agents. Lastly, AI-driven tax policies are also effective when used in experiments with human participants. In experiments conducted on MTurk, an AI tax policy provides an equality-productivity trade-off that is similar to that provided by the Saez framework along with higher inverse-income weighted social welfare. Authors: Stephan Zheng, Alexander Trott, Sunil Srinivasa, Nikhil Naik, Melvin Gruesbeck, David C. Parkes, Richard Socher Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Alright, today we're going to find out why AI is much better at governing people, why poor people really should pay more taxes, and how Donald Trump is just a normal human. Alright, we'll dive into it. We're looking at the AI Economist by Salesforce Research. Now Salesforce Research has kind of created a simulated world environment where they can place agents in it and the agents, they can move around, they can collect resources, they can trade those resources, and they can use those resources to build houses and that will earn them coins. And each agent wants to maximize its own coins, but also there's the government and the government can set taxes. So they collect money from everyone and they redistribute it. And the goal now is going to be that the AI handles both the agents and the taxes and we want to maximize the social welfare of the entire population. Alright, that's the goal. So the paper here is called The AI Economist: Improving Equality and Productivity with AI-Driven Tax Policies by Stephan Zheng and Alexander Trott and other people from Salesforce Research and Harvard University. So as I said, this is a simulated environment and the simulated environment works like this. There is a 2D plane, kind of like a game playing field, and in this game there are agents. Here you can see the agents, there are always four agents. Where? Oh, down here. What are you doing in the corner? Come on, be productive. The agents are in this world and they can do certain things. They have certain actions at their disposal. So first of all, they can move around. They can move down, left, right and so on. Whenever they walk past a resource tile, they collect the resource. This is stone and this is wood. So there are two kinds of resources. And then the last action the agents have is building a house. One wood and one stone will create one house and the house gives you coins. So this is a house and that will give you coins. But how many coins you get is different from agent to agent and this represents the agents' different skill levels. This is an abstraction, and the economic theory behind it is that one of the main drivers of income inequality is that people are skilled differently and therefore are able to convert one unit of labor into more money than another, lower skilled worker. So this is here represented by the fact that maybe if this agent here builds the house, they'll get 50 coins. But if this agent here would build the same house, they'll only get 10 coins. So we'll call this here a high skilled worker and this here a low skilled worker. Now the last thing, sorry, I said the last thing before, but the very last thing the agents can do is they can trade. So if one agent has too many resources and the other one has not enough, they can trade those resources among each other for those coins. So once you build a house, you collect some coins, you can then either go and collect more resources or you can use those coins in order to buy resources off of other people. This is unlucky. No coins, no houses, and no resources. Look at them. Oh yeah, so you also can't move across the water here. You can only move on the grass. You can also not move through a house, which gives you some interesting abilities because you can just build a house right here. And yeah, you also can't move over other players. But the rules here are pretty simple. And the goal here is for the agents to maximize the number of coins they get in 1000 steps.
So the number H here is 1000, which is the number of steps that the agents can take before the game is over and it restarts again. So each agent is using reinforcement learning in order to learn how to achieve the maximum number of coins. Now the policy is of course going to be different depending on whether that is a high or a low skilled worker. The catch here is that outside of this there is the government, the government here, let's draw this big house with the flag of our fictitious nation, which is like this. That's the flag. And the government will observe what's happening here and they will issue taxes. So it will issue a tax distribution. Now how do you imagine that? So if you imagine the government says something like this: for the first 10 coins you own, you owe us 5% of that. For the next 10 coins, so from 10 to 20 you earn, you owe us 10% and so on. So if you earn even more, you owe us more and more percent of those extra coins. This is what you might know as a progressive tax schedule. The more you earn, the more percentage wise you pay on that extra earned money. This is what you might be used to, but there are other tax schedules, and the exact histogram you see, or exactly how many percent for which amount of coins, that is the action of the government. So the government decides on the taxes and the taxes are just collected from the income. So if an agent earns these coins, then it has to pay taxes to the government and the government will redistribute all the taxes it has collected equally among the population. So if you pay a lot, you might lose through this process and if you just pay a little taxes you might gain through this process. So that's it. That is the basic premise of the game. The agents are using reinforcement learning and I believe the newness of this paper is also that the government now is using reinforcement learning in order to determine the optimal tax policy. There is kind of this inner loop here and there is this outer game where the government also tries to maximize via RL. And what does the government try to maximize? Good question. It is a measure that's called social welfare. Now social welfare consists of two things and they have this here way down in the paper. Social welfare in this paper consists of two things. First of all, economic productivity, which basically just means how many coins anyone has produced. It doesn't matter who, but just the total amount of coins produced. The second one is income equality and this is related to the Gini index. So if you plot the cumulative distribution of wealth, a fully equal society would be a straight line because 50% of the people would have 50% of the money and so on. But almost all true societies have something like this, where 50% of the people might have 10% of the money and the other 50% of the people have the other 90%. And the measure of inequality is this area here. This is called the Gini index, and 1 minus this area is what this paper has as an equality measure. So the higher this number, the more equal the society is in terms of its income distribution. Now what is actually optimized for here is this thing, equality times productivity. So you want both to be high, your income equality and your productivity. There's a trade off here of course, but you can have multiple ways to trade that off and that will give you different outcomes. They call this the social welfare function. And that's the thing that the government RL agent optimizes for.
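To make that objective concrete, here is a small sketch of a bracketed marginal tax plus the equality-times-productivity social welfare function. The bracket cutoffs and rates are the toy numbers from the explanation above (the top rate is an assumed extrapolation), not the paper's, and the Gini computation is the standard textbook formula, which may differ from the paper's exact normalization:

```python
import numpy as np

def marginal_tax(income, cutoffs=(10.0, 20.0), rates=(0.05, 0.10, 0.20)):
    # Tax owed under a bracket schedule: 5% on the first 10 coins,
    # 10% on coins 10-20, and (an assumed) 20% on everything above.
    owed, lower = 0.0, 0.0
    for upper, rate in zip(list(cutoffs) + [float("inf")], rates):
        owed += rate * max(0.0, min(income, upper) - lower)
        lower = upper
    return owed

def social_welfare(coins):
    coins = np.sort(np.asarray(coins, dtype=float))
    n = coins.size
    # Gini index from the sorted wealth distribution (textbook formula).
    gini = 2 * np.sum(np.arange(1, n + 1) * coins) / (n * coins.sum()) - (n + 1) / n
    equality = 1.0 - gini            # 1 = perfectly equal society
    productivity = coins.sum()       # total coins produced, no matter by whom
    return equality * productivity

print(marginal_tax(25))                                   # 0.5 + 1.0 + 1.0 = 2.5
# Same productivity, different equality -> different social welfare:
print(social_welfare([10, 10, 10, 10]), social_welfare([1, 1, 1, 37]))
```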
So you can see here already the free market, even though it's the most productive, produces the most coins, because a free market means no taxes. If you have no taxes, then people are basically encouraged to earn more money because they don't have to pay taxes on it. As soon as you tax them, they're less encouraged to earn more money. And therefore if you have no taxes, the most coins will be earned in total. But the equality suffers. So the equality is the lowest among these things considered. If you compare that to the AI economist, the AI economist achieves the highest social welfare. It achieves the highest equality, but it doesn't suffer as much in productivity as other systems here. And the baseline systems are, first of all, the US federal system. This is not particularly tied to the US. This is basically every system, or most of the systems that you have currently in the world: the progressive tax system. And the Saez formula, which I believe is an economic-theory-based system, which is a regressive tax schedule. You can see them down here, where the US federal will be progressive, which means the more you earn, the more percentage wise you pay. While the Saez formula will be regressive, which generally means the more you earn, the less you pay. I believe this was derived under some assumptions to be the optimal tax distribution. And the AI economist, we'll come to this in a second. Let's actually just look at one of these things first, one of these games, how this plays out. The cool thing here is that they have pretty flashy animations. So you can look at how one of these games turns out. Now this is a free market game and you can see the agents moving around collecting things, building houses. And you might notice that one of the agents, namely agent one, is just building all of the houses. And generally just kind of being a dick, being in everyone's face and kind of building things everywhere. And the other ones don't. Or just very few, like the light blue on the bottom left builds some houses. On the right you can see how the distribution of wealth is structured. And you see agent one ends up with most of the wealth. Now the size of the circle I think is the total productivity. So you can see this grows over time, mainly because agent one becomes so rich. And if you analyze this, if you analyze what's happening here, then you'll see that agent one and I might be... Yeah, they have a graph up here. So it is very interesting what happens. This is kind of the same game. So agent one here is this orange dot and agents two, three and four are these dots here. And this graph here is coin from trading. So how much money they win or lose from trading. Now the green bars are trading wood and the brown bars are trading stone. So you see agent number four, which is the lowest skilled, the skill is just determined at the beginning of the episode. It will just make all of its coins basically by selling wood. And agent three will make all of its coins by selling stone. And agent two will collect both and sell both. And agent one will just spend money in trading. So you'll have a specialization here. Agent one, which is the highest skilled one right here, will buy resources in order to build more houses because it clearly profits from building lots and lots and lots and lots of houses. So it will use that money to buy more resources rather than go and collect them.
Meanwhile, all the other agents basically forgo building houses; they just collect the resources and trade them away to agent one, because that is more profitable for them than building houses themselves. So you see this kind of specialization emerging in these games, which I find pretty cool: a really stark division of labor emerging from a very, very small set of rules. You can analyze this game in different ways, and they have a few more plots where it becomes quite apparent that these agents specialize. So you see here the resources collected by the lowest-skilled and the highest-skilled laborers: the lowest-skilled mainly collect resources, while the highest-skilled laborer mainly goes for building things. It barely collects resources, but its net income from building is really high, while everyone else just doesn't build at all. All right, so we have a division of labor emerging. Now this was the free market. Let's actually compare the different algorithms. If you look at social welfare, this thing, equality times productivity, you can see that the AI economist, over the training progress, outperforms all of the other systems: the free market, the US federal tax system, and the Saez formula, if trained for long enough. Which is to be expected, right? If you point RL at a cost function, it will optimize that cost function. But it's pretty cool to see that there is a lot of headroom here over what we currently have. Now let's look at some of the strategies it comes up with. What do these games look like when the AI has imposed different tax strategies? This is with the Saez strategy; you can see that here. Again you see this inequality emerging, with the yellow player building most of the houses. With the AI economist, again there is inequality, but you can see in the distribution that agent one only ends up with about half of the wealth, whereas if you compare this to the free market, agent one ends up with like two thirds of the wealth. This is the game we saw before. So qualitatively there is not that much of a difference, but there is in the end result. All right, let's look at what these policies actually come up with. What is the tax policy that the AI produces? This tax policy outperforms on the social welfare metric, and it is very interesting. First of all, you see that it zigzags: it goes down, up, down, up, which is already weird. And the first very weird thing is the spike at the very bottom. What is that thing? Those are the poorest people in your society, and you're taxing them the highest. Right? So just imagine this: you're downtrodden by life, abandoned by society, you have no money, no house, no nothing. You're just trying to get a job, you're just earning a little bit of money so you can buy a cheeseburger, and then the government comes: give us that. Give us that money. Come on. So basically, for the poor, this system just says: F you. F you, the poor. Now, the reason why this happens is pretty clear. It happens because you want to encourage people to move over here and earn more money. It's not that the government makes any meaningful money off the poor, no matter how high it taxes them.
But it is basically an incentive structure to push them over into the somewhat more productive population, because the assumption is that even the lowest-skilled agents can move up a bit if you just tax them enough in the low brackets. And this is where you just have to realize how hard this is. I believe it is almost impossible to encapsulate what we really want from a system into a formula, into a cost function to be optimized. It is so incredibly hard. You see that here: of course this results in a better social outcome, but it just doesn't feel right to tax the poor at, what, 60%? Okay, so: F the poor. Then you get to this level right here, and interestingly, if you earn even more, you are taxed high again. That part we're kind of used to: you earn little, you pay little; you earn more, you pay more. But then comes this entire valley here. What's up with that? Like, WTF? This is of course the same reasoning as in the Saez formula: the rich people, you want to tax them less so that they are more productive and generate more coins. Even though you tax them less percentage-wise, they end up paying more money in absolute terms, because you basically encourage them to produce more. That, I guess, is the reasoning behind this. But you have to recognize what's happening here. What are we optimizing? We're optimizing productivity times equality. And what do we get? You get two big valleys of attraction, one here and one here, and that means this algorithm favors a two-class society. I believe this is partially a limitation of the simulation: the fact that you only have four agents, and that you can only do two things, either collect or build. It encourages a two-class society, this specialization that you saw. These here are the moneymakers, and these here are the collectors, and it is very hard to move from one group to the other. Because if you earn more coins as a collector, you land here, where you're heavily taxed; to escape, you'd have to make it all the way over here. The people that are already over here, if they earn an extra coin, it doesn't bother them much, so they're very encouraged to earn more money. But the poorer people on this side are basically discouraged from earning more money, because the system needs them to stay at the collector level. So the system encourages a two-class society, because we have not built social mobility into the equation; we have not built a measure of social mobility into the cost function. And therefore the AI doesn't care that the poor stay poor and the rich stay rich. It just knows that this is the best outcome for society overall, given the cost function we specified. Again, this just doesn't seem fair to us; what we want is that someone can make it over here even if they start out at the bottom, and we would have to build that in. So we have a system that says F the poor, with no social mobility. None. And then, what's happening at the end? What's happening at the end? This is beautiful. The very rich people. These are the moneymakers, right?
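Just to make the "build it in" point concrete: one hypothetical way to do it would be to add a mobility term to the welfare objective, for instance rewarding changes in agents' wealth ranks across the episode. Nothing like this is in the paper; the function names and the combination rule below are entirely made up:

```python
import numpy as np

def rank_mobility(wealth_early: np.ndarray, wealth_late: np.ndarray) -> float:
    """Hypothetical mobility score in [0, 1]: average normalized change
    in each agent's wealth rank between two points in the episode."""
    r0 = np.argsort(np.argsort(wealth_early))  # ranks 0..n-1 early on
    r1 = np.argsort(np.argsort(wealth_late))   # ranks 0..n-1 later
    n = len(wealth_early)
    return float(np.abs(r1 - r0).mean() / (n - 1))

def welfare_with_mobility(equality: float, productivity: float,
                          mobility: float, lam: float = 0.5) -> float:
    # Made-up combination rule: keep equality * productivity as the
    # base objective, but reward episodes where ranks actually move.
    return equality * productivity * (1.0 + lam * mobility)
```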
This is the monopoly guy: top hat, monocle, Scrooge McDuck bathing in coins. This is where the government makes its money. And the discrepancy is really stunning, because you could also argue: hey, why don't we apply the same reasoning here as we applied in the valley? Isn't it the case that if you tax the rich people lower, they'll end up paying more money, and so on? Again, I believe this might just be a result of how the simulation is set up. So we'll move on quickly and come back to this. Here's what I find particularly interesting about this paper, and what just confuses the heck out of me: it is a doubly periodic game, an inner/outer loop game. What do I mean by that? They have these episodes: here is the start, and here is the end, and they subdivide this into, as we said, 1000 steps. So an agent can do step, step, step, step, and perform its actions; there are 1000 steps, and the agent just tries to collect as many coins as possible. This is your classic RL problem. But they also divide this into 10 of what they call periods; I'm just going to draw maybe four periods. So this thing here, they call one period, while the whole thing is an episode. The purpose of the period is that at the beginning of each period, the government can impose a new tax schedule. So the government doesn't fix the taxes just once; it can change the taxes over the course of the episode. Now this is the part I just don't get. You're now formulating the tax-setting objective as a sequential decision-making problem. It's like the government saying: well, today we have high taxes, but tomorrow we have low taxes, and the day after that we have high taxes again. It just doesn't make sense for any government to do this. What you should do is set taxes once at the beginning of the episode, see how that turns out, and then, across episodes, optimize your tax schedule. Because all we ever look at is what the taxes are at the end: the schedules we've examined are just the last taxes the AI issued. We don't know the dynamics of what happens in between; it might actually be super wild what the AI does in between. I just don't see the justification for framing this as a sequential decision problem, and I believe it is an over-engineered thing, because someone wanted a reason to put an LSTM in there; here is the architecture, right? Someone is thinking: well, RL, that means sequential decisions and so on. And RL in this outer loop, the way I would propose it, would just be a one-decision-per-episode problem, which is a bandit problem. And as we all know, bandits are boring. So they didn't want this to be a bandit problem; they wanted it to be a sequential problem, and that's why they made this period thing, which I find dumb. Another factor here, and I'm going to tell you how this relates to the weird fact that the richest bracket is taxed so high: look at this. It's a CNN, an MLP, an LSTM and another MLP, and the same for the agents. And I can tell you right now, the CNN has two layers. Two. And the LSTM has something like 128 units in its hidden state. These are tiny, tiny models. And it is not model-based RL; it's model-free RL, namely proximal policy optimization. So the ability of these agents, or of the planner, to learn anything substantial here is, I believe, just not that great.
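In code, the structure being criticized looks roughly like the following skeleton. Here `env`, `agents`, and `planner` are stand-in names of mine; this mirrors the described episode layout (H = 1000 steps, 10 planner periods), not the paper's actual training code:

```python
# Sketch of the doubly periodic (inner/outer loop) structure.
H, NUM_PERIODS = 1000, 10
PERIOD_LEN = H // NUM_PERIODS  # 100 agent steps per planner period

def run_episode(env, agents, planner):
    obs = env.reset()
    welfare_trace = []
    for period in range(NUM_PERIODS):
        # Outer loop: at each period start the planner (government)
        # emits a fresh tax schedule -- this is its RL action.
        env.set_taxes(planner.act(obs))
        for _ in range(PERIOD_LEN):
            # Inner loop: agents move, gather, trade, build.
            actions = {name: agent.act(obs[name])
                       for name, agent in agents.items()}
            obs, rewards, done, info = env.step(actions)
        # Period incomes are taxed and redistributed equally; the
        # planner is rewarded via social welfare (equality * productivity).
        welfare_trace.append(env.social_welfare())
    return welfare_trace
```

Setting NUM_PERIODS = 1 would collapse the planner's problem into the one-decision-per-episode (bandit) version argued for above.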
So I believe these are rather dumb agents. And yes, the tax rates issued by the planner are fed into the agent model, but I don't think an agent with such a small model can actually adjust to these inputs, because you would need some pretty involved logic to go from these tax brackets to how you should act right now. What I think is happening is that the agent is just kind of aware of its skill level and, through its rewards, tries to maximize its future rewards. When the government changes the tax rate, I'm almost positive the agent does not directly change its response to that; rather, it observes that something is happening in the world and adjusts its overall strategy a little bit, in a delayed, diffuse way, not in that particular instance. And this might be one of the reasons why the tax brackets here are screwed up. Because, who says? If I were this AI, what I could do is: in periods one through nine, I make the taxes really low for the rich people, so I encourage everyone to make more money. Come on, become more productive, and I get the benefits of that. And then in the last period, I just freaking jack up that final tax bracket: you there, you have lots of money, give it to me. Then I redistribute what I collected to the poor people in that very last period, and thereby I achieve my goal on the social welfare function. Of course this is not sustainable, because the rich people would be screwed by that and would move down again, but it's the end of the episode, so what are they going to do? So I think the way this is framed, with just two different ways to get coins, plus the periodic nature of the outer loop, might all lead to something that becomes more and more uninterpretable. Still cool though. All right, the final thing: they do this with humans. Yes, real humans. They let humans try it, with this interface here, and the humans behave quite differently from the AI in a few ways. But look at this: "AI economist" here refers to the tax strategy. They take the tax schedules developed against the RL agents and let the humans be the agents, so that you can observe how the agents act and whether the tax strategies also work when real humans, and not RL agents, act in this environment. Compare this to how the humans act: the humans just build their houses in neat little packets or straight lines and stuff like this. I just find that very funny. Now, there are some things lacking in the human environment which I find really important. First of all, there is no cost for moving, which I guess is minor. But second of all, there is no trade. And I think that just kills the whole experiment, because now, of course, the wealth is just going to be proportional to how many coins an agent gets per house, which is different for each agent. To me that makes it a pointless experiment: if you can't trade, the outcome is just predictable. And I don't think the human behavior changes in response to the different tax brackets. I think they'll just make money however they can; they'll build more houses until it becomes unprofitable, and that's it.
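The hypothesized end-of-episode exploit is easy to write down. Purely as an illustration of the strategy being described (not something the paper reports the planner actually doing), a non-stationary planner could emit something like:

```python
# Hypothetical planner strategy exploiting the periodic structure:
# keep top-bracket taxes low while wealth is being produced, then
# confiscate in the final period and redistribute. Illustration only.
def top_bracket_rate(period: int, num_periods: int = 10) -> float:
    if period < num_periods - 1:
        return 0.05   # periods 1..9: let the rich produce
    return 0.95       # final period: jack it up and redistribute
```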
So I don't see the value of these experiments, even though they show that, again, the AI economist outperforms the other tax strategies on this equality-times-productivity metric, and also on another metric that they measure. The second problem I have with the human experiments is the distribution they use: they say, well, this is one of the distributions the AI came up with. But notice the lack of the "F you" spike for the poor people, and the lack of the big spike for the rich people, which I found to be the two defining features of the other distribution. So there seems to be quite a bit of variance in what this AI comes up with, or maybe it's just because the schedule is periodic. But it is really confusing: they show and discuss that other distribution, and now all of a sudden they say, well, we use this distribution that was also created by our AI, and it seems to be qualitatively quite different. In any case, let's look at how the humans behave under the different strategies. Under the Saez formula, you see that the light blue person is kind of spreading out a bit, probably playing correctly, while everyone else is just neatly building their houses. Humans are so territorial. Most of them stay in their little corner, like: this is my corner, I'm going to build my houses here, nice and tidy. And under the AI economist, again, you don't really see different behavior just because the taxes are different; the qualitative behavior is quite the same, just building straight lines. I think the differences are more between the individual humans, and it's not always the same humans. You can see that the humans clearly haven't trained on or discovered the optimal strategy; they're just doing something, and what you're seeing is just the result of the taxation, not different behavior. And this here, this is the best. Watch the human on the bottom right: first they do something, and then they just wall up the other players. Look, this is the best: I am going to build a big, beautiful wall, and I'm going to have the orange guy pay for it. It's Donald Trump in the game. Amazing. And at the end, they actually manage to lock in the other players so they can't move anymore. Donald Trump wins. Amazing. Although, actually, the yellow player appears to win economy-wise. But what do you want with lots of money if you can't move? So again, I find these human experiments rather pointless, because trade is disabled and the humans aren't trained to find a good strategy. All right. In spite of all that, I find the entire paper to be pretty cool. The code is going to be released, they promise, and they have checked that they have no ethical problems. Of course, I invite you to check out the paper. If you like content like this, please subscribe, share, and leave a comment about what you think. Thank you so much for listening, and bye bye.
[ { "end": 5.76, "start": 0, "text": " Alright, today we're going to find out why AI is much better at governing people, why" }, { "end": 12.8, "start": 5.76, "text": " poor people really should pay more taxes, and how Donald Trump is just a normal human." }, { "end": 15.16, "start": 12.8, "text": " Alright, we'll dive into it." }, { "end": 19.400000000000002, "start": 15.16, "text": " We're looking at the AI Economist by Salesforce Research." }, { "end": 26.92, "start": 19.400000000000002, "text": " Now Salesforce Research has kind of created a simulated world environment where they can" }, { "end": 32.36, "start": 26.92, "text": " place agents in it and the agents, they can move around, they can collect resources, they" }, { "end": 39.36, "start": 32.36, "text": " can trade those resources, and they can use those resources to build houses and that will" }, { "end": 41.56, "start": 39.36, "text": " earn them coins." }, { "end": 47.44, "start": 41.56, "text": " And each agent wants to maximize its own coins, but also there's the government and the government" }, { "end": 49.24, "start": 47.44, "text": " can set taxes." }, { "end": 52.980000000000004, "start": 49.24, "text": " So they collect money from everyone and they redistribute it." }, { "end": 61.31999999999999, "start": 52.98, "text": " And the goal now is going to be that the AI handles both the agent and the taxes and" }, { "end": 65.92, "start": 61.31999999999999, "text": " we want to maximize the social welfare of the entire population." }, { "end": 68.1, "start": 65.92, "text": " Alright, that's the goal." }, { "end": 73.96, "start": 68.1, "text": " So the paper here is called The AI Economist Improving Equality and Productivity with AI" }, { "end": 81.94, "start": 73.96, "text": " Driven Tax Policies by Stefan Cheng and Alexander Trott and other people from Salesforce Research" }, { "end": 85, "start": 81.94, "text": " and Harvard University." }, { "end": 94.16, "start": 85, "text": " So as I said, this is a simulated environment and the simulated environment works like this." }, { "end": 101.96, "start": 94.16, "text": " There is a 2D plane, kind of like a game playing field and in this game there are agents." }, { "end": 105.6, "start": 101.96, "text": " Here you can see the agents, there are always four agents." }, { "end": 106.6, "start": 105.6, "text": " Where?" }, { "end": 108.96, "start": 106.6, "text": " Oh, down here." }, { "end": 111.24, "start": 108.96, "text": " What are you doing in the corner?" }, { "end": 117.08, "start": 111.24, "text": " Come on, be productive." }, { "end": 121.36, "start": 117.08, "text": " The agents are in this world and they can do certain things." }, { "end": 123.11999999999999, "start": 121.36, "text": " They have certain actions at their disposal." }, { "end": 124.88, "start": 123.11999999999999, "text": " So first of all, they can move around." }, { "end": 128.35999999999999, "start": 124.88, "text": " They can move down, left, right and so on." }, { "end": 131.95999999999998, "start": 128.35999999999999, "text": " Whenever they walk past a resource tile, they collect the resource." }, { "end": 134.29999999999998, "start": 131.95999999999998, "text": " This is stone and this is wood." }, { "end": 136.56, "start": 134.29999999999998, "text": " So there are two kinds of resources." }, { "end": 140.78, "start": 136.56, "text": " And then the last actions the agents have is building a house." 
}, { "end": 147.52, "start": 140.78, "text": " One wood and one stone will create one house and the house gives you coins." }, { "end": 151.72, "start": 147.52, "text": " So this is a house and that will give you coins." }, { "end": 157.3, "start": 151.72, "text": " But how much coins you get is different from agent to agent and this represents the agents" }, { "end": 159.44, "start": 157.3, "text": " different skill levels." }, { "end": 167.28, "start": 159.44, "text": " This is an abstraction and the kind of economic theory behind it is that the income inequality" }, { "end": 174.16, "start": 167.28, "text": " in people, one of the main drivers of it is that they are skilled differently and therefore" }, { "end": 185.76, "start": 174.16, "text": " are able to convert one unit of labor into more money than another lower skilled worker." }, { "end": 190.96, "start": 185.76, "text": " So this is here represented by the fact that maybe if this agent here builds the house," }, { "end": 193.12, "start": 190.96, "text": " they'll get 50 coins." }, { "end": 197.48000000000002, "start": 193.12, "text": " But if this agent here would build the same house, they'll only get 10 coins." }, { "end": 203, "start": 197.48000000000002, "text": " So we'll call this here a high skilled worker and this here a low skilled worker." }, { "end": 206.88, "start": 203, "text": " Now the last thing, sorry, I saw the last thing before, but the very last thing the" }, { "end": 209.32, "start": 206.88, "text": " agents can do is they can trade." }, { "end": 214.4, "start": 209.32, "text": " So if one agent has too many resources and the other one has not enough, they can trade" }, { "end": 218.28, "start": 214.4, "text": " those resources among each other for those coins." }, { "end": 222.28, "start": 218.28, "text": " So once you build a house, you collect some coins, you can then either go and collect" }, { "end": 231.96, "start": 222.28, "text": " more resources or you can use those coins in order to buy resources off of other people." }, { "end": 233.32, "start": 231.96, "text": " This is unlucky." }, { "end": 237.64, "start": 233.32, "text": " No coins, no houses, and no resources." }, { "end": 238.64, "start": 237.64, "text": " Look at them." }, { "end": 243.44, "start": 238.64, "text": " Oh yeah, so you also can't move across the water here." }, { "end": 245.48, "start": 243.44, "text": " You can only move on the grass." }, { "end": 252.12, "start": 245.48, "text": " You can also not move through a house, which gives you some interesting abilities because" }, { "end": 255.4, "start": 252.12, "text": " you can just build a house right here." }, { "end": 260.28000000000003, "start": 255.4, "text": " And yeah, so and you can't move over other players." }, { "end": 263.16, "start": 260.28000000000003, "text": " But these are the rules are pretty simple." }, { "end": 268.62, "start": 263.16, "text": " And the goal here is for the agents to maximize the number of coins they get in 1000 steps." }, { "end": 275.28000000000003, "start": 268.62, "text": " So the number H here is 1000, which is the number of steps that the agents can take before" }, { "end": 278.52, "start": 275.28000000000003, "text": " the game is over and it restarts again." }, { "end": 284.76, "start": 278.52, "text": " So each agent is using reinforcement learning in order to learn how to achieve the maximum" }, { "end": 286.24, "start": 284.76, "text": " number of coins." 
}, { "end": 290.32, "start": 286.24, "text": " Now the policy is of course going to be different depending on whether that is a high or a low" }, { "end": 292.15999999999997, "start": 290.32, "text": " skilled worker." }, { "end": 297.4, "start": 292.15999999999997, "text": " The catch here is that outside of this there is the government, the government here, let's" }, { "end": 308.4, "start": 297.4, "text": " draw this big house with the flag of our fictitious nation, which is like this." }, { "end": 310, "start": 308.4, "text": " That's the flag." }, { "end": 319.12, "start": 310, "text": " And the government will observe what's happening here and they will issue a tax taxes." }, { "end": 321.71999999999997, "start": 319.12, "text": " So it will issue a tax distribution." }, { "end": 323.44, "start": 321.71999999999997, "text": " Now how do you imagine that?" }, { "end": 329.84, "start": 323.44, "text": " So if you imagine the government says something like this for the first 10 coins you own," }, { "end": 334.58, "start": 329.84, "text": " you owe us 5% of that." }, { "end": 340.84, "start": 334.58, "text": " For the next 10 coins, so from 10 to 20 you earn, you owe us 10% and so on." }, { "end": 347.5, "start": 340.84, "text": " So if you earn even more, you owe us more and more percent of those extra coins." }, { "end": 350.08, "start": 347.5, "text": " This is what you might know as a progressive tax schedule." }, { "end": 356.47999999999996, "start": 350.08, "text": " The more you earn, the more percentage wise you pay on that extra earned money." }, { "end": 362.56, "start": 356.47999999999996, "text": " This is what you might be used to, but there are other tax schedules and the exact histogram" }, { "end": 369.14, "start": 362.56, "text": " you see or the exact how many percent for which amount of coins, that is the action" }, { "end": 370.14, "start": 369.14, "text": " of the government." }, { "end": 375.68, "start": 370.14, "text": " So the government decides on the taxes and the taxes are just collected from the income." }, { "end": 383.76, "start": 375.68, "text": " So if an agent earns these coins, then it has to pay taxes to the government and the" }, { "end": 389.96, "start": 383.76, "text": " government will redistribute all the taxes it has collected equally among the population." }, { "end": 394.28, "start": 389.96, "text": " So if you pay a lot, you might lose through this process and if you just pay a little" }, { "end": 398.96, "start": 394.28, "text": " taxes you might gain through this process." }, { "end": 400.88, "start": 398.96, "text": " So that's it." }, { "end": 403.79999999999995, "start": 400.88, "text": " That is the basic premise of the game." }, { "end": 409, "start": 403.79999999999995, "text": " The agents are using reinforcement learning and I believe the newness of this paper is" }, { "end": 415.4, "start": 409, "text": " also that the government now is using reinforcement learning in order to determine the optimal" }, { "end": 417.76, "start": 415.4, "text": " tax policy." }, { "end": 422.4, "start": 417.76, "text": " There is kind of this inner loop here and there is this outer game where the government" }, { "end": 425.32, "start": 422.4, "text": " also tries to maximize the RL." }, { "end": 427.84, "start": 425.32, "text": " And what does the government try to maximize?" }, { "end": 428.96, "start": 427.84, "text": " Good question." }, { "end": 434.32, "start": 428.96, "text": " It is a measure that's called social welfare." 
}, { "end": 439.02, "start": 434.32, "text": " Now social welfare consists of two things and they have this here way down in the paper." }, { "end": 442.15999999999997, "start": 439.02, "text": " Social welfare in this paper consists of two things." }, { "end": 449.28000000000003, "start": 442.16, "text": " First of all, economic productivity, which basically just means how many coins has anyone" }, { "end": 450.44, "start": 449.28000000000003, "text": " produced." }, { "end": 453.92, "start": 450.44, "text": " It doesn't matter who, but just the total amount of coins produced." }, { "end": 459.84000000000003, "start": 453.92, "text": " The second one is income equality and this is related to the Gini index." }, { "end": 465.3, "start": 459.84000000000003, "text": " So if you plot the cumulative distribution of wealth, a fully equal society would be" }, { "end": 473.46000000000004, "start": 465.3, "text": " a straight line because 50% of the people would have 50% of the money and so on." }, { "end": 480.56, "start": 473.46000000000004, "text": " But almost all true societies have something like this where 50% of the people might have" }, { "end": 486.92, "start": 480.56, "text": " 10% of the money and the rest 50% of the people has the other 90%." }, { "end": 491.44, "start": 486.92, "text": " And the measure of inequality is this area here." }, { "end": 498.52, "start": 491.44, "text": " This is called the Gini index and 1 minus this area is what this paper has as an equality" }, { "end": 499.52, "start": 498.52, "text": " measure." }, { "end": 506.6, "start": 499.52, "text": " So the higher this number, the more equal is the society in terms of their income distribution." }, { "end": 512.3, "start": 506.6, "text": " Now what is actually optimized for here is this thing, equality times productivity." }, { "end": 518.36, "start": 512.3, "text": " So you want both to be high, your income equality and your productivity." }, { "end": 525.04, "start": 518.36, "text": " There's a trade off here of course, but you can have multiple ways to trade that off and" }, { "end": 528.2, "start": 525.04, "text": " that will give you the different thing." }, { "end": 532.44, "start": 528.2, "text": " They call this the social welfare function." }, { "end": 537.44, "start": 532.44, "text": " And that's the thing that the government RL agent optimizes for." }, { "end": 543.88, "start": 537.44, "text": " So you can see here already the free market, even though it's the most productive, produces" }, { "end": 549, "start": 543.88, "text": " the most coins because if you have a free market means no taxes." }, { "end": 555.68, "start": 549, "text": " If you have no taxes, then people are basically encouraged to earn more money because they" }, { "end": 557.24, "start": 555.68, "text": " don't have to pay taxes on them." }, { "end": 561.64, "start": 557.24, "text": " As soon as you tax them, they're less encouraged to earn more money." }, { "end": 567.2, "start": 561.64, "text": " And therefore if you have no taxes, the most coins will be earned in total." }, { "end": 569.06, "start": 567.2, "text": " But the equality suffers." }, { "end": 573.76, "start": 569.06, "text": " So the equality is the lowest among these things considered." }, { "end": 581.4399999999999, "start": 573.76, "text": " If you compare that to the AI economist, the AI economist achieves the highest social welfare." 
}, { "end": 587.72, "start": 581.4399999999999, "text": " It achieves the highest equality, but it doesn't suffer as much in productivity as other systems" }, { "end": 588.72, "start": 587.72, "text": " here." }, { "end": 592.84, "start": 588.72, "text": " And the baseline systems are first of all, the US federal system." }, { "end": 594.96, "start": 592.84, "text": " This is not particularly tied to the US." }, { "end": 601.28, "start": 594.96, "text": " This is basically every system or most of the systems that you have currently in the" }, { "end": 607.3199999999999, "start": 601.28, "text": " world is the progressive tax system and the SAES formula, which I believe is an economically" }, { "end": 611.88, "start": 607.3199999999999, "text": " theory based system, which is a regressive tax schedule." }, { "end": 619.64, "start": 611.88, "text": " You can see them down here where the US federal will be progressive, means the more you earn," }, { "end": 622.0799999999999, "start": 619.64, "text": " the more percentage wise you pay." }, { "end": 627.4399999999999, "start": 622.0799999999999, "text": " While the SAES formula will be regressive, which generally means the more you earn, the" }, { "end": 628.4399999999999, "start": 627.4399999999999, "text": " less you pay." }, { "end": 634.6800000000001, "start": 628.44, "text": " I believe this was derived under some assumptions to be the optimal tax distribution." }, { "end": 643.08, "start": 634.6800000000001, "text": " And the AI economist will come to this in a second." }, { "end": 649.6400000000001, "start": 643.08, "text": " Let's actually just look at one of these things first, one of these games, how this plays" }, { "end": 650.6400000000001, "start": 649.6400000000001, "text": " out." }, { "end": 653.5, "start": 650.6400000000001, "text": " The cool thing here is that they have pretty flashy animations." }, { "end": 656.12, "start": 653.5, "text": " So you can look how does one of these games turn out." }, { "end": 661.48, "start": 656.12, "text": " Now this is a free market game and you can see the agents moving around collecting things," }, { "end": 662.48, "start": 661.48, "text": " building houses." }, { "end": 668.04, "start": 662.48, "text": " And you might notice that one of the agents, namely agent one, is just building all of" }, { "end": 669.52, "start": 668.04, "text": " the houses." }, { "end": 675.48, "start": 669.52, "text": " And generally just kind of being a dick, being in everyone's face and kind of building things" }, { "end": 676.64, "start": 675.48, "text": " everywhere." }, { "end": 679.96, "start": 676.64, "text": " And the other ones don't." }, { "end": 685.64, "start": 679.96, "text": " Or just very few, like the light blue on the bottom left builds some houses." }, { "end": 691.76, "start": 685.64, "text": " On the right you can see how the distribution of wealth is structured." }, { "end": 694.84, "start": 691.76, "text": " And you see agent one ends up with most of the wealth." }, { "end": 698.76, "start": 694.84, "text": " Now the size of the circle I think is the total productivity." }, { "end": 706.52, "start": 698.76, "text": " So you can see this grows over time mainly because agent one becomes so rich." }, { "end": 711.84, "start": 706.52, "text": " And if you analyze this, if you analyze what's happening here, then you'll see that agent" }, { "end": 716.4, "start": 711.84, "text": " one and I might be..." }, { "end": 721.64, "start": 716.4, "text": " Yeah, they have a graph up here." 
}, { "end": 725.5600000000001, "start": 721.64, "text": " So it is very interesting what happens." }, { "end": 727.86, "start": 725.5600000000001, "text": " This is kind of the same game." }, { "end": 737.12, "start": 727.86, "text": " So agent one here is this orange dot and agents two, three and four are these dots here." }, { "end": 740.72, "start": 737.12, "text": " And this graph here is coin from trading." }, { "end": 745.64, "start": 740.72, "text": " So how much money they win or lose from trading." }, { "end": 753, "start": 745.64, "text": " Now the green bars are trading wood and the brown bars are trading stone." }, { "end": 758.44, "start": 753, "text": " So you see agent number four, which is the lowest skilled, the skill is just determined" }, { "end": 761.24, "start": 758.44, "text": " at the beginning of the episode." }, { "end": 766.08, "start": 761.24, "text": " It will just make all of its coins basically by selling wood." }, { "end": 769.34, "start": 766.08, "text": " And agent three will make all of its coins by selling stone." }, { "end": 772.84, "start": 769.34, "text": " And agent two will collect both and sell both." }, { "end": 778.44, "start": 772.84, "text": " And agent one will just spend money in trading." }, { "end": 782.8000000000001, "start": 778.44, "text": " So you'll have a specialization here." }, { "end": 789.26, "start": 782.8000000000001, "text": " Agent one, which is the highest skill one right here, will buy resources in order to" }, { "end": 793.6, "start": 789.26, "text": " build more houses because it clearly profits from building lots and lots and lots and lots" }, { "end": 795.0600000000001, "start": 793.6, "text": " of houses." }, { "end": 799.7199999999999, "start": 795.06, "text": " So it will use that money to buy more resources rather than go and collecting them." }, { "end": 805.8399999999999, "start": 799.7199999999999, "text": " While all the other ones basically forgo building houses in favor of they just collect the resources" }, { "end": 810.7199999999999, "start": 805.8399999999999, "text": " and they just trade them way to the agent one that's more profitable for them than building" }, { "end": 812.1999999999999, "start": 810.7199999999999, "text": " houses themselves." }, { "end": 817.9599999999999, "start": 812.1999999999999, "text": " So you see this kind of specialization emerging in these games, which I find, I find this" }, { "end": 824.3599999999999, "start": 817.9599999999999, "text": " to be pretty cool that you see something like this, like a really stark division of labor" }, { "end": 831, "start": 824.36, "text": " emerging just from these very, very small set of rules." }, { "end": 833.88, "start": 831, "text": " And you can analyze this game in different ways." }, { "end": 841.6800000000001, "start": 833.88, "text": " They have a few more plots where this becomes quite apparent that sorry, that these agents" }, { "end": 843.3000000000001, "start": 841.6800000000001, "text": " specialize." }, { "end": 849.94, "start": 843.3000000000001, "text": " So you see here resources collected, sorry about that, resources collected." }, { "end": 859.6400000000001, "start": 849.94, "text": " If you have the lowest skill and the highest skill labors, the lowest skills, they mainly," }, { "end": 861.86, "start": 859.6400000000001, "text": " this should be a 10." }, { "end": 871.6, "start": 861.86, "text": " They mainly collect resources, while the highest skill labor mainly goes for building things." 
}, { "end": 876.7600000000001, "start": 871.6, "text": " It doesn't collect resources, but net income from building is really high while everyone" }, { "end": 880.08, "start": 876.76, "text": " else just doesn't build at all." }, { "end": 885.52, "start": 880.08, "text": " All right, so we have a division of labor emerging." }, { "end": 887.3199999999999, "start": 885.52, "text": " Now this was a free market." }, { "end": 890.76, "start": 887.3199999999999, "text": " Let's actually compare the different algorithms." }, { "end": 897.1, "start": 890.76, "text": " So if you look at social welfare, this is this thing here, equality times productivity." }, { "end": 902.8, "start": 897.1, "text": " You can see that the AI economist will outperform over time over the training progress, it will" }, { "end": 906.16, "start": 902.8, "text": " outperform all of the other systems." }, { "end": 912.36, "start": 906.16, "text": " So it will outperform the free market, the US federal tax system, and the SAS formula" }, { "end": 915.78, "start": 912.36, "text": " if trained for long enough, which is to be expected, right?" }, { "end": 921.24, "start": 915.78, "text": " If you put RL onto a cost function, it will then optimize that cost function." }, { "end": 927.52, "start": 921.24, "text": " But it's pretty cool to see that there's a lot of headroom here over what we currently" }, { "end": 929.16, "start": 927.52, "text": " have." }, { "end": 933.92, "start": 929.16, "text": " Now let's look at some of the strategies it comes up with." }, { "end": 941.8399999999999, "start": 933.92, "text": " So what do these games look like where the AI has imposed different tax strategies?" }, { "end": 943.76, "start": 941.8399999999999, "text": " So this is with the SAS strategy." }, { "end": 945.88, "start": 943.76, "text": " You can see that here." }, { "end": 951.56, "start": 945.88, "text": " Again, you see this inequality emerging with the yellow player here building most of the" }, { "end": 953, "start": 951.56, "text": " houses." }, { "end": 959.7199999999999, "start": 953, "text": " With the AI economist, again, there is inequality, but you can see at the distribution that agent" }, { "end": 965.52, "start": 959.72, "text": " one only ends up with about half of the wealth, where if you compare this to the free market" }, { "end": 972.38, "start": 965.52, "text": " here, then agent one ends up with like two thirds of the wealth, right?" }, { "end": 975, "start": 972.38, "text": " This is the game we saw before." }, { "end": 982.64, "start": 975, "text": " But there is not qualitatively that much of a difference, but there is in the end result." }, { "end": 987.84, "start": 982.64, "text": " All right, let's look at what these policies actually come up with." }, { "end": 991.52, "start": 987.84, "text": " So what is the tax policy that the AI comes up with?" }, { "end": 998.1, "start": 991.52, "text": " So this tax policy outperforms on this social welfare metric." }, { "end": 1001.76, "start": 998.1, "text": " And this is very interesting, right?" }, { "end": 1005.12, "start": 1001.76, "text": " So first of all, you see that it's right zigzag." }, { "end": 1010.24, "start": 1005.12, "text": " It's like down, up, down, up, which is already weird." }, { "end": 1017.5, "start": 1010.24, "text": " So the first very weird thing is the spike at the very bottom." }, { "end": 1021.52, "start": 1017.5, "text": " So that thing here, what's that thing here?" 
}, { "end": 1026.72, "start": 1021.52, "text": " Those are the poorest people in your society, and you're taxing them the highest." }, { "end": 1027.72, "start": 1026.72, "text": " Right?" }, { "end": 1035.2, "start": 1027.72, "text": " So just imagine this, you're here downtrodden by life, abandoned by society, you have no" }, { "end": 1037.04, "start": 1035.2, "text": " money, no house, no nothing." }, { "end": 1042.8, "start": 1037.04, "text": " And you're just trying to get a job, you're just getting like a little bit of money." }, { "end": 1049.32, "start": 1042.8, "text": " And you can buy a cheeseburger, and then the government comes." }, { "end": 1050.32, "start": 1049.32, "text": " Give us that." }, { "end": 1053.32, "start": 1050.32, "text": " Give us that money." }, { "end": 1054.32, "start": 1053.32, "text": " Come on." }, { "end": 1059.6, "start": 1054.32, "text": " So basically, these are the poor." }, { "end": 1063.36, "start": 1059.6, "text": " And the poor in this system is just F you." }, { "end": 1064.36, "start": 1063.36, "text": " F you the poor." }, { "end": 1068.76, "start": 1064.36, "text": " Now, the reason why this happens is pretty clear, right?" }, { "end": 1075.64, "start": 1068.76, "text": " The reason why this happens is because you want to encourage people to go here to earn" }, { "end": 1077.6, "start": 1075.64, "text": " more money." }, { "end": 1081.8799999999999, "start": 1077.6, "text": " So it's not like the government makes any money from the poor people independently of" }, { "end": 1084.28, "start": 1081.8799999999999, "text": " how high it taxes them." }, { "end": 1090.48, "start": 1084.28, "text": " But it is basically an incentive structure to make them move over to the somewhat more" }, { "end": 1092.44, "start": 1090.48, "text": " productive population." }, { "end": 1097.44, "start": 1092.44, "text": " Because here it's assumed kind of that even the lowest skilled ones can move over a bit" }, { "end": 1101.48, "start": 1097.44, "text": " if you just tax them enough at the low brackets, right?" }, { "end": 1110.76, "start": 1101.48, "text": " So this is what I find to be you just have to realize that it is so hard, I believe it" }, { "end": 1117.92, "start": 1110.76, "text": " is almost impossible to encapsulate what we really want in a system into a formula to" }, { "end": 1120.06, "start": 1117.92, "text": " be into a cost function to be optimized." }, { "end": 1121.68, "start": 1120.06, "text": " It is so incredibly hard." }, { "end": 1125.72, "start": 1121.68, "text": " And you see that here, of course, it is going to result in a better social outcome, but" }, { "end": 1132.32, "start": 1125.72, "text": " it just doesn't feel right to tax the poor at what 60%?" }, { "end": 1136.84, "start": 1132.32, "text": " Okay, so F the poor, right?" }, { "end": 1140.04, "start": 1136.84, "text": " And then you get to this level right here." }, { "end": 1147.1200000000001, "start": 1140.04, "text": " And interestingly, if you earn even more, you'll be taxed high again, right?" }, { "end": 1152.04, "start": 1147.1200000000001, "text": " So this, we're kind of used to that." }, { "end": 1155.16, "start": 1152.04, "text": " You earn little, you pay little, you earn more." }, { "end": 1156.8000000000002, "start": 1155.16, "text": " You pay more." }, { "end": 1159.72, "start": 1156.8000000000002, "text": " But then comes this entire valley here." }, { "end": 1161.0400000000002, "start": 1159.72, "text": " What's up with that?" 
}, { "end": 1162.0400000000002, "start": 1161.0400000000002, "text": " Right?" }, { "end": 1169.92, "start": 1162.0400000000002, "text": " Like WT, and this can be this is now of course, the same reasoning as you have with this size" }, { "end": 1177.92, "start": 1169.92, "text": " formula here is where the rich people, you want to tax them less so that they are more" }, { "end": 1181.24, "start": 1177.92, "text": " productive such that they generate more coins." }, { "end": 1187.84, "start": 1181.24, "text": " And even though you tax them less percentage wise, they will end up paying more money in" }, { "end": 1189.96, "start": 1187.84, "text": " absolute terms." }, { "end": 1194.4, "start": 1189.96, "text": " Because because you basically encourage them to produce more." }, { "end": 1201.6, "start": 1194.4, "text": " So that is that is kind of that is the, I guess the reasoning behind this." }, { "end": 1205.92, "start": 1201.6, "text": " But what you have to wreck, you have to recognize what's happening here, right?" }, { "end": 1207, "start": 1205.92, "text": " What are we optimizing?" }, { "end": 1210.88, "start": 1207, "text": " We're optimizing this productivity times equality." }, { "end": 1213.1200000000001, "start": 1210.88, "text": " And what do we get?" }, { "end": 1219.68, "start": 1213.1200000000001, "text": " You see, you get two big valleys of attraction, one here, and one here." }, { "end": 1225.8400000000001, "start": 1219.68, "text": " And that means that this algorithm favors a two class society." }, { "end": 1227.0600000000002, "start": 1225.8400000000001, "text": " Right?" }, { "end": 1231.8600000000001, "start": 1227.0600000000002, "text": " And I believe this is this is partially the limitations of this simulation here, the fact" }, { "end": 1235.68, "start": 1231.8600000000001, "text": " that you only have four agents, the fact that you can only do two things either collect" }, { "end": 1237.0400000000002, "start": 1235.68, "text": " or build, right?" }, { "end": 1242.8, "start": 1237.04, "text": " It encourages a two class society, this specialization that you saw, right?" }, { "end": 1247.1599999999999, "start": 1242.8, "text": " So you say these here are the moneymakers, right?" }, { "end": 1249.3, "start": 1247.1599999999999, "text": " And these here are the collectors." }, { "end": 1252.8799999999999, "start": 1249.3, "text": " And it is very hard to move from one group to the other." }, { "end": 1259.08, "start": 1252.8799999999999, "text": " Because if you you earn more coins as a collector, you're here, and you're really discouraged" }, { "end": 1260.08, "start": 1259.08, "text": " here." }, { "end": 1263.6399999999999, "start": 1260.08, "text": " If you move there, you want to move all the way over here, right?" }, { "end": 1268.96, "start": 1263.64, "text": " Now, the people that are already over here, if they earn an extra coin, that doesn't bother" }, { "end": 1269.96, "start": 1268.96, "text": " them too much." }, { "end": 1272.0400000000002, "start": 1269.96, "text": " So they're very encouraged to earn more money." }, { "end": 1277.68, "start": 1272.0400000000002, "text": " But the very, the poorer people on this side, they're basically discouraged from earning" }, { "end": 1285.1000000000001, "start": 1277.68, "text": " more money, because the system needs them to stay at that collector level, right?" 
}, { "end": 1292.2, "start": 1285.1000000000001, "text": " So the system encourages the two class society because we have not built social mobility" }, { "end": 1301.7, "start": 1292.2, "text": " into the into the into the equation, we have not built a measure for social social mobility" }, { "end": 1303.18, "start": 1301.7, "text": " into the cost function." }, { "end": 1308.04, "start": 1303.18, "text": " And therefore, the AI doesn't care that the poor people will stay poor and rich people" }, { "end": 1310.04, "start": 1308.04, "text": " will stay rich." }, { "end": 1314.64, "start": 1310.04, "text": " It just knows that this is the best outcome for society overall, given the cost function" }, { "end": 1320, "start": 1314.64, "text": " that we had, again, this just doesn't seem like fair to us, like what we want, we want" }, { "end": 1326.6, "start": 1320, "text": " someone to be able to make it over here, right, even if they start out from the bottom." }, { "end": 1330.48, "start": 1326.6, "text": " And so we'd have to we have to build that in." }, { "end": 1335.6, "start": 1330.48, "text": " So we have a system that is effing eff the poor, right?" }, { "end": 1340.96, "start": 1335.6, "text": " No social mobility, mobility." }, { "end": 1342.8, "start": 1340.96, "text": " No." }, { "end": 1345.48, "start": 1342.8, "text": " And then what's happening at the end?" }, { "end": 1346.64, "start": 1345.48, "text": " What's happening at the end?" }, { "end": 1348.44, "start": 1346.64, "text": " This is beautiful." }, { "end": 1350.1200000000001, "start": 1348.44, "text": " Very rich people." }, { "end": 1352, "start": 1350.1200000000001, "text": " These are the moneymaker, right?" }, { "end": 1359.92, "start": 1352, "text": " This is the this is the monopoly guy top hat monocle wearing Scrooge McDuck bathing in" }, { "end": 1361.22, "start": 1359.92, "text": " coins." }, { "end": 1365.56, "start": 1361.22, "text": " This is where the the government makes their money." }, { "end": 1373.4, "start": 1365.56, "text": " And the discrepancy is really stunning, because you could also argue, hey, why don't we apply" }, { "end": 1376, "start": 1373.4, "text": " the same reasoning as we applied here and here?" }, { "end": 1382.08, "start": 1376, "text": " Why is not is it not like the case that if the rich people if you tax them lower, they'll" }, { "end": 1383.32, "start": 1382.08, "text": " pay more money and so on." }, { "end": 1389.78, "start": 1383.32, "text": " I believe again, this might be just a result of this, how the simulation is set up." }, { "end": 1393.08, "start": 1389.78, "text": " So we'll move away quickly and we'll come back to this." }, { "end": 1398.4, "start": 1393.08, "text": " Here's what I find particularly interesting about this paper, which just confuses the" }, { "end": 1400.84, "start": 1398.4, "text": " heck out of me." }, { "end": 1404.5, "start": 1400.84, "text": " It is a double periodic game." }, { "end": 1407.02, "start": 1404.5, "text": " So it's an inner outer loop game." }, { "end": 1408.3, "start": 1407.02, "text": " What do I mean by that?" }, { "end": 1409.76, "start": 1408.3, "text": " They have these episodes, right?" }, { "end": 1411.58, "start": 1409.76, "text": " Here is the start." }, { "end": 1416.04, "start": 1411.58, "text": " And here is the end." }, { "end": 1421.16, "start": 1416.04, "text": " And they subdivide this into, as we said, 1000 steps." 
}, { "end": 1425.32, "start": 1421.16, "text": " So an agent is here and it can do step, step, step, step, step, and it can perform these" }, { "end": 1426.32, "start": 1425.32, "text": " actions." }, { "end": 1427.64, "start": 1426.32, "text": " This is the agent." }, { "end": 1431.48, "start": 1427.64, "text": " There are 1000 steps here and the agent just tries to collect as much coins." }, { "end": 1434.4, "start": 1431.48, "text": " So this is your classic RL problem." }, { "end": 1438.64, "start": 1434.4, "text": " But also they divide this into 10, what they call periods." }, { "end": 1443.2800000000002, "start": 1438.64, "text": " And I'm just going to draw maybe four periods, right?" }, { "end": 1452.6200000000001, "start": 1443.2800000000002, "text": " So this thing here, they call one period where the whole thing is an episode." }, { "end": 1458.0400000000002, "start": 1452.6200000000001, "text": " Now the purpose of the period is that at the beginning of each period, the government," }, { "end": 1462.3000000000002, "start": 1458.0400000000002, "text": " the government can impose a new tax schedule." }, { "end": 1468.12, "start": 1462.3, "text": " So the government doesn't only fix the taxes once, but it can change the taxes over the" }, { "end": 1472.5, "start": 1468.12, "text": " course of the episode, right?" }, { "end": 1475.24, "start": 1472.5, "text": " Now this is what I find." }, { "end": 1477.02, "start": 1475.24, "text": " I just don't see why." }, { "end": 1483.3999999999999, "start": 1477.02, "text": " So now you're formulating the tax giving objective as a sequential decision making." }, { "end": 1488.56, "start": 1483.3999999999999, "text": " It's like the government saying, well, today we have high taxes, but tomorrow we have low" }, { "end": 1492.04, "start": 1488.56, "text": " taxes and the day after that we have high taxes again." }, { "end": 1498.44, "start": 1492.04, "text": " And it just doesn't make sense for any government to do this." }, { "end": 1503.56, "start": 1498.44, "text": " What you should do is you should set taxes once at the beginning of the episode and then" }, { "end": 1508.8, "start": 1503.56, "text": " see how that turns out and then try to maximize your tax schedule." }, { "end": 1515.44, "start": 1508.8, "text": " Because all we're looking at, we're only ever looking at how the taxes are at the end, right?" }, { "end": 1520.48, "start": 1515.44, "text": " The things that we've examined are just the last taxes that the AI has issued." }, { "end": 1523.76, "start": 1520.48, "text": " We don't know the dynamic of what happens in between." }, { "end": 1528.72, "start": 1523.76, "text": " This might be super wild actually, what the AI does in between." }, { "end": 1534.44, "start": 1528.72, "text": " And I just don't see the framing as a sequential decision problem." }, { "end": 1538.6, "start": 1534.44, "text": " And I believe this is just an over engineered thing." }, { "end": 1543.24, "start": 1538.6, "text": " Because someone wanted a reason and here is the architecture, right?" }, { "end": 1548, "start": 1543.24, "text": " You see someone wanted a reason to put an LSTM in there." }, { "end": 1552.88, "start": 1548, "text": " Someone is thinking like, well, RL, that means like sequential decisions and so on." }, { "end": 1560, "start": 1552.88, "text": " And RL in this outer loop, the way I propose it would just be a one step per episode decision," }, { "end": 1561.28, "start": 1560, "text": " which is a bandit problem." 
}, { "end": 1564, "start": 1561.28, "text": " And as we all know, bandits are boring." }, { "end": 1567.72, "start": 1564, "text": " So they didn't want this to be a bandit problem." }, { "end": 1569.44, "start": 1567.72, "text": " They wanted to be a sequential problem." }, { "end": 1574.56, "start": 1569.44, "text": " And that's why they made this period thing, which I find dumb." }, { "end": 1581.12, "start": 1574.56, "text": " So another factor here, and I'm going to tell you how this relates to the to the weird rich" }, { "end": 1582.84, "start": 1581.12, "text": " people are taxed high." }, { "end": 1585.52, "start": 1582.84, "text": " Another factor here is look at this." }, { "end": 1590.52, "start": 1585.52, "text": " It's a CNN, an MLP, an LSTM and an MLP and the agent as well." }, { "end": 1594.6799999999998, "start": 1590.52, "text": " And I can tell you right now, the CNN has two layers." }, { "end": 1596.22, "start": 1594.6799999999998, "text": " Two." }, { "end": 1601.22, "start": 1596.22, "text": " And the LSTM has like 128 units in its hidden state." }, { "end": 1605.4, "start": 1601.22, "text": " So these are tiny, tiny models." }, { "end": 1610.52, "start": 1605.4, "text": " And it is not a model based RL, it's model free RLs, proximal policy optimization." }, { "end": 1619.6000000000001, "start": 1610.52, "text": " And the the the ability of these agents or planner to learn anything substantial here," }, { "end": 1626.38, "start": 1619.6000000000001, "text": " I believe is just not super duper well, right." }, { "end": 1632.0800000000002, "start": 1626.38, "text": " So the I believe that these are rather dumb agents." }, { "end": 1638.72, "start": 1632.0800000000002, "text": " And you can see the tax rates given by the planner is fed into the agent model." }, { "end": 1645.5600000000002, "start": 1638.72, "text": " But I don't think that the agent given such a small model can actually adjust to these" }, { "end": 1651.38, "start": 1645.5600000000002, "text": " inputs because you have to do some pretty good logic in order to from these tax brackets" }, { "end": 1654.7, "start": 1651.38, "text": " to determine how you should act right now." }, { "end": 1659.0800000000002, "start": 1654.7, "text": " What I think is happening is the agent just kind of is aware of its skill level and through" }, { "end": 1664.76, "start": 1659.0800000000002, "text": " its rewards, it's trying to maximize its future rewards." }, { "end": 1671.8400000000001, "start": 1664.76, "text": " And then when the government changes the tax rate, it will not, I'm almost positive it" }, { "end": 1675.76, "start": 1671.8400000000001, "text": " will not directly change its response to that." }, { "end": 1681.4, "start": 1675.76, "text": " But it will kind of observe that something's happening in the world and then adjust maybe" }, { "end": 1687.1200000000001, "start": 1681.4, "text": " a little bit its overall strategy, but not in that particular instance, and it will be" }, { "end": 1690.74, "start": 1687.1200000000001, "text": " delayed or it will be like an overall strategy." }, { "end": 1700.72, "start": 1690.74, "text": " And this might be one of the reasons why the tax brackets here might be screwed up because" }, { "end": 1707.8000000000002, "start": 1700.72, "text": " who says who says if I were this AI, what I could do is in period one through nine," }, { "end": 1711.96, "start": 1707.8, "text": " I make the taxes really low for the rich people." 
}, { "end": 1716.1599999999999, "start": 1711.96, "text": " So I just encourage everyone to make more money, right?" }, { "end": 1719.96, "start": 1716.1599999999999, "text": " Like come on, become more productive and I get the benefits of that." }, { "end": 1726.2, "start": 1719.96, "text": " And then in the last episode, last period, right, I just freaking jack up that final" }, { "end": 1727.2, "start": 1726.2, "text": " tax bracket." }, { "end": 1731.3799999999999, "start": 1727.2, "text": " It's like you, you have lots of money, give it to me." }, { "end": 1736.12, "start": 1731.3799999999999, "text": " And then you just redistribute what you got there to the poor people in the very last" }, { "end": 1740.8799999999999, "start": 1736.12, "text": " period and thereby you achieve your goal of this social welfare function." }, { "end": 1745.3999999999999, "start": 1740.8799999999999, "text": " But of course, this is not sustainable because all the rich people would just be kind of" }, { "end": 1749, "start": 1745.3999999999999, "text": " screwed through that and move down again, but it's the end of the episode." }, { "end": 1751.6, "start": 1749, "text": " So what are they going to do?" }, { "end": 1759.3999999999999, "start": 1751.6, "text": " So I think the fact how this is framed, that there are just two different ways to get coins." }, { "end": 1766.2800000000002, "start": 1759.4, "text": " But the fact that this is this periodical nature of the outer loop all might lead to" }, { "end": 1773.96, "start": 1766.2800000000002, "text": " something that becomes slowly more and more and more uninterpretable." }, { "end": 1774.96, "start": 1773.96, "text": " Still cool though." }, { "end": 1779.5600000000002, "start": 1774.96, "text": " All right, so the final thing, they do this with humans." }, { "end": 1781.48, "start": 1779.5600000000002, "text": " Yes, real humans." }, { "end": 1790.92, "start": 1781.48, "text": " So they let humans try it and they have this interface here and the humans, they behave" }, { "end": 1793.82, "start": 1790.92, "text": " quite differently from the AI." }, { "end": 1797.64, "start": 1793.82, "text": " So there are a few different things where the humans act." }, { "end": 1802.6200000000001, "start": 1797.64, "text": " But look at that here, AI economists, this is what the agents do, right?" }, { "end": 1805.72, "start": 1802.6200000000001, "text": " So this AI economist is the tax strategy." }, { "end": 1811.96, "start": 1805.72, "text": " They just take these developed tax strategies and let the humans be the agents so that the" }, { "end": 1817.04, "start": 1811.96, "text": " you just want to observe how the agents act and whether or not the tax strategies also" }, { "end": 1823.1200000000001, "start": 1817.04, "text": " work when it's real humans acting in this environment and not our agents." }, { "end": 1826.76, "start": 1823.1200000000001, "text": " So compare this to how the humans act." }, { "end": 1831.76, "start": 1826.76, "text": " The humans they just build their houses in like neat little packets or straight lines" }, { "end": 1833.76, "start": 1831.76, "text": " or stuff like this." }, { "end": 1836.24, "start": 1833.76, "text": " I just find it to be very funny." }, { "end": 1841.54, "start": 1836.24, "text": " Now there are some things lacking in the human environment which I find really important." }, { "end": 1845.8799999999999, "start": 1841.54, "text": " So first of all, they have no cost for moving, which I guess is minor." 
}, { "end": 1850.7, "start": 1845.8799999999999, "text": " But second of all, they have no trade." }, { "end": 1853.76, "start": 1850.7, "text": " And I think that just kills the whole experiment." }, { "end": 1857.96, "start": 1853.76, "text": " Because now of course what you're going to get is the wealth is just going to be proportional" }, { "end": 1863.44, "start": 1857.96, "text": " to how much you get coins per house, which is different for each agent, right?" }, { "end": 1870.74, "start": 1863.44, "text": " So to me that that is now a pointless experiment if you can't trade because the outcome is" }, { "end": 1872.0800000000002, "start": 1870.74, "text": " just predictable." }, { "end": 1879.1200000000001, "start": 1872.0800000000002, "text": " And I don't think that the human behavior changes in response to the different tax brackets." }, { "end": 1883.88, "start": 1879.1200000000001, "text": " I think they'll just do and however they can make money, they'll make money, they'll build" }, { "end": 1886, "start": 1883.88, "text": " more houses until it becomes unprofitable." }, { "end": 1887, "start": 1886, "text": " And that's it." }, { "end": 1892.88, "start": 1887, "text": " So I don't see the I don't see the value of these experiments, even though they show that" }, { "end": 1901.3600000000001, "start": 1892.88, "text": " again, the AI economist outperforms the other tax strategies in this equality times productivity" }, { "end": 1905.64, "start": 1901.3600000000001, "text": " metric and also in another metric that they measure." }, { "end": 1911.1000000000001, "start": 1905.64, "text": " The second problem I have is for the human experiments, they take this distribution here," }, { "end": 1915.2800000000002, "start": 1911.1000000000001, "text": " they say, well, the AI, this is one of the distributions that the AI came up with." }, { "end": 1921.44, "start": 1915.2800000000002, "text": " But you notice the lack of the F you poor people, and the lack of this big spike here" }, { "end": 1928.68, "start": 1921.44, "text": " for the rich people, which I find are one of the two features of the other distribution." }, { "end": 1933.04, "start": 1928.68, "text": " So I think there's quite a bit of variance in what this AI comes up with." }, { "end": 1934.88, "start": 1933.04, "text": " Or maybe it's just because this is periodical." }, { "end": 1941.06, "start": 1934.88, "text": " But this is really confusing because they show and discuss that other distribution." }, { "end": 1945.64, "start": 1941.06, "text": " And now all of a sudden, they say, well, we use this distribution that was also created" }, { "end": 1946.68, "start": 1945.64, "text": " by our AI." }, { "end": 1950.16, "start": 1946.68, "text": " And it seems to be qualitatively quite different." }, { "end": 1958.28, "start": 1950.16, "text": " In any case, let's look at how the humans behave under the different strategies." }, { "end": 1964.2, "start": 1958.28, "text": " So in the size formula, you'll see that the light blue person here is kind of spreading" }, { "end": 1966.52, "start": 1964.2, "text": " out a bit, probably playing correctly." }, { "end": 1969.68, "start": 1966.52, "text": " Everyone else is just neatly building their houses." }, { "end": 1970.68, "start": 1969.68, "text": " Humans are so territorial." }, { "end": 1974.64, "start": 1970.68, "text": " And most of them, they kind of stay in their little corner." }, { "end": 1976.6000000000001, "start": 1974.64, "text": " And they're like, this is my corner." 
}, { "end": 1981.48, "start": 1976.6, "text": " I'm going to build my houses here in a nice thing." }, { "end": 1987.1999999999998, "start": 1981.48, "text": " And under the AI economist, again, you don't really see a different thing just because" }, { "end": 1989.36, "start": 1987.1999999999998, "text": " the taxes are different." }, { "end": 1992.1999999999998, "start": 1989.36, "text": " The qualitative behavior is quite the same." }, { "end": 1994.8, "start": 1992.1999999999998, "text": " It's just building straight lines." }, { "end": 1997.98, "start": 1994.8, "text": " And I think the difference is more between the humans." }, { "end": 2000.36, "start": 1997.98, "text": " So I think it's not always the same humans." }, { "end": 2003.36, "start": 2000.36, "text": " And the difference might be more between the humans." }, { "end": 2009.6399999999999, "start": 2003.36, "text": " And you kind of see that humans clearly haven't really trained or discovered the optimal strategy." }, { "end": 2011.3999999999999, "start": 2009.6399999999999, "text": " They're just doing something." }, { "end": 2015.08, "start": 2011.3999999999999, "text": " And what you're seeing is just a result of the taxation." }, { "end": 2016.08, "start": 2015.08, "text": " It's not different behavior." }, { "end": 2018.7199999999998, "start": 2016.08, "text": " And this here, this is the best." }, { "end": 2023.08, "start": 2018.7199999999998, "text": " Okay, watch the on the bottom right, the human." }, { "end": 2030, "start": 2023.08, "text": " They're just first they do something, they're just walling up the other players." }, { "end": 2034.04, "start": 2030, "text": " And look, this is this is the best." }, { "end": 2037.52, "start": 2034.04, "text": " I am going to build a big beautiful wall." }, { "end": 2041.72, "start": 2037.52, "text": " And I'm going to have the orange guy pay for it." }, { "end": 2043.88, "start": 2041.72, "text": " It's Donald Trump in the game." }, { "end": 2044.88, "start": 2043.88, "text": " Amazing." }, { "end": 2050.4, "start": 2044.88, "text": " And look at the end, they actually managed to lock in the other players so they can't" }, { "end": 2052.2, "start": 2050.4, "text": " move anymore." }, { "end": 2054.44, "start": 2052.2, "text": " Donald Trump wins." }, { "end": 2055.44, "start": 2054.44, "text": " Amazing." }, { "end": 2061.88, "start": 2055.44, "text": " And though, actually, the yellow player appears to win economy wise." }, { "end": 2066.6, "start": 2061.88, "text": " But what do you want with lots of money if you can't move?" }, { "end": 2072.54, "start": 2066.6, "text": " So I again, I find these human experiments to be rather pointless here because you disable" }, { "end": 2077.32, "start": 2072.54, "text": " trade and you don't train the humans to find a good strategy." }, { "end": 2083.86, "start": 2077.32, "text": " Alright, but in that, I find the entire paper to be pretty cool code is going to be released," }, { "end": 2088.8, "start": 2083.86, "text": " they promise and they have checked that they have no ethical problems." }, { "end": 2093, "start": 2088.8, "text": " Of course, I invite you to check out the paper." }, { "end": 2099.76, "start": 2093, "text": " If you like content like this, please subscribe, share and leave a comment of what you think." }, { "end": 2114.44, "start": 2099.76, "text": " Thank you so much for listening and bye bye." } ]
jhCInVFE2sc
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Deconstructing Lottery Tickets: Zeros, Signs, and the Supermask (Paper Explained)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "initialization", "mask", "arxiv", "uber", "training", "subnetwork", "overparameterization", "zero", "frozen", "weights" ]
This paper dives into the intrinsics of the Lottery Ticket Hypothesis and attempts to shine some light on what's important and what isn't. https://arxiv.org/abs/1905.01067 Abstract: The recent "Lottery Ticket Hypothesis" paper by Frankle & Carbin showed that a simple approach to creating sparse networks (keeping the large weights) results in models that are trainable from scratch, but only when starting from the same initial weights. The performance of these networks often exceeds the performance of the non-sparse base model, but for reasons that were not well understood. In this paper we study the three critical components of the Lottery Ticket (LT) algorithm, showing that each may be varied significantly without impacting the overall results. Ablating these factors leads to new insights for why LT networks perform as well as they do. We show why setting weights to zero is important, how signs are all you need to make the reinitialized network train, and why masking behaves like training. Finally, we discover the existence of Supermasks, masks that can be applied to an untrained, randomly initialized network to produce a model with performance far better than chance (86% on MNIST, 41% on CIFAR-10). Authors: Hattie Zhou, Janice Lan, Rosanne Liu, Jason Yosinski Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Hi there! Today we're looking at Deconstructing Lottery Tickets: Zeros, Signs and the Supermask by Hattie Zhou, Janice Lan, Rosanne Liu and Jason Yosinski of Uber AI. So this is a follow-up paper to the original paper that was called The Lottery Ticket Hypothesis. I have done a video on that paper, so if you don't know what the lottery ticket hypothesis is, I suggest you go watch it. Just very quickly, the lottery ticket hypothesis states the following. If you have a neural network that has, let's say, these layers and some weights, the lottery ticket hypothesis states that there is a subset of weights, with significantly fewer weights than the original network, that is already enough for this network to be trained successfully. So there are sub-networks here. If you train them, you will get the same or even higher accuracy than if you train the full network. Now, the crucial part here is that the sub-network must be initialized at the same place as the full network. With that goes the lottery ticket algorithm. The lottery ticket algorithm is the following. First, train the full network. Second, select the largest weights, the largest weights at the end of training. Third, reset those weights to their initial value, and this needs to be the same initial value at which you initialized them at step one. And then train. So once you have the desired weights, these ones, right, you need to reset them to their original value before training, and then you can retrain just the small sub-network. That will work the same or better than the original network. So it's basically a pruning technique. This is the lottery ticket hypothesis, the proposition that these sub-networks exist, and the lottery ticket algorithm is the process by which you obtain these so-called winning tickets. Again, the full video will make this clearer.
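Just to make that procedure concrete before we go on, here is a minimal sketch in plain numpy. To be clear, this is my illustration, not the paper's code: the train function is only a stand-in for a real SGD loop, and the keep fraction p is an arbitrary example value.

import numpy as np

rng = np.random.default_rng(0)

def train(weights, mask, steps=100):
    # Stand-in for real SGD training. We only move the unmasked weights,
    # and only by small steps, mimicking how SGD tends to stay near init.
    for _ in range(steps):
        weights = weights + mask * 0.01 * rng.standard_normal(weights.shape)
    return weights

w_init = rng.standard_normal(1000)                   # step 0: random initialization
w_final = train(w_init, np.ones_like(w_init))        # step 1: train the full network

p = 0.2                                              # keep the top 20 percent (example value)
threshold = np.quantile(np.abs(w_final), 1 - p)
mask = (np.abs(w_final) >= threshold).astype(float)  # step 2: select largest final weights

w_ticket = w_init * mask                             # step 3: rewind kept weights to init, prune the rest
w_retrained = train(w_ticket, mask)                  # step 4: retrain only the winning ticket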
So this paper is going to shine some light on different aspects of these winning tickets, on what is really important and what isn't, and how you can obtain even better ones. They often show the following 2D plots, and we'll spend a little time understanding them. There are two dimensions here, and each point in these plots represents a single weight in the neural network. So a single weight is just one floating point number, right? On the x-axis you have wi, which is the initial value of the weight. Now this is randomly initialized, right? So here is zero, and you randomly initialize these weights in the neural network, so this number is random. The wf is on the y-axis, and that is the final value of the weight. This is after training, right? This is trained. So if a point is, for example, here, that means it had this value of 3 before training, and then after training it went to a value of 1, right? It got initialized at 3, and SGD thought, no, it's better at 1. Now, you see that there is an ellipse here. Why is there an ellipse? That's because very often the initial and the final weight values are positively correlated. So if a weight was initially positive, it tends to also be positive at the end. And that's just because of the nature of SGD. It takes little steps and basically tries to do as little effort as possible in order to reach its goal, right? It always just goes downhill in a greedy fashion, and that means, if it can, it will probably elect to not move the weights too far from their initial position, or from the position of the previous step. So that's why they're correlated, and that's why you have an ellipse. But they don't have to be; that's just the authors superimposing their kind of view. So then, what happens during this lottery ticket pruning? In the original algorithm, you had the following pruning technique: you would select all the weights that at the end of training had a certain magnitude or higher. And that's this here. On the y-axis, which is the final weight, you define a threshold, and everything that is smaller in magnitude than the threshold you mask to zero, right? You prune it away; you don't retain it. But everything that is above that, either positively or negatively, you mask to one, which means that you retain it in the winning ticket. So the light regions here will be the regions where you set the weight to zero, or mask it to zero, and the dark regions will be the weights that you retain. So if a weight was initially here and then, over subsequent training steps, traveled over this line, we would retain it, because its final value was higher than the threshold. All right. So this paper generalizes the lottery ticket algorithm. It states it in a bit of a convoluted way, but just to go quickly over it: first, initialize a mask to all ones, and randomly initialize the parameters of the network. Now, multiplying with the mask here is a bit superfluous because it's all ones, but they do it for consistency. Then they say: train the parameters of the network to completion. Denote the initial weights before training by wi, that's what we saw in the plot, and the final weights by wf. Then here is the first generalization: use the mask criterion M to produce a masking score for each currently unmasked weight, rank the weights in each layer by their scores, and set the mask value for the top p percent to one and the bottom 100 minus p percent to zero. So this masking criterion is now how you select the weights that end up in the winning ticket, the weights that you want to be part of that trainable sub-network. In the original lottery ticket algorithm, this was simply the absolute value of the final weight, as you can see here. Then they say there is a mask-one action and a mask-zero action, which describe what happens to the weights that are part of the winning ticket and what happens to the weights that aren't. To the second one first: the weights that aren't part of the winning ticket were, in the original algorithm, just pruned, set to zero and frozen during any subsequent training. That's what we looked at before. But you can think of different things, like setting them to a constant value and just not training them. The common thing is that they are masked to zero, so they will not be trained, but you can still retain them at, like, a constant value or a random value, something like this. And the same for the mask-one action. All it means is that these weights will be trained. In the original algorithm, they were reset to their initial values and marked for training in the next round, but here too you can think of different things. So this paper experiments with all three of these steps, step two, step three and step four, to decide what's important and what isn't.
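Schematically, the generalized algorithm with its three pluggable pieces might look like this; again, a rough numpy sketch where the criterion and the two actions are just functions you swap in (the function names are mine, not the paper's, and I threshold globally here where the paper ranks per layer):

import numpy as np

def generalized_lottery_ticket(w_init, w_final, criterion,
                               mask_one_action, mask_zero_action, p=0.2):
    # Score every weight and keep the top p fraction.
    scores = criterion(w_init, w_final)
    threshold = np.quantile(scores, 1 - p)
    mask = scores >= threshold
    # Apply one action to the kept weights and another to the pruned ones.
    w_next = np.where(mask,
                      mask_one_action(w_init, w_final),
                      mask_zero_action(w_init, w_final))
    return w_next, mask

# The original lottery ticket algorithm as a special case:
original = lambda wi, wf: generalized_lottery_ticket(
    wi, wf,
    criterion=lambda wi, wf: np.abs(wf),                  # large final magnitude
    mask_one_action=lambda wi, wf: wi,                    # rewind to init
    mask_zero_action=lambda wi, wf: np.zeros_like(wi))    # prune to zero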
So first, they go over the mask criteria. This is the criterion for how we select which weights we should retain and which ones we shouldn't. So think of it: we have our full network, we have trained it to completion, and for each weight we know its initial and its final value. Based on that, we now need to make a decision: should this particular weight be included in the winning ticket or shouldn't it? The original paper, as we said, simply took the absolute value of the final weight, completely ignoring the initial weight. So they experiment with different things, and you can see this in the plots here. Large final is what we had, and we saw this. Small final is this score here, which just retains the weights that have a small final value. You see, the y threshold stays the same, but it is inverted: we retain the weights that are inside the threshold. This is a control criterion, just to do the opposite of what the initial paper did. Large init ignores the final value and simply goes on the initial value of the weight; as you can see, now the threshold is on the x-axis. And the same for small init, the control case for that. Then there is large init, large final, where you say: I only retain a weight if it was both large at initialization and large in its final value. So it's an additional criterion on top of the original paper's. Now of course these are ranking scores, so you won't actually have the same threshold; you will simply make the thresholds lower, and then that region up here that you retain will become larger, to reach the same percentage of weights retained. That's something you have to keep in mind. The control case is small init, small final. Then the interesting case here is magnitude increase, which scores weights by how much their magnitude grew, so everywhere the final weight is larger in magnitude than the initial weight. This is depicted here: if a weight was originally here, it just needs to end up larger, so basically it needs to be above the diagonal. The diagonal contains the weights that are as high in the final trained version as they were at initialization, and everything above this here, or of course below this here, you need to think of a second diagonal, so everything in this region and in this region, magnitude-wise, will fulfill that criterion. And then movement simply describes how far they moved. This is like the magnitude increase, except you don't take the absolute values before you subtract, so it's basically everything above this one diagonal. So you don't look at how much the magnitude increased; a weight that goes from very much negative to just a little positive will already qualify, because it moved very far. And then random: you simply mask at random; this is a control case. So our focus is going to be on the following: the large final, which is the original, right; the large init might be interesting; the large init, large final might be interesting; and the magnitude increase might be interesting.
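As score functions, these criteria are all one-liners. Here is a plausible rendering; note that the combination rule for large init, large final is my guess (the element-wise minimum), and the paper may scale the two terms differently:

import numpy as np

def large_final(wi, wf):            return np.abs(wf)       # original criterion
def small_final(wi, wf):            return -np.abs(wf)      # control
def large_init(wi, wf):             return np.abs(wi)
def small_init(wi, wf):             return -np.abs(wi)      # control
def large_init_large_final(wi, wf): return np.minimum(np.abs(wi), np.abs(wf))
def magnitude_increase(wi, wf):     return np.abs(wf) - np.abs(wi)
def movement(wi, wf):               return np.abs(wf - wi)
def random_score(wi, wf):           return np.random.default_rng(0).random(wi.shape)  # control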
Now what do they find? We'll go to the plot with the most effects. The star here is simply a significance indicator, so disregard the stars for now. The magnitude increase tends to perform the best, as you can see; compare that to large final, which is the original algorithm. Magnitude increase tends to perform better than large final. Interestingly, though, if you look across the experiments, it doesn't do so consistently or often, and there are these effects here when you go to really small networks. And the stars, as I said, disregard them. They are significance indicators for a t-test, but the t-test is just across five samples, and what you're seeing here is not a standard deviation but the min and max over the five runs. So I see there might be an effect here, but I'm absolutely not trusting the claim that this is significant, because it's just in one plot, on one network, on one data set. So if you want to make the claim that the magnitude increase works better than the large final: maybe. What you can say for sure is that things like large init don't work; we don't really care about those. Interestingly, large init, large final doesn't work as well either, as you can see here; it kind of goes below these. I think that's just what I said: by imposing two thresholds, each of them needs to be lower than the single original threshold. So it's not really the fact that it's large init, large final; it's the fact that these large finals have a lower threshold than the ones that only threshold on large final, and therefore it's just an additional, irrelevant criterion. So those are the results, and basically, in my opinion, it tends to be a good criterion to select the large final weights, and I don't trust this magnitude increase thing too much. I think it pretty much measures the same thing as the large final, and I don't really see that it outperforms it. All right. Then they go over the mask-one actions, and the mask-one actions, remember, are how we should treat the weights that we have selected to be in the winning ticket. Now we can do the following things. We can re-init, which basically means we set them back to their values at the beginning of the optimization procedure; that is what the original algorithm does. We can reshuffle, which means we take all the weights that are masked to one and just shuffle them around. That guarantees that the same weight distribution is still followed, but each weight is no longer at its own original value. So if this performs well, it could just mean that it is about the distribution of initial weights and not the exact configuration. And then constant just means we set them to some constant: either a negative or a positive constant, and the weights that are masked to zero become zero. So here are the results. Now, as you can see, there are a bunch of things performing at about the same level, which are the red, orange and blue curves here. The blue curve is rewind with large final; that is the original algorithm. The orange is reshuffle with init sign, and the red is constant with init sign. Now what does init sign mean? You see that these things perform well if they have this init sign instead of rand sign. Rand sign means a random sign: for the constant, it's 50-50 whether each weight gets plus or minus alpha, and for the reshuffle, we don't care how we shuffle the weights, as long as we shuffle the same set of weights somehow. With init sign, what they mean is that they make sure the sign of the weight that is re-initialized equals the sign that weight had in the final trained network. So they're basically saying: this weight was positive in the winning ticket, so we should initialize it to a positive sign. That means that the alpha, in this case, is going to be a plus alpha if the original weight was a positive weight, and a negative alpha if the original weight was a negative weight. Also, with init sign the shuffling only happens among the positive and among the negative weights separately. Now, this might actually be about the initial sign rather than the final one, but they are extremely correlated, so there shouldn't be a big difference.
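A rough sketch of these mask-one actions, with the init-sign variants, could look like this. Note that alpha here is just a placeholder constant (I believe the paper ties it to the scale of the original initializer), and wi, wf, mask are 1-D arrays with mask being the boolean keep-mask:

import numpy as np

rng = np.random.default_rng(0)

def rewind(wi, wf, mask):
    # Original algorithm: kept weights go back to their exact initial values.
    return wi

def reshuffle_init_sign(wi, wf, mask):
    # Permute the kept initial values, but only among weights of the same sign,
    # so every kept weight keeps its original sign.
    mask = np.asarray(mask, dtype=bool)
    out = wi.copy()
    for same_sign in (wi > 0, wi <= 0):
        idx = np.where(mask & same_sign)[0]
        out[idx] = rng.permutation(wi[idx])
    return out

def constant_init_sign(wi, wf, mask, alpha=0.05):
    # Every kept weight becomes +alpha or -alpha, matching its original sign.
    return np.sign(wi) * alpha

def constant_rand_sign(wi, wf, mask, alpha=0.05):
    # Control: +alpha or -alpha with a 50-50 coin flip per weight.
    return alpha * rng.choice([-1.0, 1.0], size=wi.shape)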
So these all perform at about the same level, which again is interesting. The authors here claim that it's all about the sign, so the important part seems to be the sign. I disagree with that. I think what's happening is that if you do these things, you automatically end up closer to the original initialization. Let's give the benefit of the doubt here and say this is the initial value: if you use plus alpha only where the initial weight was positive, the two will be closer together. Those two things are closer together, in expectation, than a random plus or minus alpha. Also with the reshuffle: what this init sign thing ensures is that your initialization is closer to this one here, so it will be more like the large final initialization where you rewind the weights. I don't think you can make the claim that it's just about the sign. I would guess that any algorithm that makes the weights closer to this original lottery ticket initialization will also perform well. What is true, and that's what the authors say, is that the basin of attraction seems to be much larger than having to exactly hit the original weights. But I think this effect here is not at all about the sign, and just about the fact that you make them closer; by matching the sign, you already make them closer in expectation, and that's why it might work. Also: significance testing at a 0.005 level with five runs? That's a no. All right, so the last thing they do is the mask-zero actions: basically, how do we treat the weights that are not part of the trainable winning ticket, the ones we want to get rid of? So they experiment with different things. They say, okay, here is the original network; it's at a certain accuracy. These are the black lines, and then the blue lines are: set the mask-zero weights to zero. So forget about them, don't include them, which is what the original algorithm did. That's why you see this plot right here. These are the blue lines, and as you can see, as in the original paper, this outperforms the original network at first, and then, as you prune more and more, until you have whatever, 1.2 percent of the weights, it finally gets worse. Now you can take different actions. One of them is: set them to their initial values. And here they denote that by the numbers they put here. So this notation means: whatever you don't mask, set it to the initial value, and set everything else to zero. And this notation means: set these to the initial value and also set those to the initial value, so set everything to the initial value, and just don't train the ones that you mask. So now you end up with a network where some of the connections are simply frozen at their original value. And that, as you can see, often performs worse. It's below the original algorithm where you set them to zero; especially here you can see it. This is very interesting, I think, and the authors claim: well, these weights were masked because they were small in magnitude, so the optimal value for those weights seems to be close to zero. So by setting them to zero, the original lottery ticket algorithm basically freezes them at their optimal position, and if you freeze them at any other position, away from zero, that means you have a less optimal configuration. And I can believe that, I can follow that. Not fully convinced, but I can follow.
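In code, the mask-zero actions compared here are trivially simple; a sketch, including the variant that comes up next:

import numpy as np

def mask0_set_zero(wi, wf):
    # Original algorithm: pruned weights are set to zero and frozen there.
    return np.zeros_like(wi)

def mask0_freeze_init(wi, wf):
    # Alternative: pruned weights are frozen at their initial values instead.
    # Per the argument above, this tends to hurt, because zero is roughly
    # where these small-magnitude weights wanted to be anyway.
    return wi

def mask0_zero_if_moved_toward_zero(wi, wf):
    # The variant discussed next: freeze at zero only if training moved the
    # weight toward zero, otherwise freeze it at its initial value.
    return np.where(np.abs(wf) < np.abs(wi), 0.0, wi)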
So they come up with a cool experiment, which I think is this: they say, for all the weights that we mask, we're going to set the ones below this line to zero, and the ones above it to their initial value. Basically, say this axis is the magnitude and this is the training steps. If a weight started out here and during training moved up in magnitude, but is still below the masking threshold, right, the masking threshold is here, so it's not included in the ticket, but it moved up, we'll set it to its initial value. But if it moved down, so it's lower, then we'll set it to zero. So you have an additional criterion on how it moved during training; that's the line here. You can see this here, and it often performs better than the original ticket algorithm. Not by much, and mainly in the regions where you really have few weights left. Then they come up with a further variant where they do the same thing to the trainable weights. So for these trainable weights up here, they say: okay, we're going to zero out the ones that actually moved down during training. Now these are going to be very few, but some of them will move down during training without going below the threshold, and those we set to zero, because they were too high initially. And that performs even better sometimes, right? And again, I don't see this as an algorithm where the point is set-to-zero versus set-to-initial. It's simply that you're again setting something closer to its optimal value. If a trainable weight went down a bit during training, that means its optimal value is lower than it originally was, and it can just be that by setting it to zero, you end up at a point that is closer in magnitude to the optimal value than the initial point was. So my comment here is that a lot of these things, I think, are a bit over-interpreted by the authors, and ultimately it's just about getting the weights close to where their optimal value is, either at the beginning or at the end of training. And I think the original lottery ticket paper already did a good job analyzing that. The last section here they call supermasks. A supermask is a thing where they say: hey, if we have a mask, can't we just apply it to the original, untrained network? And how will the network perform when we do that? Now, if you simply take a network with random weights, let's say on MNIST, you have a 10% chance, because there are 10 classes, right? So it will perform at 10% accuracy. If you randomly mask a bunch of weights, then again you'll stay at 10%. But if you apply the large final mask, you will already get some accuracy. Really interesting. So without training, just by applying the mask, you get some accuracy. And again, we can interpret this simply by the fact that the masking action masks weights that are not part of the winning ticket and retains weights that are part of it, and weights tend to not move that much under SGD.
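To get a feel for why applying a mask alone can beat chance, here is a small self-contained toy. Everything in it is made up for illustration: a linear "network", synthetic data, and a pretend training run that keeps part of the init, the way SGD tends to. It is a modeling assumption, not a reproduction of the paper's experiment.

import numpy as np

rng = np.random.default_rng(0)

# Synthetic 10-class problem with a known good weight matrix W_true.
W_true = rng.standard_normal((20, 10))
X = rng.standard_normal((2000, 20))
y = np.argmax(X @ W_true, axis=1)

def accuracy(W):
    return float((np.argmax(X @ W, axis=1) == y).mean())

W_init = rng.standard_normal((20, 10))
# Pretend training: the result keeps a chunk of the init (SGD stays close
# to where it started) plus the signal.
W_final = 0.5 * W_init + W_true

# Large-final supermask: keep the top 30 percent of |W_final|.
mask = np.abs(W_final) >= np.quantile(np.abs(W_final), 0.7)

print(accuracy(W_init))         # roughly chance level (about 0.1)
print(accuracy(W_init * mask))  # typically lands well above chance, with no training at all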
So basically, the masked network is at a place closer to its optimal value than the unmasked network, and therefore it performs better. I think their findings are fairly easy to interpret in this way. And the last thing they do is say: can we optimize these masks? Can we train the mask? So rather than just training the network fully and determining the mask from there, can we take that mask and further optimize it? And they do: they optimize this mask by SGD. Of course, you have to make it continuous during training to do that, but what you end up with is a binary mask, and they say it works better than the original mask. So, interestingly, if you apply the mask of the lottery ticket just at the beginning of training, without training the network, you can see here that it already reaches, whatever, 40 percent accuracy on MNIST, and it also reaches non-negligible accuracy on CIFAR-10, so 20 percent. If you do a special thing where you also check that the sign agrees, so the final and the original weight have the same sign, then you get a much higher performance. Again, this is the untrained network. And they also do this with constant values of the same sign, the same as we saw before. And again, they make this big deal about the sign here. I really think this is just because you're closer to the optimum when you match the sign, but that's just my opinion. And then, if they train the mask, they get even higher performance. So you see here: the top is on MNIST and the bottom is on CIFAR-10. If you just apply the mask, you get better-than-random performance. If the mask also agrees with the signs, so you have a sign criterion where you say, I'm only going to take initial weights into the mask if they have the same sign as the end weights, then you get a better-performing initial sub-network. And if you train the mask, having never trained the weights, you can get an even better performance. And I mean, that's somewhat unsurprising, because now you train the mask. So I don't think that's too surprising.
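Conceptually, training the mask while freezing the weights looks something like this. This is a very loose sketch, not the paper's implementation: each weight gets a real-valued score, the effective weight is the frozen weight times a (here deterministic) sigmoid of the score, and at test time the mask is binarized. If I remember correctly, the paper samples Bernoulli masks from these probabilities instead, which this simplification glosses over.

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

W = rng.standard_normal((20, 10))   # frozen random weights, never trained
s = np.zeros_like(W)                # trainable per-weight mask scores

def forward(X, s):
    # Soft mask during training, so gradients can flow into the scores s.
    return X @ (W * sigmoid(s))

def test_forward(X, s):
    # Hard binary mask at evaluation time.
    return X @ (W * (sigmoid(s) > 0.5))

# Training would update only s (e.g. by SGD on a classification loss),
# leaving W untouched; the learned binary mask is the supermask.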
But what you can see here is that the effect on MNIST appears to be very high between these two, while the effect on CIFAR-10 seems to be different: low between these two, and then high between these two. So I wonder if there's a big dependence on the actual task here. They also use this dynamic weight rescaling, which is basically a kind of rescaling trick. And then they put up the following table. Here you have the different networks; here you have the original trained weights and the performance they reach on the task; and here you have the performance they reach with a learned mask and dynamic weight rescaling. And you can see that on MNIST this even outperforms the original trained weights, simply by learning the mask. You can also see that on CIFAR-10 this effect is not present, and I've already seen a paper stating that on, like, ResNets and ImageNet, the lottery ticket effect isn't really measurable. So I want to pose another hypothesis here, and the hypothesis is the following: you may find these winning tickets, the ones that perform well at initialization or train well, if the task is sufficiently easy, and the easier the task, the more you can do with them. MNIST is so easy that you simply have to mask out some of the initial weights and you will already perform extremely well, whereas CIFAR-10 is harder, and ImageNet is harder again, and I believe that as the tasks get harder and harder, these methods will work less and less, to the point where they don't work anymore. That's my opinion. So basically, my opinion is that it appears to be very much about how close you are to some kind of optimal lottery ticket. I think the experiments here are very cool and very well designed, but I think they're often a bit over-interpreted. Alright, that was it for me. I invite you to check out the paper, and bye bye.
[ { "end": 8, "start": 0, "text": " Hi there! Today we're looking at deconstructing lottery tickets, zeros, signs and the super mask by Hadi Jo, Janis Lan," }, { "end": 17, "start": 8, "text": " Rosanne Liu and Jason Yosinski of Uber.ai. So this is a follow-up paper to the original paper that was called" }, { "end": 24, "start": 17, "text": " the lottery ticket hypothesis. I have done a video on that paper, so if you don't know what the lottery ticket" }, { "end": 32, "start": 24, "text": " hypothesis is, I suggest you go watch it. Just very quickly, very quickly, the lottery ticket hypothesis states" }, { "end": 40, "start": 32, "text": " the following. If you have a neural network that has, let's say it has these layers and has some weights," }, { "end": 49, "start": 40, "text": " lottery ticket hypothesis states that there are a subset of weights that are significantly less number of weights" }, { "end": 61, "start": 49, "text": " than in the original network. A subset of weights are already enough for this network to be trained in a successful" }, { "end": 70, "start": 61, "text": " fashion. So there are sub networks here. If you train them, you will get the same or even higher accuracy" }, { "end": 81, "start": 70, "text": " than if you train the full network. Now, the intrinsic part here is that the sub network must be initialized" }, { "end": 88, "start": 81, "text": " at the same place as the full network. So with that goes the lottery ticket algorithm. The lottery ticket" }, { "end": 100, "start": 88, "text": " algorithm is the following. First, train the full network. Second, select the largest weights. Select the" }, { "end": 116, "start": 100, "text": " largest weights at the end of the training. And then third, reset the weights to their initial value. And this needs to be" }, { "end": 127, "start": 116, "text": " the same initial value at which you initialize them at step one. And then train. So once you have the desired" }, { "end": 136, "start": 127, "text": " weights, these ones, right, you need to reset them to their original value before training. And then you can" }, { "end": 144, "start": 136, "text": " retrain just the small sub network. And that will work the same or better than the original network. So it's basically" }, { "end": 151, "start": 144, "text": " a pruning technique. So this is the lottery ticket hypothesis, the fact that there are these sub networks or the" }, { "end": 160, "start": 151, "text": " proposition. And the lottery ticket algorithm is the process by which you obtain these so-called winning" }, { "end": 172, "start": 160, "text": " tickets. Again, the full video will make this clearer. So this paper is going to shine some light on different" }, { "end": 180, "start": 172, "text": " aspects of these winning tickets and what is really important and what isn't and how you can obtain even better" }, { "end": 192, "start": 180, "text": " ones. So they often show the following 2D plots here. And these 2D plots will spend like a little time understanding" }, { "end": 200, "start": 192, "text": " them. There is two dimensions here and each one of these plots represents a single weight in the neural network." }, { "end": 209, "start": 200, "text": " So a single weight is just one floating point number, right? On the x-axis you have wi which is the initial value" }, { "end": 216, "start": 209, "text": " of the weight. Now this is randomly initialized, right? 
So here is zero and you randomly initialize these" }, { "end": 229, "start": 216, "text": " weights in the neural networks. So this number is random. The wf is on the y-axis and that is the final value of" }, { "end": 244, "start": 229, "text": " the weight. This is after training, right? This is trained. So if a point is for example here, that means it had this" }, { "end": 253, "start": 244, "text": " value of 3 before training and then after training it went to a value of 1, right? So it got initialized at 3 and SGD" }, { "end": 262, "start": 253, "text": " thought no it's better at 1. Why? So you see that there is an ellipsis here. Why is there an ellipsis? That's" }, { "end": 272, "start": 262, "text": " because very often the initial and the final weight value are positively correlated. So if a weight initially" }, { "end": 281, "start": 272, "text": " was positive it tends to also be positive at final. And that's just because of the nature of SGD. It just takes" }, { "end": 290, "start": 281, "text": " little steps and basically tries to do as little effort as possible in order to reach its goal, right? It always" }, { "end": 299, "start": 290, "text": " just goes downhill in a greedy fashion and that means probably if it can it will elect to not move the weights" }, { "end": 306, "start": 299, "text": " too far from their initial position or the position of the previous step. So that's why they're correlated and that's" }, { "end": 315, "start": 306, "text": " why you have an ellipsis. But they don't have to be. That's just the author superimposing their kind of view." }, { "end": 325, "start": 315, "text": " So then they say what happens during these lottery ticket pruning is in the original algorithm, right? You had" }, { "end": 334, "start": 325, "text": " the following pruning technique. You would select all the weights that at the end of the training had a certain" }, { "end": 341, "start": 334, "text": " magnitude or higher. And that's this here. So on the y-axis which is the final weight you define a threshold" }, { "end": 352, "start": 341, "text": " here and everything that is smaller magnitude than the threshold you mask to zero, right? You prune away." }, { "end": 360, "start": 352, "text": " You don't want to retain. But everything that is above that either positively or negatively you mask to one" }, { "end": 371, "start": 360, "text": " which means that you retain it in the winning ticket. So the light regions here will be the regions where you set" }, { "end": 381, "start": 371, "text": " the weight to zero or you mask it to zero and the dark regions will be the weights that you retain. So if a weight" }, { "end": 394, "start": 381, "text": " was initially here but then it traveled to here during the training then sorry, of course, initially," }, { "end": 400, "start": 394, "text": " well if the y-axis is the process during training and we'll visualize this here then we'll say initially it was" }, { "end": 410, "start": 400, "text": " here, right? Initially it was here and then in subsequent steps it traveled over this line. Then we would retain" }, { "end": 420, "start": 410, "text": " it because its final value was higher than the threshold. All right. So this paper generalizes the lottery ticket" }, { "end": 428, "start": 420, "text": " algorithm. 
It states it in a bit of a convoluted way but just to go quickly over it, it says first initialize a mask" }, { "end": 435, "start": 428, "text": " to all ones, randomly initialize the parameters of the network like this. Now to convolve it with the mask here" }, { "end": 442, "start": 435, "text": " is a bit superfluous because it's all ones but they do it for consistency. Then they say train the parameters" }, { "end": 449, "start": 442, "text": " of the network to completion. Denote the initial weights before training by wi, that's what we saw in the plot," }, { "end": 459, "start": 449, "text": " and the final weights by wf. Then here is the first generalization. Use the mask criterion m to produce a masking" }, { "end": 465, "start": 459, "text": " score for each currently unmasked weight. Rank the weights in each layer by their scores. Set the mask value" }, { "end": 475, "start": 465, "text": " for the top p percent to one and the bottom 100 minus p percent to zero. So this masking criterion here is now" }, { "end": 488, "start": 475, "text": " how you select the weights to be in the winning ticket basically. So you select the weights that you want" }, { "end": 497, "start": 488, "text": " to be part of that trainable sub network. In the original lottery ticket algorithm this was simply the absolute value" }, { "end": 506, "start": 497, "text": " as you can see here. Then they say there is a mask one action and a mask zero action which is describing what" }, { "end": 515, "start": 506, "text": " happens to the weights that are part of the winning ticket and what happens to the weights that aren't part of the" }, { "end": 522, "start": 515, "text": " winning ticket. Now to the second one first. The weights that aren't part of the winning ticket in the original algorithm" }, { "end": 528, "start": 522, "text": " they were just pruned, set to zero and frozen during any subsequent training. That's what we looked at before." }, { "end": 534, "start": 528, "text": " But you can think of different things like setting them to a constant value and just not training them." }, { "end": 543, "start": 534, "text": " The common thing is that they are masked to zero so they will not be trained but you can still kind of retain them" }, { "end": 550, "start": 543, "text": " at like a constant value or a random value something like this. And same for the mask one. All it means here" }, { "end": 560, "start": 550, "text": " is that they will be trained. In the original algorithm these weights were reset to their initial values" }, { "end": 568, "start": 560, "text": " and marked for training in the next round but you can think of different things. So this paper will experiment" }, { "end": 575, "start": 568, "text": " with all of these three steps basically. Step two, step three and step four and decide on what's important" }, { "end": 583, "start": 575, "text": " and what isn't. So first they go with the mask criteria. This is the criteria how do we select which weights" }, { "end": 591, "start": 583, "text": " we should retain and which ones we shouldn't. So think of it. We have our full network, we have trained it" }, { "end": 599, "start": 591, "text": " to completion and for each weight we know its initial and its final value. And based on that we now need to" }, { "end": 606, "start": 599, "text": " make a decision. Should this particular weight be included in the winning ticket or shouldn't it?" 
}, { "end": 613, "start": 606, "text": " The original paper as we said simply took the absolute value of the final weight completely ignoring the original weight." }, { "end": 625, "start": 613, "text": " So they do experiment with different things. First and you can see this in the plot here. Large final is what we had" }, { "end": 634, "start": 625, "text": " and we saw this. Small final is this score here which is just retain the weights that have a small final value." }, { "end": 642, "start": 634, "text": " You see the y threshold stays the same but it is inverted. We retain the weights that are inside of the threshold." }, { "end": 653, "start": 642, "text": " This is a control criterion just to kind of do the opposite of what the initial paper did. Large init ignores the final value" }, { "end": 663, "start": 653, "text": " and simply goes on the initial value of the weight. As you can see here now the threshold is on the x axis" }, { "end": 672, "start": 663, "text": " and the same for small init, the control case for that. Then there is a large init large final where you say" }, { "end": 681, "start": 672, "text": " okay I only retain a weight if it both was large at initialization and large in the final value." }, { "end": 692, "start": 681, "text": " So it's an additional criterion to the original paper. Now of course these are ranking scores so you won't actually have the same threshold" }, { "end": 699, "start": 692, "text": " you will simply make the thresholds lower and then that region up here that you retain will become larger" }, { "end": 706, "start": 699, "text": " to reach the same percentage of weights retained. So that's something you have to keep in mind." }, { "end": 716, "start": 706, "text": " To control case small init small final. Then the interesting case here magnitude increase which means all everywhere" }, { "end": 724, "start": 716, "text": " where the final weight is larger than the initial weight or the ranking score is basically based on how much you move." }, { "end": 735, "start": 724, "text": " This depicted here if a weight was originally here it just needs to be larger, it just needs to be above that." }, { "end": 748, "start": 735, "text": " So basically it needs to be above the diagonal here. The diagonal are basically weights that are as high in the final trained version" }, { "end": 755, "start": 748, "text": " as they were at initialization and everything above this here or of course below this here." }, { "end": 765, "start": 755, "text": " So you need to think of a second one here and then everything in this region and in this region magnitude wise" }, { "end": 771, "start": 765, "text": " will fulfill that criterion and then movement simply describes how far they move." }, { "end": 781, "start": 771, "text": " Now this is the same as the magnitude increase but just they don't do the absolute values before they subtract" }, { "end": 788, "start": 781, "text": " so it's basically everything above this diagonal. So you don't look at how much the magnitude increase" }, { "end": 802, "start": 788, "text": " but if a weight goes from very much negative to just a little positive this will already qualify because it moved very far away." }, { "end": 814, "start": 802, "text": " And then random you simply mask at random this is a control case. So our focus is going to be on the following" }, { "end": 825, "start": 814, "text": " the large final which is the original right. 
The large in it might be interesting, the large in it large final might be interesting" }, { "end": 835, "start": 825, "text": " and the magnitude increase might be interesting. Now what do they find? We'll go to the plot with the most effects." }, { "end": 843, "start": 835, "text": " The star here is simply a significance indicator so disregard the stars for now." }, { "end": 853, "start": 843, "text": " The magnitude increase tends to perform the best as you can see. Magnitude increase and compare that to large final." }, { "end": 862, "start": 853, "text": " Large final is the original algorithm. Magnitude increase tends to perform better than the large final." }, { "end": 872, "start": 862, "text": " Interestingly but if you look across the experiments it doesn't tend to do that consistently or often" }, { "end": 880, "start": 872, "text": " and there are these effects here when you go to really small networks. And the stars I said disregard them." }, { "end": 888, "start": 880, "text": " They are significance indicators for a t-test but the t-test is just across five samples and what you're seeing here" }, { "end": 896, "start": 888, "text": " is not a standard deviation but the min and max over the five runs. So I see there might be an effect here" }, { "end": 908, "start": 896, "text": " but I'm absolutely not trusting the claim here that this is significant." }, { "end": 916, "start": 908, "text": " Because it's just in one plot in one network on one data set." }, { "end": 924, "start": 916, "text": " So if you want to make the claim that the magnitude increase works better than the large final maybe." }, { "end": 936, "start": 924, "text": " What you can say for sure is that things like large in it they don't work. We don't really care." }, { "end": 945, "start": 936, "text": " Interestingly large in it large final doesn't work as well as you can see here. It kind of goes below these." }, { "end": 956, "start": 945, "text": " I just think that's what I said. By imposing two thresholds each of them needs to be lower than the original threshold." }, { "end": 967, "start": 956, "text": " So now it's not really the fact that it's large in it large final but it's the fact that the large finals have a lower threshold" }, { "end": 978, "start": 967, "text": " than the ones that are only thresholding on large final and therefore it's just an additional irrelevant criterion." }, { "end": 988, "start": 978, "text": " So those are the results but basically you can see that it really tends to be, in my opinion," }, { "end": 998, "start": 988, "text": " that means it tends to be a good criterion to select the large final weights and I don't trust this magnitude increase thing too much." }, { "end": 1009, "start": 998, "text": " I think it pretty much measures the same thing as the large final and I don't really see that it outperforms." }, { "end": 1020, "start": 1009, "text": " All right. Then they go over the mask one actions and the mask one actions remember these are how should we treat the weights" }, { "end": 1029, "start": 1020, "text": " that we have selected to be in the winning ticket. Now we can do the following things. We can re-init which basically means" }, { "end": 1036, "start": 1029, "text": " we set them back to the beginning of the optimization procedure. That is what the original algorithm does." }, { "end": 1049, "start": 1036, "text": " We can reshuffle which means that we get all the weights that we got from the that are masked to one and we just shuffle them around." 
}, { "end": 1058, "start": 1049, "text": " That guarantees us that the same weight distribution is still followed but it's not that each weight is at the original weight." }, { "end": 1067, "start": 1058, "text": " So this if this performs well it could just mean that it is about the distribution of initial weights and not the exact configuration." }, { "end": 1081, "start": 1067, "text": " And then constant it just means we'll set them to some constant. So either we set them to a negative or a positive constant" }, { "end": 1093, "start": 1081, "text": " and then the weights that are masked to zero will become a zero. So here are the results." }, { "end": 1103, "start": 1093, "text": " Now as you can see there are a bunch of things performing about at the same level which are the red orange and blue curves here." }, { "end": 1111, "start": 1103, "text": " The blue curve is rewind with large final. Now that is the original algorithm." }, { "end": 1121, "start": 1111, "text": " The orange is reshuffle in it sign and the red is constant in it sign. Now what does in it sign mean?" }, { "end": 1128, "start": 1121, "text": " You see that these things will perform well if they have this in it sign instead of ran sign." }, { "end": 1136, "start": 1128, "text": " Now ran sign means a random sign which basically means we reshuffle or we initialize to the constant." }, { "end": 1142, "start": 1136, "text": " The constant will be 50-50 whether it's plus or minus alpha." }, { "end": 1151, "start": 1142, "text": " The reshuffle will mean we don't care how we shuffle the weights as long as we shuffled the same weights somehow." }, { "end": 1165, "start": 1151, "text": " With in it sign what they mean is that they make sure that the sign of the weight that is reinitialized" }, { "end": 1174, "start": 1165, "text": " is equal to the sign of the weight in the final train network." }, { "end": 1187, "start": 1174, "text": " So they're basically saying that this weight is positive in the winning ticket so we should initialize them to a positive sign." }, { "end": 1201, "start": 1187, "text": " That means that all the alpha in this case here is going to be a plus alpha if the original weight was a positive weight" }, { "end": 1204, "start": 1201, "text": " and a negative alpha if the original weight was a negative weight." }, { "end": 1209, "start": 1204, "text": " Also here the shuffling will only happen between the positive and the negative weights." }, { "end": 1217, "start": 1209, "text": " Now this might actually be at initial not at final but they are extremely correlated so there shouldn't be a big difference." }, { "end": 1222, "start": 1217, "text": " So these perform all about at the same level which again is interesting." }, { "end": 1230, "start": 1222, "text": " The authors here claim that it's just about the sign so the important part here seems to be the sign." }, { "end": 1232, "start": 1230, "text": " I disagree with that." }, { "end": 1241, "start": 1232, "text": " I think what's happening here is if you do these things what you'll do is you'll automatically be closer." }, { "end": 1245, "start": 1241, "text": " Let's actually give a benefit of the doubt here and you say this is the initial." }, { "end": 1256, "start": 1245, "text": " What you'll do is if you do plus alpha only if the initial weight was positive they will be closer together." }, { "end": 1263, "start": 1256, "text": " Those two things are closer together than a random plus or minus alpha." 
}, { "end": 1266, "start": 1263, "text": " In expectation they will be close together." }, { "end": 1279, "start": 1266, "text": " Also with the reshuffle basically what with this in it sign thing what you ensure is that your initialization is closer to this one here." }, { "end": 1286, "start": 1279, "text": " So it will be more like the large final initialization where you rewind the weights." }, { "end": 1290, "start": 1286, "text": " I don't think you can make the claim it's just about the sign." }, { "end": 1301, "start": 1290, "text": " I would guess that any algorithm that makes the weights closer to this original lottery ticket thing will also perform well." }, { "end": 1312, "start": 1301, "text": " What is true and that's what the author says that the basin of attraction seems to be much larger than you have to exactly hit the original weights." }, { "end": 1319, "start": 1312, "text": " But I think this effect here is not at all about the sign and just about the fact that you make them closer." }, { "end": 1327, "start": 1319, "text": " By matching the sign you already make them closer in expectation and that's why it might work." }, { "end": 1336, "start": 1327, "text": " Also stop testing at 0.005 significance level with five runs." }, { "end": 1339, "start": 1336, "text": " That's no." }, { "end": 1346, "start": 1339, "text": " All right so the last thing they do is the mask zero actions." }, { "end": 1353, "start": 1346, "text": " Basically how do we treat the weights that we want to get rid of that are not part of the trainable winning ticket." }, { "end": 1359, "start": 1353, "text": " So they experiment with different things." }, { "end": 1363, "start": 1359, "text": " They say okay here is the original network." }, { "end": 1366, "start": 1363, "text": " It's at a certain accuracy." }, { "end": 1373, "start": 1366, "text": " These are the black lines and then the blue lines are set the mask zero weights to zero." }, { "end": 1375, "start": 1373, "text": " So forget about them." }, { "end": 1379, "start": 1375, "text": " Don't include them which is what the original algorithm did." }, { "end": 1382, "start": 1379, "text": " So that's why you see this plot right here." }, { "end": 1391, "start": 1382, "text": " And these are the blue lines and as you can see in the original paper this outperforms the original network at first." }, { "end": 1397, "start": 1391, "text": " And then as you prune more and more and more here you just have whatever 1.2 percent of the weights." }, { "end": 1399, "start": 1397, "text": " Then it finally gets worse." }, { "end": 1403, "start": 1399, "text": " Now you can do original sorry different actions." }, { "end": 1407, "start": 1403, "text": " One of them is set them to their initial values." }, { "end": 1414, "start": 1407, "text": " And here they try to allude that by the numbers they put here." }, { "end": 1423, "start": 1414, "text": " So this thing here means whatever you don't mask put it to the initial value which is this i plus." }, { "end": 1428, "start": 1423, "text": " And this means set everything else to zero." }, { "end": 1434, "start": 1428, "text": " Now this thing here means set it to this to the initial value and also set this to the initial value." }, { "end": 1437, "start": 1434, "text": " So set everything to the initial value." }, { "end": 1441, "start": 1437, "text": " Just don't train the ones that you that you mask." 
}, { "end": 1447, "start": 1441, "text": " So now you end up with a network where some of the connections are simply frozen at their original value." }, { "end": 1453, "start": 1447, "text": " And that as you can see performs worse often." }, { "end": 1461, "start": 1453, "text": " So it's below the especially here you can see it's below the original algorithm where you set it to zero." }, { "end": 1471, "start": 1461, "text": " This is very interesting I think because and I think that's just because you introduce some noise signal in there." }, { "end": 1483, "start": 1471, "text": " So you introduce some unnecessary signal and the authors here claim well these weights you mask them because they were small in magnitude." }, { "end": 1489, "start": 1483, "text": " So the optimal value for those weights seems to be close to zero." }, { "end": 1499, "start": 1489, "text": " So by setting them to zero the original lottery ticket algorithm basically freezes them at their optimal position." }, { "end": 1511, "start": 1499, "text": " And if you freeze them to any other position right away from zero then that means you have a less optimal configuration here." }, { "end": 1515, "start": 1511, "text": " And I can believe that I can follow that." }, { "end": 1517, "start": 1515, "text": " Not fully convinced but I can follow." }, { "end": 1527, "start": 1517, "text": " So they they come up with a cool experiment what I think is that they they say for all the weights that we mask." }, { "end": 1535, "start": 1527, "text": " We're going to set the ones below this line to zero and the ones above them to their initial value." }, { "end": 1546, "start": 1535, "text": " Basically if a weight during training moved so a weight is let's say this is the magnitude and this is now the training steps." }, { "end": 1557, "start": 1546, "text": " If a weight started out here and during training moved up in magnitude but it's still it's still below the masking threshold right." }, { "end": 1560, "start": 1557, "text": " The masking threshold is here so it's not included in the ticket but it moved up." }, { "end": 1569, "start": 1560, "text": " We'll set it to its initial value but if it moved down so it's lower then we'll set it to zero right." }, { "end": 1577, "start": 1569, "text": " So you have the an additional threshold of how it moved during training that's the line here." }, { "end": 1588, "start": 1577, "text": " You can see this here and that often performs better than the original ticket algorithm." }, { "end": 1596, "start": 1588, "text": " Not much and it mainly tends to be in the regions where you really have low weights." }, { "end": 1601, "start": 1596, "text": " Then it come up with a further variant where they also do the same thing to the trainable weights." }, { "end": 1613, "start": 1601, "text": " So these trainable weights up here they do the same thing where they say okay we're going to set the ones that actually move down to during training." }, { "end": 1621, "start": 1613, "text": " Now these are going to be very few ones but some of them are going to move down during training but they don't don't go below the threshold." }, { "end": 1632, "start": 1621, "text": " We're going to set the ones to zero those ones because they were too high initially and that performs even better sometimes right." }, { "end": 1640, "start": 1632, "text": " And again I don't see this as an algorithm where it's set to zero or set to high." 
}, { "end": 1647, "start": 1640, "text": " It's simply because you were again setting something closer to its optimal value." }, { "end": 1661, "start": 1647, "text": " During training if a weight that is trainable during training went down a bit that means that its optimal value is lower than it originally was." }, { "end": 1681, "start": 1661, "text": " And it can just be that by setting it to zero you end up at a point that is closer in magnitude to the optimal value than at the initial point." }, { "end": 1703, "start": 1681, "text": " So I think my comment here is that a lot of these things I think are a bit over interpreted by the authors and ultimately it's just about getting the weights close to where their optimal value is either at the beginning or at the end of the training." }, { "end": 1710, "start": 1703, "text": " And I think the original lottery ticket paper already did a good job analyzing that." }, { "end": 1732, "start": 1710, "text": " The last section here they call super masks and now super masks are is a thing where they say hey if we have a mask can't we just apply this to the original untrained network." }, { "end": 1750, "start": 1732, "text": " And how will the network perform when we do that. Now if you simply take a network with random weights on let's say on MNIST you have a 10% chance because there are 10 classes right." }, { "end": 1765, "start": 1750, "text": " So it will perform at 10% accuracy. If you randomly mask a bunch of weights then again you'll stay at 10% but if you apply the mask the large final mask you will already get some accuracy." }, { "end": 1780, "start": 1765, "text": " Really interesting. So without training just by applying the mask you'll get some accuracy. And again we can interpret this by simply the fact the masking action it will mask weights that are not part of the winning ticket." }, { "end": 1800, "start": 1780, "text": " It will retain weights that are part of the winning ticket. Weights tend to not move that much by SGD. So basically the masked network is at a place closer to its optimal value than the unmasked network and therefore it will perform better." }, { "end": 1814, "start": 1800, "text": " So I think their findings are fairly easy to interpret here. And the last thing they do is they say can we optimize these masks. Can we train the mask." }, { "end": 1832, "start": 1814, "text": " Now rather than basically just training the network full, determining the mask from there, can we now take that mask and further optimize it. And they do basically a they optimize this mask by SGD." }, { "end": 1846, "start": 1832, "text": " Of course you have to make it continuous during training to do that but what you end up with is a binary mask. And they say here that it works better than the original mask." }, { "end": 1862, "start": 1846, "text": " So interestingly, interestingly the if you apply the mask of the lottery ticket just at the beginning of training without training the network." }, { "end": 1875, "start": 1862, "text": " You can see here that it already reaches whatever 40 percent accuracy on MNIST and it also reaches non negligible accuracy on CIFAR 10 so 20 percent." }, { "end": 1891, "start": 1875, "text": " If you do a special thing where you also look see that the sign agrees. So if the final and the original weight have the same sign, then you get a much higher performance in this." }, { "end": 1907, "start": 1891, "text": " Again, this is the untrained network. 
And they also do this at constant values for the same sign. So the same as we saw before. And again, they make this big deal about the sign here." }, { "end": 1915, "start": 1907, "text": " I really think this is just because you're closer to the optimum when you match the sign. But that's just my opinion." }, { "end": 1927, "start": 1915, "text": " And then if they train the mask, they get even higher. So you see here you get even higher performance. And the top is on MNIST and the bottom is on CIFAR 10." }, { "end": 1935, "start": 1927, "text": " So if you just apply the mask, you get non random performance, better than random." }, { "end": 1948, "start": 1935, "text": " If you use a mask that also agrees with the signs, so you have a sign criterion where you say I'm only going to take the initial weights into the mask" }, { "end": 1956, "start": 1948, "text": " if they have the same sign as the end weights, then you get a better performing initial sub network." }, { "end": 1966, "start": 1956, "text": " And if you train the mask again, you've never trained the weights, you just train the mask, you can get an even better performance." }, { "end": 1977, "start": 1966, "text": " And I mean, that's somewhat not surprising because now you train the mask. And yeah, so I don't think that's too surprising." }, { "end": 1992, "start": 1977, "text": " But what you can see here is that the effect on MNIST appears to be very high between these two." }, { "end": 1999, "start": 1992, "text": " And the effect on CIFAR 10 seems to be different. It seems to be low between these two and then high between these two." }, { "end": 2012, "start": 1999, "text": " So I wonder if there's a big dependence on the actual task here. They also use this dynamic weight rescaling, which is basically a kind of a rescaling trick." }, { "end": 2020, "start": 2012, "text": " And then they put the following table. So here you have the different networks." }, { "end": 2037, "start": 2020, "text": " And here you have the original trained weights, the performance they reach on the task. And here you have the performance that they reach after learned mask and dynamic weight rescaling." }, { "end": 2047, "start": 2037, "text": " And you can see here that on MNIST this even outperforms the original trained weights simply by learning the mask." }, { "end": 2052, "start": 2047, "text": " Now you can also see that on CIFAR 10, this effect is not present." }, { "end": 2062, "start": 2052, "text": " And I've already seen a paper that states that on like ResNets and ImageNet, the lottery ticket hypothesis isn't really measurable." }, { "end": 2076, "start": 2062, "text": " So I want to pose another hypothesis here. And the hypothesis is the following: that you may find these winning tickets that are performing well at initialization" }, { "end": 2085, "start": 2076, "text": " or being trained well, if the task is sufficiently easy. And the easier the task, the more you can basically do with it." }, { "end": 2098, "start": 2085, "text": " Basically MNIST is so easy that you simply have to mask out some of the initial weights and you will already perform extremely well." }, { "end": 2110, "start": 2098, "text": " Whereas CIFAR 10 is harder, ImageNet is harder again, and I believe as the tasks get harder and harder, these methods will work less and less to the point where they don't work anymore." }, { "end": 2124, "start": 2110, "text": " Right, that's my opinion.
So basically, my opinion is it appears to be very much about how close you are to some kind of initial lottery, to some kind of optimal lottery ticket." }, { "end": 2134, "start": 2124, "text": " And I think the experiments here are very cool, are very well designed, but I think they're often a bit over interpreted." }, { "end": 2155, "start": 2134, "text": " Alright, that was it for me. I invite you to check out the paper and bye bye." } ]
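The mask criteria and mask-1 actions discussed in this transcript are easy to state in code. Below is a minimal NumPy sketch of the two selection criteria compared above (large final versus magnitude increase) and of the "constant with init sign" re-initialization. The function names, keep fraction, and alpha are my own illustrative choices, not the paper's code.

import numpy as np

def lottery_masks(w_init, w_final, keep_frac=0.2):
    # Score every weight under the two criteria discussed above:
    # 'large_final' keeps the weights with the largest final magnitude,
    # 'magnitude_increase' keeps those whose magnitude grew the most.
    scores = {
        "large_final": np.abs(w_final),
        "magnitude_increase": np.abs(w_final) - np.abs(w_init),
    }
    k = max(1, int(keep_frac * w_final.size))
    masks = {}
    for name, s in scores.items():
        thresh = np.sort(s.ravel())[-k]          # k-th largest score
        masks[name] = (s >= thresh).astype(w_final.dtype)
    return masks

def constant_init_sign(w_init, mask, alpha=0.1):
    # Mask-1 action 'constant, init sign': every kept weight becomes
    # +alpha or -alpha, with the sign copied from its initial value;
    # masked-out weights are set to zero.
    return mask * alpha * np.sign(w_init)

# Toy usage with random matrices standing in for a trained layer.
rng = np.random.default_rng(0)
w0 = rng.normal(size=(64, 64))   # weights at initialization
w1 = rng.normal(size=(64, 64))   # weights after training (made up here)
mask = lottery_masks(w0, w1)["large_final"]
ticket = constant_init_sign(w0, mask)

Note how "magnitude increase" and "large final" score weights almost identically when initial magnitudes are small, which is the correlation the commentary above points at.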
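And for the super mask idea at the end, here is one rough way to make a mask trainable while the weights stay frozen. The paper makes the mask continuous during training; the hard threshold plus straight-through gradient below is my own simplification of that recipe, so treat it as an illustration, not the authors' exact relaxation.

import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(10, 784))         # frozen random weights, never updated
s = 0.01 * rng.normal(size=W.shape)    # trainable per-weight mask scores

def forward(x):
    m = (s > 0).astype(W.dtype)        # binarize the scores into a mask
    return (W * m) @ x

def score_grad(x, g_out):
    # Straight-through estimator: pretend d(mask)/d(score) = 1, so the
    # gradient w.r.t. a score is just the gradient of the masked weight.
    return np.outer(g_out, x) * W

# One SGD step on the scores for a single example (squared-error loss).
x = rng.normal(size=784)
target = np.eye(10)[3]                 # made-up label
g_out = forward(x) - target            # dL/d(output) for 0.5*||out - y||^2
s -= 0.1 * score_grad(x, g_out)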
h9w3KffPPmQ
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
[Rant] Online Conferences
[ "Science & Technology" ]
[ "machine learning", "deep learning", "online", "conference", "iclr", "virtual", "research" ]
Are virtual conferences good or bad? What's missing? How do we go forward? Pictures from here: https://twitter.com/srush_nlp/status/1253786329575538691 Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Hey, machine learners! Yannic here. Okay, that is stolen. Today I want to give some quick thoughts about online conferences. As you might know, ICLR this year is fully online because of the global situation. Big props to the organizers of the conference for putting something together in this short amount of time. ICLR is one of the largest machine learning conferences, and if you're not registered you don't get access to that website right now. So today I'll just talk about things that are public, and we'll do like an analysis of what happened when the materials are actually released. If you want to run an online conference there are basically two things you need to take care of. Actually three. One of them is networking, but it's going to be online. We're gonna have to sacrifice that. I know there are efforts, but let's be real. So there are paper presentations, and there are things like talks, panels, workshops and so on. For the papers in ICLR, what they have is a website and you can kind of click on each of the papers. You'll get to a subpage and you'll find a video that the authors uploaded, which is about five minutes long. You'll get the abstract and the reviews directly there from OpenReview, and during the poster sessions you'll have a chat window where you can chat with the authors, and so people can come there and kind of chat about the paper at a given time. What people have pointed out here, and I agree, is that watching a five minute video, often you need like two or three minutes of that video to even see whether you're interested in the paper, which is much longer than you would have at a poster. At a poster, people say, they just open the PDF and kind of gloss over it, and that takes 30 seconds to decide whether or not you're interested. If I were to suggest an improvement, it would be: also have the authors upload a poster, so at a glance you'll be able to see what interests you, right. For the talks and the panels, the talks are pre-recorded and then there is a question and answer session that is live. The questions are voted on beforehand, and then the most voted questions, I believe, will be answered in a live session by either the talk giver or the panel discussants. That is the conference, but what are we doing here? I kind of think it's a paradoxical thing to take the live conference and just try to map it as closely as possible to online. Look at these paper poster sessions, right. It is cool that you have all this, but now I have to go at that particular time to chat with the authors, and this is in competition with everything else that's happening at the same time, right. So there will be a hundred papers that are presented in a given session, and now I can go to this one and chat here, but I'll miss the chat over there. Of course I can read it later, but the only reason that is in the live conference is because everyone's just there for one week, right. That's about the span you can hold people in one place. So you need to cram things at the same time. So you're at the poster session, you will miss this poster if you go to this poster, right. You just don't have time, but online we're not constrained by this. So why are we doing this at the same time? Why aren't we doing this asynchronously? We actually have a perfect system for doing things like this. It's called YouTube. You publish a paper, you make a video. It can be five minutes. It can be 30 minutes. You put it on YouTube.
You link to your paper, your abstract and your reviews, you can put them in the description, and then, okay, there's no live chat, but there is a comment section. I appreciate all of you, thank you, but we have a perfectly fine system for that, to do this in an asynchronous way. I don't see the benefit of having really this live chat. And the talks and the panels, the same thing. You already have pre-recorded talks. What are you doing having them compete with other things at the same time? Like, okay, I'm going to go to this workshop, but this one's happening too. Right, now I have to go and decide, because everything needs to be crammed into this one week. It just seems to make no sense. For example, on our channel Machine Learning Street Talk, we have guests on and we'll do Reddit threads to ask people which questions they would like the authors or the people that we have on to answer, and people up and down vote the questions, and then we have a panel discussion. We could even do this live, right on YouTube, and then record it, and it will be almost the same experience, because let's be honest, if Yoshua Bengio is in a panel at a live conference, you'll be lucky to even get a single question in, and it will almost never happen that you'll get a follow-up question, because you're right there live. It's just not something that is really happening. I think the main advantage of these live conferences is the fact that you're there. If you go to a poster session, the face-to-face interaction is something very different from a chat window. You can kind of see what the author is thinking in real time. You can ask them questions. So in writing you can always weasel out of difficult questions or so. Yeah, so it seems like you lose all the benefits of the live conference, but if you do it in this way, you retain all the bad sides, namely the crowdedness, different things competing at the same time, entry fees. I get it. It's a lot of work to build this website and so on, but we have YouTube and Reddit, and that already covers like 95% of what this is doing. I always think of this, I don't know if it's a myth: when the car was first invented, it still had the pulleys to pull the horse, because people were just used to horse buggies and not cars. It seems like we're doing the same thing with online conferences. We were just so used to the live conferences that we don't see the mega possibilities that we have online. These are my thoughts on online conferences. If you agree, disagree, leave a comment, and maybe in the future we'll go to true online conferences, asynchronous. Thank you for being here and bye bye.
[ { "end": 7.08, "start": 0, "text": " Hey, machine learners! Janek here. Okay, that is stolen. Today I want to give some quick" }, { "end": 12.88, "start": 7.08, "text": " thoughts about online conferences. As you might know, iClear this year is fully online" }, { "end": 18.56, "start": 12.88, "text": " because of the global situation. Big props to the organizers of the conference for putting" }, { "end": 24.32, "start": 18.56, "text": " something together in this short amount of time. iClear is one of the largest machine" }, { "end": 29.6, "start": 24.32, "text": " learning conferences and if you're not registered you don't get access to that website right" }, { "end": 34.160000000000004, "start": 29.6, "text": " now. So today I'll just talk about things that are public and we'll do like analysis" }, { "end": 39.84, "start": 34.160000000000004, "text": " of what happened when the materials are actually released. If you want to run an online conference" }, { "end": 45.2, "start": 39.84, "text": " there are basically two things you need to take care of. Actually three. One of them is networking" }, { "end": 50.24, "start": 45.2, "text": " but it's going to be online. We're gonna have to sacrifice that. I know there are efforts but" }, { "end": 57.2, "start": 51.040000000000006, "text": " let's be real. So there are paper presentations and there are things like talks, panels, workshops" }, { "end": 63.6, "start": 57.2, "text": " and so on. For the papers in iClear what they have is they have a website and you can kind of click" }, { "end": 68.8, "start": 63.6, "text": " on each of the papers. You'll get to a sub page and you'll find a video that the authors uploaded" }, { "end": 75.84, "start": 68.8, "text": " which is about five minutes long. You'll get the abstract and the reviews directly there from open" }, { "end": 83.12, "start": 75.84, "text": " review and during the poster sessions you'll have a chat window where you can chat with the authors" }, { "end": 88.88000000000001, "start": 83.12, "text": " and so people can come there and kind of chat about the paper at a given time. What people have" }, { "end": 94.64, "start": 88.88000000000001, "text": " pointed out here and I agree is that watching a five minute video often you need like two or three" }, { "end": 100, "start": 94.64, "text": " minutes of that video to even see whether you're interested in the paper which is much longer than" }, { "end": 105.60000000000001, "start": 100, "text": " you would have at a poster. At a poster you could clearly see people say they just open the pdf and" }, { "end": 110.4, "start": 105.60000000000001, "text": " just kind of gloss over it and that takes 30 seconds to decide whether or not you're interested." }, { "end": 118.32000000000001, "start": 110.4, "text": " If I were to suggest an improvement it would be also have the authors upload a poster so at a glance" }, { "end": 124.80000000000001, "start": 118.32000000000001, "text": " you'll be able to see what interests you right. For the talks and the panels the talks are pre-recorded" }, { "end": 131.84, "start": 125.44000000000001, "text": " and then there is a question and answer session that is live. The questions are voted beforehand" }, { "end": 139.6, "start": 131.84, "text": " and then the most voted questions I believe will be answered in a live session by either the talk" }, { "end": 147.44, "start": 139.6, "text": " giver or the panel discussionists. That is the conference but what what are we doing here? 
I" }, { "end": 154.64, "start": 147.44, "text": " I kind of think it's a paradoxical thing to take the live conference and just try to map it as" }, { "end": 162.64, "start": 154.64, "text": " closely as possible to online. Look at these paper poster poster sessions right. It is cool that you" }, { "end": 169.83999999999997, "start": 162.64, "text": " have all this but now I have to go at that particular time to chat with the authors and" }, { "end": 174.32, "start": 169.83999999999997, "text": " this is in competition to everything else that's happening at the same time right. So there will be" }, { "end": 179.51999999999998, "start": 174.32, "text": " a hundred papers that are presented in a given session and now I can go to this one and chat here" }, { "end": 185.67999999999998, "start": 179.51999999999998, "text": " but I'll miss the chat over there. Of course I can read it later but the only reason that is in the" }, { "end": 190.95999999999998, "start": 185.67999999999998, "text": " live conference is because everyone's just there for one week right. That's about the span you can" }, { "end": 196.72, "start": 190.96, "text": " hold people in one place. So you need to cram things at the same time. So you're at the poster" }, { "end": 201.68, "start": 196.72, "text": " session you will miss this poster if you go to this poster right. You just don't have time but" }, { "end": 207.28, "start": 201.68, "text": " online we're not constrained by this. So why are we doing this at the same time? Why aren't we doing" }, { "end": 213.44, "start": 207.28, "text": " this asynchronously? We actually have a perfect system for doing things like this. It's called" }, { "end": 219.76000000000002, "start": 213.44, "text": " YouTube. You publish a paper you can make it. It can be five minutes. It can be 30 minutes. You put" }, { "end": 225.51999999999998, "start": 219.76, "text": " it on YouTube. You link to your paper your abstract and your reviews you can put them in the description" }, { "end": 232, "start": 225.51999999999998, "text": " and then okay there's no live chat but there is a comment section. I appreciate all of you thank you" }, { "end": 237.28, "start": 232, "text": " but we have a perfectly fine system for that to do this in an asynchronous way. I don't see the" }, { "end": 244.64, "start": 237.28, "text": " benefit of having really this live chat and the talks and the panels the same thing. You have you" }, { "end": 251.35999999999999, "start": 244.64, "text": " already have pre-recorded talks. What are you doing having them compete with other things at the same" }, { "end": 256.88, "start": 251.35999999999999, "text": " time? Like okay I'm going to go to this workshop but this one's happening too. Right now I have to" }, { "end": 262.32, "start": 256.88, "text": " go and decide because everything needs to be crammed into this one week. It just seems to" }, { "end": 270.56, "start": 262.32, "text": " make no sense. For example on our channel Machine Learning Street Talk we have guests on and we'll" }, { "end": 277.68, "start": 270.56, "text": " do Reddit threads to ask people which questions would they like the authors or the people that we" }, { "end": 284.08, "start": 277.68, "text": " have on to answer and people up and down vote the questions and then we have a panel discussion." 
}, { "end": 290.56, "start": 284.88, "text": " We could even do this live right on YouTube and then record it and it will be almost the same" }, { "end": 298.24, "start": 290.56, "text": " experience because let's be honest if Joshua Benjo is in a panel in a live conference you'll be lucky" }, { "end": 304.56, "start": 298.24, "text": " to even get a single question in and it will almost never happen that you'll get a follow-up" }, { "end": 311.12, "start": 304.56, "text": " question because you're right there live. It's just not something that is really happening. I" }, { "end": 317.28000000000003, "start": 311.12, "text": " think the main advantage of these live conferences is the fact that you're there. If you go to a" }, { "end": 322.96000000000004, "start": 317.28000000000003, "text": " poster session the face-to-face interaction is something very different from a chat window." }, { "end": 329.68, "start": 322.96, "text": " You can kind of see what the author is thinking in real time. You can ask them questions. So in" }, { "end": 336.15999999999997, "start": 329.68, "text": " writing you can always weasel out of difficult questions or so. Yeah so it seems like you lose" }, { "end": 342.15999999999997, "start": 336.15999999999997, "text": " all the benefits of the live conference but if you do it in this way you retain all the bad sides" }, { "end": 348, "start": 342.15999999999997, "text": " namely the crowdedness, different things competing at the same time, entry fees. I get it. It's a lot" }, { "end": 354.64, "start": 348, "text": " of work to build this website and so on but we have YouTube and Reddit and that already covers" }, { "end": 361.6, "start": 354.64, "text": " like 95% of what this is doing. I always think of this, I don't know if it's a myth, when the" }, { "end": 367.2, "start": 361.6, "text": " car was first invented it still had the pulleys to pull the horse because people were just used to" }, { "end": 372.48, "start": 367.2, "text": " horse buggies and not cars. It seems like we're doing the same thing with online conferences. We" }, { "end": 378.96000000000004, "start": 372.48, "text": " were just so used to the live conferences that we don't see the mega possibilities that we have" }, { "end": 385.68, "start": 378.96000000000004, "text": " online. These are my thoughts on online conferences. If you agree, disagree, leave a comment and maybe" }, { "end": 402.96000000000004, "start": 385.68, "text": " in the future we'll go to true online conferences asynchronous. Thank you for being here and bye bye." } ]
fvctpYph8Pc
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Do ImageNet Classifiers Generalize to ImageNet? (Paper Explained)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "imagenet", "cifar10", "cifar10.1", "generalization", "overfitting", "mturk", "arxiv", "vision", "models", "research", "hardness", "accuracy", "classifier", "resnet" ]
Has the world overfitted to ImageNet? What if we collect another dataset in exactly the same fashion? This paper gives a surprising answer! Paper: https://arxiv.org/abs/1902.10811 Data: https://github.com/modestyachts/ImageNetV2 Abstract: We build new test sets for the CIFAR-10 and ImageNet datasets. Both benchmarks have been the focus of intense research for almost a decade, raising the danger of overfitting to excessively re-used test sets. By closely following the original dataset creation processes, we test to what extent current classification models generalize to new data. We evaluate a broad range of models and find accuracy drops of 3% - 15% on CIFAR-10 and 11% - 14% on ImageNet. However, accuracy gains on the original test sets translate to larger gains on the new test sets. Our results suggest that the accuracy drops are not caused by adaptivity, but by the models' inability to generalize to slightly "harder" images than those found in the original test sets. Authors: Benjamin Recht, Rebecca Roelofs, Ludwig Schmidt, Vaishaal Shankar Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Hi there. Today we're looking at "Do ImageNet Classifiers Generalize to ImageNet?" by Benjamin Recht, Rebecca Roelofs, Ludwig Schmidt and Vaishaal Shankar. So the premise of this paper is pretty simple. We've been training models on ImageNet now for a while, almost ten years to be exact. ImageNet is this dataset with a lot of images, millions of images, categorized into many thousands of categories. Now, the classic part of ImageNet that people know has about 1.5 million images in 1,000 different classes, and this ImageNet has been one of the main datasets for the last few years. And as you can see on the right here, the error rate, year after year, was pretty much, I think, cut in half every year since 2012, when the first network, AlexNet, used deep learning instead of the classical computer vision approaches. So we've been training on ImageNet for a while, and the idea, or the question, this paper asks is: what if we collect a second test set? Right, so for ImageNet we have a train and a test set. If we now collect a second test set here, test v2, and if we have a model that was trained on training here and evaluated on test, does it also perform well on this second test set? The idea of this being that maybe over the years we've tuned our hyperparameters and all such that the models perform well on that particular test set, let's call this v1, and it might not be as successful on a new test set. So this paper goes about collecting a test set to ImageNet in exactly the way that the v1 test set was collected. So they try to match exactly the process of how v1 was collected to create another test set, and then they evaluate models on that new test set. They do this not only for ImageNet but also for CIFAR-10, which is a much smaller dataset, but also a lot of computer vision algorithms are evaluated on CIFAR-10. So let's just put up a hypothesis here. The hypothesis is that we have pretty much overfitted to ImageNet by now. This is a very prestigious number to get if you have state-of-the-art on ImageNet, and therefore tuning your hyperparameters and your learning rate and everything such that it performs well on the test set v1 is very likely. So the most important plots in this paper are like this: there are two axes, and on the bottom axis is the performance on v1, and performance here means accuracy, basically. So accuracy on v1, and on the other axis is the v2 accuracy. Now, here is one, here is zero. This line here means that if a model is performing at 50% accuracy on v1, it is also performing at 50% accuracy on v2. So being on this line would basically mean we have not overfitted, and the model is performing equally well on both sets. So what now, if we assume that we have overfitted? If we have overfitted, we would assume that, you know, the models that perform really poorly are not really overfitted, but over the years, as we've gotten better on v1, we stray away from this. So we get better on v1, but we don't really get better on v2, and that means we've overfitted to v1, and this might even go down: the more we might overfit to v1, the worse we're actually going to get on v2. So this is kind of a meta-overfitting. So this is what we would expect if we overfit to v1. And I think this was the initial hypothesis behind the people that ran this experiment: to check, can we see an effect like this, or is the effect more
a continuous one where we don't overfit? And what they found was neither. And these are basically these interesting plots here. So again, the dashed line here would be the not-overfitting line. So what they find: if you, for example, look at ImageNet, every dot here is a model. So this model right here is performing with like a 67 percent accuracy on v1, and it is performing with something like a 53 percent accuracy on v2. If you look at this line here, what that means is that every model kind of drops by about this much. So not only do we not see this, and we don't see this, but we see this line is shifted down. And if you look closely, especially on CIFAR-10, you can see the line, rather than being tilted like this, is actually tilted a bit slanted upwards. So the angle is greater, the slope is greater than the one-to-one slope. This is extremely interesting. If you think about it, what does that mean? It means that if you take a model right here and look at its order: let's look at this model here. This model is number one, best model in the world, and it will still be number one on v2. This model here is number three, rank three, on v1. It will also be rank three on v2. So the order of models is pretty much constant. So if a model is doing well on v1, it is also doing well on v2 in relation to other models, but every model experiences this drop in accuracy. And the most interesting part is, and again you can see this more here, the better you're doing, the smaller this drop gets: this drop here is smaller than this drop here. This is exactly counter to the notion of overfitting, where it seems the more accurate you get on v1, the more you're able to close this gap between v1 and v2. And if you extrapolate here, you might as well think that, well, 100% is already here; if you could go higher, maybe, or maybe you can see here that in the end these will actually converge. But nevertheless, the models that are doing better on v1 aren't only not overfit, they actually experience less of a drop with regards to a new test set. So they generalize better to the new test set than the worst models. And that is crazy. And it is not only neural networks: up here you have the deep neural networks and whatnot, but I believe some of these, or even further down here, are k-nearest-neighbor classifiers and things like this. So it doesn't seem to be a property of neural networks; it really seems to be a property of the dataset. And this paper first of all goes over how they collected these, and second of all their hypotheses and investigations into why this phenomenon exists: why we are all of a sudden worse on the new test set, but worse in a completely different way than we expected compared to the original test set. All right, so they first talk about potential causes of accuracy drops, and they propose a model. They say: here is the entire difference between two datasets with regards to a classifier. It can be decomposed into three different gaps, and you see the first and the last part here are the ones from the left side, so this is an expanding sum in the middle. So there is the generalization gap. The generalization gap refers to the gap that you have between different datasets of the same distribution. This is like, you know, from the train versus test set: you train on the training set and then you have a
generalization gap to the test set. In this case, the generalization gap simply refers to the difference between the generalization to the first and to the second set. They argue that this isn't really an issue here, because they say they can put up confidence intervals. So if those were identically distributed, how much would the generalization gap be at maximum, given some kind of confidence interval? A 95% confidence interval would only give you plus minus a one percent difference in generalization gap. So they rule out that this is the reason for the big discrepancy. Then they have two others: the adaptivity gap and the distribution gap. So the adaptivity gap is what we hypothesized at the beginning. It is the overfitting to the first dataset, or to one of the two datasets. So if you have a big adaptivity gap, then you have fitted much more to one than to the other dataset. Now, because of the shape of the curve being the way it is, they also rule out the adaptivity gap, and we went over why: because it would look completely different than it does. Now the only thing remaining here is this distribution gap. So they explain that this difference here most likely comes from the fact that the old and the new test set have a different distribution, and they go into why that is. And I'm going to compress their hypothesis of why that is into a short summary; we won't go over the entire paper. They basically say that the Mechanical Turk part of the processing pipeline has a very big influence. So what happens when you collect an ImageNet test set? You start with Flickr. This is a big image database, and the images, as far as I can understand, are tagged and you can search for them and so on. So they start by going to Flickr and searching for images. And their ground truth class labels come, you may know this, from a system called WordNet. And WordNet is sort of a linguistic classification of words into groups. So it would be hierarchical: it would have animals, "animal" being a word, and then below animal it would have dog, and then it would have terrier. And it would have these hierarchical groupings of these words. And they search on Flickr for images, and then they put the images to a human rater. So the human rater, on a system called Mechanical Turk, you may know it, you can just sign up there and do these kinds of tasks. They present the human with a grid of images and a class, terrier, and they say: please select all the images where a terrier appears. So the human might select this, this, this and this one. And that will give you what they call a selection frequency. So, selection frequency: how often was a particular image selected, given that class. And the selection frequency is taken across many of these Mechanical Turk workers. So if, for a given image, the selection frequency is high, let's say going towards 1.0, that means every single human selected that image to be in this class, so you can be pretty sure it's in the class. If it goes towards 0, then you can be pretty sure it's not in the class. And this selection frequency criterion, the paper thinks, is the main reason why the datasets are of different difficulties, let's say. Because even though they try to match the process exactly, even the questions they pose to the Turk workers, and they even restricted their Flickr date range to the date range where the original ImageNet was collected, they still think that there is a difference in how the Mechanical Turk workers basically rated the images, or
then, after that, how they were selected using the selection frequency. So what they do is: they select different images depending on different criteria, so they can test these hypotheses. Their original V2 test set is called "matched frequency". Now what you do in matched frequency is you kind of play a little game. Every now and then you will implant an image of the V1 test set, either of this class or of another class, so you can kind of do a quality control. From this you can now find out what the selection frequency of images in the V1 class for terrier is, and then you can simply select the same one in the V2. So if you know the selection frequency for V1 was 0.8, you can just put the threshold here at 0.8, and, you know, you can be reasonably sure that you have selected a similar difficulty. Or so you would think. So if they do it like this, then they get this drop that you saw at the beginning. They also do this with threshold 0.7, which I guess is arbitrary-ish, and then they also say "top images", where they say: for each class we chose the 10 images with the highest selection frequency. So these would be the sort of easiest ones. And if you look at the graphs, and this is ImageNet for these different datasets: if you do the threshold 0.7 that they selected, now the old line was somewhere here, now the new line is much closer. You see that here. And if you do these top images, so you just select the easy ones, the new line is actually above. Note that the red line here is above the black line, while here it's below. It is still extremely interesting that there is this almost linear relationship between the V1 and V2 accuracies, and even here on this easier dataset there is. So they basically hypothesize: by thresholding differently for the new dataset, you have a very good grip on the difficulty of the new dataset. So this process of matching the selection frequency, so this matched frequency dataset, might actually not result in the same difficulty of dataset. They do have some more experiments where they experiment with different difficulties. So let's actually jump down there; it's a bit of a jump, because there are over 70 pages in this paper. The appendix has its own table of contents, that's how crazy that is. But I want to show you these plots. So here what they do is they do different bins of this set. So they have bins of easy samples, less easy samples and so on. And you can see that you have a pretty good grip on where this line is. So if you only take the easy samples, you're up here, and this is about what we saw before. If this is the old line, it's the red line, if you take the entire new test set. If you just take the second hardest bin, you're somewhere here, or here, or here. So you have a good hold on where this line is. But that still doesn't explain: if you try to follow the protocol exactly, why does the accuracy drop? That is still a mystery. Even though they say here is a variable that influences this a lot, if they try to set the variable as it was set in v1, it is not equal, and still the mystery remains, I would say. So the last thing they do is they try to just come up with a model for this. So their hypothesis now is that the new test set just is harder, and they have an analytical, kind of a formal, model of why, if you assume certain things, this results in this line.
The really interesting thing about the paper is that the accuracies all fall exactly on this line. There's this linear relationship, especially if you do like a probit scaling of the accuracies. There's this line, and so they put up a model where you say: what if we assume that each example i has a difficulty? It's just a number, how difficult it is. And each model j has a probability of correctly classifying an image with difficulty tau, given by this function. So this here is the probability that the model will classify an image correctly, given that it's tau hard. And so this is an increasing function, and they put up the following parameterization. So they put up a model for this function. For this, they say: assume that each model has a sort of a skill number. So each model has a skill, and each image has a difficulty that is tau. If the skill is higher than the tau, it will probably classify correctly. If the skill is lower, then this number is negative, and it will probably classify incorrectly. So this is the CDF of a normal distribution; it goes something like this. And if this number here is zero, it's like 50-50 whether it will classify it correctly. So if you assume this, and you assume a bunch of other Gaussian error distributions, then the performance of a model on the test set V2 is exactly the performance of the model on test set V1 times this scalar here, plus this scalar here, which is a linear relationship. So they put up a model for this. Of course this doesn't explain anything: it doesn't explain the phenomenon, but it still gives a clue of why the linear relationship here might result from the test set having a different difficulty setting, or different difficulty properties. So they go on, after discussing related work, to say what one can do, and suggestions for future research. I especially like the super holdout. So if you ever make a dataset, then make a super holdout set, and once you're almost at the end of your career, just come up with it and say: oh, I have this lost dataset here that I made way back. It will be fantastic. Alright, so I think this paper is very interesting, and I think everyone that sees and reads this comes up with their own hypothesis of why this is and what's going on here. They have investigated a lot of this. Especially, I want to highlight an experiment where they've taken part of V2 here. So they split this one into a train and a test part, and they put this and this training set together into like a super train. So you train on both things together, and they see whether it improves on this test set. You would think that if you put this training data in there, it would improve, and it does improve, but it improves by like a minuscule amount. So they've done a whole bunch of experiments like this to investigate what's going on. This is all in this 70-page appendix that you can go over. Alright, that was what I had to say for this paper. If you like this video, consider subscribing and comment what you think. I usually answer or like or read most comments. Thanks for listening, bye bye.
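To make the selection-frequency machinery above concrete, here is a toy Python sketch of how one could compute selection frequencies from worker votes and sample a "matched frequency" test set. The function names are my own, and the real pipeline (per-class candidate pools, planted v1 control images, deduplication) is considerably more involved; treat this as an illustration of the idea only.

import numpy as np

def selection_frequency(votes):
    # votes: (n_workers, n_images) boolean array, True where a worker
    # selected the image for the candidate class.
    return votes.mean(axis=0)

def matched_frequency_sample(candidates, cand_freq, target_freqs):
    # For each target frequency (estimated from v1 images planted into
    # the labeling grids), pick the unused candidate whose selection
    # frequency is closest.
    cand_freq = np.asarray(cand_freq, dtype=float).copy()
    chosen = []
    for t in target_freqs:
        i = int(np.argmin(np.abs(cand_freq - t)))
        chosen.append(candidates[i])
        cand_freq[i] = np.inf      # never pick the same image twice
    return chosen

rng = np.random.default_rng(0)
votes = rng.random((10, 200)) < 0.7          # made-up worker votes
freqs = selection_frequency(votes)
new_test = matched_frequency_sample(np.arange(200), freqs, [0.8, 0.9, 0.7])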
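And the probit model at the end can be reproduced in a few lines: map both accuracies through the inverse Gaussian CDF, fit a line, and map back. The accuracy numbers below are made up for illustration; in the paper the line is fitted across all evaluated models.

import numpy as np
from scipy.stats import norm

# Hypothetical v1/v2 accuracy pairs standing in for the evaluated models.
acc_v1 = np.array([0.55, 0.63, 0.70, 0.76])
acc_v2 = np.array([0.41, 0.50, 0.58, 0.65])

# Fit acc_v2 ~ Phi(u * Phi^{-1}(acc_v1) + v): linear on the probit scale.
u, v = np.polyfit(norm.ppf(acc_v1), norm.ppf(acc_v2), deg=1)

def predict_v2(a1):
    # Map v1 accuracy to probit scale, apply the line, map back.
    return norm.cdf(u * norm.ppf(a1) + v)

print(predict_v2(0.80))   # extrapolated v2 accuracy for a stronger model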
[ { "end": 3.3000000000000003, "start": 0, "text": " Hi there today we're looking at to do image net classifiers" }, { "end": 9.78, "start": 3.6, "text": " Generalized to image net by Benjamin wrecked Rebecca are Olaf's Ludwig Schmidt and Vyshal Shankar" }, { "end": 12.620000000000001, "start": 10.1, "text": " So the premise of this paper is pretty simple" }, { "end": 16.4, "start": 12.98, "text": " We've been training models on image net now for a while" }, { "end": 24.54, "start": 17.1, "text": " Almost ten years to be exact image net is this data set with a lot of images millions of images" }, { "end": 28.76, "start": 25.62, "text": " categorized into many thousands of categories now the" }, { "end": 35.44, "start": 28.76, "text": " The classic part of image net that people know is that has about 1.5 million images in" }, { "end": 38.24, "start": 36.24, "text": " 1,000 different classes and" }, { "end": 40.84, "start": 39.28, "text": " this" }, { "end": 45.52, "start": 40.84, "text": " Image net was one of the main data sets now or has been for the last few years" }, { "end": 49.84, "start": 45.6, "text": " And as you can see on the right here the error rate year after year" }, { "end": 56.32000000000001, "start": 50.6, "text": " Was pretty much I think cut in half every year since 2012" }, { "end": 62.52, "start": 56.32, "text": " When the first net Alex net was using deep learning instead of the classical" }, { "end": 65.6, "start": 63.6, "text": " visual computer vision approaches" }, { "end": 69.6, "start": 65.92, "text": " so we've been training on image net for a while and the" }, { "end": 74.64, "start": 70.44, "text": " idea or the question this paper asks if we" }, { "end": 82.12, "start": 75.96000000000001, "text": " Collect a second test set right so for image net we have a train and a test set" }, { "end": 89.64, "start": 82.12, "text": " If we now collect a second test set here test v2" }, { "end": 92.24000000000001, "start": 90.36, "text": " right if" }, { "end": 96.78, "start": 92.24000000000001, "text": " We have a model that was trained on training here and evaluated on test" }, { "end": 102, "start": 97, "text": " Does it also perform well on this second test set right?" }, { "end": 108, "start": 103.08000000000001, "text": " the idea of this being that maybe over the years we've" }, { "end": 113.8, "start": 108, "text": " Tuned our hyper parameters and all such that the models perform well on that particular test set" }, { "end": 122, "start": 113.8, "text": " Let's call this v1 right and it might not it might not be as successful on a new test set" }, { "end": 127.03999999999999, "start": 122.32, "text": " So this paper goes about collecting a test set to image net in" }, { "end": 131.4, "start": 127.68, "text": " Exactly the way that the v1 test set was collected right?" 
}, { "end": 136.8, "start": 132.04, "text": " So they they try to match exactly the process of how v1 was collected" }, { "end": 142.28, "start": 136.8, "text": " To create another test set and then they evaluate models on that new test set" }, { "end": 148.56, "start": 142.76000000000002, "text": " They do this not only for image net but also for c4 10 which is a much smaller data set" }, { "end": 153.32000000000002, "start": 148.56, "text": " But also a lot of computer vision algorithms are evaluated on c4 10" }, { "end": 157.28, "start": 154.32000000000002, "text": " So let's just put up a hypothesis here" }, { "end": 165.20000000000002, "start": 157.28, "text": " The hypothesis is that we have pretty much over fitted to image net by now. This is a very prestigious" }, { "end": 169.44, "start": 165.2, "text": " Number to get if you have state-of-the-art on image net and therefore" }, { "end": 176.95999999999998, "start": 169.76, "text": " Tuning your hyper parameters and your learning rate and everything such that it performs well on the test set v1 is very likely" }, { "end": 181.35999999999999, "start": 177.28, "text": " So this paper has the most important plots are like this" }, { "end": 187.95999999999998, "start": 181.35999999999999, "text": " It has two axes and on the bottom axis is the performance on v1 right and" }, { "end": 190.56, "start": 188.56, "text": " so performance" }, { "end": 192.56, "start": 190.56, "text": " and that means" }, { "end": 198.28, "start": 192.56, "text": " Accuracy basically so accuracy and on line two is v2 accuracy" }, { "end": 200.48, "start": 199.28, "text": " now" }, { "end": 202.48, "start": 200.48, "text": " Here is one" }, { "end": 204.48, "start": 202.48, "text": " Here is zero" }, { "end": 210.16, "start": 205.12, "text": " This line here means that if a model is performing" }, { "end": 216.8, "start": 210.88, "text": " 50% accuracy on v1 it is also performing 50% accuracy on v2" }, { "end": 224.32000000000002, "start": 216.8, "text": " So being on this line would basically mean we have not over fitted and the model is performing equally well on both" }, { "end": 226.88000000000002, "start": 224.88000000000002, "text": " both sets" }, { "end": 233.04000000000002, "start": 226.88000000000002, "text": " So what what now if we assume that we have over fitted if we have over fitted?" 
}, { "end": 238, "start": 233.04000000000002, "text": " We would assume that you know the models that perform really poorly might also you know" }, { "end": 244, "start": 238, "text": " They perform kind of they're not really over fitted but over the years as we've gotten better on v1" }, { "end": 248.4, "start": 244, "text": " We stray away from this so we get better on v1" }, { "end": 256.56, "start": 248.72, "text": " But we don't really get better on v2 right and that means we've over fitted to v1 and this might even go down right the more" }, { "end": 260, "start": 256.88, "text": " The more kind of we might overfit to v1" }, { "end": 263.28, "start": 260.72, "text": " The worse we're actually going to get on v2" }, { "end": 271.84, "start": 264.24, "text": " So this this is kind of a meta over fitting so this is what we would expect if we over fit right over" }, { "end": 273.84, "start": 271.84, "text": " over fit to v1" }, { "end": 280.56, "start": 274.71999999999997, "text": " And I think this was the initial hypothesis behind the people that ran this experiment to check" }, { "end": 286.08, "start": 280.96, "text": " Can we see an effect like this or is the effect more?" }, { "end": 292.15999999999997, "start": 286.71999999999997, "text": " A continuous one where we don't over fit and what they found was neither" }, { "end": 296.64, "start": 292.96, "text": " And these are basically these interesting plots here" }, { "end": 302.15999999999997, "start": 296.64, "text": " So again the dashed line here would be the not over fitting line" }, { "end": 307.44, "start": 302.15999999999997, "text": " So what they find if you for example look at image net every dot here is a model right?" }, { "end": 313.03999999999996, "start": 307.91999999999996, "text": " So this model right here is performing with like a 67 percent accuracy" }, { "end": 320, "start": 313.68, "text": " On v1 and it is performing with something like a 53 percent accuracy on v2" }, { "end": 324.15999999999997, "start": 320.32, "text": " If you look at this line here what that means is that" }, { "end": 326.64000000000004, "start": 324.16, "text": " Every model kind of drops" }, { "end": 329.36, "start": 327.36, "text": " By about this much" }, { "end": 330.96000000000004, "start": 329.84000000000003, "text": " right" }, { "end": 332.08000000000004, "start": 330.96000000000004, "text": " so" }, { "end": 334.08000000000004, "start": 332.08000000000004, "text": " Not only not we don't see" }, { "end": 340, "start": 334.64000000000004, "text": " This and we don't see this but we see this line is shifted down" }, { "end": 347.44000000000005, "start": 340, "text": " And if you look closely especially on c410 you can see the line rather than being tilted like this is actually tilted" }, { "end": 350.40000000000003, "start": 347.92, "text": " A bit slanted upwards right?" }, { "end": 358.96, "start": 350.4, "text": " So the the the the angle is not is higher is greater the slope is greater than the one-to-one slope" }, { "end": 360.96, "start": 358.96, "text": " This is extremely interesting" }, { "end": 364, "start": 361.52, "text": " If you think about what does that mean?" 
}, { "end": 367.28, "start": 364.71999999999997, "text": " It means that if you" }, { "end": 371.84, "start": 368.79999999999995, "text": " Take a model right right here" }, { "end": 377.44, "start": 374.23999999999995, "text": " If you look at its order" }, { "end": 380.56, "start": 377.44, "text": " It's it's it's a this" }, { "end": 383.28, "start": 380.56, "text": " Let's let's look at this model here. This model is number one" }, { "end": 388.32, "start": 383.76, "text": " Best model in the world right it will still be number one on v2" }, { "end": 397.28, "start": 389.36, "text": " This model here is number number three rank three on v1. It will also be rank three on v2 right?" }, { "end": 400.88, "start": 397.28, "text": " So the order of models is is pretty much constant" }, { "end": 405.68, "start": 400.88, "text": " So if if a model is doing well on v1 it is also doing well on v2" }, { "end": 415.04, "start": 405.68, "text": " In relation to other models but every model experiences this drop in in accuracy" }, { "end": 419.92, "start": 415.04, "text": " And the most interesting part is that and again you can see this more here" }, { "end": 429.92, "start": 422, "text": " The the better you're doing the smaller this drop gets right this drop here is smaller than this drop here" }, { "end": 436.24, "start": 429.92, "text": " This is exactly counter to the notion of overfitting" }, { "end": 446.88, "start": 437.84000000000003, "text": " Where it seems the more accurate you get on v1 the more you're able to close this gap between v1 and v2" }, { "end": 455.92, "start": 446.88, "text": " And if you extrapolate here you might as well think that once we are at at or actually sorry 100% is already here" }, { "end": 463.92, "start": 455.92, "text": " If you could go higher maybe or maybe you can see here that in the end these will actually converge" }, { "end": 472.88, "start": 463.92, "text": " But nevertheless if the models that are doing better on v1 aren't only not overfit" }, { "end": 477.52000000000004, "start": 472.88, "text": " But they are actually experience less of a drop with regards to a new test set" }, { "end": 484.8, "start": 477.52000000000004, "text": " So they generalize better to the new test set than the the worst models right" }, { "end": 495.28000000000003, "start": 484.8, "text": " And that is crazy and it is not only neural networks right so up here for up here you have the the deep neural networks and whatnot" }, { "end": 506.16, "start": 495.28000000000003, "text": " But you can also go with I believe some of these or even further down here are k nearest neighbor sorry k nearest neighbor classifiers and things like this" }, { "end": 513.92, "start": 506.16, "text": " So it doesn't seem to be a property of neural networks it really seems to be a property of the data set" }, { "end": 528.0799999999999, "start": 513.92, "text": " And this paper first of all goes over how they collected these and second of all their hypotheses and investigations into why this phenomenon exists" }, { "end": 543.4399999999999, "start": 528.0799999999999, "text": " Why we are all of a sudden worse on the new test set but completely worse in a different way than we expected compared to the original test set" }, { "end": 554.72, "start": 543.44, "text": " All right so they first say potential causes of accuracy drops and they propose a model" }, { "end": 563.6, "start": 554.72, "text": " They say here are two here is the entire difference between two data sets with regards to a 
classifier" }, { "end": 572.1600000000001, "start": 563.6, "text": " It can be decomposed into three different gaps and you see the first and the last part here are the ones from the left side" }, { "end": 576.9599999999999, "start": 572.16, "text": " So this is an expanding sum in the middle" }, { "end": 588.64, "start": 576.9599999999999, "text": " So there is the generalization gap the generalization gap refers to the gap that you have between different data sets of the same distribution" }, { "end": 598.9599999999999, "start": 588.64, "text": " So these this is like you know from the train versus test set you train on the training set and then you have a generalization gap to the test set" }, { "end": 608.1600000000001, "start": 598.96, "text": " In this case the generalization gap simply refers to the difference between the generalization to the first and to the second set" }, { "end": 622.32, "start": 608.1600000000001, "text": " They argue that this isn't really an issue here because they say they can put up confidence intervals" }, { "end": 633.0400000000001, "start": 622.32, "text": " So if those were identically distributed how much would the generalization gap be at maximum given some kind of confidence interval" }, { "end": 640.5600000000001, "start": 633.0400000000001, "text": " 95 confidence interval would only give you plus minus a one percent difference in generalization gap" }, { "end": 645.84, "start": 640.5600000000001, "text": " So they rule out that this is the reason for the big discrepancy" }, { "end": 650, "start": 645.84, "text": " Then have two others they have the adaptivity gap and the distribution gap" }, { "end": 657.36, "start": 650, "text": " So the adaptivity gap is what we hypothesized at the beginning" }, { "end": 663.12, "start": 657.36, "text": " It is the overfitting to the first data set or to one of the two data sets" }, { "end": 672, "start": 663.12, "text": " So if you have a big adaptivity gap then you have fitted much more to one than to the other data set" }, { "end": 680, "start": 672, "text": " Now because of the shape of the curve being the way it is they also rule out the adaptivity gap" }, { "end": 687.6, "start": 680, "text": " And we went over why right because it would look completely different than it does" }, { "end": 692, "start": 687.6, "text": " Now the only thing remaining here is this distribution gap" }, { "end": 704, "start": 692, "text": " So they explain that this difference here most likely comes from the fact that the old and the new test set have a different distribution" }, { "end": 708, "start": 704, "text": " And they go into why that is" }, { "end": 724, "start": 708, "text": " And I'm going to compress their hypothesis of why that is into a short summary" }, { "end": 728, "start": 724, "text": " Let's say we won't go over the entire paper" }, { "end": 744, "start": 728, "text": " They basically say that the Mechanical Turk part of the processing pipeline has a very big influence" }, { "end": 748, "start": 744, "text": " So what happens when you collect an ImageNet test set? 
You start with Flickr" }, { "end": 760, "start": 748, "text": " This is a big image database and the images as far as I can understand they are tagged and you can search for them and so on" }, { "end": 766, "start": 760, "text": " So they start by going to Flickr and searching for images" }, { "end": 772, "start": 766, "text": " And their ground truth class labels come, you may know this, from a system called WordNet" }, { "end": 778, "start": 772, "text": " And WordNet is sort of a linguistic classification of words into groups" }, { "end": 788, "start": 778, "text": " So it would have hierarchical, it would have animals, animal being a word and then below animal it would have dog and then it would have terrier" }, { "end": 792, "start": 788, "text": " And it would have these hierarchical groupings of these words" }, { "end": 800, "start": 792, "text": " And they search on Flickr for images and then they put the images to a human rater" }, { "end": 809, "start": 800, "text": " So the human rater on a system called Mechanical Turk, you may know it, you can just sign up there and do these kind of tasks" }, { "end": 817, "start": 809, "text": " They present the human with a grid of images and a class, terrier" }, { "end": 827, "start": 817, "text": " And they say please select all the images where a terrier appears, so the human might select this, this, this and this one" }, { "end": 832, "start": 827, "text": " And that will give you what they call a selection frequency" }, { "end": 842, "start": 832, "text": " So, selection frequency, so how often was a particular image selected given that class" }, { "end": 852, "start": 842, "text": " And of course the higher, so you do this over many, sorry, the selection frequency is across many of these Mechanical Turk workers" }, { "end": 865, "start": 852, "text": " So if for a given image the selection frequency is high, let's say going towards 1.0, that means every single human selected that image to be in this class" }, { "end": 873, "start": 865, "text": " So you can be pretty sure it's in the class, if it goes towards 0, then you can be pretty sure it's not in the class" }, { "end": 891, "start": 873, "text": " And this selection frequency criteria, the paper thinks that this is the main criteria why the datasets are of different difficulties, let's say" }, { "end": 899, "start": 891, "text": " Because even though they try to match the process exactly, even the questions they pose to the Turk workers" }, { "end": 907, "start": 899, "text": " They even restricted their flicker date range to the date range where the original image net was collected" }, { "end": 916, "start": 907, "text": " They still think that there is a difference in how the Mechanical Turk workers basically rated the images" }, { "end": 920, "start": 916, "text": " Or then after that how they were selected using the selection frequency" }, { "end": 930, "start": 920, "text": " So what they do is they do different, they select different images depending on criteria so they can test these hypotheses" }, { "end": 935, "start": 930, "text": " Their original V2 test set is called matched frequency" }, { "end": 941, "start": 935, "text": " Now what you do in matched frequency is you kind of play a little game" }, { "end": 951, "start": 941, "text": " What you do is every now and then you will implant an image here of the V1 test set" }, { "end": 957, "start": 951, "text": " So thereby, of the V1 test set, either of this class or of another class" }, { "end": 960, "start": 957, 
"text": " So you can kind of do a quality control" }, { "end": 970, "start": 960, "text": " From this you can now find out what is the selection frequency of images in the V1 class for Terrier" }, { "end": 977, "start": 970, "text": " And then you can simply select the same one in the V2" }, { "end": 986, "start": 977, "text": " So if you know the selection frequency for V1 was 0.8, you can just put the threshold here at 0.8" }, { "end": 993, "start": 986, "text": " And you know, you can be reasonably sure that you have selected a similar difficulty" }, { "end": 995, "start": 993, "text": " Or so you would think, right?" }, { "end": 1002, "start": 995, "text": " So if they do it like this, then they get this drop that you saw at the beginning" }, { "end": 1010, "start": 1002, "text": " They also do this threshold 0.7, which I guess is arbitrary-ish" }, { "end": 1015, "start": 1010, "text": " And then they also say top images, where they say" }, { "end": 1020, "start": 1015, "text": " For each class we chose the 10 images with the highest selection frequency" }, { "end": 1026, "start": 1020, "text": " So these would be the sort of easiest ones" }, { "end": 1034, "start": 1026, "text": " And if you look at the graphs, and this is ImageNet for these different datasets" }, { "end": 1042, "start": 1034, "text": " So if you do the threshold 0.7 that they selected, now the old line was somewhere here" }, { "end": 1047, "start": 1042, "text": " Now the new line is much closer, right? You see that here" }, { "end": 1052, "start": 1047, "text": " And if you do these top images, so you just select the easy ones" }, { "end": 1057, "start": 1052, "text": " The new line is actually above, right? This is now" }, { "end": 1064, "start": 1057, "text": " Note that the red line here is above the black line, while here it's below" }, { "end": 1075, "start": 1064, "text": " It is still extremely interesting that still there is this almost linear relationship between the V1 and V2 accuracies" }, { "end": 1079, "start": 1075, "text": " And even here on this easier dataset there is" }, { "end": 1088, "start": 1079, "text": " So they basically hypothesize by thresholding differently for the new dataset" }, { "end": 1095, "start": 1088, "text": " You have a very good grip on the difficulty of the new dataset" }, { "end": 1102, "start": 1095, "text": " So this process of matching the selection frequency, so this matched frequency dataset" }, { "end": 1108, "start": 1102, "text": " It might actually not result in the same difficulty in dataset" }, { "end": 1117, "start": 1108, "text": " They do have some more experiments where they experiment with different difficulties" }, { "end": 1127, "start": 1117, "text": " So let's actually jump down there, it's a bit of a jump because there are over 70 pages in this paper" }, { "end": 1135, "start": 1127, "text": " The appendix has its own content directory, that's how crazy that is" }, { "end": 1145, "start": 1135, "text": " But I want to show you these plots, so here what they do is they do different bins of this set" }, { "end": 1150, "start": 1145, "text": " So they have bins of easy samples, less easy samples and so on" }, { "end": 1157, "start": 1150, "text": " And you can see that you have a pretty good grip on where this line is" }, { "end": 1162, "start": 1157, "text": " So if you only take the easy samples you're up here, and this is about what we saw before" }, { "end": 1169, "start": 1162, "text": " If this is the old line, it's the red line, right? 
If you take the entire new test set" }, { "end": 1176, "start": 1169, "text": " If you just take bin the second hardest bin you're somewhere here or here or here" }, { "end": 1182, "start": 1176, "text": " So you have a good hold on where this line is" }, { "end": 1187, "start": 1182, "text": " But that still doesn't explain that if you try to follow the protocol exactly" }, { "end": 1191, "start": 1187, "text": " Why does the accuracy drop? That is still a mystery" }, { "end": 1197, "start": 1191, "text": " Even though they say here is a variable that influences this a lot" }, { "end": 1203, "start": 1197, "text": " If they try to set the variable as it was set in v1, it is not equal" }, { "end": 1208, "start": 1203, "text": " And still mystery remains, I would say" }, { "end": 1216, "start": 1208, "text": " So the last thing they do is they try to just come up with a model for this" }, { "end": 1223, "start": 1216, "text": " So their hypothesis now is that the new test set just is harder" }, { "end": 1231, "start": 1223, "text": " And they have an analytical kind of a formal model of why if you assume certain things" }, { "end": 1234, "start": 1231, "text": " This results in this line, right?" }, { "end": 1242, "start": 1234, "text": " The really interesting thing about the paper is that the accuracies, they all fall exactly on this line" }, { "end": 1248, "start": 1242, "text": " There's this linear relationship, especially if you do like probit scaling of accuracies" }, { "end": 1255, "start": 1248, "text": " There's this line, and so they put up a model where you say" }, { "end": 1262, "start": 1255, "text": " What if we assume that each example i has a difficulty, right?" }, { "end": 1264, "start": 1262, "text": " It's just a number how difficult it is" }, { "end": 1276, "start": 1264, "text": " And each model j has a probability of correctly classifying an image with difficulty tau here" }, { "end": 1279, "start": 1276, "text": " Given by this function, right?" }, { "end": 1288, "start": 1279, "text": " So this here is the probability that the model will classify an image correctly, given that it's tau hard" }, { "end": 1297, "start": 1288, "text": " And so this is an increasing function" }, { "end": 1304, "start": 1297, "text": " And they put up this following parameterization" }, { "end": 1311, "start": 1304, "text": " This is the CDF of the... So they put up a model for this function now, right?" }, { "end": 1316, "start": 1311, "text": " For this, they say if we assume that it is like this" }, { "end": 1321, "start": 1316, "text": " That each model has a sort of a skill number" }, { "end": 1327, "start": 1321, "text": " So each model has a skill, and each image has a difficulty that is tau" }, { "end": 1333, "start": 1327, "text": " If the skill is higher than the tau, probably it will classify correctly" }, { "end": 1338, "start": 1333, "text": " If the skill is lower, probably, then this number is negative, it will classify it incorrectly" }, { "end": 1345, "start": 1338, "text": " So this is the CDF of a normal distribution, it goes something like this, right?" 
}, { "end": 1354, "start": 1345, "text": " And if the zero point is here, so if this number here is zero, it's like 50-50" }, { "end": 1359, "start": 1354, "text": " Whether it will classify it correctly" }, { "end": 1365, "start": 1359, "text": " So if you assume this, and you assume a bunch of other Gaussian error distributions" }, { "end": 1380, "start": 1365, "text": " Then the performance of a model on the test set V2 is exactly the performance of the model under test set V1" }, { "end": 1387, "start": 1380, "text": " Times this scalar here, plus this scalar here, which is a linear relationship" }, { "end": 1390, "start": 1387, "text": " So they put up a model for this" }, { "end": 1393, "start": 1390, "text": " Of course this doesn't explain anything, right?" }, { "end": 1408, "start": 1393, "text": " This doesn't explain the phenomena, but it still gives a clue of why the linear relationship here might result from the test set having a different difficulty setting" }, { "end": 1411, "start": 1408, "text": " Or a different difficulty properties" }, { "end": 1420, "start": 1411, "text": " So they go on, after discussing related work, they go on to say what can one do" }, { "end": 1428, "start": 1420, "text": " And suggestions for future research, I especially like the super holdout" }, { "end": 1441, "start": 1428, "text": " So if you ever make a data set, then make a super holdout set" }, { "end": 1452, "start": 1441, "text": " And once you're almost out of your career, just come up with it and say, oh I have this lost data set here that I made way back" }, { "end": 1454, "start": 1452, "text": " It will be fantastic" }, { "end": 1458, "start": 1454, "text": " Alright, so I think this paper is very interesting" }, { "end": 1467, "start": 1458, "text": " And I think everyone that sees and reads this comes up with their own hypothesis of why this is and what's going on here" }, { "end": 1469, "start": 1467, "text": " They have investigated a lot of this" }, { "end": 1476, "start": 1469, "text": " Especially I want to highlight an experiment where they taken part of V2 here" }, { "end": 1486, "start": 1476, "text": " So they split this one into a train and a test" }, { "end": 1492, "start": 1486, "text": " And they put this and this training together into like a super train" }, { "end": 1499, "start": 1492, "text": " So you train on both things together and they see whether it improves at this test set" }, { "end": 1506, "start": 1499, "text": " You would think that if you put this training in there that it would improve" }, { "end": 1510, "start": 1506, "text": " And it does improve, but it improves by like a miniscule amount" }, { "end": 1514, "start": 1510, "text": " So they've done a whole bunch of experiments like this to investigate what's going on" }, { "end": 1519, "start": 1514, "text": " This is all in this 70 page appendix that you can go over" }, { "end": 1522, "start": 1519, "text": " Alright, that was what I had to say for this paper" }, { "end": 1529, "start": 1522, "text": " If you like this video, consider subscribing and comment what you think" }, { "end": 1533, "start": 1529, "text": " I usually answer or like or read most comments" }, { "end": 1553, "start": 1533, "text": " Thanks for listening, bye bye" } ]
hDQNCWR3HLQ
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
[Drama] Schmidhuber: Critique of Honda Prize for Dr. Hinton
[ "Science & Technology" ]
[ "deep learning", "machine learning", "schmidhuber", "hinton", "seppo", "rummelhardt", "hochreiter", "lstm", "rbm", "backpropagation", "credit", "science" ]
Schmidhuber writes up a critique of Hinton receiving the Honda Price... AND HINTON REPLIES! Schmidhuber's Blog Entry: http://people.idsia.ch/~juergen/critique-honda-prize-hinton.html Hinton's Reply: https://www.reddit.com/r/MachineLearning/comments/g5ali0/d_schmidhuber_critique_of_honda_prize_for_dr/ Thumbnail Images: By Eviatar Bach -https://de.m.wikipedia.org/wiki/Datei:Geoffrey_Hinton_at_UBC.jpg By ITU/R.Farrell - https://www.flickr.com/photos/itupictures/34343385563, CC BY 2.0, https://commons.wikimedia.org/w/index.php?curid=75018240 Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
On April 21st, Jürgen Schmidhuber tweeted out, Stop crediting the wrong people for inventions made by others. At least in science, the facts will always win at the end, as long as the facts have not yet won. It is not yet the end. No fancy award can ever change that. Hashtag self-correcting science, hashtag plagiarism. And links to an article of his own website, where he wrote, Critique of Honda Prize for Dr. Hinton. So this is on Schmidhuber's own website, and it's by himself. Don't you love this? How to pronounce his name, Jürgen Schmidhuber. You again. Sorry. This is absolutely great. So both, actually, Schmidhuber and Hinton are on Twitter. You can tweet at them and follow them. This article here is basically a critique of the press release of Honda when they awarded Jeff Hinton for his achievements. And it goes through it step by step. And we won't look at the whole thing, but just for you to get the flavor. So here Honda says, Dr. Hinton has created a number of technologies that have enabled the broader application of AI, including the backpropagation algorithm that forms the basis of deep learning approach to AI. And Schmidhuber just goes off. He basically claims, while Hinton and his coworkers have made certain significant contributions to deep learning, the claim above is plain wrong. He says Hinton did not invent backpropagation. The person who invented backpropagation was Seppo Linayma. He says basically many papers failed to cite Linayma, who was the original inventor of backprop and so on. And he goes through a history of this and how it's even earlier. I always have a bit of a trouble with claims like who invented what, because when is an algorithm really the same thing? And when is it a variation on another algorithm? And when is it something completely new? It's never entirely clear. But the points here made that the things, the backpropagation algorithm existed before Hinton. And also that some of the papers, some of the seminal papers did not cite the correct origin. Statement 2. In 2002 he introduced a fast learning algorithm for restricted Boltzmann machines that allowed them to learn a single layer of distributed representation without requiring any labeled data. These methods allowed deep learning to work better and they led to the current deep learning revolution. And he basically goes, no, Dr. Hinton's interesting unsupervised pre-training for deep neural networks was irrelevant for the current deep learning revolution. In 2010 our team showed that the feed forward networks can be trained by plain backprop, do not at all require pre-training. And he basically again says, apart from this Hinton's unsupervised pre-training was conceptually a rehash of my unsupervised pre-training for deep recurrent neural networks. So he, you know, as you know, Schmidhuber has done a lot of work in recurrent neural networks and he basically says it was just a rehash of his algorithm. Now I have to say I have, so first of all, he makes a point here, right, that we don't really do unsupervised pre-training anymore until now, of course. But you, like, to train an MNIST classifier, you don't have to do that. But it's also doubtful that this was a step, even though if it wasn't on the exact path to the current situation, it was a thing that got people excited maybe. And so the critique is like half valid. And also it doesn't help Schmidhuber that he always compares it to his own things. 
Like it just, like, either criticize them for, you know, general things, but then avoid bringing your own things in because it just sounds like I did this before. And also I read some papers from these times. People just wrote papers sometimes. I haven't read this specific one, but sometimes people just wrote papers writing down their ideas. Like, one could do this and this and this. Never doing any experiments or actually specifying exactly what they mean. They just kind of wrote down a bunch of ideas and that got published. Especially, like, there's some reinforcement learning papers where people are just like, oh, one, I imagine agents doing this and learning from that. So it is, again, it is never really clear. Ideas are just had by everyone. I think people mistake this, that think that ideas are unique. It's not ideas that are unique. Many people have the same ideas, but some... There's also execution and exact formalization and so on. And exact level of specificity. All of this is really hard. And then the Honda says, in 2009, Dr. Hinton and two of his students used multilayer neural nets to make major breakthrough in speech recognition. That led directly to greatly improved. And this, of course, Schmidrub goes off by this because speech recognition is, of course, prime LSTM territory. So you don't want to go near this. And the Honda further says, revolutionized computer vision by showing that deep learning worked far better than existing state of the art. And again, he says the basic ingredients were already there and so on. And our team in Switzerland already used his first superior award-winning GPU-based CNN and so on. That's what was called DanNet, was produced by his group. And again, this seems correct, right? This seems when he lays it out like this, it doesn't change the fact that AlexNet won ImageNet in 2012. And that was like the start of the deep learning revolution. It was like, wow, you can cut the error rate by something like 30% simply by doing this deep learning stuff. So again, even if DanNet says it blew away the competition, it always seems like Schmidhuber is kind of right, but then also he's not. He's like, exact academic work and the idea being there on a paper isn't the only thing that drives progress. And it says, to achieve their dramatic results, Dr. Hinton also invented a widely used new method called dropout, which uses overfitting. No, like no, like no, just no. Randomly dropping parts in order to make something more robust, that is surely not a new thing. And he also says much earlier, there's this stochastic delta rule and so on. And he also critiques that this paper did not cite this. They just gave it the name. This is an idea that is kind of so simple that you wouldn't even necessarily think about researching whether that has existed already. I think they just did it because it's a natural idea, and then they gave it a name, and the name stuck. It's not about the idea itself. And then lastly, they say, of the countless AI-based technological services across the world, it is no exaggeration to say that few would have been possible without the results Dr. Hinton created. I love this. Name one that would not have been possible. And he just gives a list of their own group that are basically possible without Hinton's contributions. And this is just a bit of a cheap shot. Clearly, Honda, they're not saying it would have been physically impossible. Without his contributions. 
But certainly Hinton has, even if he hadn't invented any of those things, he certainly has created a spark. And these things created a splash, got people excited, people thinking about new ways of applying things, even if this is all true. But I would like you to notice, this is a critique of what Honda says about Hinton. And if I read through the statements of Schmidhuber, most of them are technically correct. And you know, that was that. And then I thought, OK, cool. But then someone posted it on Reddit. And then Hinton replies. And this is... Don't you love this? So Hinton says, Having a public debate with Schmidhuber about academic credit is not advisable because it just encourages him. And there is no limit to the time and effort that he is willing to put into trying to discredit his perceived arrivals. He is even escorted to tricks like having multiple aliases in Wikipedia to make it look as if other people agree. The page on his website about Alan Turing is a nice example of how he goes on trying to... These are shots fired. And he says, I'm going to respond once and only once. I have never claimed that I invented backpropagation. David Rumelhard invented it independently. After other people in other fields had invented it. It's true. When we first published, we did not know the history. So he basically says, OK, we did forget to cite it when we first published about backprop, but he doesn't say he invented it. What I've claimed is that I was the person to clearly demonstrate that backprop could learn interesting internal representations and that this is what made it popular. So this goes into the direction. Schmidhuber is very much on academic contributions. Idea was there before. And Hinton basically says, no, what we did is kind of we showed that it works in this particular way and we kind of got people excited about it. I did this by forcing that, blah, blah, blah. And he says, it is true that many people in the press have said I invented backprop and I've spent a lot of time correcting them. Here is an excerpt from 2018 where this is, I guess, a quote from this book that quotes Hinton where he says, lots of people invented different versions of backprop before David Rumelhart. They were mainly independent inventions, something I feel I've got too much credit for. It's one of these rare cases where an academic feels he has gotten too much credit for something. My main contribution was to show you can use it for learning distributor representations. So I'd like to set the record straight on that. Then he says, maybe Juergen would like to set the record straight on who invented LSTMs. Boom, boom. Crazy. Shots fired by Hinton here. This is, I mean, this is just great. But again, look at what Hinton says. Hinton basically says, yes, I have not invented that. I have corrected this on public record in the past. And yeah, so that's what Hinton says. And I mean, the comments here are just gold. I really invite you to read it. And then Schmidhuber, of course, being Schmidhuber, replies again. Down here, he has a response to the reply. And I don't expect Hinton to reply again, so I waited for a bit. But I believe Hinton when he says he does it only once. So he goes into this. It just says, summary, the facts presented in sections one, two, three, four, five are still valid. So he goes kind of statement by statement. Is this having a public debate? Blah, blah, blah. And he just says, well, this is an ad hominem attack, which is true. Right. This is true. 
And he says he even has multiple aliases in Wikipedia. And he just says another ad hominem attack. And then he goes into that Schmidhuber tries to discredit Alan Turing. And then Schmidhuber goes into this big, long, big, long, basically claim that Alan Turing wasn't as important as people made him out to be. And people invented this kind of Turing machine equivalence before that. Again, it's kind of Schmidhuber's take that the idea basically was already there and these people don't get the correct credit. And also he's correct that this is a true it's an ad hominem attack. Right. So, you know, be it as it may. This is correct. And then when Hinton goes that he didn't invent Backprop and Schmidhuber says this is finally a response related to my post, which is true. Right. However, it does not at all contradict what I wrote. And it is true that Hinton credited his co-author Rommelhardt with the invention, but neither cited Lin-Neymar and also the statement, lots of people. He says it wasn't created by lots of different people, but exactly one person. So this I find, like, can you really say now this is the exact time when Backprop was invented, even though it probably wasn't in the exact current formulation. And it probably existed somewhat like this. So, but again, and he his main claim is Dr. Hinton accepted the Honda price, although he apparently agrees that Honda's claims are false. He should ask Honda to correct their statements. And maybe you're going to would like to set the record straight. We invented LSTMs and, you know, as we as you may know, Sepp Hochreiter kind of invented LSTMs under Jürgen Schmidhuber as a PhD advisor. But the to summarize Dr. Hinton's comments and ad hominem arguments diverge from the contents of my post and do not challenge the facts and so on. And I have to say after reading this, this is this is correct. Right. Hinton basically replies to, hey, I, I never claimed I invented Backprop and other people have invented it. And Schmidhuber doesn't criticize Hinton in this particular post. He may otherwise. Schmidhuber doesn't criticize Hinton for claiming that he criticizes Honda for claiming that Hinton did. And Hinton doesn't. Hinton basically agrees with him. And also Schmidhuber says Dr. Hinton accepted the Honda price, although he apparently agrees that the claims are false. He should ask Honda to correct their statements. And it is true that Hinton accepted this price under this release. Right. Now, you might be able to say Hinton also says he's on the record basically saying he didn't do this. And I guess if you're Hinton and you know, you've had this you've had the successful career and so on. And you have previously really publicly stated that you didn't invent these things and, you know, made it clear. And then you get a prize and they write this thing. Maybe you just don't want to go after every single press statement and correcting that. But, you know, in essence, basically Hinton understood this as an attack on himself that he claims he invented Backprop. And Schmidhuber says Honda claims Hinton invented Backprop and Hinton accepted the price. So he agrees with it and Hinton basically agrees with it, but doesn't say Honda should have corrected it, which I can understand. So this is my take on this issue. It's kind of both are correct and they just kind of talk past each other. And Schmidhuber is always on the idea existed before. And Hinton is correct when he says it's not always just about the idea. 
Progress is also made by people being excited, people actually getting something to work, people, you know, doing something at the right time, the right place, which is also correct. But it is fun. It is fun. So I just I enjoyed I enjoy this honestly, like because ultimately this is the kind of discussions also need to happen in science because credit assignment is an important thing in science. And even though sometimes it's over the top, like Schmidhuber always going after it, I think we need people like him just kind of to keep the field in check a bit. And yeah, I will link to all of this. I hope you enjoy this and I wish you a nice rest of the weekend. If you're still here, consider subscribing and leave comment if you want. I usually read them. Bye bye.
[ { "end": 4, "start": 0, "text": " On April 21st, Jürgen Schmidhuber tweeted out," }, { "end": 8, "start": 4, "text": " Stop crediting the wrong people for inventions made by others." }, { "end": 11, "start": 8, "text": " At least in science, the facts will always win at the end," }, { "end": 14, "start": 11, "text": " as long as the facts have not yet won." }, { "end": 16, "start": 14, "text": " It is not yet the end." }, { "end": 19, "start": 16, "text": " No fancy award can ever change that." }, { "end": 24, "start": 19, "text": " Hashtag self-correcting science, hashtag plagiarism." }, { "end": 28, "start": 24, "text": " And links to an article of his own website," }, { "end": 30, "start": 28, "text": " where he wrote," }, { "end": 34, "start": 30, "text": " Critique of Honda Prize for Dr. Hinton." }, { "end": 37, "start": 34, "text": " So this is on Schmidhuber's own website," }, { "end": 39, "start": 37, "text": " and it's by himself." }, { "end": 41, "start": 39, "text": " Don't you love this?" }, { "end": 44, "start": 41, "text": " How to pronounce his name, Jürgen Schmidhuber." }, { "end": 47, "start": 44, "text": " You again. Sorry." }, { "end": 49, "start": 47, "text": " This is absolutely great." }, { "end": 52, "start": 49, "text": " So both, actually, Schmidhuber and Hinton are on Twitter." }, { "end": 55, "start": 52, "text": " You can tweet at them and follow them." }, { "end": 59, "start": 55, "text": " This article here is basically a critique" }, { "end": 62, "start": 59, "text": " of the press release of Honda" }, { "end": 65, "start": 62, "text": " when they awarded Jeff Hinton" }, { "end": 68, "start": 65, "text": " for his achievements." }, { "end": 71, "start": 68, "text": " And it goes through it step by step." }, { "end": 73, "start": 71, "text": " And we won't look at the whole thing," }, { "end": 75, "start": 73, "text": " but just for you to get the flavor." }, { "end": 77, "start": 75, "text": " So here Honda says," }, { "end": 80, "start": 77, "text": " Dr. Hinton has created a number of technologies" }, { "end": 83, "start": 80, "text": " that have enabled the broader application of AI," }, { "end": 85, "start": 83, "text": " including the backpropagation algorithm" }, { "end": 89, "start": 85, "text": " that forms the basis of deep learning approach to AI." }, { "end": 92, "start": 89, "text": " And Schmidhuber just goes off." }, { "end": 94, "start": 92, "text": " He basically claims," }, { "end": 96, "start": 94, "text": " while Hinton and his coworkers have made" }, { "end": 98, "start": 96, "text": " certain significant contributions to deep learning," }, { "end": 101, "start": 98, "text": " the claim above is plain wrong." }, { "end": 105, "start": 101, "text": " He says Hinton did not invent backpropagation." }, { "end": 110, "start": 105, "text": " The person who invented backpropagation was Seppo Linayma." }, { "end": 118, "start": 110, "text": " He says basically many papers failed to cite Linayma," }, { "end": 124, "start": 118, "text": " who was the original inventor of backprop and so on." }, { "end": 127, "start": 124, "text": " And he goes through a history of this" }, { "end": 129, "start": 127, "text": " and how it's even earlier." }, { "end": 131, "start": 129, "text": " I always have a bit of a trouble with claims" }, { "end": 132, "start": 131, "text": " like who invented what," }, { "end": 136, "start": 132, "text": " because when is an algorithm really the same thing?" 
}, { "end": 138, "start": 136, "text": " And when is it a variation on another algorithm?" }, { "end": 140, "start": 138, "text": " And when is it something completely new?" }, { "end": 142, "start": 140, "text": " It's never entirely clear." }, { "end": 146, "start": 142, "text": " But the points here made that the things," }, { "end": 152, "start": 146, "text": " the backpropagation algorithm existed before Hinton." }, { "end": 156, "start": 152, "text": " And also that some of the papers," }, { "end": 160, "start": 156, "text": " some of the seminal papers did not cite the correct origin." }, { "end": 163, "start": 160, "text": " Statement 2. In 2002 he introduced" }, { "end": 167, "start": 163, "text": " a fast learning algorithm for restricted Boltzmann machines" }, { "end": 171, "start": 167, "text": " that allowed them to learn a single layer of distributed representation" }, { "end": 173, "start": 171, "text": " without requiring any labeled data." }, { "end": 176, "start": 173, "text": " These methods allowed deep learning to work better" }, { "end": 179, "start": 176, "text": " and they led to the current deep learning revolution." }, { "end": 184, "start": 179, "text": " And he basically goes, no, Dr. Hinton's interesting unsupervised pre-training" }, { "end": 187, "start": 184, "text": " for deep neural networks was irrelevant" }, { "end": 189, "start": 187, "text": " for the current deep learning revolution." }, { "end": 193, "start": 189, "text": " In 2010 our team showed that the feed forward networks" }, { "end": 195, "start": 193, "text": " can be trained by plain backprop," }, { "end": 198, "start": 195, "text": " do not at all require pre-training." }, { "end": 201, "start": 198, "text": " And he basically again says," }, { "end": 203, "start": 201, "text": " apart from this Hinton's unsupervised pre-training" }, { "end": 207, "start": 203, "text": " was conceptually a rehash of my unsupervised pre-training" }, { "end": 210, "start": 207, "text": " for deep recurrent neural networks." }, { "end": 214, "start": 210, "text": " So he, you know, as you know, Schmidhuber has done a lot of work" }, { "end": 217, "start": 214, "text": " in recurrent neural networks and he basically says" }, { "end": 220, "start": 217, "text": " it was just a rehash of his algorithm." }, { "end": 223, "start": 220, "text": " Now I have to say I have," }, { "end": 227, "start": 223, "text": " so first of all, he makes a point here, right," }, { "end": 231, "start": 227, "text": " that we don't really do unsupervised pre-training anymore" }, { "end": 233, "start": 231, "text": " until now, of course." }, { "end": 236, "start": 233, "text": " But you, like, to train an MNIST classifier," }, { "end": 238, "start": 236, "text": " you don't have to do that." }, { "end": 242, "start": 238, "text": " But it's also doubtful that this was a step," }, { "end": 246, "start": 242, "text": " even though if it wasn't on the exact path" }, { "end": 249, "start": 246, "text": " to the current situation," }, { "end": 252, "start": 249, "text": " it was a thing that got people excited maybe." }, { "end": 255, "start": 252, "text": " And so the critique is like half valid." }, { "end": 258, "start": 255, "text": " And also it doesn't help Schmidhuber" }, { "end": 261, "start": 258, "text": " that he always compares it to his own things." 
}, { "end": 266, "start": 261, "text": " Like it just, like, either criticize them for, you know," }, { "end": 270, "start": 266, "text": " general things, but then avoid bringing your own things in" }, { "end": 273, "start": 270, "text": " because it just sounds like I did this before." }, { "end": 276, "start": 273, "text": " And also I read some papers from these times." }, { "end": 279, "start": 276, "text": " People just wrote papers sometimes." }, { "end": 281, "start": 279, "text": " I haven't read this specific one," }, { "end": 283, "start": 281, "text": " but sometimes people just wrote papers" }, { "end": 285, "start": 283, "text": " writing down their ideas." }, { "end": 288, "start": 285, "text": " Like, one could do this and this and this." }, { "end": 290, "start": 288, "text": " Never doing any experiments" }, { "end": 294, "start": 290, "text": " or actually specifying exactly what they mean." }, { "end": 296, "start": 294, "text": " They just kind of wrote down a bunch of ideas" }, { "end": 298, "start": 296, "text": " and that got published." }, { "end": 302, "start": 298, "text": " Especially, like, there's some reinforcement learning papers" }, { "end": 304, "start": 302, "text": " where people are just like, oh, one," }, { "end": 308, "start": 304, "text": " I imagine agents doing this and learning from that." }, { "end": 313, "start": 308, "text": " So it is, again, it is never really clear." }, { "end": 315, "start": 313, "text": " Ideas are just had by everyone." }, { "end": 318, "start": 315, "text": " I think people mistake this," }, { "end": 320, "start": 318, "text": " that think that ideas are unique." }, { "end": 322, "start": 320, "text": " It's not ideas that are unique." }, { "end": 326, "start": 322, "text": " Many people have the same ideas, but some..." }, { "end": 331, "start": 326, "text": " There's also execution and exact formalization and so on." }, { "end": 333, "start": 331, "text": " And exact level of specificity." }, { "end": 335, "start": 333, "text": " All of this is really hard." }, { "end": 339, "start": 335, "text": " And then the Honda says, in 2009, Dr. Hinton" }, { "end": 341, "start": 339, "text": " and two of his students used multilayer neural nets" }, { "end": 343, "start": 341, "text": " to make major breakthrough in speech recognition." }, { "end": 345, "start": 343, "text": " That led directly to greatly improved." }, { "end": 348, "start": 345, "text": " And this, of course, Schmidrub goes off by this" }, { "end": 355, "start": 348, "text": " because speech recognition is, of course, prime LSTM territory." }, { "end": 359, "start": 355, "text": " So you don't want to go near this." }, { "end": 361, "start": 359, "text": " And the Honda further says," }, { "end": 364, "start": 361, "text": " revolutionized computer vision" }, { "end": 366, "start": 364, "text": " by showing that deep learning worked far better" }, { "end": 368, "start": 366, "text": " than existing state of the art." }, { "end": 372, "start": 368, "text": " And again, he says the basic ingredients" }, { "end": 375, "start": 372, "text": " were already there and so on." }, { "end": 380, "start": 375, "text": " And our team in Switzerland already used" }, { "end": 384, "start": 380, "text": " his first superior award-winning GPU-based CNN and so on." }, { "end": 388, "start": 384, "text": " That's what was called DanNet, was produced by his group." }, { "end": 391, "start": 388, "text": " And again, this seems correct, right?" 
}, { "end": 393, "start": 391, "text": " This seems when he lays it out like this," }, { "end": 398, "start": 393, "text": " it doesn't change the fact that AlexNet won ImageNet in 2012." }, { "end": 403, "start": 398, "text": " And that was like the start of the deep learning revolution." }, { "end": 410, "start": 403, "text": " It was like, wow, you can cut the error rate by something like 30%" }, { "end": 414, "start": 410, "text": " simply by doing this deep learning stuff." }, { "end": 420, "start": 414, "text": " So again, even if DanNet says it blew away the competition," }, { "end": 425, "start": 420, "text": " it always seems like Schmidhuber is kind of right," }, { "end": 429, "start": 425, "text": " but then also he's not." }, { "end": 434, "start": 429, "text": " He's like, exact academic work" }, { "end": 437, "start": 434, "text": " and the idea being there on a paper" }, { "end": 442, "start": 437, "text": " isn't the only thing that drives progress." }, { "end": 445, "start": 442, "text": " And it says, to achieve their dramatic results," }, { "end": 449, "start": 445, "text": " Dr. Hinton also invented a widely used new method called dropout," }, { "end": 451, "start": 449, "text": " which uses overfitting." }, { "end": 456, "start": 451, "text": " No, like no, like no, just no." }, { "end": 461, "start": 456, "text": " Randomly dropping parts in order to make something more robust," }, { "end": 466, "start": 461, "text": " that is surely not a new thing." }, { "end": 473, "start": 466, "text": " And he also says much earlier, there's this stochastic delta rule and so on." }, { "end": 478, "start": 473, "text": " And he also critiques that this paper did not cite this." }, { "end": 480, "start": 478, "text": " They just gave it the name." }, { "end": 483, "start": 480, "text": " This is an idea that is kind of so simple" }, { "end": 487, "start": 483, "text": " that you wouldn't even necessarily think about researching" }, { "end": 489, "start": 487, "text": " whether that has existed already." }, { "end": 493, "start": 489, "text": " I think they just did it because it's a natural idea," }, { "end": 496, "start": 493, "text": " and then they gave it a name, and the name stuck." }, { "end": 499, "start": 496, "text": " It's not about the idea itself." }, { "end": 502, "start": 499, "text": " And then lastly, they say, of the countless AI-based technological services" }, { "end": 506, "start": 502, "text": " across the world, it is no exaggeration to say that few would have been possible" }, { "end": 509, "start": 506, "text": " without the results Dr. Hinton created." }, { "end": 510, "start": 509, "text": " I love this." }, { "end": 516, "start": 510, "text": " Name one that would not have been possible." }, { "end": 521, "start": 516, "text": " And he just gives a list of their own group" }, { "end": 525, "start": 521, "text": " that are basically possible without Hinton's contributions." }, { "end": 529, "start": 525, "text": " And this is just a bit of a cheap shot." }, { "end": 535, "start": 529, "text": " Clearly, Honda, they're not saying it would have been physically impossible." }, { "end": 538, "start": 535, "text": " Without his contributions." }, { "end": 547, "start": 538, "text": " But certainly Hinton has, even if he hadn't invented any of those things," }, { "end": 551, "start": 547, "text": " he certainly has created a spark." 
}, { "end": 555, "start": 551, "text": " And these things created a splash, got people excited," }, { "end": 560, "start": 555, "text": " people thinking about new ways of applying things, even if this is all true." }, { "end": 569, "start": 560, "text": " But I would like you to notice," }, { "end": 574, "start": 569, "text": " this is a critique of what Honda says about Hinton." }, { "end": 577, "start": 574, "text": " And if I read through the statements of Schmidhuber," }, { "end": 580, "start": 577, "text": " most of them are technically correct." }, { "end": 583, "start": 580, "text": " And you know, that was that." }, { "end": 585, "start": 583, "text": " And then I thought, OK, cool." }, { "end": 588, "start": 585, "text": " But then someone posted it on Reddit." }, { "end": 592, "start": 588, "text": " And then Hinton replies." }, { "end": 594, "start": 592, "text": " And this is..." }, { "end": 595, "start": 594, "text": " Don't you love this?" }, { "end": 598, "start": 595, "text": " So Hinton says," }, { "end": 602, "start": 598, "text": " Having a public debate with Schmidhuber about academic credit is not advisable" }, { "end": 604, "start": 602, "text": " because it just encourages him." }, { "end": 608, "start": 604, "text": " And there is no limit to the time and effort that he is willing to put" }, { "end": 613, "start": 608, "text": " into trying to discredit his perceived arrivals." }, { "end": 617, "start": 613, "text": " He is even escorted to tricks like having multiple aliases in Wikipedia" }, { "end": 620, "start": 617, "text": " to make it look as if other people agree." }, { "end": 625, "start": 620, "text": " The page on his website about Alan Turing is a nice example" }, { "end": 627, "start": 625, "text": " of how he goes on trying to..." }, { "end": 631, "start": 627, "text": " These are shots fired." }, { "end": 634, "start": 631, "text": " And he says, I'm going to respond once and only once." }, { "end": 638, "start": 634, "text": " I have never claimed that I invented backpropagation." }, { "end": 643, "start": 638, "text": " David Rumelhard invented it independently." }, { "end": 649, "start": 643, "text": " After other people in other fields had invented it." }, { "end": 652, "start": 649, "text": " It's true. When we first published, we did not know the history." }, { "end": 658, "start": 652, "text": " So he basically says, OK, we did forget to cite it when we first published" }, { "end": 664, "start": 658, "text": " about backprop, but he doesn't say he invented it." }, { "end": 666, "start": 664, "text": " What I've claimed is that I was the person to clearly demonstrate that" }, { "end": 669, "start": 666, "text": " backprop could learn interesting internal representations" }, { "end": 672, "start": 669, "text": " and that this is what made it popular." }, { "end": 674, "start": 672, "text": " So this goes into the direction." }, { "end": 677, "start": 674, "text": " Schmidhuber is very much on academic contributions." }, { "end": 678, "start": 677, "text": " Idea was there before." }, { "end": 682, "start": 678, "text": " And Hinton basically says, no, what we did is kind of we showed" }, { "end": 687, "start": 682, "text": " that it works in this particular way and we kind of got people excited about it." }, { "end": 694, "start": 687, "text": " I did this by forcing that, blah, blah, blah." 
}, { "end": 698, "start": 694, "text": " And he says, it is true that many people in the press have said" }, { "end": 701, "start": 698, "text": " I invented backprop and I've spent a lot of time correcting them." }, { "end": 708, "start": 701, "text": " Here is an excerpt from 2018 where this is, I guess, a quote from this book" }, { "end": 713, "start": 708, "text": " that quotes Hinton where he says, lots of people invented different versions" }, { "end": 715, "start": 713, "text": " of backprop before David Rumelhart." }, { "end": 721, "start": 715, "text": " They were mainly independent inventions, something I feel I've got too much credit for." }, { "end": 726, "start": 721, "text": " It's one of these rare cases where an academic feels he has gotten too much credit for something." }, { "end": 730, "start": 726, "text": " My main contribution was to show you can use it for learning distributor representations." }, { "end": 734, "start": 730, "text": " So I'd like to set the record straight on that." }, { "end": 740, "start": 734, "text": " Then he says, maybe Juergen would like to set the record straight on who invented LSTMs." }, { "end": 743, "start": 740, "text": " Boom, boom." }, { "end": 744, "start": 743, "text": " Crazy." }, { "end": 748, "start": 744, "text": " Shots fired by Hinton here." }, { "end": 751, "start": 748, "text": " This is, I mean, this is just great." }, { "end": 755, "start": 751, "text": " But again, look at what Hinton says." }, { "end": 759, "start": 755, "text": " Hinton basically says, yes, I have not invented that." }, { "end": 764, "start": 759, "text": " I have corrected this on public record in the past." }, { "end": 768, "start": 764, "text": " And yeah, so that's what Hinton says." }, { "end": 774, "start": 768, "text": " And I mean, the comments here are just gold." }, { "end": 776, "start": 774, "text": " I really invite you to read it." }, { "end": 781, "start": 776, "text": " And then Schmidhuber, of course, being Schmidhuber, replies again." }, { "end": 786, "start": 781, "text": " Down here, he has a response to the reply." }, { "end": 790, "start": 786, "text": " And I don't expect Hinton to reply again, so I waited for a bit." }, { "end": 793, "start": 790, "text": " But I believe Hinton when he says he does it only once." }, { "end": 797, "start": 793, "text": " So he goes into this." }, { "end": 805, "start": 797, "text": " It just says, summary, the facts presented in sections one, two, three, four, five are still valid." }, { "end": 808, "start": 805, "text": " So he goes kind of statement by statement." }, { "end": 811, "start": 808, "text": " Is this having a public debate? Blah, blah, blah." }, { "end": 815, "start": 811, "text": " And he just says, well, this is an ad hominem attack, which is true." }, { "end": 816, "start": 815, "text": " Right. This is true." }, { "end": 820, "start": 816, "text": " And he says he even has multiple aliases in Wikipedia." }, { "end": 825, "start": 820, "text": " And he just says another ad hominem attack." }, { "end": 830, "start": 825, "text": " And then he goes into that Schmidhuber tries to discredit Alan Turing." }, { "end": 841, "start": 830, "text": " And then Schmidhuber goes into this big, long, big, long, basically claim that Alan Turing wasn't as important as people made him out to be." }, { "end": 846, "start": 841, "text": " And people invented this kind of Turing machine equivalence before that." 
}, { "end": 853, "start": 846, "text": " Again, it's kind of Schmidhuber's take that the idea basically was already there and these people don't get the correct credit." }, { "end": 865, "start": 853, "text": " And also he's correct that this is a true it's an ad hominem attack." }, { "end": 869, "start": 865, "text": " Right. So, you know, be it as it may." }, { "end": 871, "start": 869, "text": " This is correct." }, { "end": 881, "start": 871, "text": " And then when Hinton goes that he didn't invent Backprop and Schmidhuber says this is finally a response related to my post, which is true." }, { "end": 885, "start": 881, "text": " Right. However, it does not at all contradict what I wrote." }, { "end": 896, "start": 885, "text": " And it is true that Hinton credited his co-author Rommelhardt with the invention, but neither cited Lin-Neymar and also the statement, lots of people." }, { "end": 901, "start": 896, "text": " He says it wasn't created by lots of different people, but exactly one person." }, { "end": 916, "start": 901, "text": " So this I find, like, can you really say now this is the exact time when Backprop was invented, even though it probably wasn't in the exact current formulation." }, { "end": 919, "start": 916, "text": " And it probably existed somewhat like this." }, { "end": 931, "start": 919, "text": " So, but again, and he his main claim is Dr. Hinton accepted the Honda price, although he apparently agrees that Honda's claims are false." }, { "end": 934, "start": 931, "text": " He should ask Honda to correct their statements." }, { "end": 938, "start": 934, "text": " And maybe you're going to would like to set the record straight." }, { "end": 951, "start": 938, "text": " We invented LSTMs and, you know, as we as you may know, Sepp Hochreiter kind of invented LSTMs under Jürgen Schmidhuber as a PhD advisor." }, { "end": 962, "start": 951, "text": " But the to summarize Dr. Hinton's comments and ad hominem arguments diverge from the contents of my post and do not challenge the facts and so on." }, { "end": 966, "start": 962, "text": " And I have to say after reading this, this is this is correct." }, { "end": 977, "start": 966, "text": " Right. Hinton basically replies to, hey, I, I never claimed I invented Backprop and other people have invented it." }, { "end": 981, "start": 977, "text": " And Schmidhuber doesn't criticize Hinton in this particular post." }, { "end": 991, "start": 981, "text": " He may otherwise. Schmidhuber doesn't criticize Hinton for claiming that he criticizes Honda for claiming that Hinton did." }, { "end": 994, "start": 991, "text": " And Hinton doesn't. Hinton basically agrees with him." }, { "end": 1000, "start": 994, "text": " And also Schmidhuber says Dr. Hinton accepted the Honda price, although he apparently agrees that the claims are false." }, { "end": 1003, "start": 1000, "text": " He should ask Honda to correct their statements." }, { "end": 1008, "start": 1003, "text": " And it is true that Hinton accepted this price under this release." }, { "end": 1015, "start": 1008, "text": " Right. Now, you might be able to say Hinton also says he's on the record basically saying he didn't do this." }, { "end": 1021, "start": 1015, "text": " And I guess if you're Hinton and you know, you've had this you've had the successful career and so on." }, { "end": 1028, "start": 1021, "text": " And you have previously really publicly stated that you didn't invent these things and, you know, made it clear." 
}, { "end": 1031, "start": 1028, "text": " And then you get a prize and they write this thing." }, { "end": 1037, "start": 1031, "text": " Maybe you just don't want to go after every single press statement and correcting that." }, { "end": 1047, "start": 1037, "text": " But, you know, in essence, basically Hinton understood this as an attack on himself that he claims he invented Backprop." }, { "end": 1052, "start": 1047, "text": " And Schmidhuber says Honda claims Hinton invented Backprop and Hinton accepted the price." }, { "end": 1062, "start": 1052, "text": " So he agrees with it and Hinton basically agrees with it, but doesn't say Honda should have corrected it, which I can understand." }, { "end": 1066, "start": 1062, "text": " So this is my take on this issue." }, { "end": 1073, "start": 1066, "text": " It's kind of both are correct and they just kind of talk past each other." }, { "end": 1079, "start": 1073, "text": " And Schmidhuber is always on the idea existed before." }, { "end": 1085, "start": 1079, "text": " And Hinton is correct when he says it's not always just about the idea." }, { "end": 1097, "start": 1085, "text": " Progress is also made by people being excited, people actually getting something to work, people, you know, doing something at the right time, the right place, which is also correct." }, { "end": 1100, "start": 1097, "text": " But it is fun. It is fun." }, { "end": 1118, "start": 1100, "text": " So I just I enjoyed I enjoy this honestly, like because ultimately this is the kind of discussions also need to happen in science because credit assignment is an important thing in science." }, { "end": 1128, "start": 1118, "text": " And even though sometimes it's over the top, like Schmidhuber always going after it, I think we need people like him just kind of to keep the field in check a bit." }, { "end": 1131, "start": 1128, "text": " And yeah, I will link to all of this." }, { "end": 1135, "start": 1131, "text": " I hope you enjoy this and I wish you a nice rest of the weekend." }, { "end": 1140, "start": 1135, "text": " If you're still here, consider subscribing and leave comment if you want." }, { "end": 1159, "start": 1140, "text": " I usually read them. Bye bye." } ]
gJR28onlqzs
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
How much memory does Longformer use?
[ "Science & Technology" ]
[ "deep learning", "machine learning", "nlp", "natural language processing", "machine translation", "arxiv", "google", "attention mechanism", "attention", "transformer", "tensor2tensor", "rnn", "recurrent", "seq2seq" ]
A calculation of the memory requirements of the Longformer. Original video: https://youtu.be/_8KNb5iqblE Paper: https://arxiv.org/abs/2004.05150 Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
So I wanted to come back to this paper about the Longformer. I have done a video on this; if you haven't seen it, then this video is probably not going to make much sense to you. In that video I go over what the Longformer is, what it does, and how it compares. The gist of the Longformer is that it can now run a transformer model over a long document, as you can read here. So I've gotten a lot of questions like: does that mean we can now have much longer documents? The BERT model doesn't fit into my memory, can this solve my problem? And I just want to go into the math of the Longformer's memory requirements here, because I've alluded to it before, and I think the graphics here are a bit misleading compared to the way they actually implement it. Now, I've already gone over something like this in the last video. So RoBERTa, that is their baseline, has a maximum sequence length, let's call it N0, of 512. So it can handle 512 tokens at the same time, and if you have a sequence that is way longer than 512, you need to chunk it up into pieces of 512, usually with something like overlapping pieces. Now the promise of the Longformer, as it is in the paper, is that you can put all of this into the Longformer at once, and it will do this sliding-window attention thing: it basically slides a window across the input sequence and only does local attention within the window. On top of that, it also has some global attention that is always active. Now what I find interesting is that in their experiments the Longformer window size is 512. So within that window you have the classic N-squared full attention. So let's just go into it: how much memory does the Longformer really use? We've already calculated this a bit, but I want to take it apart further. As you can see on the left, you have N times W for this middle band; this middle band is N times W. Then you want to add the global attention. You can already see it right here: if you have, say, four locations of global attention, you have four times two, because you also have them in the other direction, that is, in both directions, times your full sequence length. So plus two times the full sequence length times the number of global attention locations, which I call S over here. As we saw up here, the window size was N0 in their experiments, so let's replace the window size by N0 and factor out the N: we get N times (N0 plus 2S). Alright, so you can already see that RoBERTa originally had N0 squared. If N is larger than N0, that means you already use more memory here. The kind of trick (it's not really a trick) is that this is indeed order of N in the input sequence length, but it would technically be order of N squared if the window itself scaled with N. And the sequence length of RoBERTa is exactly the window size of the Longformer, so RoBERTa uses N0 squared, while here you technically have to say the Longformer uses N times N0. So if N is larger than N0, this uses more memory. In other words, in their experiments they use a model that on paper uses more memory than the baseline model, and saying that it scales linearly with sequence length is, well, of course it scales linearly, because they can now input these long sequences.
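To put rough numbers on this, here is a small Python sketch of the simplified attention-memory counts being compared here. The bookkeeping (one unit per query-key pair, a couple of made-up global tokens) is my own illustration of the calculation in the video, not the paper's implementation, which has constant factors this ignores.

def roberta_cells(n0):
    # full self-attention over one chunk of length n0: n0^2 query-key pairs
    return n0 * n0

def longformer_cells(n, w, s):
    # sliding-window band (n * w) plus s global tokens that attend to,
    # and are attended by, all n positions (2 * s * n)
    return n * (w + 2 * s)

n0 = 512                             # RoBERTa's sequence length = the Longformer window here
n, s = 2 * n0, 2                     # a document twice as long, two global locations (made up)

print(roberta_cells(n0))             # 262144 per 512-token chunk
print(longformer_cells(n, n0, s))    # 528384 -> more than the baseline, on paper

# To stay within RoBERTa's budget at the doubled length, solve n * (w + 2*s) <= n0^2 for w:
print(n0 * n0 // n - 2 * s)          # 252 -> roughly half the window, minus the global rows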
And the attention, sorry, the memory requirement, scales basically linearly with sequence length, and also linearly with the window size. Now the window size apparently still needs to be large-ish in order to achieve the performance. So the fact that the performance is equal or better is not really a surprise, because the model uses more memory; it's not like this model uses less memory and outperforms the old one, it uses more. If you want to look at it fairly, you have to ask: okay, with RoBERTa I can right now do N0 squared, so this square here is N0 by N0, the sequence length I can put into RoBERTa. Then you ask yourself what kind of sequence you actually want to put in. If you want to put in a sequence that's twice as long, so N would be twice N0, then you take that same memory budget, redistribute it, and you realize that the window size of the Longformer can only be half as big. So with the same amount of memory you can double your sequence length, at the cost of halving your window size, and that doesn't yet include the cost of the global attention. Any global attention you add comes basically out of the window size. You see this here: you decide how long you want your input sequence to be (that's this rectangle here), and then you decide how many global attentions you want. Say I want one global attention: then you have to cross out rows for it, and what remains is your window. Actually, you have to cross out two rows per global attention, but we only have one row left in this small example; you get the point. You cross out two times S rows, where S is the number of global attentions you want, and what remains is your window size; in this case it's just a window size of one. So that's how you would construct a Longformer that takes the same amount of memory as your classic model but can ingest the full sequence length N. Alright, I just wanted to make that clear and go through the calculation myself, and I hope that helped. Thanks for listening, and if you liked this, consider subscribing and liking. Bye bye.
[ { "end": 7.32, "start": 0, "text": " So I wanted to come back to this paper here about the longformer. I have done a" }, { "end": 11, "start": 7.32, "text": " video on this. If you haven't seen it then this video is probably not going to" }, { "end": 16.32, "start": 11, "text": " make much sense to you. But in the video I go over what the longformer is, what it" }, { "end": 21.52, "start": 16.32, "text": " does, how it compares and so on. And the gist of the longformer is that it can" }, { "end": 31.36, "start": 21.52, "text": " now do a transformer model on a long document as you can read here. So I've" }, { "end": 35.96, "start": 31.36, "text": " gotten a lot of questions of like does that mean we can now have much longer" }, { "end": 41.2, "start": 35.96, "text": " documents, right? The BERT model doesn't fit into my memory, can this solve my" }, { "end": 47.519999999999996, "start": 41.2, "text": " problem? And I just kind of want to go into the math of the longformer memory" }, { "end": 54.88, "start": 47.52, "text": " requirements here because I think I've alluded to it but it is quite a..." }, { "end": 62.400000000000006, "start": 54.88, "text": " I think the graphics here are just a bit misleading from the way they implement" }, { "end": 68.36, "start": 62.400000000000006, "text": " it. Now I've already gone over something like this in the last thing. So Roberta," }, { "end": 77.12, "start": 68.36, "text": " let's spell this correctly, Roberta, that is their baseline, has a size, a" }, { "end": 88.4, "start": 77.12, "text": " let's call that N0 of 512. So they can have 512 tokens at the same time. So if" }, { "end": 94.48, "start": 88.4, "text": " you have a sequence that is way longer than 512 you need to chunk it up into" }, { "end": 100.60000000000001, "start": 94.48, "text": " pieces of 512 and usually you do something like overlapping pieces or" }, { "end": 106, "start": 100.60000000000001, "text": " something like this, right? And now the promise of the longformer as it is in" }, { "end": 114.28, "start": 106, "text": " the paper is that you can put all of this into the longformer, right? And it" }, { "end": 121.84, "start": 114.28, "text": " will do this sliding window attention thing where it basically slides a window" }, { "end": 129.24, "start": 121.84, "text": " here, this window, across this input sequence and only does this local" }, { "end": 134.56, "start": 129.24, "text": " attention, right, within the window. And then it also has some global attention" }, { "end": 140.56, "start": 134.56, "text": " that it constantly has. Now what I find interesting is that in their experiments" }, { "end": 149.28, "start": 140.56, "text": " their window size here, so the longformer window size is 512, right? So" }, { "end": 158.84, "start": 149.28, "text": " within that window you have the classic N squared full attention, right? So let's" }, { "end": 166.92000000000002, "start": 158.84, "text": " just go into that. How much memory does the longformer really do? We've" }, { "end": 175.68, "start": 166.92000000000002, "text": " already calculated it here a bit but I want to take this still apart a bit. So" }, { "end": 185.92000000000002, "start": 175.68, "text": " as you can see on the left here you have N times W that you have for this middle" }, { "end": 194, "start": 185.92, "text": " band, right? So this middle band is N times W. Then you want to add the global" }, { "end": 200.04, "start": 194, "text": " attention, right? 
So the global attention, you can already see it right here, if you" }, { "end": 209.04, "start": 200.04, "text": " have one, two, three, four locations of global attention you have four times two" }, { "end": 214.16, "start": 209.04, "text": " because you also have them in this direction, right? You have them in both" }, { "end": 221.96, "start": 214.16, "text": " directions times your full sequence length. So plus two times full sequence" }, { "end": 231.07999999999998, "start": 221.96, "text": " length times the number of global attention. I call this S over here. So as" }, { "end": 242.07999999999998, "start": 231.07999999999998, "text": " we saw up here the window size here was N zero in their experiments. So let's" }, { "end": 251.24, "start": 242.08, "text": " replace this window size by N zero and actually let's factor out the N. So we'll" }, { "end": 268.04, "start": 251.24, "text": " get to N times N zero plus 2S. Alright, so you can already see that Roberta" }, { "end": 278.96000000000004, "start": 268.04, "text": " originally had N zero squared. Now if N is larger than N zero that means you" }, { "end": 287.84000000000003, "start": 278.96000000000004, "text": " already use more here. The kind of trick, it's not really a trick, it is" }, { "end": 297.44, "start": 287.84000000000003, "text": " true that this is order of N, right? If N is your input sequence length but in" }, { "end": 307.4, "start": 297.44, "text": " this here is technically order of N squared if N, if this is N. But the" }, { "end": 314.42, "start": 307.4, "text": " sequence length in Roberta was the window size of the long former. So this" }, { "end": 320.4, "start": 314.42, "text": " is N zero squared, right? And here technically you'd have to say this is N" }, { "end": 330.32, "start": 320.4, "text": " times N zero. So if N is larger than N zero you can see that this uses more" }, { "end": 338.03999999999996, "start": 330.32, "text": " memory given that. So in their experiments they use a model that on" }, { "end": 345.76, "start": 338.03999999999996, "text": " paper uses more memory than the baseline model and saying that it scales" }, { "end": 351.88, "start": 345.76, "text": " linearly with sequence length is because, I mean of course it scales linearly" }, { "end": 358.56, "start": 351.88, "text": " because they can now input these long sequences, right? And the attention, sorry" }, { "end": 364.15999999999997, "start": 358.56, "text": " the memory requirements scales basically linear and also linear with the window" }, { "end": 371.48, "start": 364.15999999999997, "text": " size. Now the window size still needs to be apparently large-ish in order to" }, { "end": 376.08000000000004, "start": 371.48, "text": " achieve the performance. So the fact that the performance is equal or better is" }, { "end": 386.32, "start": 376.08000000000004, "text": " not really a secret because it uses more memory, right? It's not like this model" }, { "end": 395.28000000000003, "start": 386.32, "text": " uses less memory but outperforms the old one, it uses more. If you want to look at" }, { "end": 407.52, "start": 395.28, "text": " it you have to ask, okay I have Roberta and right now I can do N squared. So this" }, { "end": 413.35999999999996, "start": 407.52, "text": " is N, this is N, so there's N zero, N zero. This is my sequence length that I can put" }, { "end": 419.35999999999996, "start": 413.35999999999996, "text": " into Roberta. 
You have to ask yourself what kind of sequence do I want to put" }, { "end": 429.44, "start": 419.36, "text": " in? And if you say I want to put in a sequence that's twice as long, right? I" }, { "end": 436.88, "start": 429.44, "text": " want to put in this long of a sequence, so N here would be twice N zero. Then you" }, { "end": 444.96000000000004, "start": 436.88, "text": " have to take this, put it here, put it here and then you realize, yes, that your" }, { "end": 451.32, "start": 444.96, "text": " window size of the long former can only be half, right? So if you have the same" }, { "end": 456.12, "start": 451.32, "text": " amount of memory you can double your sequence length at the cost of having" }, { "end": 462.91999999999996, "start": 456.12, "text": " your window size but that doesn't yet include the cost of the global" }, { "end": 468.56, "start": 462.91999999999996, "text": " attention. So any global attention you do will come basically on top of the window" }, { "end": 478.12, "start": 468.56, "text": " size. You see this here, right? So you decide on, let's do it like this, you" }, { "end": 484.04, "start": 478.12, "text": " decide on how long you want your thing, your input sequence length to be, then" }, { "end": 489.04, "start": 484.04, "text": " you decide, and that means that's this rectangle here, then you decide how many" }, { "end": 496.48, "start": 489.04, "text": " global attentions do I want and here I say I want one global attention and you" }, { "end": 502.16, "start": 496.48, "text": " have to cross out as many rows here as you want global attention and what" }, { "end": 507.84000000000003, "start": 502.16, "text": " remains is your window. Actually you have to cross out twice but we don't have, we" }, { "end": 513.52, "start": 507.84000000000003, "text": " only have one left, but you get the point. You have to cross out two times S rows" }, { "end": 520.12, "start": 513.52, "text": " of how many global attentions you want and what remains will be your window" }, { "end": 525.9200000000001, "start": 520.12, "text": " size. In this case it's just a window size of one. So that's how you would" }, { "end": 533.1999999999999, "start": 525.92, "text": " construct a longformer that takes in the same amount of memory as a your classic" }, { "end": 541.92, "start": 533.1999999999999, "text": " model but can take a full n sequence length. Alright? So I just wanted to kind" }, { "end": 549.8399999999999, "start": 541.92, "text": " of make that clear, go through the calculation myself and I hope that helped." }, { "end": 556.44, "start": 549.84, "text": " Thanks for listening and if you liked this consider subscribing, liking and" }, { "end": 580.36, "start": 556.44, "text": " bye bye." } ]
MpdbFLXOOIw
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Supervised Contrastive Learning
[ "Science & Technology" ]
[ "deep learning", "machine learning", "supervised learning", "classification", "classifier", "labels", "pretraining", "unsupervised", "self-supervised", "representation learning", "representations", "hidden space", "loss function", "google", "mit", "imagenet" ]
The cross-entropy loss has been the default in deep learning for the last few years for supervised learning. This paper proposes a new loss, the supervised contrastive loss, and uses it to pre-train the network in a supervised fashion. The resulting model, when fine-tuned to ImageNet, achieves new state-of-the-art. https://arxiv.org/abs/2004.11362 Abstract: Cross entropy is the most widely used loss function for supervised training of image classification models. In this paper, we propose a novel training methodology that consistently outperforms cross entropy on supervised learning tasks across different architectures and data augmentations. We modify the batch contrastive loss, which has recently been shown to be very effective at learning powerful representations in the self-supervised setting. We are thus able to leverage label information more effectively than cross entropy. Clusters of points belonging to the same class are pulled together in embedding space, while simultaneously pushing apart clusters of samples from different classes. In addition to this, we leverage key ingredients such as large batch sizes and normalized embeddings, which have been shown to benefit self-supervised learning. On both ResNet-50 and ResNet-200, we outperform cross entropy by over 1%, setting a new state of the art number of 78.8% among methods that use AutoAugment data augmentation. The loss also shows clear benefits for robustness to natural corruptions on standard benchmarks on both calibration and accuracy. Compared to cross entropy, our supervised contrastive loss is more stable to hyperparameter settings such as optimizers or data augmentations. Authors: Prannay Khosla, Piotr Teterwak, Chen Wang, Aaron Sarna, Yonglong Tian, Phillip Isola, Aaron Maschinot, Ce Liu, Dilip Krishnan Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Hi there, today we're looking at Supervised Contrastive Learning by people from Google Research and MIT. Now this paper proposes a new loss for supervised learning, and you might recognize that this is a big claim. Forever now we've basically used the cross entropy loss for supervised training of neural networks, and this paper proposes to replace that with the supervised contrastive loss. Let's jump straight into the results. They say their supervised contrastive loss outperforms the cross entropy loss with standard data augmentations such as AutoAugment and RandAugment; these are some of the previous state-of-the-art data augmentation techniques used together with the cross entropy loss. You can see here on ImageNet, which is the biggest vision benchmark, or the most famous one, that this new loss, the supervised contrastive loss, outperforms these other methods by something like a percent, and one percent is a big improvement on ImageNet right now. So this is a big claim, right? If it is true, it could be a game changer for basically all of supervised learning, and supervised learning is really the only thing right now in deep learning that works, so it could revolutionize the field. So here's the but: it is actually not a new loss to replace the cross entropy loss, and they do get to this pretty quickly. I don't think they're being dishonest or lying here, but if you just start reading, you think this is a new loss. It is not. It is a new way of pre-training the network for a classification task. So let's look into this. If you look at what it means to build a classifier, this is what you usually do: supervised cross entropy training. You have an image, here an image of a dog, you put it through your network, and you obtain a representation. The representation is this last layer, or the second-to-last layer. You put that through a classification layer and then a softmax, and what you get as output is basically a probability distribution. Let's say you have three classes: dog, cat, and horse. And let's say the network isn't trained very well yet, so the probability for dog here is fairly low. This is what the network thinks of that image: which class it belongs to, with what probability. You also have the label, the label 'dog' for that image, and what you do with it is build a one-hot vector, which would look like this: the one is at the position of the correct class. The cross entropy loss then takes all of this and does the following: there's a sum over all your classes (in this case three), and let's call the labels L. For each class, you take the label of the class times the log probability that the network assigns to that class. You can quickly see that where the label is zero, so for all the incorrect classes, the entire term drops away, and only the correct class, where the label is one, contributes: the log probability of the correct class. So the whole thing reduces to the log probability of the correct class, which is what you want to maximize. To turn it into a loss, you put a negative sign in front: you minimize the negative log probability of the correct class, which means you maximize the probability of the correct class.
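To see these mechanics concretely, here is a tiny numpy sketch of the three-class example; the logit values are made up for illustration.

import numpy as np

def softmax(z):
    z = z - z.max()                    # subtract the max for numerical stability
    e = np.exp(z)
    return e / e.sum()

logits = np.array([0.5, 2.0, 1.0])     # network scores for [dog, cat, horse]
label = np.array([1.0, 0.0, 0.0])      # one-hot label: the image is a dog

p = softmax(logits)
loss = -np.sum(label * np.log(p))      # only the correct class survives the sum
grad = p - label                       # the well-known gradient of this loss w.r.t. the logits

print(p)      # roughly [0.14 0.63 0.23]: low probability on "dog"
print(loss)   # -log(0.14), about 1.96
print(grad)   # negative for "dog" (pushed up), positive for the rest (pushed down)

The gradient line anticipates the point made next: minimizing this loss raises the correct logit and, through the softmax normalization, lowers all the others.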
If you've never looked at the cross entropy loss like this, it is important to notice the following. You might say: hey, all this does is pull the correct class up, and it doesn't do anything to the other ones. But you have to realize that through the softmax operation (this is a probability distribution, so everything is normalized to sum up to one) you implicitly push the other classes down. So what this does is push the correct class up and push the other classes down through the normalization. Looking at it this way is going to be important later. Because consider what the representation does: again, the network produces a representation, say 2000-dimensional, and then it adds this classification layer on top, which is simply a linear layer followed by a softmax. The way to imagine this is that there is a representation space, this 2000-dimensional space, and, let's have three classes here, the representations are made in such a way that a linear classifier can separate the classes correctly. So here this would be one boundary, this would be another boundary, and this maybe would be another decision boundary. You can see that a linear classifier can separate the classes well; that is the goal. If you use the softmax cross entropy loss, this is implicitly what will happen in the representation space W: all it cares about is that each class is on one side of its decision boundary and everything else is on the other side. So if the network isn't trained very well at the beginning and you have, say, a sample of the green class here, the loss will push the network such that the representation of that sample moves onto the correct side of the decision boundary, and it will push the decision boundary at the same time to make that happen more easily. It optimizes all of this at once; that's how you optimize the representations. Now this work, and other work before it, has said: wouldn't it be great if the representations and the decision boundaries weren't just trained at the same time, but we learned good representations first, such that classifying them becomes very simple? In essence, what this paper says is: if we have a representation space W, shouldn't we just make images of the same class close together? Without caring about decision boundaries, we just want them to be close to each other and far apart from the other classes. If that happens, a linear classifier is going to have a very easy time separating these classes later. And that's exactly what this paper does. It has a pre-training stage and a training stage. In the pre-training stage (this is over here, 'supervised contrastive') it simply tries to learn representations such that, without any decision boundaries involved, images of the same class are close together and images of different classes are far apart. Notice the subtle difference to the cross entropy loss, where you just care about samples being on one or the other side of a decision boundary.
That is stage one. Then in stage two, and this is where the cross entropy comes back in, you basically freeze the network. You freeze these weights down here; they are frozen, you don't train them anymore. You actually freeze the representation layers as well; all you train is this one classification layer on top, in stage two, and you train it using a softmax and the cross entropy loss. So you train the classifier in the old cross entropy way, using just normal supervised learning. What we see here is that the stage-one pre-training is what trains the network, and the cross entropy loss only trains the classifier. Right, so let's look at how this pre-training actually works. What it uses is a method called contrastive pre-training; they have a little diagram up here. To understand the classic way of doing contrastive pre-training, you have to go to the unsupervised pre-training literature. People have discovered that they can improve a neural network by pre-training it first in an unsupervised way; some of these methods are also called self-supervised. The advantage of self-supervised or unsupervised pre-training is that you don't need labels. What you want is simply to make the representation space somewhat meaningful: you want the network to learn representations of images that are somehow meaningful. And here's how you do it. You take an image, like this dog here, and you randomly augment it, which just means you produce different versions of the same image. In this case down here it's a random crop (it's cropped about here); it's still the same image, but a different version of it. In this case here, it's flipped left-right and the brightness is slightly increased. So these are just different versions of the same image. What you also want are what's called negatives. Negatives are simply different images from your data set: for example this, or this, or this. You don't care, as long as they're different; you just sample a bunch. Now to the embedding space: they make a big deal here of the embeddings being normalized, which seems to work better, but that is not necessary for the idea to work. The big idea is this: if you have an image right here, let's say this is the dog, the blue dots are the augmented versions of the same dog, and the green dots are all the other images in the data set, then what you want is that all the images that come from the same original image are pulled close together, and everything else is pushed apart. That's why the former are called positives and the latter negatives. Contrastive training basically means you always have a set that you pull together in representation space and a set, the negatives, that you push apart. This way the network learns about these random transformations: it learns what it means to come from the same image, it learns to be robust to these kinds of transformations, and it learns about the data in general and how to spread it in embedding space. This usually ends up in a pretty good representation space, and people have been using it in recent years to gain significant improvements.
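As a side note, here is a minimal sketch of what such random augmentations could look like on a raw image array: crop, flip, and brightness jitter, as described above. The crop fraction and jitter range are arbitrary choices for illustration, not the paper's settings.

import numpy as np

def random_view(img, rng):
    # produce one randomly augmented "view" of an H x W x C image array
    h, w = img.shape[:2]
    top = rng.integers(0, h // 4 + 1)                  # random crop to 3/4 size
    left = rng.integers(0, w // 4 + 1)
    view = img[top:top + 3 * h // 4, left:left + 3 * w // 4].copy()
    if rng.random() < 0.5:                             # random horizontal flip
        view = view[:, ::-1]
    return np.clip(view * rng.uniform(0.8, 1.2), 0, 255)  # brightness jitter

rng = np.random.default_rng(0)
img = rng.uniform(0, 255, size=(64, 64, 3))
view_a, view_b = random_view(img, rng), random_view(img, rng)  # two positives of one image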
Now the problem, if you specifically do this to pre-train a classifier, is the thing they show on the right. On the left here you have a picture of a dog, but since you do this self-supervised, without the labels, it can happen that this image here shows up in the negatives, even though it is also of a dog. So this image might end up being, say, this green dot here, and you see what happens to it: being a negative, it's going to get pushed apart. And that is going to make the task for the later classifier much harder, because if two dogs are pushed apart from each other, how is a linear classifier going to have them on the same side of the decision boundary while having everything else on a different side? So this objective is implicitly making the task for the later classifier harder by pushing apart samples that should be of the same class. This does not happen if you introduce labels into the pre-training objective, and that's what they do: the supervised contrastive objective. Again, we're going to draw the same embedding space, with the original dog image and the augmented versions of the original dog image. But now we also have these other images of the same class (we'll put them in black here) and, as smaller black dots around them, their augmented versions; you can augment those as well. And then you have the negative samples, which are now not just any images, but specifically images of different classes. You just go over your mini-batch: everything that's of the same class becomes a positive, including its augmentations, and everything that is not of the same class becomes a negative, and those you can augment as well. So now we have a bunch of things in our embedding space, and the objective is simply going to be: we push away all the images that are not of the same class as our original red image, which is called the anchor. All of those get pushed away. And we pull together not only all the augmented versions of the original image, but also all the other images of the same class, including their augmented versions. All of that is pulled together. So not only does the network learn about the augmentations (which, for this idea, aren't even strictly necessary); it learns a representation space where images of the same class are close together, which again makes the task of the later linear classifier, which needs to separate this class from the other classes, very, very easy. And the other images aren't just pushed away: if, say, these two images are from the same class as each other, they get pushed apart from our red dot while among themselves being pulled together into their own cluster of their own class. I hope this makes sense, and I hope the difference to the cross entropy objective is clear: the cross entropy objective, from the very beginning, just cares about which side of the decision boundary you're on, while this pre-training objective first cares about putting things of the same class close together, and then the decision classifier will have a much easier time.
Now, why this works better is not entirely clear from the beginning, because it's working with the same information. It's just that people have generally found that these contrastive pre-training objectives are somewhat better at exploiting the information in the data set than if you just hammer on it with the cross entropy loss from the start. But it is not fully explained yet why it works better, given that it works with the same data. Again, the difference is that the previous contrastive pre-training methods, the self-supervised ones, did not have access to the labels; the advantage of that is that you can pre-train on a giant database of additional unlabeled data. Here, we do the pre-training including the labels: the label 'dog' is an intrinsic part, because we need to know which samples to pull together. But that also means we cannot leverage extra unlabeled data, and unlabeled data is pretty cheap to obtain. Those are the advantages and disadvantages. Now, they do compare the formulation of their loss here. Usually in these contrastive objectives you have something like two encoders, one to encode the anchor and one to encode the augmented versions, where one is, say, a momentum encoder with shared weights and so on. All of that isn't really important here; if you want to look into it, look into papers like Momentum Contrast, or the video I did on CURL for reinforcement learning. The general gist is clear. So they compare their loss to the self-supervised one, which usually takes a form like this: z_i is the anchor here, and z_j is the positive example, and the inner product between the anchor and the positive example should be high, because the loss is the negative of what's inside. So if you minimize the loss, you're saying: I want the inner product between my anchor and the positive sample to be high, and for everything in the denominator (which includes the term on top, but also everything else) I want the inner product to be low. Which is exactly the mechanism whereby you pull together the positives and push apart everything else. That is the standard objective you had before. They extend this, but it looks almost the same. Compared to the unsupervised objective, they first extend it such that you can have more than one positive sample (this would also be possible in the unsupervised setting), and second, and this is the crucial part, they include the labels in the pre-training objective: everywhere where i and j have the same label, the inner product should be maximized, i.e. those samples get pulled together, while everything else gets pushed apart. So they generalize to an arbitrary number of positives, and they also say that contrastive power increases with more negatives; I think that's just an empirical finding of theirs: when they add more negatives, that is, when they increase the batch size, the contrastive power increases.
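To make the objective concrete, here is a minimal numpy sketch of a supervised contrastive loss in the 'average over positives, outside the log' style; the temperature value and the toy batch are made up, and the paper's actual setup works on two augmented views per image with very large batches.

import numpy as np

def supcon_loss(z, labels, tau=0.1):
    # z: (batch, dim) embeddings, labels: (batch,) ints.
    # Each anchor pulls together all other same-label samples (its positives)
    # and pushes apart everything else, via a softmax over inner products.
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # normalized embeddings
    sim = z @ z.T / tau                                # pairwise inner products
    n, total = len(labels), 0.0
    for i in range(n):
        others = [a for a in range(n) if a != i]
        m = sim[i, others].max()                       # stabilized log-sum-exp
        log_denom = m + np.log(np.sum(np.exp(sim[i, others] - m)))
        positives = [p for p in others if labels[p] == labels[i]]
        if positives:                                  # average over all positives
            total -= np.mean([sim[i, p] - log_denom for p in positives])
    return total / n

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))                           # stand-in encoder outputs
labels = np.array([0, 0, 1, 1, 2, 2, 0, 1])
print(supcon_loss(z, labels))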
They also analyze their gradient, which I find pretty neat. Of course, if you formulate a loss, the gradient is going to go in its negative direction, but they make clear that if you look at the gradient for the positive cases, what appears is this (1 - P_ij) quantity, where P_ij is exactly the inner product between i and j, normalized. So the gradient points in the negative direction of that for the positives, which means you pull them together, and it points the other way for the negatives, which means you push them apart. They also analyze what happens in relation to hardness. If you just look at the positive samples, there are two kinds. There are easy positives, where the network has already learned to match them closely and the inner product is almost one: then P_ij is large, this (1 - P_ij) term, exactly what we saw in the gradient, is close to zero, and so the gradient magnitude is almost zero. And there are hard positives, where the network hasn't yet learned to align the representations properly: then the embeddings (again, these are normalized) are approximately orthogonal, the inner product is approximately zero, P_ij is close to zero, (1 - P_ij) is close to one, and the gradient magnitude is larger than zero. That means their loss focuses on the examples that the network cannot yet represent well according to their objective, which makes sense. But it is also exactly the same thing as in the cross entropy loss: if you look at the cross entropy loss in a situation where the network is already really good for a given sample (it already puts the dog into the dog class), then the gradient will not pull much on that sample either; it mainly focuses on where you're still wrong. So I appreciate the analysis, but it is not a notable difference. I think what they want to show is that their loss, under gradient descent, really does what it is supposed to do: it pulls together and pushes apart the inner products of positive and negative samples, and it mainly focuses on samples for which a good representation hasn't been found yet, pairs that are not yet correctly close together or far apart. They also connect this to the triplet loss, where they show, after some approximation, that if their loss has only one positive and one negative sample, it is proportional to the triplet loss. The triplet loss is basically where you have an image, you find one positive, of the same class, and one negative of a different class, and you pull the positive pair together while pushing the negative apart. The problem there, they say, is hard negative sampling: for the triplet loss to make sense, the negative needs to be what's called a hard negative. This is called hard negative mining: because you only have one negative sample, you'd better make it one the network can actually learn from.
And if it's too easy, the network can't learn anything. Thereby you get the problem of hard negative mining, where you often have to filter through your mini-batch, or even through your data set, to find a good negative sample to go along with the pair of positives. But I don't really see how their method gets around this, except that it has a bunch of positive and negative samples, and that, I guess, you could also apply to the triplet loss. There's not really a difference here. Again, if your method is a contrastive method, you have the problem that if you simply sample at random, your negative samples become easier and easier over the course of training, and at some point you're going to have to actively sample hard negatives. I think this paper just gets around that by having huge batch sizes. That said, they do get state of the art on ImageNet for these types of networks and augmentation strategies. They also show that their loss appears to be more stable to hyperparameters: if they change the augmentation, the optimizer, or the learning rate, the spread in accuracy is much smaller than for the cross entropy loss, except here. But it is hard to compare variances of things that don't have the same mean accuracy, so take the plot on the right with a grain of salt. They also evaluate on corrupted ImageNet, an ImageNet variant with several levels of corruption, and you can see that accuracy goes down for both, but that it goes down faster for the cross entropy loss than for the supervised contrastive loss: they start out together and drift further apart. Now it is not clear to me whether that's a real effect of the losses. If you trained the supervised contrastive model only to the cross entropy model's starting accuracy, would it fall off at the same speed, or would it still degrade more slowly because of the loss? It's not clear whether this is really an effect of the difference between the losses, or just an effect of the fact that they don't start at the same accuracy; again, you can't really compare things that have different means in the first place. But it is an interesting finding that their method is more stable to these corruptions. I just want to point out their training details at the end: they train for up to 700 epochs during the pre-training stage, which is, I think, standard but mad, and they train models with batch sizes up to 8192, so you need something like a huge TPU cluster to run these kinds of things. And I am never entirely trusting of numbers like this: even though it's a good improvement, it is still only about a 1% improvement, and at these margins I feel there might be a big effect from things like batch size, how much compute you put in, and whatever else you're doing. There might be so much influence from that, that I first want to see this replicated multiple times across the field before I really trust that this is a good thing to do. Alright, so I hope you liked this. If you're still here, thank you; consider subscribing, and if you have a comment, please leave it. I usually read them. And with that, bye bye.
[ { "end": 5.96, "start": 0, "text": " Hi there, today we're looking at supervised contrastive learning by people from Google" }, { "end": 8.56, "start": 5.96, "text": " Research and MIT." }, { "end": 14.120000000000001, "start": 8.56, "text": " Now this paper proposes a new loss for supervised learning." }, { "end": 19.12, "start": 14.120000000000001, "text": " And you might recognize that this is a big claim." }, { "end": 25.48, "start": 19.12, "text": " So forever now we've basically used this cross entropy loss in order to do supervised training" }, { "end": 27.68, "start": 25.48, "text": " of neural networks." }, { "end": 33.8, "start": 27.68, "text": " This paper proposes to replace that with the supervised contrastive loss." }, { "end": 36.4, "start": 33.8, "text": " And let's jump straight into the results here." }, { "end": 42.68, "start": 36.4, "text": " They say our supervised contrastive loss outperforms the cross entropy loss with standard data" }, { "end": 47.36, "start": 42.68, "text": " augmentations such as auto augment and rand augment." }, { "end": 52.56, "start": 47.36, "text": " So these are some of the previous state of the art data augmentation techniques used" }, { "end": 55.6, "start": 52.56, "text": " together with the cross entropy loss." }, { "end": 59, "start": 55.6, "text": " And they say their supervised contrastive loss outperforms them." }, { "end": 65.24000000000001, "start": 59, "text": " You can see here on ImageNet, which is the biggest vision benchmark or the most famous" }, { "end": 71.68, "start": 65.24000000000001, "text": " one, this new loss, the supervised contrastive loss, outperforms these other methods by" }, { "end": 74.68, "start": 71.68, "text": " something like a percent." }, { "end": 78.1, "start": 74.68, "text": " One percent is a big improvement on ImageNet right now." }, { "end": 82.56, "start": 78.1, "text": " So they claim it is a big claim, right?" }, { "end": 89.2, "start": 82.56, "text": " We recognize if this is true, this could be a game changer basically for all of supervised" }, { "end": 90.2, "start": 89.2, "text": " learning." }, { "end": 95.18, "start": 90.2, "text": " And supervised learning is really the only thing right now in deep learning that works." }, { "end": 99.12, "start": 95.18, "text": " So it could revolutionize the field." }, { "end": 100.44, "start": 99.12, "text": " So here's the but." }, { "end": 105.76, "start": 100.44, "text": " It is actually not a new loss to replace the cross entropy loss." }, { "end": 110.76, "start": 105.76, "text": " And they do come about this pretty quickly." }, { "end": 114.84, "start": 110.76, "text": " I don't think they're dishonest or lying or anything here." }, { "end": 119.96000000000001, "start": 114.84, "text": " But it is sort of if you start reading you like what this is a new loss." }, { "end": 120.96000000000001, "start": 119.96000000000001, "text": " It is not." }, { "end": 127.60000000000001, "start": 120.96000000000001, "text": " It is a new way of pre training the network for a classification task." }, { "end": 131.04000000000002, "start": 127.60000000000001, "text": " And so let's look into this." }, { "end": 137.8, "start": 131.04000000000002, "text": " So if you look at what does what does it mean to build a classifier, this is what you usually" }, { "end": 138.8, "start": 137.8, "text": " do." 
}, { "end": 142.76000000000002, "start": 138.8, "text": " You do supervised cross entropy training, you have an image and the image here is of" }, { "end": 148, "start": 142.76000000000002, "text": " a dog, you put it through your network, and you obtain a representation." }, { "end": 154.72, "start": 148, "text": " So the representation here are is this last layer, or the second to last layer." }, { "end": 159.76000000000002, "start": 154.72, "text": " And you put that through a classification layer and then a softmax." }, { "end": 164.24, "start": 159.76000000000002, "text": " And what you get as an output is basically a probability distribution." }, { "end": 167.08, "start": 164.24, "text": " And let's say you have three classes here." }, { "end": 171.32000000000002, "start": 167.08, "text": " There's dog, there's cat, and there's horse." }, { "end": 175.36, "start": 171.32000000000002, "text": " And let's say the network doesn't yet isn't yet trained very well." }, { "end": 179.8, "start": 175.36, "text": " So the probability for dog here is fairly low." }, { "end": 184.64000000000001, "start": 179.8, "text": " So this is basically what the network thinks of that image, like which class does it belong" }, { "end": 186.04000000000002, "start": 184.64000000000001, "text": " to with what probability." }, { "end": 189.3, "start": 186.04000000000002, "text": " They also have this label right here." }, { "end": 195.34, "start": 189.3, "text": " So the label dog for that image, what you do with that is you do a one hot vector." }, { "end": 197.68, "start": 195.34, "text": " So that would look like this." }, { "end": 202.44, "start": 197.68, "text": " So the one is at the position where the correct class is." }, { "end": 206.5, "start": 202.44, "text": " And then the cross entropy loss takes all of this and does the following." }, { "end": 208.28, "start": 206.5, "text": " There's a sum over all your classes." }, { "end": 210.6, "start": 208.28, "text": " In this case, you have three classes." }, { "end": 218.6, "start": 210.6, "text": " And let's call these the labels L. And you want to always take the label of the class" }, { "end": 225.96, "start": 218.6, "text": " times the log probability that the network thinks belongs to this class." }, { "end": 234.1, "start": 225.96, "text": " So you can quickly see that this if the label is zero, so for all the incorrect classes," }, { "end": 236.84, "start": 234.1, "text": " that means this entire term drops away." }, { "end": 244.95999999999998, "start": 236.84, "text": " And only if the label is one, so only the correct class that will result in the log" }, { "end": 253.48000000000002, "start": 244.96, "text": " probability of the class where the label is the correct label." }, { "end": 258.24, "start": 253.48000000000002, "text": " So in order to make this a loss, you actually have to put a negative sign in front of here" }, { "end": 265.12, "start": 258.24, "text": " because you want to this so this entire thing reduces to the log probability of the correct" }, { "end": 266.12, "start": 265.12, "text": " class." }, { "end": 267.72, "start": 266.12, "text": " This is what you want to maximize." }, { "end": 273.28000000000003, "start": 267.72, "text": " Therefore, you if you want to minimize something you need." }, { "end": 278.96, "start": 273.28, "text": " So you minimize the negative log probability of the correct class, which means you maximize" }, { "end": 283.46, "start": 278.96, "text": " the probability." 
}, { "end": 287.79999999999995, "start": 283.46, "text": " If you've never looked at the cross entropy loss like this, it is important to notice" }, { "end": 293.52, "start": 287.79999999999995, "text": " that you're going to say, hey, all this does is pull this here up, right?" }, { "end": 296.55999999999995, "start": 293.52, "text": " And it doesn't do anything to the other ones." }, { "end": 301.79999999999995, "start": 296.55999999999995, "text": " But you have to realize that this softmax operation, since this is a probability distribution," }, { "end": 304.52000000000004, "start": 301.8, "text": " all of this is normalized to sum up to one." }, { "end": 309.88, "start": 304.52000000000004, "text": " So implicitly, you will push these down through the normalization, right?" }, { "end": 314, "start": 309.88, "text": " So what this does is it pushes the correct class up, and it pushes the other classes" }, { "end": 315.04, "start": 314, "text": " down." }, { "end": 320.28000000000003, "start": 315.04, "text": " So this, to look at this is going to be important later." }, { "end": 326.44, "start": 320.28000000000003, "text": " Because if you look at what this representation here does, so again, you have the network" }, { "end": 328.52, "start": 326.44, "text": " produces a representation here." }, { "end": 334.44, "start": 328.52, "text": " This is 2000 dimensional, and then it does, it adds on top this classification layer," }, { "end": 340.4, "start": 334.44, "text": " this classification layer is simply a linear layer, and then a softmax on top." }, { "end": 346.44, "start": 340.4, "text": " So how you have to imagine this is that there is a representation space, this 2000 dimensional" }, { "end": 357.64, "start": 346.44, "text": " space, and the representations are made in such a way that the labels such that sorry," }, { "end": 360.12, "start": 357.64, "text": " let's have three classes here." }, { "end": 365.52, "start": 360.12, "text": " The representations are made in such a way that a linear classifier can separate them" }, { "end": 367.64, "start": 365.52, "text": " correctly, right?" }, { "end": 370.56, "start": 367.64, "text": " So here, this would be like a boundary." }, { "end": 374.28, "start": 370.56, "text": " And then this would be another boundary." }, { "end": 377, "start": 374.28, "text": " And this maybe would be another decision boundary." }, { "end": 382.41999999999996, "start": 377, "text": " So you can see that linear classifier can separate the classes well." }, { "end": 383.41999999999996, "start": 382.41999999999996, "text": " That is the goal." }, { "end": 389.08000000000004, "start": 383.42, "text": " If you use this softmax cross entropy loss, that is implicitly what will happen in the" }, { "end": 394.88, "start": 389.08000000000004, "text": " representation space W. All it cares about is that the classes are on one side of the" }, { "end": 401.24, "start": 394.88, "text": " decision boundary and everything else is on the other side of a decision boundary." 
}, { "end": 406.08000000000004, "start": 401.24, "text": " So if you have the network isn't trained very well at the beginning, and you maybe have" }, { "end": 412.44, "start": 406.08000000000004, "text": " a sample of the green class here, it will push the network such that the representation" }, { "end": 418.6, "start": 412.44, "text": " of that sample will go onto the other side of this decision boundary and it will push" }, { "end": 423.24, "start": 418.6, "text": " the decision boundary at the same time to make that happen more easily." }, { "end": 426.56, "start": 423.24, "text": " Right, so it will optimize all of this at the same time." }, { "end": 427.92, "start": 426.56, "text": " That's what you do." }, { "end": 429.96, "start": 427.92, "text": " That's how you optimize the representations." }, { "end": 439.28, "start": 429.96, "text": " So this work here, and another work has said, wouldn't it be great if the representation" }, { "end": 445.23999999999995, "start": 439.28, "text": " and decision boundaries weren't just trained at the same time for this, but we learn good" }, { "end": 451.08, "start": 445.23999999999995, "text": " representations first, such that classifying them becomes very simple." }, { "end": 459.03999999999996, "start": 451.08, "text": " And in essence, what this paper says is, if we have a representation space W, shouldn't" }, { "end": 465.47999999999996, "start": 459.03999999999996, "text": " images of the same class, shouldn't we just make them close together?" }, { "end": 472.02000000000004, "start": 465.48, "text": " So without caring about decision boundaries, we just want them to be close to each other." }, { "end": 476.12, "start": 472.02000000000004, "text": " And we want them to be far apart from other classes." }, { "end": 481.20000000000005, "start": 476.12, "text": " If that happens, you can see that a linear classifier is going to have a very easy time" }, { "end": 485.16, "start": 481.20000000000005, "text": " separating these classes later." }, { "end": 488.48, "start": 485.16, "text": " So that's exactly what this paper does." }, { "end": 491.98, "start": 488.48, "text": " It has a pre training stage and a training stage." }, { "end": 496.8, "start": 491.98, "text": " So in the pre training stage, this is over here, supervised contrastive." }, { "end": 503.36, "start": 496.8, "text": " In the pre training stage, it simply tries to learn these representations, like over" }, { "end": 511.76, "start": 503.36, "text": " like down here, such that without the decision boundaries, class thing, images of the same" }, { "end": 519.12, "start": 511.76, "text": " class are close together, and images of different classes are far apart, which notice the subtle" }, { "end": 523.72, "start": 519.12, "text": " difference right to the cross entropy loss where you just care about them being on one" }, { "end": 527.24, "start": 523.72, "text": " or the other side of a decision boundary." }, { "end": 537.08, "start": 527.24, "text": " And in stage this, so this stage one, and then in stage two, and there is where where" }, { "end": 540.5600000000001, "start": 537.08, "text": " it comes in, you basically freeze the network." }, { "end": 545.44, "start": 540.5600000000001, "text": " So you freeze these weights down here, these are frozen, you don't train them anymore." }, { "end": 550, "start": 545.44, "text": " All you train is this one classification layer." 
}, { "end": 556, "start": 550, "text": " So the represent you actually freeze also the representation layer here, you only train" }, { "end": 562.6800000000001, "start": 556, "text": " the classifier on top in stage two, but you train it using softmax and using the cross" }, { "end": 563.82, "start": 562.6800000000001, "text": " entropy loss." }, { "end": 571.5200000000001, "start": 563.82, "text": " So you you train the classifier in the old cross entropy way, using just normal supervised" }, { "end": 572.5200000000001, "start": 571.5200000000001, "text": " learning." }, { "end": 579.48, "start": 572.52, "text": " So what we see here is that the stage one pre training is is what's training the network" }, { "end": 582.4, "start": 579.48, "text": " and the cross entropy loss only trains the classifier." }, { "end": 588.64, "start": 582.4, "text": " Right, so let's look at how this pre training actually work what is using what it's using" }, { "end": 592.52, "start": 588.64, "text": " is a method called contrastive pre training." }, { "end": 597.36, "start": 592.52, "text": " Now in contrastive pre training, and they have a little diagram up here, what this does" }, { "end": 603.6, "start": 597.36, "text": " is if you look at the classic way of doing contrastive pre training, you have to go to" }, { "end": 610.48, "start": 603.6, "text": " the unsupervised pre training literature, people have kind of discovered that they can" }, { "end": 616.46, "start": 610.48, "text": " improve a neural network by pre training it first in an unsupervised way." }, { "end": 620.66, "start": 616.46, "text": " This is also called some of these methods are called self supervised." }, { "end": 626.6, "start": 620.66, "text": " So the advantage here of self supervised or unsupervised pre training is that you don't" }, { "end": 628.12, "start": 626.6, "text": " need labels." }, { "end": 634.44, "start": 628.12, "text": " What you want to do is simply to make the representation space somewhat meaningful," }, { "end": 635.44, "start": 634.44, "text": " right?" }, { "end": 642.72, "start": 635.44, "text": " So you simply want the network to learn representations of images that are somehow meaningful, right?" }, { "end": 644.48, "start": 642.72, "text": " That are there." }, { "end": 646.84, "start": 644.48, "text": " And here's how you do it." }, { "end": 653.62, "start": 646.84, "text": " So you want to take an image like this dog here." }, { "end": 659.4, "start": 653.62, "text": " And then you want to randomly augment this image, which just means you want to produce" }, { "end": 661.78, "start": 659.4, "text": " different versions of the same image." }, { "end": 666.92, "start": 661.78, "text": " In this case down here, this is a random crop, it's cropped about here, it's still the same" }, { "end": 669.44, "start": 666.92, "text": " image but it's kind of a different version of it." }, { "end": 674.52, "start": 669.44, "text": " In the case here, you can see that it's flipped left right and the brightness is slightly" }, { "end": 676.5600000000001, "start": 674.52, "text": " increased." }, { "end": 679.8, "start": 676.5600000000001, "text": " So these are just different versions of the same image." }, { "end": 683.5600000000001, "start": 679.8, "text": " And what you also want are what's called negatives." }, { "end": 687.76, "start": 683.56, "text": " Natives are simply different images from your data set, right?" 
}, { "end": 692, "start": 687.76, "text": " For example, this or this or this, you don't care as long as they're different, right?" }, { "end": 694, "start": 692, "text": " You just sample a bunch." }, { "end": 700.2399999999999, "start": 694, "text": " And what you want, so your embedding space and they make a big deal here that they are" }, { "end": 702.7199999999999, "start": 700.2399999999999, "text": " normalized and that seems to work better." }, { "end": 707.64, "start": 702.7199999999999, "text": " But this is not necessary for the idea to work." }, { "end": 717.48, "start": 707.64, "text": " The big idea is here that if you have an image right here, let's say this is the dog, and" }, { "end": 724.04, "start": 717.48, "text": " the blue dots here are the augmented versions of the same dog, and the green dots are all" }, { "end": 726.3199999999999, "start": 724.04, "text": " the other images in the data set." }, { "end": 734.96, "start": 726.3199999999999, "text": " What you want is that all the images that come from the original same image are pulled" }, { "end": 739.76, "start": 734.96, "text": " close together and everything else is pushed apart." }, { "end": 741.1600000000001, "start": 739.76, "text": " Right?" }, { "end": 746.9200000000001, "start": 741.1600000000001, "text": " So that's why these are called positives and these are called negatives." }, { "end": 751.5600000000001, "start": 746.9200000000001, "text": " So the contrastive training basically means that you always want to have a set that you" }, { "end": 757.64, "start": 751.5600000000001, "text": " pull together in representation space and a set called the negatives that you push apart." }, { "end": 763.9200000000001, "start": 757.64, "text": " So the network basically learns about these random transformations that you have here." }, { "end": 768.28, "start": 763.92, "text": " The network learns what it means to come from the same image." }, { "end": 771.8, "start": 768.28, "text": " It learns to be robust to these kind of transformations." }, { "end": 777.76, "start": 771.8, "text": " It learns about the data in general and how to spread the data and embedding space with" }, { "end": 779.16, "start": 777.76, "text": " these transformations." }, { "end": 785.8399999999999, "start": 779.16, "text": " So this usually ends up in a pretty good representation space and people have been using this in recent" }, { "end": 790.28, "start": 785.8399999999999, "text": " years in order to gain significant improvements." }, { "end": 797.56, "start": 790.28, "text": " Now the problem here, if you specifically do this to pre-train a classifier is the thing" }, { "end": 799.12, "start": 797.56, "text": " they show on the right." }, { "end": 804.6, "start": 799.12, "text": " So on the left here you have a picture of a dog." }, { "end": 809.3199999999999, "start": 804.6, "text": " But if you just do this self-supervised, you do it without the labels." }, { "end": 817.64, "start": 809.3199999999999, "text": " So it can happen that this image here shows up in the negatives, but it is also of a dog." }, { "end": 823.08, "start": 817.64, "text": " And now this image here is going to end up maybe being this image here." }, { "end": 824.4399999999999, "start": 823.08, "text": " And you see what happens to it." }, { "end": 825.4399999999999, "start": 824.4399999999999, "text": " It's a green one." }, { "end": 827.24, "start": 825.4399999999999, "text": " So it's going to get pushed apart." 
}, { "end": 832.16, "start": 827.24, "text": " And this is going to make the entire task for the later classifier much harder because" }, { "end": 838.8, "start": 832.16, "text": " if they are pushed apart from each other, how is a linear classifier going to have them" }, { "end": 843.4, "start": 838.8, "text": " on the same side of the decision boundary while having everything else on a different" }, { "end": 844.76, "start": 843.4, "text": " side?" }, { "end": 852.42, "start": 844.76, "text": " So the task here is implicitly making the task for the later classifier harder by pushing" }, { "end": 857.2, "start": 852.42, "text": " apart samples that should be of the same class." }, { "end": 864, "start": 857.2, "text": " And so this is not happening if you introduce labels to the pre-training objective." }, { "end": 867.56, "start": 864, "text": " That's what they do, the supervised contrastive objective." }, { "end": 874.4, "start": 867.56, "text": " Now you still, all you want to do is here, we're going to draw the same embedding space" }, { "end": 877.1999999999999, "start": 874.4, "text": " and we're going to draw this original dog image." }, { "end": 881.52, "start": 877.1999999999999, "text": " And we're going to draw the augmented version of the original dog image." }, { "end": 884.52, "start": 881.52, "text": " But now we also have the following." }, { "end": 889.04, "start": 884.52, "text": " We also have these images, which are images of the same class." }, { "end": 892.16, "start": 889.04, "text": " So we're going to put them in black here." }, { "end": 898.1999999999999, "start": 892.16, "text": " And let's say the augmented versions around them in smaller black dots, augmented versions" }, { "end": 899.1999999999999, "start": 898.1999999999999, "text": " of those, right?" }, { "end": 901.1999999999999, "start": 899.1999999999999, "text": " You can augment them as well." }, { "end": 904.6800000000001, "start": 901.2, "text": " And then you have the negative samples." }, { "end": 910.12, "start": 904.6800000000001, "text": " And the negative samples are not just any images, but just images of different classes." }, { "end": 915.32, "start": 910.12, "text": " So you just go over your mini batch and all everything that's of the same class becomes" }, { "end": 920.9200000000001, "start": 915.32, "text": " positives, including their augmentations, and everything that is not in the same class" }, { "end": 922.0400000000001, "start": 920.9200000000001, "text": " becomes negatives." }, { "end": 924.84, "start": 922.0400000000001, "text": " And also you can augment them as well." }, { "end": 928.1600000000001, "start": 924.84, "text": " So now we have a bunch of things in our embedding space." }, { "end": 934.64, "start": 928.16, "text": " And our objective is simply going to be, again, we want to push away all the images that are" }, { "end": 939.68, "start": 934.64, "text": " not of the same class as our original, as our red original image, which is called the" }, { "end": 940.68, "start": 939.68, "text": " anchor." }, { "end": 944.0799999999999, "start": 940.68, "text": " So all of this needs to be pushed away." }, { "end": 950.28, "start": 944.0799999999999, "text": " But now we want to pull together all the augmented versions of the original image, but also we" }, { "end": 957.72, "start": 950.28, "text": " want to pull together all of the other images of the same class, including also their augmented" }, { "end": 958.72, "start": 957.72, "text": " versions." 
}, { "end": 961.2, "start": 958.72, "text": " So all of this is going to be pulled together." }, { "end": 965.84, "start": 961.2, "text": " So not only does the network learn about these augmentations, which again, for this idea," }, { "end": 968.6, "start": 965.84, "text": " the augmentations aren't even necessary." }, { "end": 975.32, "start": 968.6, "text": " The network learns a representation space where images of the same class are close together," }, { "end": 980.6800000000001, "start": 975.32, "text": " which again is going to make the task of later linear classifiers that needs to separate" }, { "end": 984.3000000000001, "start": 980.6800000000001, "text": " this class from other classes very, very easy." }, { "end": 987.92, "start": 984.3, "text": " And again, the other images aren't just going to be pushed away, but if they're from the" }, { "end": 992.4399999999999, "start": 987.92, "text": " same class, let's say this and this image are from the same class, all of those are" }, { "end": 999.16, "start": 992.4399999999999, "text": " going to be pushed apart from our red dot, but by themselves being pushed together to" }, { "end": 1003.4399999999999, "start": 999.16, "text": " their own cluster here of their own class." }, { "end": 1004.88, "start": 1003.4399999999999, "text": " I hope this makes sense." }, { "end": 1011.1999999999999, "start": 1004.88, "text": " And I hope the difference to the cross entropy objective is sort of clear." }, { "end": 1015.76, "start": 1011.2, "text": " The cross entropy objective simply from the beginning just cares about which side of the" }, { "end": 1017.5600000000001, "start": 1015.76, "text": " decision boundary you're on." }, { "end": 1023.6400000000001, "start": 1017.5600000000001, "text": " While this pre training objective first cares to put things close together that are in the" }, { "end": 1031.24, "start": 1023.6400000000001, "text": " same class, and then the decision classifier will have a much easier time." }, { "end": 1037.46, "start": 1031.24, "text": " The reason why this works better than the because because it's not entirely clear from" }, { "end": 1042.2, "start": 1037.46, "text": " the beginning that why this should work better because it's working with the same information." }, { "end": 1047.72, "start": 1042.2, "text": " It's just because people have generally found that these pre training contrastive pre training" }, { "end": 1053.6000000000001, "start": 1047.72, "text": " objectives, they just are somewhat better at exploiting the information in the data" }, { "end": 1061, "start": 1053.6000000000001, "text": " set than if you just hammer on hammer with the contrastive sorry with the cross entropy" }, { "end": 1064.06, "start": 1061, "text": " loss from the beginning." }, { "end": 1068.86, "start": 1064.06, "text": " So but it is not fully explained yet why this works better because it's working with the" }, { "end": 1070.04, "start": 1068.86, "text": " same data." }, { "end": 1076.62, "start": 1070.04, "text": " Again, the difference here is that the previous methods of contrastive pre training the self" }, { "end": 1081.04, "start": 1076.62, "text": " supervised ones, they did not have access to the labels." }, { "end": 1088.04, "start": 1081.04, "text": " And the advantage of that is you can have a giant database of unlabeled additional data" }, { "end": 1091.8, "start": 1088.04, "text": " that you do the pre training on." 
}, { "end": 1095.44, "start": 1091.8, "text": " Whereas here we do the pre training, including the labels." }, { "end": 1100.48, "start": 1095.44, "text": " So here, the label dog is an intrinsic part because we need to know which of the samples" }, { "end": 1102.36, "start": 1100.48, "text": " we need to pull together." }, { "end": 1108.44, "start": 1102.36, "text": " But that also means we cannot leverage the maybe that we have more unlabeled data and" }, { "end": 1111.32, "start": 1108.44, "text": " unlabeled data is pretty cheap to obtain." }, { "end": 1115.12, "start": 1111.32, "text": " So that's the advantages and disadvantages here." }, { "end": 1121.4399999999998, "start": 1115.12, "text": " So this new loss, so they they do compare this here." }, { "end": 1127.3999999999999, "start": 1121.4399999999998, "text": " And usually in these contrastive objectives, you have somewhat like two encoders, one to" }, { "end": 1132.6399999999999, "start": 1127.3999999999999, "text": " encode the the anchor and one to encode the augmented versions." }, { "end": 1137.2399999999998, "start": 1132.6399999999999, "text": " And this one is like a momentum with shared weights and so on." }, { "end": 1139.28, "start": 1137.2399999999998, "text": " All of this isn't really important." }, { "end": 1144.6, "start": 1139.28, "text": " If you want to look into that look into papers like momentum contrast, or I did one on curl" }, { "end": 1147.48, "start": 1144.6, "text": " for reinforcement learning." }, { "end": 1154.04, "start": 1147.48, "text": " I think the the general gist of it is clear." }, { "end": 1159.52, "start": 1154.04, "text": " So they compare the formulation of their loss to the self supervised one, usually it takes" }, { "end": 1162, "start": 1159.52, "text": " the form of things like this." }, { "end": 1170.3999999999999, "start": 1162, "text": " So one is the the anchor here, and then the zji would be the positive example." }, { "end": 1175.0800000000002, "start": 1170.4, "text": " And you see here that the inner product between the anchor and the positive example, sorry" }, { "end": 1185.24, "start": 1175.0800000000002, "text": " about that, the inner product should be high, because here the loss is the negative of whatever" }, { "end": 1186.46, "start": 1185.24, "text": " is here." }, { "end": 1192.96, "start": 1186.46, "text": " So if you minimize the loss, you say I want the inner product between my anchor, and whatever" }, { "end": 1199.16, "start": 1192.96, "text": " is the positive sample to be high, and everything else here, which includes the thing on the" }, { "end": 1204.8400000000001, "start": 1199.16, "text": " top, but it also includes everything else, I want the inner product to be low, and which" }, { "end": 1211.8400000000001, "start": 1204.8400000000001, "text": " is exactly the thing where you push you pull together the positives, and you push apart" }, { "end": 1213.88, "start": 1211.8400000000001, "text": " everything else." }, { "end": 1221.0400000000002, "start": 1213.88, "text": " That, that is the standard objective that you had before they, they extend this, but" }, { "end": 1223.1000000000001, "start": 1221.0400000000002, "text": " it looks almost the same." }, { "end": 1229.28, "start": 1223.1, "text": " So compared to the unsupervised objective now, first of all, they extend this such that" }, { "end": 1231.56, "start": 1229.28, "text": " you can have more than one positive sample." 
}, { "end": 1236.1799999999998, "start": 1231.56, "text": " Now this is also possible in the unsupervised way." }, { "end": 1240.1999999999998, "start": 1236.1799999999998, "text": " So they just augmented by this." }, { "end": 1245.12, "start": 1240.1999999999998, "text": " And they also now this is the crucial part, they include the labels into the pre turning" }, { "end": 1246.12, "start": 1245.12, "text": " objective." }, { "end": 1253.9599999999998, "start": 1246.12, "text": " So they say everywhere where I and J have the same label should be maximized in the" }, { "end": 1261.28, "start": 1253.9599999999998, "text": " inner product, so should be pulled together, while everything else is being pushed apart." }, { "end": 1271.7199999999998, "start": 1261.28, "text": " Yeah, so they say we generalize to an arbitrary number of positives." }, { "end": 1274.9199999999998, "start": 1271.7199999999998, "text": " And they also say contrastive power increases with more negatives." }, { "end": 1279.1200000000001, "start": 1274.92, "text": " I think that's just a finding that they have that when they add more negatives, so when" }, { "end": 1286.3600000000001, "start": 1279.1200000000001, "text": " they increase the batch size, that contrastive power increases." }, { "end": 1292.28, "start": 1286.3600000000001, "text": " They do analyze their gradient, which I find is pretty neat." }, { "end": 1295.44, "start": 1292.28, "text": " You can already see that if you formulate a loss, of course, the gradient is going to" }, { "end": 1301.1200000000001, "start": 1295.44, "text": " go in the negative direction, but they make it clear that if you look at the gradient" }, { "end": 1307.9199999999998, "start": 1301.12, "text": " for the positive cases, what appears is this one minus p ij quantity and the p ij quantity" }, { "end": 1313.2199999999998, "start": 1307.9199999999998, "text": " is exactly the inner product between i and j normalized, of course." }, { "end": 1320.6399999999999, "start": 1313.2199999999998, "text": " So if you minimum, so the gradient is going to point into the negative direction of that" }, { "end": 1324.6799999999998, "start": 1320.6399999999999, "text": " for the positives, which means you're going to pull them together." }, { "end": 1334.04, "start": 1324.68, "text": " And it's going to push into this direction for the negative classes, which means you" }, { "end": 1335.8400000000001, "start": 1334.04, "text": " push them apart." }, { "end": 1341.1200000000001, "start": 1335.8400000000001, "text": " And they also analyze what happens with relation to hardness." }, { "end": 1345.76, "start": 1341.1200000000001, "text": " So they say there are two kinds of, if you just look at the positive samples, there are" }, { "end": 1346.76, "start": 1345.76, "text": " two kinds." }, { "end": 1351.44, "start": 1346.76, "text": " There are easy positives where the network has already learned to match them closely," }, { "end": 1353.3600000000001, "start": 1351.44, "text": " where the inner product is almost one." }, { "end": 1359.04, "start": 1353.36, "text": " If you look at them, that means the p ij quantity is large, right?" }, { "end": 1362.74, "start": 1359.04, "text": " Because that is basically the inner product." }, { "end": 1368.52, "start": 1362.74, "text": " And you look at this term, this term is exactly what we saw in the gradient." }, { "end": 1373.9599999999998, "start": 1368.52, "text": " Then you see that this here, since this is one, this entire thing is zero." 
}, { "end": 1375.3999999999999, "start": 1373.9599999999998, "text": " This is also high." }, { "end": 1376.3999999999999, "start": 1375.3999999999999, "text": " This is close to one." }, { "end": 1378.1999999999998, "start": 1376.3999999999999, "text": " So this entire thing is zero." }, { "end": 1380.24, "start": 1378.1999999999998, "text": " This is almost zero." }, { "end": 1385.58, "start": 1380.24, "text": " But if you have a hard positive where the network hasn't learned yet to align the inner" }, { "end": 1392.32, "start": 1385.58, "text": " product properly or align the representation properly, then the angle between the things" }, { "end": 1400.06, "start": 1392.32, "text": " again, these are normalized, the angle is they're approximately orthogonal." }, { "end": 1408.76, "start": 1400.06, "text": " So the gradient magnitude is going to be this here is going to be approximately zero." }, { "end": 1414.78, "start": 1408.76, "text": " So this is close to one and this here, since this is also zero is also close to one." }, { "end": 1423.24, "start": 1414.78, "text": " So this is going to be larger than zero, which means that their loss focuses on the examples" }, { "end": 1430.56, "start": 1423.24, "text": " that are that the network cannot yet represent well, according to their objective, which" }, { "end": 1432.2, "start": 1430.56, "text": " makes sense, right?" }, { "end": 1437.32, "start": 1432.2, "text": " First of all, but second of all, it that is exactly the same thing as in the cross entropy" }, { "end": 1438.32, "start": 1437.32, "text": " loss." }, { "end": 1443.12, "start": 1438.32, "text": " So if you look at the cross entropy loss and you have a situation where the network is" }, { "end": 1449, "start": 1443.12, "text": " really good already for a given sample, so it already puts a dog into the dog class," }, { "end": 1454.48, "start": 1449, "text": " then the gradient will not be pulling much for that sample." }, { "end": 1458.24, "start": 1454.48, "text": " It might mainly focuses on where you're still wrong." }, { "end": 1463.8999999999999, "start": 1458.24, "text": " So it is like I appreciate the analysis, but it is not a notable difference." }, { "end": 1471.16, "start": 1463.9, "text": " I think what they want to show is that their loss, if you do gradient descent really does" }, { "end": 1476.98, "start": 1471.16, "text": " what it is supposed to do, namely, first of all, it does this pulling together pushing" }, { "end": 1481, "start": 1476.98, "text": " apart of inner products for the positive and negative samples." }, { "end": 1488.0800000000002, "start": 1481, "text": " And it mainly focuses on samples where you not yet have found a good representation to" }, { "end": 1489.52, "start": 1488.0800000000002, "text": " align them with others." }, { "end": 1496.56, "start": 1489.52, "text": " It focuses on pairs that are not yet correctly close or together or far apart." }, { "end": 1505.08, "start": 1496.56, "text": " They also connect this to the triplet loss, where they can show after some approximation," }, { "end": 1512.12, "start": 1505.08, "text": " that if their loss only has one positive and one negative sample, it is going to be proportional" }, { "end": 1513.28, "start": 1512.12, "text": " to the triplet loss." 
}, { "end": 1519.08, "start": 1513.28, "text": " The triplet loss is basically where you have an image and you find one positive, I think" }, { "end": 1524.3999999999999, "start": 1519.08, "text": " that's going to be of the same class right here, and you find one negative of a different" }, { "end": 1531.12, "start": 1524.3999999999999, "text": " class and you try to push those apart while pulling those together." }, { "end": 1535.6, "start": 1531.12, "text": " The problem here, they say, is the problem of hard negative sampling." }, { "end": 1540.6799999999998, "start": 1535.6, "text": " In order for this to make sense, you need the negative sample to be what's called a" }, { "end": 1542.32, "start": 1540.6799999999998, "text": " hard negative sample." }, { "end": 1547.28, "start": 1542.32, "text": " So this is called this hard negative mining, because you only have one negative sample," }, { "end": 1551.6399999999999, "start": 1547.28, "text": " you better make this something where the network can learn from." }, { "end": 1555.36, "start": 1551.6399999999999, "text": " And if it's too easy, the network can't learn anything." }, { "end": 1560.36, "start": 1555.36, "text": " And thereby you have the problem of hard negative mining, where you often have to filter through" }, { "end": 1565.24, "start": 1560.36, "text": " your mini batch or even through your data set to find a good negative sample to go along" }, { "end": 1568.24, "start": 1565.24, "text": " with this pair of positive samples." }, { "end": 1574.84, "start": 1568.24, "text": " But I don't really see how their method, except that it has a bunch of positives and negative" }, { "end": 1581.08, "start": 1574.84, "text": " samples, except for that, which I guess you could also apply to the triplet loss." }, { "end": 1583.36, "start": 1581.08, "text": " There's not really a difference here." }, { "end": 1590.22, "start": 1583.36, "text": " Again, if your method is a contrastive method, you do have the problem that if you simply" }, { "end": 1596.8, "start": 1590.22, "text": " sample at random, your negative samples are going to become easier and easier over the" }, { "end": 1599.3999999999999, "start": 1596.8, "text": " training over the course of training." }, { "end": 1606.44, "start": 1599.4, "text": " And you get the problem of at some point, you're going to have to do actively sample" }, { "end": 1607.5600000000002, "start": 1606.44, "text": " hard negatives." }, { "end": 1611.24, "start": 1607.5600000000002, "text": " I think this paper just gets around it by having huge batch sizes." }, { "end": 1618.6000000000001, "start": 1611.24, "text": " So yeah, but again, they do get state of the art on ImageNet for these types of networks" }, { "end": 1621.44, "start": 1618.6000000000001, "text": " and augmentation strategies." }, { "end": 1627.6200000000001, "start": 1621.44, "text": " And they do look at how their loss appears to be more hyperparameter stable." }, { "end": 1632.08, "start": 1627.62, "text": " So if they change out the augmentation, if they change the optimizer or the learning" }, { "end": 1638.52, "start": 1632.08, "text": " rate, you can see here that the spread in accuracy is much smaller than for the cross" }, { "end": 1645, "start": 1638.52, "text": " entropy loss except here, but it is hard to compare variances of things that don't have" }, { "end": 1648.36, "start": 1645, "text": " the same means in terms of accuracy." 
}, { "end": 1654.1999999999998, "start": 1648.36, "text": " So take this on the right here with a grain of salt." }, { "end": 1657.28, "start": 1654.1999999999998, "text": " They also evaluate this on corrupted ImageNet." }, { "end": 1664.56, "start": 1657.28, "text": " So there's an ImageNet data set where it has several levels of corruptedness of the data" }, { "end": 1665.56, "start": 1664.56, "text": " set." }, { "end": 1671.5, "start": 1665.56, "text": " And you can see your accuracy goes down, but the accuracy for the cross entropy loss goes" }, { "end": 1676.6399999999999, "start": 1671.5, "text": " down faster than for the supervised contrastive loss." }, { "end": 1680.84, "start": 1676.6399999999999, "text": " You see they start together like this and they go further apart." }, { "end": 1684.12, "start": 1680.84, "text": " Now it is not clear to me whether that's just an effect." }, { "end": 1688.36, "start": 1684.12, "text": " Like if you just trained a supervised contrastive loss also to this level, whether it would" }, { "end": 1694.8, "start": 1688.36, "text": " fall off at the same speed or whether because it is the supervised contrastive loss, it" }, { "end": 1697.08, "start": 1694.8, "text": " would kind of match that curve." }, { "end": 1701.4399999999998, "start": 1697.08, "text": " It's not clear whether that's really an effect of the difference of the losses or it's just" }, { "end": 1708.3999999999999, "start": 1701.4399999999998, "text": " an effect of the fact that they aren't the same accuracy to begin with." }, { "end": 1714.2800000000002, "start": 1708.4, "text": " Again, this kind of shifting, you can't really compare things that have different means in" }, { "end": 1715.3200000000002, "start": 1714.2800000000002, "text": " the first place." }, { "end": 1722.52, "start": 1715.3200000000002, "text": " But it is an interesting finding that their method is more stable to these corruptions." }, { "end": 1729.72, "start": 1722.52, "text": " I just want to point out at the end, their training details just highlight they train" }, { "end": 1736.42, "start": 1729.72, "text": " for up to 700 epochs during the pre-training stage, which is, I think, standard but mad." }, { "end": 1741.44, "start": 1736.42, "text": " And they trained up models with batch sizes up to 8192." }, { "end": 1746.88, "start": 1741.44, "text": " So you need like a super TPU cluster to run these kind of things." }, { "end": 1753.52, "start": 1746.88, "text": " And I am never exactly trusting of numbers like this." }, { "end": 1758.64, "start": 1753.52, "text": " Even though it's kind of a good improvement, it is still like a 1% improvement." }, { "end": 1770, "start": 1758.64, "text": " And in these small numbers, I just feel there might be a big effect that things like batch" }, { "end": 1778.14, "start": 1770, "text": " sizes and how much you put into computing, how much compute you put into it and what" }, { "end": 1779.4, "start": 1778.14, "text": " else you're doing." }, { "end": 1785.64, "start": 1779.4, "text": " There might be so much influence of that, that I first want to see this replicated multiple" }, { "end": 1793.64, "start": 1785.64, "text": " times across the entire field before I'm going to really trust that this is a good thing" }, { "end": 1794.64, "start": 1793.64, "text": " to do." }, { "end": 1797.3600000000001, "start": 1794.64, "text": " Alright, so I hope you like this." }, { "end": 1799.6000000000001, "start": 1797.3600000000001, "text": " If you're still here, thank you." 
}, { "end": 1801.2, "start": 1799.6000000000001, "text": " Consider subscribing." }, { "end": 1802.76, "start": 1801.2, "text": " If you have a comment, please leave it." }, { "end": 1805.0400000000002, "start": 1802.76, "text": " I usually read them." }, { "end": 1816.08, "start": 1805.04, "text": " And with that, bye bye." } ]
pZyxlf6l0N8
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Thinking While Moving: Deep Reinforcement Learning with Concurrent Control
[ "Science & Technology" ]
[ "deep learning", "machine learning", "reinforcement learning", "vector to go", "vtg", "continuous", "control", "robot", "concurrent", "deep rl", "deep neural networks", "berkeley", "google", "grasping", "qlearning" ]
Classic RL "stops" the world whenever the Agent computes a new action. This paper considers a more realistic scenario where the agent is thinking about the next action to take while still performing the last action. This results in a fascinating way of reformulating Q-learning in continuous time, then introducing concurrency and finally going back to discrete time. https://arxiv.org/abs/2004.06089 Abstract: We study reinforcement learning in settings where sampling an action from the policy must be done concurrently with the time evolution of the controlled system, such as when a robot must decide on the next action while still performing the previous action. Much like a person or an animal, the robot must think and move at the same time, deciding on its next action before the previous one has completed. In order to develop an algorithmic framework for such concurrent control problems, we start with a continuous-time formulation of the Bellman equations, and then discretize them in a way that is aware of system delays. We instantiate this new class of approximate dynamic programming methods via a simple architectural extension to existing value-based deep reinforcement learning algorithms. We evaluate our methods on simulated benchmark tasks and a large-scale robotic grasping task where the robot must "think while moving". Authors: Ted Xiao, Eric Jang, Dmitry Kalashnikov, Sergey Levine, Julian Ibarz, Karol Hausman, Alexander Herzog Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Hi there. So if you look at these two robots, the left one labeled blocking and the right one labeled concurrent, the blocking robot, as you can see, always has these little pauses in its movement where it does nothing and then continues with its motion, while the one on the right performs one continuous motion. The reasoning here is that the robot has a camera, and the camera takes some time to register what's going on. The robot also has a computer inside, and the computer takes some time to decide what to do based on what the camera saw. While all of this is happening, the robot on the left just freezes. It performs an action and then it freezes, because it takes time to register a state and compute a new action. The robot on the right takes the same amount of time to do these things. It also takes time to register the state and to compute an action, but it does that while it is still executing the last action. So it does this in parallel, and once it has computed a new action, it executes that new action right on top of the old action. That gives this one big fluid motion. This requires a new formulation of reinforcement learning, and that's what this paper does: Thinking While Moving: Deep Reinforcement Learning with Concurrent Control, by people from Google Brain, UC Berkeley and X. They have a nice diagram in the supplementary material to show you what is going on in their framework. In classic reinforcement learning, you have this dichotomy between agent and environment, right? So the agent and the environment. The agent is supposed to act in the environment in the following manner: the environment sends an observation to the agent; the observation in this case is the picture from the camera. The agent then thinks about what to do with the observation, which is called a policy. The policy pi takes an observation, outputs an action of what to do, and sends that action back to the environment. In classic RL, you assume that this part here kind of freezes time. The environment outputs an observation, and the process of registering the observation, computing the action, and sending the action back happens in zero time. Of course, it doesn't actually happen in zero time, but in our reinforcement learning problems, for example the OpenAI Gym, the environment just stops until it gets the next action. Then it performs the action in the environment, and by that the environment changes and time happens. And then it stops again as we think of the next action. This is what we usually call one step in the classic formulation of RL. The only point where time happens is when the action is executed. No time happens when the state is registered or when the action is computed. That's what you see here on the left. In blue, you have the state registration. This is, for example, the camera: the camera needs some time to register and store the image it has taken, maybe post-process it a little bit. But in our classic formulation, as you can see here, if this is time, it all happens instantaneously at the same time. And the same goes for the policy, the thinking about what to do, that is, the evaluation of your neural network.
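To make the difference between the two settings concrete, here is a minimal sketch of the two control loops on a toy one-dimensional environment. This is my own illustration, not code from the paper; the environment, the latency value, and the bang-bang policy are all invented stand-ins.

```python
class Toy1DEnv:
    """A point mass on a line that keeps moving even while we think."""
    def __init__(self):
        self.pos, self.t = 0.0, 0.0

    def observe(self):
        return self.pos  # snapshot of the state at this instant

    def advance(self, action, duration):
        # The action is a commanded velocity; the world integrates it
        # forward for `duration` seconds of simulated time.
        self.pos += action * duration
        self.t += duration

def policy(obs, target=10.0):
    return 1.0 if obs < target else 0.0  # crude bang-bang controller

LATENCY = 0.25  # assumed sensing-plus-inference time, in seconds

def blocking_episode(env, steps=20):
    for _ in range(steps):
        a = policy(env.observe())      # world frozen while thinking
        env.advance(a, duration=1.0)   # time passes only here

def concurrent_episode(env, steps=20):
    a = 0.0                            # some initial action
    for _ in range(steps):
        snapshot = env.observe()       # state captured at time t
        env.advance(a, LATENCY)        # old action runs while we think
        a = policy(snapshot)           # decided from a stale snapshot
        env.advance(a, 1.0 - LATENCY)  # new action takes over mid-step

env = Toy1DEnv()
concurrent_episode(env)
print(env.pos, env.t)
```

Note the design difference: in the blocking loop, the policy's compute time never shows up in simulated time at all, while in the concurrent loop the first `advance` call charges that latency to the still-running old action.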
If this is a neural network, I'm drawing a small neural network here, it happens instantaneously in these formulations, and only as the action is executed does time happen. Then time freezes again, and only once the next action is determined does time happen again. In the new formulation now, as you've already seen, what we have is this kind of continuous framework where, let's say you're here, it actually takes time for the camera to post-process the image, and it takes more time for you to think about what to do. Once you decide on an action, that action is going to happen. But we can say, for example, at this point you tell the camera to take a new picture of the state, and that takes time, and while that's happening, the old action is still ongoing. In fact, you don't even have to say the action is still ongoing; the world is still moving. The world is still changing while you think, while you post-process, and while you evaluate your policy. Only after some time, after this lag time here, have you decided on a new action. Then you can break off the old action and perform your new action. And all of this is happening in time. So this is the new framework. Now you see the problem here. You base your decisions on the state at time t, sorry, time H here; you base your decisions on the state as it was at that time. That's what you store and think about. But you perform the action at this later point in time, and there is a considerable difference, because the world has now changed. So you see the problem: the action you perform is based on old knowledge of the world, and you have basically no way of making the action dependent on the current state of the world, because that would require you to capture the current state, which takes time, and in that time the world has already shifted again. The agent is therefore required to think ahead about the action it is currently performing and how the world changes according to that. This new formulation of reinforcement learning formalizes all of this, and we'll quickly go through it. They go into the very basics here, so we'll quickly go over them. They introduce the usual quantities: the policy pi, the transition distribution, the reward, and the Q and value functions. So you have the agent and you have the environment. The environment has this transition function. The transition function says: OK, I'm in this state and the agent does this action, and here is the probability distribution over the next state. It says that your little spaceship is here and the meteors are here, and then you push the button. If you push the button for shoot, then you'll be in the same place, the meteors will still be here, but you'll have a little shot coming out of your spaceship. That's what the environment does. You give it a state and an action, and it gives you the next state. It also gives you a reward. The reward works the same way: it is a second output that tells you, let's say, negative one if you die, zero if nothing happens, or plus one if you shoot a meteor. That's the reward.
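As a tiny illustration of what the transition distribution and reward look like as code, here is a toy version of the spaceship example. All of the state encoding and probabilities are invented for illustration; nothing here comes from the paper.

```python
import random

# A sketch of an environment step: given state and action, sample a next
# state from the transition distribution T(s' | s, a) and emit a reward.
def step(state, action):
    meteors, alive = state
    if action == "shoot" and meteors > 0 and random.random() < 0.5:
        return (meteors - 1, alive), +1.0   # you hit a meteor
    if random.random() < 0.1:
        return (meteors, False), -1.0       # a meteor hits you, you die
    return (meteors, alive), 0.0            # nothing happens

state = (3, True)
state, reward = step(state, "shoot")
print(state, reward)
```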
This you can think of as the real world: these two quantities live in the environment, and that's how you model the environment. The agent, in turn, has the policy. What pi does is much like the transition function, but pi takes in a state and gives you an action. This is the agent deciding, this is the thinking. The policy can take various forms, but for now it's just a function. The agent also has a Q function and a V function, and these are quite similar. The Q function works as follows: if you are in state s and you have several options of what to do, say action one, action two and action three, then the Q function of s and a1 with superscript pi tells you: what is my expected reward if I'm in state s, perform action a1, and after that follow the policy pi? So right now I take action a1, ignoring my policy, but after that I follow the policy pi. What is my expected reward going to be until the end of the episode? That's the Q function. The value function is very similar, but it only cares about the state. It says: if I'm in state s and I just follow the policy pi, even in the first step, what is my expected reward going to be over the course of the episode? Those are the Q and value functions, and they are the things you actually want to learn. You can see why Q-learning is popular: if you have a good Q function, you can simply plug every action into it and take the action with the maximum Q value, because that will give you the best reward. The policy here is kind of self-referential: if your policy is to always take the maximum Q value, then taking the maximum Q value, given that afterwards you keep taking the maximum Q value, will be optimal. All right, this was very convoluted, but let's start off with modeling the environment in this continuous framework. Instead of having the next state be determined by the current state and action, in the continuous framework they do this via a differential equation. dS is how the environment changes, and this change is determined by two functions, F and G. F is your classic environment function: it takes in a state and an action at time t, these are now functions of time, and it outputs how the state changes. G here goes with a Wiener process and is there to introduce stochasticity, as I understand it: in the classic formulation, the transition model gives you a probability distribution, and the Wiener process is responsible for introducing that probabilistic nature into this differential equation. Ultimately, it simply tells you how the state changes depending on the current state and the action I perform. The reward function is also pretty simple. Tau here is a trajectory, and a trajectory is simply the state and action over time. I integrate my reward function at each point in time from time zero to infinity, or to the end of the episode. So I go through my episode and I get high reward here, not so high there, and so on.
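As a sanity check of this continuous model, here is a minimal sketch (my own, not the paper's code) that discretizes the stochastic differential equation with the Euler-Maruyama scheme and accumulates the reward integral as a Riemann sum. The drift F, noise scale G, and reward r are made-up one-dimensional stand-ins.

```python
import math
import random

def F(s, a):             # assumed drift: the action pushes the state around
    return a - 0.5 * s

G = 0.1                  # assumed constant noise scale in front of dW

def r(s, a):             # assumed reward: stay close to s = 1
    return -(s - 1.0) ** 2

def rollout(policy, s0=0.0, T=5.0, dt=0.01):
    s, t, total_reward = s0, 0.0, 0.0
    while t < T:
        a = policy(s)
        dW = random.gauss(0.0, math.sqrt(dt))   # Wiener process increment
        s += F(s, a) * dt + G * dW              # Euler-Maruyama step
        total_reward += r(s, a) * dt            # Riemann sum of the integral
        t += dt
    return total_reward

print(rollout(lambda s: 1.0 if s < 1.0 else 0.0))
```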
The integral under this curve will be my total reward, just like we sum up the rewards of individual steps in the discrete case. In the continuous case, you can think of each infinitesimal time step giving you a tiny bit of reward, so the entire reward is just an integral. Then we go on to the value function for a given state at time t. Think about what this is: the value function of a state is the reward I can expect when starting in this particular state and then following policy pi until the end of the episode. And that is the expectation, over all trajectories that come from my policy, of the reward in that trajectory. If I'm here, my policy is also a distribution, so it can produce multiple trajectories, each with its own reward, and I want the expected value of the reward over all trajectories starting from state s_t. And again, you say that this is an integral. Now here I have a bit of a problem, because they write t equals zero as the lower limit, but the t is already used up here. I believe this should be t prime equals t, with t prime as the integration variable up here, and t minus t prime or something like this. In any case, I think the integral should actually start from this state here and not from time zero. But I might be missing something; I'm not the biggest integrator in the world. All right, then you have the Q function. Think of what the Q function is in the discrete case: it tells you, if I'm in state s and perform action a, what is my expected reward going to be? Here they have to introduce something new. They say: I'm in state s and I execute action a from time t until time t plus H. So you now have to say how long you're going to perform the action for, until you perform the next action; H is the duration for which the action runs before the next action takes over. With this formulation I actually agree with the integral: it goes from time t to time t plus H, that's how long you perform the action, over the reward of performing that action given the state, plus the value function at the end of that. So you're in s_t and you perform action a, this is your state at time t plus H, and from there on you could perform many more actions. In the original notion, the Q function tells you: if I'm here and I perform this action and after that I act according to policy pi, what is my expected reward? There's a classic recurrence relation in reinforcement learning that says the Q function at s_t given action a is the reward I get from performing a in state s plus the value function at the next state, because the value function is exactly the reward you would get by following policy pi from that next state, and the Q function means I perform a now and after that I follow pi. This is the continuous analog: you perform the action for H time, and after that time you just go with your policy, which gives the value function. So this is the continuous formulation of the problem, and now they can introduce the lagging times; before that, the formula below writes out the Q function we just described.
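This is my reconstruction from the verbal description above, so treat the notation as approximate: the expectation is over the stochastic dynamics, and discounting is omitted for clarity. The paper's exact notation may differ.

```latex
Q^{\pi}(s_t, a, H) \;=\; \mathbb{E}\!\left[\int_{t}^{t+H} r\big(s_\tau, a\big)\, d\tau \;+\; V^{\pi}\big(s_{t+H}\big)\right]
```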
In their diagram up here, they define these notions. You have your state s_t right here; then after this time, you capture the new state, decide on an action, and perform it for H time. Is that correct? Until here. So the (i minus 1)-th action is performed at this time and the i-th action is performed at this time... no, that makes no sense, so let's read it. This here is when you capture the state, and then you need time to think; this is the thinking. Then you perform the new action at that time; this here is the lag time. You want to know: if I perform this action until this time here, what is happening? The new Q function takes this into account. It tells you: I'm in state s, and thinking leads me to here. This is the old action, the action that's still happening while I observe the state. So I'm at time t, the old action is still running, and after thinking, at t plus t_AS, I perform the new action, I'm out of colors, until time H. What's my Q function? My Q function is going to be the integral from time t, where I start observing the state and start thinking, until t plus t_AS; during that period I still perform the old action, so this is the reward in the state given the old action. At that time I switch over to the new action, so from t plus t_AS until time H, I now perform the new action. This entire part is taking the place of the first part in the earlier Q function: before, it was simply executing one action, because we didn't have concurrency yet, and after that came the value function. Now it's executing two actions: first execute the old action, then, once you're done thinking, execute the new action, and then it's the value function from there on. I hope this is clear; it wasn't clear to me until just now as well. They then define the Monte Carlo estimator, where you do this with samples of trajectories instead of expectations, and they define the Bellman backup operator. The Bellman backup operator is an important quantity in value-based reinforcement learning, and it is basically what I talked about before: your policy is to always select the action with the maximum Q value, that's what's down here, and for the policy you arrive at this way you can give certain optimality guarantees. In essence, this operator is a so-called contraction. A contraction is defined as follows: if you have two things X1 and X2 that are some distance apart, then after you apply the operator T, the distance between T X1 and T X2 will be smaller. Here it basically means that the Q functions will be pulled closer together, and you'll converge to a single Q function. Given enough time and enough data, there is one fixed-point Q function that you'll converge to, and under assumptions in classic RL you can show that this is going to be the optimal, the true, let's say, Q function. A small numerical illustration of the contraction property follows below.
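To see what "contraction" buys you, here is a sketch in the familiar discrete, discounted setting (not the paper's concurrent continuous-time operator): a random finite MDP, two arbitrary Q tables, and one Bellman backup. The sup-norm distance between the two tables provably shrinks by at least the discount factor gamma.

```python
import numpy as np

rng = np.random.default_rng(0)
S, A, gamma = 5, 3, 0.9

# A random finite MDP: rewards and a proper transition distribution.
R = rng.normal(size=(S, A))
P = rng.random(size=(S, A, S))
P /= P.sum(axis=-1, keepdims=True)

def bellman_backup(Q):
    # (T Q)(s, a) = r(s, a) + gamma * E_{s'}[ max_a' Q(s', a') ]
    return R + gamma * P @ Q.max(axis=-1)

Q1, Q2 = rng.normal(size=(S, A)), rng.normal(size=(S, A))
before = np.abs(Q1 - Q2).max()
after = np.abs(bellman_backup(Q1) - bellman_backup(Q2)).max()
print(before, after, after <= gamma * before)  # contraction: prints True
```

Iterating `bellman_backup` from any starting table therefore converges to the unique fixed point, which in this classic setting is the optimal Q function.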
They first prove this, and then they go back to discrete time. So far they were in continuous time; now they go back to discrete time, but with a discrete-time formulation that includes this lag, and they prove that this Bellman operator is also a contraction. The contraction part basically means that if you perform Q-learning, you're going to arrive at a solution; that's what being a contraction gives you. In classic RL, that solution is obviously the optimal Q function; whether the same holds here, I actually don't know. They then try this out, and they introduce one last important concept, what they call the vector-to-go, which basically means that at the point where they capture the state, they also pass the last action along; a sketch of this follows at the end of the transcript. The state additionally contains information about what part of the action you started here is still outstanding. They illustrate this down here: maybe your action was to move your robot arm from down here to up here; that was your planned action at this point in time. Now, if you perform the action here and you start capturing the next state here, then you would also give this particular remaining vector to the agent. So not only will you tell it, hey, by the way, my last action was a_{t-1}, as you would need in the Q value, you will also say: this much is still outstanding, I still have to do this much. Basically you're saying, I wanted to move my arm right here, and I still have to do this part of the action. You can see why the algorithm is able to learn much better given that information, because otherwise it would have to infer that vector by kind of differencing the action minus what probably happened in the meantime. So they test this out, and what results are the robot videos you've seen before, where they say they can recover the original Q-learning performance in this continuous framework. On the left side you have blocking actions; where it says yes, that is the old framework, and you see the grasp success at like 92 percent, whereas if you go to non-blocking actions but use none of the concurrent information, the grasp success suffers. But you can recover the grasp success if you add the concurrent information: introduce a timestep penalty and give the vector-to-go and the information about the previous action. You can also see that the episode duration is much lower when you go for concurrent actions than in the old framework, naturally, because you don't need to pause. That's the simulated robotics; in the real-world robotic grasping results you see something similar: with blocking actions your grasp success is higher than without, but with concurrent actions the duration of the policy is cut in half, so maybe this is a trade-off worth considering. I think this is a pretty cool framework, and I think there's going to be a lot of work still outstanding here. I invite you to check out the paper and look at their videos and their ablation studies of what's important and what's not. And with that, bye bye.
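As referenced in the transcript above, here is a minimal sketch of how an observation could be augmented with the previous action and a vector-to-go. This is my own illustration of the idea, not the paper's code; the feature layout and the name `make_observation` are invented.

```python
import numpy as np

def make_observation(state_features, prev_action_target, current_position):
    """Augment the captured state with concurrent-control information.

    prev_action_target: where the previous (still-running) action is headed.
    current_position:   where the arm actually is when the state is captured.
    """
    # Vector-to-go: the part of the previous action that is still outstanding.
    vtg = prev_action_target - current_position
    return np.concatenate([state_features, prev_action_target, vtg])

# Toy usage: the arm was told to move to (1, 1, 1) but has only reached
# (0.4, 0.4, 0.4) by the time the camera snapshot is taken.
obs = make_observation(np.zeros(4), np.ones(3), np.full(3, 0.4))
print(obs)
```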
[ { "end": 7, "start": 0, "text": " Hi there. So if you look at these two robots, the left one labeled blocking, the right one labeled concurrent," }, { "end": 15, "start": 7, "text": " the blocking robot, as you can see, always has these little pauses in its movement where it does nothing" }, { "end": 24, "start": 15, "text": " and then it kind of continues with its motion, while the one on the right is one continuous motion that it does." }, { "end": 33, "start": 24, "text": " So the reasoning here is that the robot has a camera and the camera takes some time to register what's going on." }, { "end": 41, "start": 33, "text": " And then also the robot has a computer inside and the computer also takes some time to decide what to do based on what the camera saw." }, { "end": 48, "start": 41, "text": " And while all of this is happening, the robot on the left just freezes. So it performs an action and then it freezes" }, { "end": 58, "start": 48, "text": " because it takes time to register a state and compute a new action, whereas the robot on the right, it takes the same amount of time to do these things." }, { "end": 66, "start": 58, "text": " It also takes a time to register the state and to compute an action, but it does that as it is executing the last action." }, { "end": 75, "start": 66, "text": " So it does this in parallel and then it executes the action. Once it has computed a new action, it executes that new action right on top of the old action." }, { "end": 83, "start": 75, "text": " And that gives this one big fluid motion. So this requires a new formulation of reinforcement learning." }, { "end": 93, "start": 83, "text": " And that's what this paper does. Thinking while moving deep reinforcement learning with concurrent control by people from Google Brain, UC Berkeley and X." }, { "end": 103, "start": 93, "text": " So they have a nice diagram here in the supplementary material to show you what is going on in their framework." }, { "end": 111, "start": 103, "text": " So in classic reinforcement learning right here, in classic reinforcement learning, you have this dichotomy between agent and environment, right?" }, { "end": 120, "start": 111, "text": " So the agent and the environment. Now, the agent is supposed to kind of act in the environment in the following manner." }, { "end": 129, "start": 120, "text": " The environment will send an observation to the agent. The observation in this case is the picture of the camera." }, { "end": 139, "start": 129, "text": " So it sends an observation and the agent will think what to do with the observation, which is called a policy." }, { "end": 149, "start": 139, "text": " The policy pie will take an observation and then output an action of what to do and will send back the action to that environment." }, { "end": 160, "start": 149, "text": " So in classic RL, you assume that this part here is kind of freezes time." }, { "end": 174, "start": 160, "text": " So the environment will output an observation and as and the process of registering the observation of computing the action and of sending the action back is happens in zero time." }, { "end": 186, "start": 174, "text": " Of course, it doesn't happen in zero time, but in our reinforcement learning problems, for example, the OpenAI Gym, the environment just stops until it gets the next action." }, { "end": 194, "start": 186, "text": " And then it performs the action, right? It performs the action in the environment and by that the environment changes and time happens." 
}, { "end": 206, "start": 194, "text": " And then it stops again as we think of the next action, right? So this is we usually call this one step in the in the kind of classic formulation of RL." }, { "end": 216, "start": 206, "text": " The only point that time happens is when the action is executed. No time happens when the state is registered or when the action is computed." }, { "end": 226, "start": 216, "text": " And that's what you see here on the left. So in blue, you have the state registration. This is, for example, the camera." }, { "end": 236, "start": 226, "text": " The camera has some time in order to register and store the image that it has taken, right? Maybe post process it a little bit." }, { "end": 248, "start": 236, "text": " So that's what the camera does. But in our classic formulation, as you can see here, if this is time, it happens instantaneously all at the same time." }, { "end": 256, "start": 248, "text": " And also, this is the policy. This is thinking what to do, right? This is your evaluation of your neural network." }, { "end": 272, "start": 256, "text": " If this is a neural network, I'm drawing a small neural network here. This happens instantaneously in these formulations and only as the action is executed, time happens, right?" }, { "end": 280, "start": 272, "text": " And then until here, time freezes again and only once the action is determined, time happens again." }, { "end": 292, "start": 280, "text": " In the new formulation now, as you've already seen, what we have here is this kind of continuous framework where, let's say you're here," }, { "end": 300, "start": 292, "text": " it actually takes time for the camera to post process the image. It takes more time for you to think about what to do." }, { "end": 304, "start": 300, "text": " And then once you decide on an action, that action is going to happen, right?" }, { "end": 314, "start": 304, "text": " But we can say, for example, at this point, you tell the camera to take a new picture of the state, right?" }, { "end": 320, "start": 314, "text": " But that takes time. And while that's happening, the old action is still ongoing, right?" }, { "end": 326, "start": 320, "text": " So you don't even have to say the action is still ongoing, but the world is still moving, right?" }, { "end": 333, "start": 326, "text": " The world is still changing while you think, while you post process and while you evaluate your policy." }, { "end": 338, "start": 333, "text": " The world is thinking and only after some time, right?" }, { "end": 343, "start": 338, "text": " After this lag time here, have you decided on a new action?" }, { "end": 349, "start": 343, "text": " And then you can break that old action and kind of perform your new action." }, { "end": 355, "start": 349, "text": " And all of this is happening in time. So this is the new framework." }, { "end": 366, "start": 355, "text": " Now you see the problem here. The problem is that you base your decisions on the state and time T on time, sorry, time H here." }, { "end": 371, "start": 366, "text": " You base your decisions on a state as it was at that time, right?" }, { "end": 376, "start": 371, "text": " That's what you use to think right here. That's what you store and think about." }, { "end": 381, "start": 376, "text": " But you perform. So you perform the action at this point in time." }, { "end": 386, "start": 381, "text": " So there is a considerable difference here because the world has now changed." 
}, { "end": 392, "start": 386, "text": " So you see the problem. The action you perform is based on an old knowledge of the world." }, { "end": 398, "start": 392, "text": " And you have basically no way of making the action dependent on the current state of the world," }, { "end": 403, "start": 398, "text": " because that would require you to capture the current state and that takes time." }, { "end": 406, "start": 403, "text": " And in that time, the world has already shifted again." }, { "end": 417, "start": 406, "text": " So the agent is kind of required to think ahead about the action that it is currently performing and how the world changes according to that." }, { "end": 425, "start": 417, "text": " So this new formulation of reinforcement learning formulates this in a formal way." }, { "end": 432, "start": 425, "text": " It formulates it in a formal way and will quickly go through that." }, { "end": 440, "start": 432, "text": " Yes, so they go into the very basics here. We'll quickly, quickly go through them." }, { "end": 453, "start": 440, "text": " So they introduce these quantities like the policy pi, the transition distribution, the reward and the Q and value function." }, { "end": 459, "start": 453, "text": " Now we'll just quickly go over these. So you have the agent. Sorry about that." }, { "end": 467, "start": 459, "text": " You have the agent and you have the environment. Hello." }, { "end": 480, "start": 467, "text": " So if you think of the agent and the environment, the environment has this transition function." }, { "end": 489, "start": 480, "text": " The transition function, it takes it says, OK, I'm in this state and the agent does this action." }, { "end": 494, "start": 489, "text": " And here is the probability distribution over the next state." }, { "end": 501, "start": 494, "text": " So it says that your little spaceship is here and the meteors are here." }, { "end": 514, "start": 501, "text": " And then you push the button. If you push the button for shoot, then you'll be in the same place." }, { "end": 520, "start": 514, "text": " The meteors will still be here, but you'll have a little shot coming out of your spaceship." }, { "end": 527, "start": 520, "text": " That's what the environment does. Right. So you give it a state and action and it will give you the next state." }, { "end": 532, "start": 527, "text": " It will also give you a reward. Right." }, { "end": 548, "start": 532, "text": " The reward in the same thing, reward here will be a second output here that tells you either, let's say, negative one if you die or zero if nothing happens or plus one if you shoot a meteor." }, { "end": 557, "start": 548, "text": " That's the reward. So this you can think of as the real world. So these two quantities are in the real world in the environment." }, { "end": 564, "start": 557, "text": " So that's how you model the environment. Then the agent has these quantities called the policy." }, { "end": 579, "start": 564, "text": " So what pi does is much like the transition, but pi takes in a state and gives you an action. Right. So this is now the agent deciding this is thinking." }, { "end": 589, "start": 579, "text": " The policy takes in a state and gives an action. And this contains various forms, but it's just a function for now." }, { "end": 596, "start": 589, "text": " The agent also has a Q and V function, and these are quite, quite similar." 
}, { "end": 607, "start": 596, "text": " So the Q function, what the Q function will do if if you were in a state and you have several options of what to do, right, you have action one, action two and action three." }, { "end": 620, "start": 607, "text": " You're in state S. The Q function of S and A one superscript pi would tell you the following." }, { "end": 629, "start": 620, "text": " It would tell you what's my expected reward if I'm in state S. That's here. And perform action A." }, { "end": 642, "start": 629, "text": " So A one. So if I now take this path and after this path, I follow the policy pi, right, the policy pi for each of the following." }, { "end": 649, "start": 642, "text": " So it's like right now I take action A one. I don't care about my policy. But after that, I follow the policy pi." }, { "end": 655, "start": 649, "text": " What is my expected reward going to be until the end of the episode? That's the Q function." }, { "end": 661, "start": 655, "text": " And the value function here, very similar, but it only cares about the state." }, { "end": 672, "start": 661, "text": " It says if I'm in state S and I just follow the policy pi, even in the first step, right, I just follow this policy pi." }, { "end": 681, "start": 672, "text": " What is my expected reward going to be over the course of the episode? That is the Q and the value functions." }, { "end": 689, "start": 681, "text": " You can see why Q learning is popular. If you have a good Q function and the Q and the value function, these are the things that you actually want to learn, right?" }, { "end": 703, "start": 689, "text": " If you have a good Q function, you can simply always plug in every action into your Q function and then simply take the maximum, the action that has the maximum Q value," }, { "end": 712, "start": 703, "text": " because that will give you the best reward if your policy pi, right, is kind of self-referential." }, { "end": 726, "start": 712, "text": " If your policy is to always take the maximum Q value, then taking the maximum Q value with the policy, given that you take the maximum Q value, will be optimal." }, { "end": 736, "start": 726, "text": " All right, this was very convoluted, but all right, so let's start off with modeling the environment in this continuous framework." }, { "end": 745, "start": 736, "text": " So instead of having the next state be determined by the current state in action, in the continuous framework, they do this via differential equation." }, { "end": 754, "start": 745, "text": " So the DS is how does the environment change? This is the change in the environment that is determined by two functions, F and G." }, { "end": 762, "start": 754, "text": " So F is your classic environment function. It takes in a state and an action at time t, right?" }, { "end": 768, "start": 762, "text": " These are now functions and it will output how the state changes." }, { "end": 783, "start": 768, "text": " And the G here is, this is a Wiener process, is to introduce stochasticity, as I understand it, because in the classic formulation, the transition model gives you a probability up here, a probability distribution." }, { "end": 793, "start": 783, "text": " So this Wiener process is responsible for introducing that probabilistic nature into this differential equation." }, { "end": 801, "start": 793, "text": " But ultimately, it simply tells you how does the state change depending on my state, current state and action that I perform." 
}, { "end": 808, "start": 801, "text": " So the reward function now is also pretty simple." }, { "end": 815, "start": 808, "text": " The tau here is a trajectory and the trajectory is simply the state and action over time." }, { "end": 826, "start": 815, "text": " So if I integrate from time zero to infinity or to the end of the episode, my reward function at each point in time, right?" }, { "end": 832, "start": 826, "text": " So I go through my episode and I get high reward, not so high and so on." }, { "end": 843, "start": 832, "text": " So the integral under this curve will be my total reward, just like we sum up the reward of individual steps in the discrete case." }, { "end": 850, "start": 843, "text": " In the continuous case, you can think of each infinitesimal time step giving you a tiny bit of reward." }, { "end": 855, "start": 850, "text": " So the entire reward is just an integral." }, { "end": 863, "start": 855, "text": " Then we go on the value function for a given state at time t." }, { "end": 877, "start": 863, "text": " So think about what this is. The value function for a state means what reward can I expect starting in this particular state and then following policy pi until the end of the episode." }, { "end": 888, "start": 877, "text": " And that here is the expectation over all trajectories that come from my policy of the reward in that trajectory." }, { "end": 894, "start": 888, "text": " So I can, you know, if I'm here, my policy now is also a distribution." }, { "end": 900, "start": 894, "text": " It can go multiple trajectories, right? And I want to have the expected value of the reward." }, { "end": 910, "start": 900, "text": " So each one of these has a reward, the expected value of the reward over all trajectories starting from state s t." }, { "end": 924, "start": 910, "text": " And again, here you say that that is the integral over the now here I have a bit of a problem because here they say t equals zero going from here and here." }, { "end": 944, "start": 924, "text": " But here the t is already here. So I believe this should be this should be t equals t prime and then t prime t prime t prime up here." }, { "end": 955, "start": 944, "text": " And t minus t prime or something like this. In any case, I think it should actually start from this state here and not from time zero." }, { "end": 961, "start": 955, "text": " But I might be missing something. I'm not the biggest integrator in the world." }, { "end": 967, "start": 961, "text": " So, you know, all right, then you have the Q function." }, { "end": 977, "start": 967, "text": " Now think of it what the Q function is in the discrete case, the Q function tells you if I'm in state s and perform action a, what is my expected reward going to be?" }, { "end": 995, "start": 977, "text": " I have to introduce some different things here. They say if I'm in state s and I act action a at time t until time h right now, you have to say how long you're going to perform the action for" }, { "end": 1003, "start": 995, "text": " until you perform the next action. Right. So h is your your lag time here until you perform the next action." }, { "end": 1010, "start": 1003, "text": " So this now I actually agree with this formulation with the integral here." }, { "end": 1016, "start": 1010, "text": " So this is going to be the integral from time t to time t plus h." }, { "end": 1030, "start": 1016, "text": " That's how long you perform the action. Your reward of performing that action, right. 
Given the state plus the value function at the end of that." }, { "end": 1037, "start": 1030, "text": " So you're here. You're in s t and you perform action a right." }, { "end": 1049, "start": 1037, "text": " And then this is your state at time t plus h and then you're here. And from there on, you could perform many, many, many actions." }, { "end": 1061, "start": 1049, "text": " But in the original notion of the Q function, the Q function tells you if I'm here and I perform this action and after that, I act according to policy pi." }, { "end": 1088, "start": 1061, "text": " What is my what is my expected reward? And there's a classic recurrence relation in reinforcement learning where you can say the Q function in s t given to a is the reward that I get from performing a in state s plus the value function at state s at the next state." }, { "end": 1095, "start": 1088, "text": " Because the value function is exactly the reward that you would get by following policy pi in that next state." }, { "end": 1101, "start": 1095, "text": " And the Q function means I perform a now and after that I perform pi." }, { "end": 1104, "start": 1101, "text": " So this is the continuous analog." }, { "end": 1110, "start": 1104, "text": " That's why you have this part here where you perform the action for each time." }, { "end": 1118, "start": 1110, "text": " After each time you just go after go with your policy and that will be the value function." }, { "end": 1124, "start": 1118, "text": " So this is the continuous formulation of the of the problem." }, { "end": 1129, "start": 1124, "text": " Right. And now they can introduce these these lagging times." }, { "end": 1135, "start": 1129, "text": " So in their diagram up here, they define these notions." }, { "end": 1141, "start": 1135, "text": " So you have your state s t right here." }, { "end": 1147, "start": 1141, "text": " Then after this time, you capture the new state." }, { "end": 1159, "start": 1147, "text": " Right. So after that time, you capture the new state and decide on an action and then you perform it for each time." }, { "end": 1161, "start": 1159, "text": " Is that correct?" }, { "end": 1163, "start": 1161, "text": " Until here." }, { "end": 1175, "start": 1163, "text": " So the the the I minus one of action is performed at this time and the I action is performed at this time." }, { "end": 1178, "start": 1175, "text": " No, that makes no sense." }, { "end": 1185, "start": 1178, "text": " So let's read it." }, { "end": 1193, "start": 1185, "text": " So this is when you capture the state and you need to time to perform to think." }, { "end": 1197, "start": 1193, "text": " Right. This is thinking." }, { "end": 1202, "start": 1197, "text": " And then you perform this action at that time." }, { "end": 1204, "start": 1202, "text": " This is the lag time now." }, { "end": 1207, "start": 1204, "text": " And you perform this action." }, { "end": 1215, "start": 1207, "text": " You want to know you want to know if I perform this action until this time here, what is what is happening?" }, { "end": 1222, "start": 1215, "text": " So this is the new Q function takes into account this thing." }, { "end": 1234, "start": 1222, "text": " It tells you if I'm in state s and I think this is thinking leads me to here." }, { "end": 1238, "start": 1234, "text": " This is the old action, right?" }, { "end": 1243, "start": 1238, "text": " This is the old action that's still happening while I observe this state." }, { "end": 1254, "start": 1243, "text": " Right. 
So it means if I do this right now and after thinking, I do this." }, { "end": 1256, "start": 1254, "text": " Right." }, { "end": 1260, "start": 1256, "text": " So I'm at state. I'm at time t." }, { "end": 1263, "start": 1260, "text": " And this is still happening." }, { "end": 1274, "start": 1263, "text": " And then after I think thinking leads me here t plus t a s." }, { "end": 1277, "start": 1274, "text": " I perform this new action." }, { "end": 1280, "start": 1277, "text": " I'm out of colors." }, { "end": 1286, "start": 1280, "text": " I perform this new action at that point until time H." }, { "end": 1288, "start": 1286, "text": " What's my Q function?" }, { "end": 1302, "start": 1288, "text": " So my Q function is going to be the integral time t where I start observing the state and start thinking until t plus t a s." }, { "end": 1305, "start": 1302, "text": " That's when I still perform the old action." }, { "end": 1311, "start": 1305, "text": " Right. So this is going to be the reward in the state given the old action." }, { "end": 1314, "start": 1311, "text": " And then at that time, I switch over to the new action." }, { "end": 1315, "start": 1314, "text": " Right." }, { "end": 1322, "start": 1315, "text": " So at that time until time H, now I perform the new action." }, { "end": 1340, "start": 1322, "text": " So this entire part here, this part until here is taking the place of this first part here in the Q function of this first part." }, { "end": 1343, "start": 1340, "text": " Right. So because before it was simply executing one action," }, { "end": 1345, "start": 1343, "text": " we didn't have this concurrency yet." }, { "end": 1347, "start": 1345, "text": " So executing the action." }, { "end": 1349, "start": 1347, "text": " And after that, it's going to be the value function." }, { "end": 1352, "start": 1349, "text": " And now it's executing two actions." }, { "end": 1354, "start": 1352, "text": " First, execute the old action." }, { "end": 1357, "start": 1354, "text": " Then once you're done thinking, execute the new action." }, { "end": 1361, "start": 1357, "text": " And then it's the value function from there on." }, { "end": 1364, "start": 1361, "text": " I hope this is clear." }, { "end": 1367, "start": 1364, "text": " It wasn't clear to me until just now as well." }, { "end": 1368, "start": 1367, "text": " All right." }, { "end": 1378, "start": 1368, "text": " So they define the Monte Carlo estimator where you can do this with just samples of the trajectories instead of expectations." }, { "end": 1383, "start": 1378, "text": " And then they define the Bellman operator, the Bellman backup operator." }, { "end": 1393, "start": 1383, "text": " Now, the Bellman backup operator is an important quantity in value based reinforcement learning because the Bellman backup operator is basically what I talked about before." }, { "end": 1406, "start": 1393, "text": " It tells you that if your policy is to always select the maximum, the action with the maximum Q value, right?" }, { "end": 1407, "start": 1406, "text": " That's what's down here." }, { "end": 1419, "start": 1407, "text": " After you do this action, then the policy you arrive at and you can give certain optimality guarantees." }, { "end": 1424, "start": 1419, "text": " But in essence, this is so-called a contraction." }, { "end": 1435, "start": 1424, "text": " So if you always do that and you calculate your Q function that way, it will mean that in the contraction is defined as if you have an operator." 
}, { "end": 1446, "start": 1435, "text": " If you have two things that are X1 and X2 that are some apart from each other, then after you apply the operator, this T here," }, { "end": 1460, "start": 1446, "text": " X1 minus T X2, they will be closer together, which basically means that the Q to Q functions of the individual states will be closer together." }, { "end": 1465, "start": 1460, "text": " And you'll converge to a single Q function." }, { "end": 1470, "start": 1465, "text": " So given enough time and enough data, you'll converge on one Q function." }, { "end": 1474, "start": 1470, "text": " There's one fixed point Q function that you'll converge to." }, { "end": 1484, "start": 1474, "text": " And you can show under assumption in classic RL that this is going to be the optimal Q function, the true, let's say, Q function." }, { "end": 1491, "start": 1484, "text": " So they first prove this and then they prove a now they go back to discrete time." }, { "end": 1492, "start": 1491, "text": " So now they were in continuous time." }, { "end": 1499, "start": 1492, "text": " They go back to discrete time, but now they have a discrete time formulation with this lag here." }, { "end": 1504, "start": 1499, "text": " And also they prove that that Bellman operator is a contraction." }, { "end": 1512, "start": 1504, "text": " So the contraction part basically means that if you perform Q learning, you're going to arrive at a solution." }, { "end": 1514, "start": 1512, "text": " That's what this means to be contraction." }, { "end": 1521, "start": 1514, "text": " But now, obviously, that solution in classic RL is going to be the optimal Q function." }, { "end": 1524, "start": 1521, "text": " But here, I actually don't know." }, { "end": 1533, "start": 1524, "text": " All right, so they try this out and they introduce one last important concept here, what they call vector to go," }, { "end": 1545, "start": 1533, "text": " which basically means that at the point where they start thinking," }, { "end": 1556, "start": 1545, "text": " where is a good thing to show this at the point where they start thinking, they give a they give the last action with." }, { "end": 1566, "start": 1556, "text": " So at this point right here, where they capture the state," }, { "end": 1581, "start": 1566, "text": " they also sort of the state contains a information about what part of the action that you started here is still outstanding." }, { "end": 1586, "start": 1581, "text": " So maybe your action was and they illustrate this down here." }, { "end": 1594, "start": 1586, "text": " Maybe your action was to move your robot arm from down here to up here." }, { "end": 1598, "start": 1594, "text": " That was your planned action at this point in time." }, { "end": 1608, "start": 1598, "text": " Now, if you are at step, if you perform the action here and here you start capturing the next state," }, { "end": 1615, "start": 1608, "text": " then you would also give this particular vector here to the to the to the agent." }, { "end": 1622, "start": 1615, "text": " So not only will you tell it, hey, by the way, my last action was a T minus one, as you would need in the Q value." }, { "end": 1627, "start": 1622, "text": " You will also say, and this much is outstanding." }, { "end": 1631, "start": 1627, "text": " This is much is where as I still have to do that much." }, { "end": 1638, "start": 1631, "text": " So basically you're saying I wanted to move my arm right here and I still have to do this part of the action." 
}, { "end": 1645, "start": 1638, "text": " Now you can see why the algorithm is able to learn much better given that information," }, { "end": 1660, "start": 1645, "text": " because otherwise it has it would have to basically infer that vector from kind of differencing the action minus the what probably happened in the meantime." }, { "end": 1661, "start": 1660, "text": " So they test this out." }, { "end": 1671, "start": 1661, "text": " And what results is the robot videos you've seen before where they say they can recover the original" }, { "end": 1675, "start": 1671, "text": " Q learning in this continuous framework." }, { "end": 1682, "start": 1675, "text": " So here on the left side, you have blocking actions and it says when it says yes here," }, { "end": 1685, "start": 1682, "text": " it is kind of the old old framework." }, { "end": 1694, "start": 1685, "text": " You see the grasp success at like 92 percent, where as if you go to non blocking actions," }, { "end": 1701, "start": 1694, "text": " but do none of the none of the concurrent information, the grasp success suffers." }, { "end": 1715, "start": 1701, "text": " But you can recover the grasp success if you if you give these concurrent information like introduce time step penalty and you give this vector to go and the information about the previous action." }, { "end": 1728, "start": 1715, "text": " And you can also see that the episode duration here is much lower when you go for the continuous actions than when you are in the old framework," }, { "end": 1730, "start": 1728, "text": " naturally, because you don't need to pause." }, { "end": 1734, "start": 1730, "text": " Right." }, { "end": 1735, "start": 1734, "text": " In this." }, { "end": 1739, "start": 1735, "text": " So this is the simulated robotics and the real world robotic grasping results." }, { "end": 1749, "start": 1739, "text": " You see kind of similar results in that if you do have blocking actions, your grasp success is higher than if you don't." }, { "end": 1756, "start": 1749, "text": " But your duration of your of your policy is cut in half." }, { "end": 1760, "start": 1756, "text": " So maybe this is a trade off worth considering." }, { "end": 1763, "start": 1760, "text": " I think this is a is pretty cool framework." }, { "end": 1768, "start": 1763, "text": " And I think there's going to be a lot of work still outstanding here." }, { "end": 1776, "start": 1768, "text": " And I invite you to check out the paper and look at their videos and their ablation studies of what's important and what not." }, { "end": 1799, "start": 1776, "text": " And with that, bye bye." } ]
yPjuAo53uNI
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
[Rant] The Male Only History of Deep Learning
[ "Science & Technology" ]
[ "deep learning", "machine learning", "neural networks", "history", "groups", "ideology" ]
This casting of our field in terms of ideological narrow-sighted group-think is disgusting. Keep Science about ideas! https://twitter.com/timnitGebru/status/1252752743942328321 Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Alright, so instead of reviewing a paper today, I thought I might review this thing. So this person on Twitter posted a link to an article called Brief History of Deep Learning from 1943 to 2019 on MachineLearningKnowledge.ai. So let's look at this. Actually, let's look at the tweet first, because I just saw this: The male-only history of deep learning, where you say AlexNet makes history, but ImageNet doesn't, because women's contributions don't count. And contributions from anyone except for white and white-adjacent people, for that matter. That is the tweet, and it has 109 retweets, over 400 likes, and people generally agreeing with this sentiment. So the person is expressing concern that this article only goes over one particular group of people.

So let's look at the article. It basically goes over the history of neural networks, of deep learning, in an algorithmic sense. So let's check it out. First we get to neurons, starting at 1943, and the perceptron paper right here. Then the first backpropagation algorithm from Kelley; actually, I think people like Schmidhuber would be proud, since as far as I can tell some of these things are more of a forgotten history. Of course, Minsky's paper, very famous. But here backpropagation is attributed to this paper, and so on. And you can see people like Hinton only coming up later here: the Boltzmann machine, backpropagation in neural networks. So as far as I can tell, it's just a take on the history of algorithmic development, and you can see it really is about algorithms, the algorithms behind deep learning. So here is the vanishing gradient problem, the LSTM as an architectural component, deep belief networks. Then you have GPUs for training. Again vanishing gradients, AlexNet, then GANs, AlphaGo; we're going a bit faster now. And at the end it says: the godfathers win the Turing Award for their immense contribution and advancements in the area of deep learning and artificial intelligence. This is a defining moment for those who had worked relentlessly on neural networks when the entire machine learning community had moved away from them in the 1970s.

So the article is clearly focused on algorithmic developments in deep learning, and that's why AlexNet is here. Now this person rags on the fact that AlexNet is here but ImageNet isn't. And clearly you can see from the article: ImageNet is a data set. It was not made with deep learning in mind; it was simply made as a data set. It's not an algorithmic development. GANs are here as well, right? But CelebA isn't. CIFAR-10 isn't. MNIST isn't. The Penn Treebank isn't. And I think the article also skips a lot of architectural advancements, like transformers and all kinds of things. But the history is clearly about the algorithmic developments. And to reframe this as, it clearly states, ImageNet doesn't make history because women's contributions don't count; that insinuation, I find it to be absolutely intellectually dishonest.

And then they say: and contributions from anyone except for white and white-adjacent people, for that matter. At this point you just have to laugh, because of course the narrative the person wanted to tell was that only white people count. But then you scroll, and, ah, it doesn't fit the narrative, right? The person behind this GPU milestone is not a white person. So to make it fit your narrative, you have to call them white-adjacent. What is white-adjacent? It's like: whatever I don't like, I now call white. And people just agreeing with this, I find this absolutely disgusting. And I find the article to be okay; I don't know better. I definitely think there is misattribution in science throughout, even systematic misattribution. But to say that ImageNet wasn't included because women's contributions don't count, that is just a straight-out lie. And to call people white-adjacent is like, how do you not have a bell in your head that goes ding ding ding ding ding when you do something like this? So I find this to be dishonest, either willfully or just because people have become so used to seeing the world in one particular frame. I think these calls only get big whenever there is money and attention going into a field. If you look at any field where it's just a bunch of weirdos doing their thing, the weirdos don't care who's there; they just care about the ideas that people have. And I believe we should take that view in science in general. I don't care who has the idea. These people do, and I disagree. All right, that was it. Keep pushing back on these things if you agree as well, and keep science about ideas. Thanks.
[ { "end": 6.26, "start": 0, "text": " Alright, so instead of reviewing a paper today, I thought I might review this thing." }, { "end": 13.22, "start": 6.26, "text": " So this person on Twitter posted this link to an article called Brief History of Deep" }, { "end": 21.1, "start": 13.22, "text": " Learning from 1943 to 2019 of Machine Learning Knowledge.ai." }, { "end": 24.28, "start": 21.1, "text": " So let's look at this." }, { "end": 26.04, "start": 24.28, "text": " Actually let's look at the tweet first." }, { "end": 28.12, "start": 26.04, "text": " Because this is..." }, { "end": 29.64, "start": 28.12, "text": " I just saw this." }, { "end": 36, "start": 29.64, "text": " The male-only history of deep learning, where you say AlexNet makes history, but ImageNet" }, { "end": 40.72, "start": 36, "text": " doesn't, because women's contributions don't count." }, { "end": 47.84, "start": 40.72, "text": " And contributions from anyone except for white and white-adjacent people for that matter." }, { "end": 56.96, "start": 47.84, "text": " That is the tweet, and it has 109 retweets, over 400 likes, and people generally agreeing" }, { "end": 58.14, "start": 56.96, "text": " with this sentiment." }, { "end": 68.72, "start": 58.14, "text": " So the person is expressing concerns that this article is only going over one particular" }, { "end": 70.68, "start": 68.72, "text": " group of people." }, { "end": 73.2, "start": 70.68, "text": " So let's look at the article." }, { "end": 80.84, "start": 73.2, "text": " They basically go over the history of neural networks, of deep learning in an algorithmic" }, { "end": 81.84, "start": 80.84, "text": " sense." }, { "end": 83.12, "start": 81.84, "text": " So let's check it out." }, { "end": 88.28, "start": 83.12, "text": " So first we go into neurons, starting at 1943." }, { "end": 93.52000000000001, "start": 88.28, "text": " And the Perceptron paper right here." }, { "end": 97.36, "start": 93.52000000000001, "text": " The first backpropagation algorithm from Kelly." }, { "end": 101.96000000000001, "start": 97.36, "text": " This actually, I think people like Schmidhuber would be proud." }, { "end": 108.44, "start": 101.96000000000001, "text": " As far as I can tell, this is kind of more of a forgotten history, or some of these things" }, { "end": 109.72, "start": 108.44, "text": " are more of a forgotten history." }, { "end": 113.44, "start": 109.72, "text": " Of course, Minsky's paper, very famous." }, { "end": 120.6, "start": 113.44, "text": " But here backpropagation attributed to this paper, and so on." }, { "end": 127.6, "start": 120.6, "text": " And you can see things people like Hinton only coming up later here, the Boltzmann machine," }, { "end": 132.44, "start": 127.6, "text": " backpropagation in neural networks now." }, { "end": 139.36, "start": 132.44, "text": " So this, as far as I can tell, it's just a take on kind of the history of algorithmic" }, { "end": 140.36, "start": 139.36, "text": " development." }, { "end": 144.84, "start": 140.36, "text": " And you can see here, it really is about algorithms." }, { "end": 147.60000000000002, "start": 144.84, "text": " The algorithms behind deep learning." }, { "end": 154.16000000000003, "start": 147.60000000000002, "text": " So here is the vanishing gradient problem, the LSTM as an architectural component, deep" }, { "end": 155.52, "start": 154.16000000000003, "text": " belief networks." }, { "end": 158.72000000000003, "start": 155.52, "text": " Then you have GPUs for training." 
}, { "end": 163.88000000000002, "start": 158.72000000000003, "text": " Again, vanishing gradients, AlexNet, then GANs, AlphaGo." }, { "end": 166.20000000000002, "start": 163.88000000000002, "text": " So we're now going a bit faster." }, { "end": 171.72, "start": 166.2, "text": " And then the end, it says, the godfathers win the Turing Award for their immense contribution" }, { "end": 174.72, "start": 171.72, "text": " in advancements in area of deep learning and artificial intelligence." }, { "end": 179.32, "start": 174.72, "text": " This is a defining moment for those who had worked relentlessly on neural networks when" }, { "end": 183.83999999999997, "start": 179.32, "text": " the entire machine learning community had moved away from it in the 1970s." }, { "end": 191.95999999999998, "start": 183.83999999999997, "text": " So the article clearly is focused on algorithmic developments in deep learning." }, { "end": 194.04, "start": 191.95999999999998, "text": " And that's why AlexNet is here." }, { "end": 200.32, "start": 194.04, "text": " Now this person rags that AlexNet is here, but ImageNet isn't." }, { "end": 205.64, "start": 200.32, "text": " And clearly you can see from the article, ImageNet is a data set." }, { "end": 208.35999999999999, "start": 205.64, "text": " It was not made with deep learning in mind." }, { "end": 210.5, "start": 208.35999999999999, "text": " It was simply made as a data set." }, { "end": 212.76, "start": 210.5, "text": " It's not an algorithmic development." }, { "end": 215.32, "start": 212.76, "text": " So GANs are here as well, right?" }, { "end": 217.39999999999998, "start": 215.32, "text": " But CelebA isn't." }, { "end": 219.2, "start": 217.39999999999998, "text": " C410 isn't." }, { "end": 222.07999999999998, "start": 219.2, "text": " MNIST isn't." }, { "end": 224.60000000000002, "start": 222.08, "text": " The PantryBank isn't." }, { "end": 232.72000000000003, "start": 224.60000000000002, "text": " So I think we've skipped a lot of architectural advancements here, like transformers or all" }, { "end": 234, "start": 232.72000000000003, "text": " kinds of things here." }, { "end": 237.88000000000002, "start": 234, "text": " But the history is clearly about the algorithmic developments." }, { "end": 245.4, "start": 237.88000000000002, "text": " And to reframe this, it clearly states ImageNet doesn't because women's contributions don't" }, { "end": 247.24, "start": 245.4, "text": " count." }, { "end": 254, "start": 247.24, "text": " The insinuation here, absolutely, I find this to be absolutely intellectually dishonest." }, { "end": 257.96000000000004, "start": 254, "text": " And they say, and contributions from anyone except for white and white-adjacent people" }, { "end": 259.72, "start": 257.96000000000004, "text": " for that matter." }, { "end": 262.44, "start": 259.72, "text": " At this point you just have to laugh." }, { "end": 268.8, "start": 262.44, "text": " Because of course the narrative that the person wanted to tell was that it's only white people" }, { "end": 270.2, "start": 268.8, "text": " that count." }, { "end": 275.92, "start": 270.2, "text": " But then you scroll and you're like, arrrrr, it doesn't fit my narrative, right?" }, { "end": 281.6, "start": 275.92, "text": " This GPU is not a white person." }, { "end": 286.36, "start": 281.6, "text": " So to make it fit your narrative you have to call white-adjacent..." }, { "end": 288.56, "start": 286.36, "text": " What is white-adjacent?" 
}, { "end": 294.96000000000004, "start": 288.56, "text": " It's like, whatever I don't like I now call white." }, { "end": 301, "start": 294.96000000000004, "text": " But people just agreeing with this, I find this absolutely disgusting." }, { "end": 303.48, "start": 301, "text": " And I find the article to be okay." }, { "end": 304.88, "start": 303.48, "text": " I don't know better." }, { "end": 306.24, "start": 304.88, "text": " But if you have a problem with..." }, { "end": 311.6, "start": 306.24, "text": " I definitely think there is misattribution in science throughout, even systematic." }, { "end": 317.04, "start": 311.6, "text": " But to say that ImageNet wasn't included because women's contributions don't count, that is" }, { "end": 319.71999999999997, "start": 317.04, "text": " just a straight out lie." }, { "end": 325.15999999999997, "start": 319.71999999999997, "text": " And to call people white-adjacent is like, how do you not have a bell in your head that" }, { "end": 330.26, "start": 325.15999999999997, "text": " goes ding ding ding ding ding when you do something like this?" }, { "end": 338.15999999999997, "start": 330.26, "text": " So I find this to be dishonest, either willfully or just because people have so become used" }, { "end": 344.36, "start": 338.15999999999997, "text": " to seeing the world in one particular frame." }, { "end": 350.76, "start": 344.36, "text": " I think these calls only get big whenever there is money and attention going into a" }, { "end": 351.96, "start": 350.76, "text": " field, right?" }, { "end": 359, "start": 351.96, "text": " If you look at any field where it's just a bunch of weirdos doing their thing, the weirdos" }, { "end": 360.4, "start": 359, "text": " don't care who's there." }, { "end": 364.56, "start": 360.4, "text": " They just care about the ideas that people have." }, { "end": 370, "start": 364.56, "text": " And I believe we should take that view in science in general." }, { "end": 372.96, "start": 370, "text": " I don't care who has the idea." }, { "end": 376.08, "start": 372.96, "text": " And these people do." }, { "end": 377.08, "start": 376.08, "text": " And I disagree." }, { "end": 379.24, "start": 377.08, "text": " All right, that was it." }, { "end": 383.32, "start": 379.24, "text": " Keep pushing back on these things if you agree as well." }, { "end": 385.76, "start": 383.32, "text": " And keep science for ideas." }, { "end": 389.4, "start": 385.76, "text": " Thanks." } ]
PZypP7PiKi0
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Gradient Surgery for Multi-Task Learning
[ "Science & Technology" ]
[ "deep learning", "machine learning", "neural networks", "multi task", "conflicting gradients", "magnitudes", "adam", "sgd", "momentum", "optimization", "projection" ]
Multi-Task Learning can be very challenging when gradients of different tasks are of severely different magnitudes or point into conflicting directions. PCGrad eliminates this problem by projecting conflicting gradients while still retaining optimality guarantees. https://arxiv.org/abs/2001.06782 Abstract: While deep learning and deep reinforcement learning (RL) systems have demonstrated impressive results in domains such as image classification, game playing, and robotic control, data efficiency remains a major challenge. Multi-task learning has emerged as a promising approach for sharing structure across multiple tasks to enable more efficient learning. However, the multi-task setting presents a number of optimization challenges, making it difficult to realize large efficiency gains compared to learning tasks independently. The reasons why multi-task learning is so challenging compared to single-task learning are not fully understood. In this work, we identify a set of three conditions of the multi-task optimization landscape that cause detrimental gradient interference, and develop a simple yet general approach for avoiding such interference between task gradients. We propose a form of gradient surgery that projects a task's gradient onto the normal plane of the gradient of any other task that has a conflicting gradient. On a series of challenging multi-task supervised and multi-task RL problems, this approach leads to substantial gains in efficiency and performance. Further, it is model-agnostic and can be combined with previously-proposed multi-task architectures for enhanced performance. Authors: Tianhe Yu, Saurabh Kumar, Abhishek Gupta, Sergey Levine, Karol Hausman, Chelsea Finn Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Hi there, today we're looking at Gradient Surgery for Multi-Task Learning by Tianhe Yu, Saurabh Kumar, Abhishek Gupta, Sergey Levine, Karol Hausman and Chelsea Finn. So in this paper, the concern is a thing called multi-task learning. Now what is multi-task learning? This has some subtle distinctions from other setups, and that's, I think, why it's important to look at it a bit. So let's say you have a learning problem with multiple tasks. This seems easy enough: we have the same input, but we want to perform two different tasks on it, task one and task two. For example, if the input is a picture of a food item, task one could be: is it a fruit? And task two could be: how many calories does it have? The input is this one food item, and you want to know both things, whether it is a fruit and how many calories it has.

Now, you could train two separate machine-learning classifiers: classifier one simply does the is-it-a-fruit thing, classifier two simply does the how-many-calories thing. Let's say the input is actually a food picture; since Instagram is full of food pictures, we have lots of training data, at least unlabeled data, and people usually caption it anyway. So we could train two different models. But it would be nice, since both tasks deal with the same input, the same input distribution really, if we could share a representation. So maybe we have some neural network with many layers, we take the hidden representation at the end, and then we just have one or two fully connected layers for each individual task, while the hidden representation itself is shared.

And why could that help? Because maybe we have lots of training data for the how-many-calories task, a big database, but only a handful of data points for the is-it-a-fruit task. Or we might not have much training data at all for either task, and we might just benefit from training this shared representation. You might have already seen something similar with BERT, but in BERT's case what happens is different, and that's why BERT is not multi-task learning: in BERT, step one is the masked-language-model pre-training, and in step two you take that model and fine-tune it on a number of tasks, like question answering, sentiment detection, entailment, and so on. That is called pre-training and fine-tuning. In multi-task learning, we actually want to train on the different tasks at the same time; maybe they have different data, and we simply want to create this shared representation. And we hope that by combining these tasks, we might learn them better than if we were to learn each task individually. (A tiny sketch of such a shared-trunk setup follows right below.)
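And here is that tiny sketch: a shared trunk with one small head per task, written as a hypothetical PyTorch module; the layer sizes, names and dummy labels are made up for illustration and are not from the paper.

```python
import torch
import torch.nn as nn

class SharedTrunkNet(nn.Module):
    """A shared representation with one small head per task (illustrative sizes)."""
    def __init__(self, in_dim=128, hidden_dim=64):
        super().__init__()
        # shared layers: both tasks backpropagate into these weights
        self.trunk = nn.Sequential(
            nn.Linear(in_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
        )
        self.fruit_head = nn.Linear(hidden_dim, 1)    # task 1: is it a fruit? (logit)
        self.calorie_head = nn.Linear(hidden_dim, 1)  # task 2: calorie regression

    def forward(self, x):
        h = self.trunk(x)                 # the shared hidden representation
        return self.fruit_head(h), self.calorie_head(h)

net = SharedTrunkNet()
x = torch.randn(8, 128)                   # a batch of stand-in image features
fruit_logit, calories = net(x)
# combined multi-task loss; the random targets are dummies just to run the example
loss = nn.functional.binary_cross_entropy_with_logits(fruit_logit, torch.rand(8, 1)) \
     + nn.functional.mse_loss(calories, torch.rand(8, 1))
loss.backward()                           # gradients from both tasks hit the trunk
```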
Alright, so this paper says there is a big problem with setups like this, and they illustrate it with an example. Say you have a multi-task objective, and the learning landscape looks like this; you have to imagine this is a neural network with just two weights, weight one on this axis and weight two on that axis, and this plot is the optimization landscape for task one. If you're not used to this kind of depiction: the light parts here and here are high values of the loss function, and the darker parts are low values, so you want to get to the darker parts.

Usually we discuss plots like this in terms of optimization. For example, we would talk about SGD and about what happens if the step size is too large: if you're here, the gradient points in the direction of steepest increase, so the negative gradient points down. With SGD, maybe we'd go here, then take another gradient step and go here; oh, now we've gone too far, the gradient now points the other way, so we go back, and we just keep oscillating like this. That's a classic problem with SGD, and what we can do is decrease the step size, and then we converge, or use something like Adam, which adjusts the update to the variance of the gradient landscape, things like that. So these are problems in optimization.

But what happens when you have a multi-task objective? For just task one, the optimization landscape looks like this; let's say here is theta one and here is theta two, the two weights we care about right now, with everything else fixed. For task two, because it's a different task, we need to set the weights differently to get our desired output, so its landscape looks different. And our loss function is going to be a combination: for a given sample, the loss is the loss on task one of that sample plus the loss on task two of that sample, L(x) = L1(x) + L2(x). That combination is what you see on the right: this plus this equals this. And you can see, in task one it almost didn't matter whether we were here or here, both had a relatively low loss value; and in task two this point is not quite an optimum, the two are only somewhat close together. So if you add them, this point here still has a low value, but not as low as the much darker region. The landscape for both tasks together looks different from the landscape of either task alone, and your goal is to find the optimal point that works for both tasks. (A tiny numeric version of this summed loss is sketched below.)
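Here is that numeric version: two made-up quadratic task losses over two weights and the gradient of their sum; the particular quadratics are my own toy choice, just to make the combined-landscape idea concrete.

```python
import numpy as np

# Two toy task losses over weights theta = (theta1, theta2); coefficients are made up.
def loss_task1(theta):
    return 0.5 * theta[0] ** 2 + 5.0 * theta[1] ** 2            # steep along theta2

def loss_task2(theta):
    return 5.0 * (theta[0] - 2.0) ** 2 + 0.25 * theta[1] ** 2   # steep along theta1

def grad(f, theta, eps=1e-5):
    """Central-difference gradient, good enough for a 2-D illustration."""
    g = np.zeros_like(theta)
    for i in range(len(theta)):
        e = np.zeros_like(theta)
        e[i] = eps
        g[i] = (f(theta + e) - f(theta - e)) / (2 * eps)
    return g

theta = np.array([1.0, 1.0])
g1 = grad(loss_task1, theta)
g2 = grad(loss_task2, theta)
print(g1, g2)     # per-task gradients: roughly [1, 10] and [-10, 0.5]
print(g1 + g2)    # the multi-task gradient of L1 + L2
print(g1 @ g2)    # negative dot product: these two gradients already conflict
```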
Now, the paper identifies problems with this kind of multi-task learning, and they say the problem is that you can have what are called conflicting gradients. So look at where the gradients point for the different tasks. They use Adam in this example, the starting point is right here, and the optimization has come this way so far; we'll stop a little bit before that valley. So let's analyze the gradients. The gradient of task one points down the valley, and it's pretty big, because the loss is pretty steep there: you can see the level curves getting closer and closer together, which means the gradient is steep and points in that direction. Whereas for task two, if you're here, the gradient points in a different direction and is not as steep, because here the lines are still pretty far apart, so the loss is relatively flat. This is what the paper calls conflicting gradients, and they're drawn in here; I'll draw them a little larger. These two gradients, first of all, have different magnitudes: you see that the magnitude of this one is much larger than the magnitude of that one. And the angle between them is large: conflicting means they're more than 90 degrees apart from each other, i.e. their cosine is negative. And if you calculate the resulting summed gradient, you get something like this, so our algorithm wouldn't actually go down the valley, it would go up the hill again, because you have differently sized gradients from the different tasks going in different directions.

Now, an important point, and I was wondering about this for a long time: what's the difference between this and simply saying that your loss on any data set D is just the sum of the losses on your individual data points x_i? There, too, you have different data points and different gradients. If you've never done optimization, sorry, I'm going a bit fast: the gradient with respect to your weights of your loss over the entire data set is, of course, approximated by the average over your mini-batch, grad L(D) ≈ (1/n) Σ_i grad L(x_i). And those per-sample gradients might be conflicting as well: one could point in this direction and another in that direction. Yet things like Adam and SGD are able to handle that just fine, because we do this averaging operation. I think what is different in multi-task learning is that the task distribution is not stochastically i.i.d., let's say. With data points, you can always count on the expectation averaging out the noise: over mini-batches and the whole data set, one gradient might be larger and one smaller, but there is no systematic error, no systematic bias, coming from the different data points. Here, as we said, one task might be much harder than the other, or you might have much more data for it, or its loss function might just be larger, magnitude-wise. So you can have any number of systematic biases between the tasks, and therefore the conflicting gradients really are a problem.

So this paper does a good job of analyzing the situation of conflicting gradients, and what I find particularly interesting is that they propose an algorithm to deal with them. They say: whenever two gradients are conflicting, we project them onto the normal plane of each other. (A minimal version of that projection in code follows below.)
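Here is that minimal version of the projection step, in the spirit of the paper's PCGrad rule: a NumPy sketch for two tasks with made-up example gradients, not the authors' implementation (which, as far as I recall, also randomizes the task order per step).

```python
import numpy as np

def pcgrad_two_tasks(g1, g2):
    """Project each task gradient onto the normal plane of the other if they conflict.

    If g_i . g_j < 0, replace g_i by g_i - (g_i . g_j / ||g_j||^2) * g_j, which
    removes the component of g_i that points against g_j.
    """
    if g1 @ g2 < 0:
        g1_proj = g1 - (g1 @ g2) / (g2 @ g2) * g2
        g2_proj = g2 - (g2 @ g1) / (g1 @ g1) * g1
    else:
        g1_proj, g2_proj = g1, g2
    return g1_proj + g2_proj      # the modified multi-task update direction

g_task1 = np.array([3.0, -4.0])   # big, steep gradient (made-up numbers)
g_task2 = np.array([-0.5, 0.1])   # small, flat-ish gradient
print(pcgrad_two_tasks(g_task1, g_task2))
```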
So for example, here in step (b), we take the gradient of task i and project it onto the normal plane of the gradient of task j. And they have a whole algorithm where this is done in general, for multiple tasks: you basically get a mini-batch of tasks (they generalize this to a bunch of tasks), you compute the different gradients, which can be stochastic because we work with stochastic data sets, you go through the batch, and if two gradients are conflicting, you simply project them onto each other. That results in a set of non-conflicting gradients. You might be a bit appalled by this; I was at first when I saw it. But they actually do, as I said, a good job of analyzing it, and they have two theorems here which I find interesting.

Theorem one assumes the losses are convex and differentiable, somewhat standard assumptions in optimization. It says that the PCGrad update rule with a step size smaller than 1/L, where L is the Lipschitz constant, will converge either to a location where the cosine between the two gradients is exactly negative one, which basically never happens except if you construct it, or to the optimal value. So this is basically a consistency theorem saying that the algorithm will still converge to the optimum, where the loss is the sum of loss one and loss two of the two tasks. For two tasks, they prove that the algorithm still goes to the correct point if you run it long enough. It doesn't say anything about the speed, though.
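To have it in one place, here is a compact paraphrase of that first theorem, in my own condensed notation; check the paper for the precise statement and constants.

```latex
% Theorem 1, paraphrased. Assume L_1 and L_2 are convex and differentiable,
% L(\theta) = L_1(\theta) + L_2(\theta), and \nabla L is Lipschitz with constant
% L > 0. Then the PCGrad update rule with step size t \le 1/L converges either
% to a point where the task gradients fully oppose each other,
\[
\cos \varphi_{12} = -1 ,
\]
% (a degenerate case that essentially never occurs unless constructed), or to
% the optimal value
\[
L(\theta^{\ast}) .
\]
```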
It depends on the curvature fulfilling some condition, which they state down here: the curvature of the multitask gradient should be large. And the first condition, which we've already seen, is that the cosine of the angle needs to be smaller than negative something that depends on the gradients, and this turns out to be the magnitudes of the gradients. So the step size condition we can neglect; the first condition means the gradients should be conflicting; and the second one means that there should be sufficient curvature in the loss function. This is exactly what we saw at the beginning in that example. There was sufficient curvature, because in one direction the gradient was very steep and in the other direction it wasn't, which basically means there is a change of steepness in one direction versus the other. And also the two gradients were conflicting, which we saw right here. If this is the case, then this algorithm will bring you to the optimum faster than the normal algorithm, but only if this is given. And notably, this can change from step to step. They actually have a name for this; I think they call it the tragic triad. But I'm going to read out the conditions as they describe them. The conditions are: first, the angle between the task gradients is not too small, i.e. the two tasks need to conflict sufficiently. Second, the difference in magnitude needs to be sufficiently large. Third, the curvature of the multitask gradient should be large. And fourth, the learning rate should be big enough such that large curvature would lead to overestimation of performance improvement on the dominating task and underestimation of performance degradation on the dominated task. So here you see a little subtlety. I said before that the step size condition was negligible because you can set the step size yourself. In actuality, and I'm not meaning to rag on this, look at what it means that the learning rate should be big enough such that, and what comes after seems to be negative, large curvature would lead to overestimation. That basically means this method only counts if the step size is large. So if I were to play devil's advocate: if I have a problem like this, I could either use their method, PC-grad, or I could just decrease my learning rate and use the classic algorithm. Because if I decrease my learning rate relative to the curvature, then this theorem no longer holds, and it is no longer the case that their algorithm gives me faster convergence. So there are two ways of looking at these things. Yes, under these conditions this algorithm is better, but it is better because someone has set the learning rate too high, and this algorithm kind of fixes that. Now the upside is, of course, that usually you don't want to set your learning rate in accordance with the curvature of the problem; you don't know the curvature most of the time. So you just set some learning rate, and their algorithm appears to work. When the learning rate is smaller, their method is simply not guaranteed to outperform the classic algorithm. But I just find this interesting in terms of how you read a paper, right?
If you read a paper and come across something like this, these conditions, you can always see them in two ways: here is what needs to happen for us to succeed, or here is what needs to happen for the others to fail, and therefore we're the only ones that succeed in this regime. As I said, it's a cool algorithm, but I found that to be funny. All right, so they test this on multitask benchmarks; these MT10 and MT50 benchmarks are robotic manipulation benchmarks. So multitask doesn't only mean supervised learning; in this case it's actually multitask reinforcement learning. So here you have everything together: you have mini batches, you have episodes, and you have multiple tasks. Very cool. In their actual implementation, they say what they do is they have these multiple tasks. So they have the agent, and they first select a task, for example this pull task here. Then they generate an episode by interacting with the environment, forth and back, and they put that episode into a replay buffer. Then they maybe select another task, and so on, until they have a bunch of data in the replay buffer from different tasks. Then they sample episodes from different tasks, from task one, task two, and so on, and that becomes a mini batch in the learning procedure. So it's a pretty intricate thing, but of course the hope is that you can learn a shared representation with which you can perform all of these tasks faster than if you were to learn each of them independently. So the MT10 and MT50 come from this. And I think they also have goal-conditioned pushing, where the task is simply to push something to a given location, which is what they call goal-conditioned. And the cool thing about this is that it's not only 50 tasks: you can produce an infinity of tasks, because you can always specify a new location that something should be pushed to. So that's fairly cool. And, oh yeah, the curves. You see that something like soft actor critic, or multi-head soft actor critic, where the multi-head variant is probably the closest to what I described at the beginning, with a shared representation and then individual heads, severely underperforms compared to SAC plus PC-grad, that is, SAC plus their method, which seems to outperform fairly consistently, even against learning the tasks independently. So it learns much faster than if you were to learn these tasks independently from each other, which is pretty cool, right? So I think that's pretty cool. All right, they also do some interesting investigations. First of all, they look at what the curvature is during these learning runs. And they measure the curvature of the loss function like this. Basically, all this is is a consequence of a Taylor approximation: if you have f(x), you can write it as f(x₀) plus the gradient of f at x₀ times (x − x₀), plus higher-order terms, so the first part is a first-order approximation to the function on the right. Then if you subtract the two sides from each other, you see that the difference between the actual function and its first-order approximation must be, or at least is most likely dominated by, the curvature.
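In symbols, and this is my own transcription of what is described here, with Δ_t as an ad-hoc name for the tracked quantity, the measure is the first-order Taylor remainder between consecutive iterates:

```latex
% First-order Taylor approximation of the loss around the current iterate:
%   L(\theta_{t+1}) \approx L(\theta_t) + \nabla L(\theta_t)^\top (\theta_{t+1} - \theta_t)
% The remainder of that approximation, dominated by the second-order
% (curvature) term, is the quantity tracked over training:
\Delta_t \;=\; L(\theta_{t+1}) \;-\; L(\theta_t) \;-\; \nabla L(\theta_t)^\top \,(\theta_{t+1} - \theta_t)
```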
Now, strictly, this remainder is not only the curvature; it contains every higher-order term, but the assumption is that the dominant higher-order term is the curvature. And they don't do it at x and x₀; they do it at θ_t and θ_{t+1}. So the first part is the first-order approximation, L(θ_{t+1}) is the actual function value after they take a step, and the resulting difference will be the curvature, or dominated by the curvature. So they analyze this over the course of learning, and they see that it actually increases as you go on. Now, I'm not a big fan of just pointing at large numbers, but the numbers do seem large compared to what you can handle with a computer, and they seem to grow in order-of-magnitude steps across training iterations. So I'm going to believe them that this curvature is present. I would have liked to see it compared to a single task instead of only across multitask runs; comparing the curvatures of different multitask setups across iterations is useless, because they reach different losses. What I would have liked to see is a comparison of multitask versus single task, showing me that in single-task learning this curvature blow-up doesn't happen. Here you have the percentage of update steps where conditions A and B hold. Remember, condition A was the condition on the conflicting angle, and condition B was the condition that the curvature is large enough. You can see from these dotted and dashed lines that the conditions hold almost entirely at the beginning of learning, and still hold in a large fraction of the steps later on; at the end of training it's about half the steps. So that is fairly good evidence that the problems they describe are often really there, and that therefore their algorithm helps. Then here's the average per-task return. Interestingly, they say in the text: look, task one here seems to be easier, and task two, which is the dotted line, seems to be harder. So SAC, the baseline algorithm, never really manages to learn task two, whereas PC-grad manages to learn it after a while. And at that point, something supposedly happens over here, which I'm not super sure about; that's what they say in the text, but I have to squint a lot to see that exactly at that position something happens. Suffice it to say that PC-grad is able to learn the task that SAC isn't able to learn, probably because task one completely dominates the gradient at that point. All right, so this was the paper. I invite you to read it, and thanks for listening. Bye bye.
_8KNb5iqblE
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Longformer: The Long-Document Transformer
[ "Science & Technology" ]
[ "deep learning", "machine learning", "nlp", "natural language processing", "machine translation", "arxiv", "attention mechanism", "attention", "transformer", "bert", "roberta", "mlm", "convolution", "memory", "linear", "sliding", "dilated", "sparse" ]
The Longformer extends the Transformer by introducing sliding window attention and sparse global attention. This allows for the processing of much longer documents than classic models like BERT. Paper: https://arxiv.org/abs/2004.05150 Code: https://github.com/allenai/longformer Abstract: Transformer-based models are unable to process long sequences due to their self-attention operation, which scales quadratically with the sequence length. To address this limitation, we introduce the Longformer with an attention mechanism that scales linearly with sequence length, making it easy to process documents of thousands of tokens or longer. Longformer's attention mechanism is a drop-in replacement for the standard self-attention and combines a local windowed attention with a task motivated global attention. Following prior work on long-sequence transformers, we evaluate Longformer on character-level language modeling and achieve state-of-the-art results on text8 and enwik8. In contrast to most prior work, we also pretrain Longformer and finetune it on a variety of downstream tasks. Our pretrained Longformer consistently outperforms RoBERTa on long document tasks and sets new state-of-the-art results on WikiHop and TriviaQA. Authors: Iz Beltagy, Matthew E. Peters, Arman Cohan Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Hi there, today we're looking at Longformer, the long-document transformer, by Iz Beltagy, Matthew Peters and Arman Cohan of Allen AI. So the longformer is a variant of the transformer, as you might have guessed. The longformer is a transformer that can deal with long documents, so it's aptly named. I am going to discuss what differentiates the longformer from the transformer. If you don't know what a transformer is, watch the video on Attention Is All You Need; I have a video on that. And I would also suggest you watch the video on BERT, because a lot of the architecture and training here is based on BERT or variants of BERT. So I'll basically explain what makes the longformer different, such that it can handle long documents and be applied to them. So what is the problem with the original transformer? Say you have a transformer model and you're doing an NLP task, which is usually where transformers are used, and you want to process a paragraph like this one right here, the abstract of the paper, and maybe predict whether the paper gets accepted at a conference or not. Now classic transformers have a very harsh limit on the number of tokens that they can look at at the same time. So in a classic transformer you couldn't process this entire thing; you would divide it into chunks: okay, here's my first chunk from here to here, my second chunk from here to here, and so on. So you go through the document, split it up into chunks, process each of the chunks individually, and then maybe aggregate the predictions. But of course the drawback is that the model cannot make specific connections between, let's say, some word here, like operation, and somewhere down here, like language. It cannot connect the two on a neural level, at least not in the classic transformer architectures. Now there are ways to try to alleviate this, but classically, if you split up your documents into individual samples, they become independent, and the attention mechanism cannot operate across the boundaries of these chunks. So the goal of the longformer is to actually be able to put this entire document into the model at the same time. So let's look a bit closer at this. In a classic transformer model, you have layers of what is called an attention mechanism. I'm going to draw six units here, and the units are actually the input sequence. In a transformer, unlike a classic neural network, you don't have a fixed number of units per layer; you can input sequences as long as you want, until your memory limit is reached, basically. So these units expose something called keys on the lower layer, and these are vectors that point somewhere, and the upper layer will produce what are called queries. And again, I invite you to look at the Attention Is All You Need video if you want more explanation. Basically, the keys and queries decide where information gets routed to, and the routing of information is what makes the transformer the transformer. So for example, this here is probably going to be routed to this here, and then this here is going to be routed like this. You see, the routing is according to the dot product of the keys and queries.
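As a rough sketch of that routing, this is generic scaled dot-product attention, not Longformer-specific code; the n-by-n score matrix it materializes is exactly where the quadratic cost comes from:

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention. Q, K, V have shape (n, d):
    one query, key and value vector per token."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)   # (n, n): every token scores every other token
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)  # softmax over keys
    return weights @ V              # route values according to the weights

n, d = 1024, 64
Q, K, V = (np.random.randn(n, d) for _ in range(3))
out = attention(Q, K, V)  # the (n, n) scores matrix is the O(n^2) memory cost
```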
So in essence, if you have an input sequence of tokens, in a transformer you usually transform things into same-length sequences. That also has a lot to do with how you want to pre-train things and so on. So we're not really going to change that part. If you have an input sequence of n tokens and n tokens on the next layer, and everything can attend to everything, so all the inner products are computed, right? Everything is connected to everything. That means that you're going to end up with an O of n squared memory requirement, because you have n squared connections. The way to alleviate this is much like you would alleviate this in a classic neural network. So in a classic neural network, imagine you have an MLP, a multi-layer perceptron, or what is usually known as a fully connected layer, right? So here I have the same thing, but it's not a transformer, it's a classic neural network, fully connected. So I have D units right here, and D units in this first hidden layer. And I'll have a weight matrix in here, right? And the weight matrix means everything is connected to everything, right? Everything connects to everything else. Again, my memory requirement here is D squared. Now how do we deal with this in a classic neural network? We go to what is called a convolutional neural network. At least that's one of the methods. So let's make this again, but let's now make this a convolutional neural network. What we'll have is a convolutional kernel. In this case, it's just of length 3, right? So we just have 3 units here, and they will do the same fully connected pattern, but only over these 3 units right here. And then we slide the kernel over, right? Now it's in this position. It's still the same 3 units, but now these 3 things are connected to the 3 things that they're now over, right? And so you keep sliding this over across the lower layer until you're finally at the end here. And now you've reduced the memory consumption from D squared to just D times K, where K is the kernel size. And K you can keep pretty much constant, so that's O of D, right? The same goes for the Longformer. So in the Longformer, the idea is that you have a so-called sliding window attention. It's exactly the same as it is in the convolution, except that you don't have these hidden units here, but these are actually parts of the input sequence, and instead of the weight matrix here, you have the attention mechanism over the keys, queries, and values. But the idea is similar. So you can basically say this is a sort of convolution, and we've already had this in the video about axial attention a bit.
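As a rough illustration of that sliding window attention, here is a naive per-token loop (my own sketch; the actual implementation relies on custom CUDA kernels to make this fast):

import numpy as np

def sliding_window_attention(queries, keys, values, w):
    # each token i attends only to tokens in [i - w, i + w]
    n, d = queries.shape
    out = np.zeros_like(values)
    for i in range(n):
        lo, hi = max(0, i - w), min(n, i + w + 1)
        scores = queries[i] @ keys[lo:hi].T / np.sqrt(d)
        weights = np.exp(scores - scores.max())
        weights /= weights.sum()
        out[i] = weights @ values[lo:hi]
    return out

Per token this touches O(w) keys instead of O(n), which is where the linear scaling comes from.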
Now of course this is your trade-off, memory for performance, because before, I'm gonna draw, let's draw it on top of this fully connected layer, before all the units could attend to all the units, right? And now a unit can only attend to its immediate neighborhood, right? This green unit here can only attend to itself in the lower layer and its immediate neighbors if the kernel size is 3. But consider what happens in the next layer. So in the next layer I have, for example, this unit right here. This is the same unit, right, on the next layer. It can attend to these two and itself in the lower layer, but these two themselves can attend to all of these, right, so the one on the right can attend to one more. So in the first layer this particular unit had information from these three units, but in the second layer the same unit now has information across these five, right, and this is kind of this cone of attention. It gets bigger and bigger as you go through the layers. So you lose the ability to incorporate wide ranges of information in a single layer, but you regain it through depth, right? The deeper you go, the more information a single unit gets; this unit gets information from this unit over here through the layers. It can't attend to the unit right here in this layer, that's not possible, but it gets the information through the layers. Of course there's still a trade-off: a fully connected layer could just do this in one step, and then in the next layer it could do it again, right, so it can do much more complex computation. But maybe you believe that the most important information is actually in the neighborhoods of the individual tokens, which is conceivable in something like a convolutional neural network: in an image you usually have localized information, right, if there's a cat here, then the nose and the eyes of the cat are pretty close together. So in order to recognize that it's a cat you mostly want local information, more and more local information. So in an image that makes sense, and in a text it also makes sense to a degree, in that usually words close together in a sentence are important for each other, right. But the power of the transformer was initially that it could attend to everything in a sentence, right. So for example, if you have again the paragraph here, the power of the transformer, at least that was said, is the fact that this piece of text here could make a connection to this piece of text here, and therefore the understanding of the entire paragraph could be reliant on this connection being made, which a local model can't do. But if you go through depth, you might be able to recover that. So the Longformer basically does for transformers what the convolutional neural network does for MLPs. So instead of n by n giving you n squared, you now go to O of n times, let's call it w, with w being your window size in this case. They have an illustration of this right here. So in an original transformer, this is an attention matrix. So here you have your n units in a sequence, and drawn in is which unit can attend to which other unit in a given layer. So you'll see this particular unit i here can attend of course to itself, right, can attend to unit i. But it can also attend to this unit, or to this unit, or to this unit, to any unit, right. And that's what gives you this n squared attention, because any unit can attend to any unit. Now in this sliding window attention pattern, and this is one of the core components of the Longformer, you see that the i-th unit right here can attend to itself, right, but also to this and to this, but no more. It can only attend to units from i minus w to i plus w, right. And this here is a window of size w. This is this sliding window. So a given unit can only attend to itself or its neighbors in one layer, right. And this is exactly what a convolution is. If you see this pattern, this is a convolutional pattern.
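The cone-of-attention argument above can be made concrete with a tiny helper; this back-of-the-envelope calculation is my own, not from the paper:

def receptive_field(num_layers, w):
    # with a window of w tokens per side, each layer widens
    # a unit's view by w tokens in each direction
    return 1 + 2 * w * num_layers

print(receptive_field(1, 1))  # 3: itself plus one neighbor per side
print(receptive_field(3, 1))  # 7: depth slowly recovers wider context

That slow, linear growth per layer is exactly what motivates the next component.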
Now the second core component expands on this idea: they create these dilated sliding windows. You already know what a sliding window is. Now they're saying, well, if you have this sliding window, it might take quite a number of layers to get your attention over the entire sequence incorporated. We saw before it took like three layers to get halfway through this sequence of, what was it, like six tokens. So basically, if you go one layer up, right, you gain one more context window in each direction. So you'd have to go very, very deep in order to incorporate the information from these very long sequences, and the dilated sliding window helps with this. So again, if we have this sequence and this is the next layer, let's just draw: this unit right here will be able to attend to this and this, but not this and not this; but it will also be able to attend to this and this, but not this and not this. So it'll skip one, right; these attention patterns will always kind of skip one. And the idea is that now you have a vastly greater window of attention, right; your window size is now way bigger, which means you can incorporate information way faster across the layers, like global information. But of course now they're kind of arguing against each other: when they do this sliding window, they say, well, we posit that mostly local information is important for NLP, right, the words right around a word are important. And now they basically say, oh well, it's not so important that we miss this word right here, which is right next to the word that they are attending from, which is counter to what they just said, that probably the most important information is around the word. They do get around this by saying, well, if we have different layers in a transformer, then in the lower layers we'll use this sliding window, fully local, and in the higher layers we'll use this dilated window. And therefore in the lower layers we postulate that local information is actually what's needed to understand local features, and then in the higher layers we want more global information, because it will incorporate features from the local information of the lower layers. All right, I can get the argumentation, but I feel that's just something they've thrown in there to make it work better after they tried it out. And the last idea here in the Longformer is what they call global attention, and this global attention is sparse. What it means is that there are some special units here, so this, this, this and this unit, and these special units, as you can see from the attention pattern, can actually attend to everything. So this unit can attend, for example, to this one, or to this one, or to anything; these can attend to anything, and any unit can attend to those, right. Any unit can attend to the first unit right here, right. So these are your special tokens, your special units, and they have global attention. And the reason for this particularly is that sometimes this is needed, and this is an engineering choice, right.
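As a toy illustration of these patterns (my own sketch; the paper's code implements them with custom CUDA kernels), here is how dilation and global tokens change the attention mask:

import numpy as np

def longformer_mask(n, w, dilation=1, global_tokens=()):
    # mask[i, j] is True where query i may attend to key j
    mask = np.zeros((n, n), dtype=bool)
    for i in range(n):
        for k in range(-w, w + 1):
            j = i + k * dilation        # dilation 1 is the plain sliding window
            if 0 <= j < n:
                mask[i, j] = True
    for g in global_tokens:             # special units attend everywhere,
        mask[g, :] = True               # and everything attends to them
        mask[:, g] = True
    return mask

With dilation 2 the same w covers twice the span per layer, and the rows and columns of the global tokens are fully dense.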
The example I can give is: let's say you have a question answering task. In a question answering task, what you usually have is a question and a paragraph, and let's say the task here is to answer yes or no. So the question might be a statement, right, I don't know: King James was King of England from 1120 to 1140. And then the paragraph will be the Wikipedia entry for King James, and the question is, yes or no, is the statement made true or not? How you would feed this to a BERT model, to a transformer, is you concatenate these two things, statement and paragraph, right, these are the tokens right here, and then you would separate them using a special token called the separator token. This is just to inform the model that here is where the first thing stops and the next thing starts. And then at the beginning you would put a special token called the CLS token. Now usually what you do is you send these things through your transformer, and in the last layer, right, you end up, as we've seen before, because you always transform a sequence into a sequence, you end up with a sequence again. But you just want a single thing, you just want yes or no. So you designate, you say: this particular unit here that corresponds to the CLS token, that's what I'm going to throw into a logistic regression, and that's what will give me my yes or no answer. And that's how you train it, right. So you don't want to single out any of these as special, so you simply already include a special token at the beginning that you then take the classification from, right. It's pretty smart. But you also say: ah, this is such a special token, I want it to be able to attend to anything, right. Even though, for example, this unit right here can only attend to its neighbors, right, it has this cone thing, and this unit right here has this cone thing, this unit right here can always attend to anything at each of the layers, right. It can attend to anything, and anything can attend to it. So it can get information from anywhere routed to it in each of the layers, and it can send information to any of the other units. This is an engineering choice. So at the beginning, you as an engineer have to say which of these tokens are special tokens. For these tokens, you'll actually then do full attention: they can attend to and from anything. What are our new memory requirements? What this will give us is, first of all, we have N tokens, and here W is our window size. So we have N times W memory. But then we also add the global attention: plus the number of special tokens times N times 2, because each special token can attend from and to everything in each layer. And this entire thing, sorry, with the plus, this entire thing times the number of layers. So these are your new attention memory requirements. And as you can see here, it's N plus N, so this is going to be order of N, which is much smaller than order of N squared, as we had for the original transformer. Right. So this is what the Longformer basically does. Now they have written custom CUDA kernels for doing this dilated attention and so on, which is pretty cool. And they have code available for the model. They test this on a number of language tasks. And what I find interesting is that they actually start from the RoBERTa checkpoint. RoBERTa, where is it said? Somewhere, oh yeah, this RoBERTa model right here is a variant of BERT. Right, you can see the name in here. It's a variant of BERT. And that's their baseline. And they start from these checkpoints, as far as I understand, and they kind of copy over the position embeddings and so on. And therefore, they only need to train not very much past RoBERTa. Now the reason why they can copy it over, actually, and this I find very interesting, is that they use a window size of 512.
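To make that accounting concrete, here is a rough helper (my own back-of-the-envelope sketch; it counts attention scores only and ignores constants):

def attention_score_cells(n, w, num_global, num_layers):
    # full self-attention stores n * n scores per layer; the Longformer
    # pattern stores about n * w for the windows plus 2 * n per global token
    full = n * n * num_layers
    longformer = (n * w + num_global * 2 * n) * num_layers
    return full, longformer

print(attention_score_cells(n=4096, w=512, num_global=1, num_layers=12))
# the full count grows quadratically in n, the Longformer count linearly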
So until I read this, I came away from reading the paper thinking that this window size here might be fairly small, right? So this window size, it might be, you know, maybe 10, 20, 30 tokens or something, right? But actually, this window size is 512 in their formulation, which basically means that this is as much as one of the classic models could take as an entire document, right. So, sorry, let's go over this. So this here is 512. So this is what a classic model could take as an entire document. And in the classic model, you simply split up the document, feed chunks, right? And then aggregate over them. Now the Longformer basically has this. So before, I said it has lower memory requirements. Actually, it has the same memory requirements as a classic model, but it is also able, because of this global attention, to kind of incorporate information from the surrounding things. So that's the new part. Because if you think about it, if this W here is 512, 512 was the original N. So 512 was the N0, whatever the old models had as an N. So right now, if I replace this, and let's not take care of this other term, if I replace this, it's actually N times N0. And that regresses to the classic model if you plug in N0 here, right? So the new part really is the fact that you have this sliding window, and the global attention is able to incorporate information from these special tokens as well. Because the sliding window, that was done before. So I just don't want to give you the wrong impression that now we can run transformers on, like, very small memory machines. We can't. But we can run them on the same memory machines, because this is the same length, right? But also feed in longer documents, and have some information of the entire document be propagated to these blocks, which before we couldn't. Before we could just simply feed these blocks one by one and not have global information. So that's the new thing. At least they haven't tested it on the smaller things, which is cool from an engineering point of view, right? Because if you want to show that you're better, you would want to basically be able to be as powerful as the old model, but then be more powerful. And that's what they do. All right. So if you want to check out the experiments and the ablations, it's very interesting, because they turn on and off a lot of things in their model and kind of check out where things come from, what helps, what doesn't. And I'll leave this to you, and I'll link it. And with that, thanks for listening, watching, and bye-bye.
[ { "end": 5.54, "start": 0, "text": " Hi there, today we're looking at Longformer, the long document transformer by" }, { "end": 13.48, "start": 5.54, "text": " Is Beltaji, Matthew Peters and Armin Cohen of Allen AI. So the longformer is a" }, { "end": 19.240000000000002, "start": 13.48, "text": " variant of the transformer as you might have guessed. The longformer is a" }, { "end": 26.72, "start": 19.240000000000002, "text": " transformer that can deal with long documents, so it's aptly named. So I am" }, { "end": 32.72, "start": 26.72, "text": " going to discuss what differentiates the longformer from the transformer. If you" }, { "end": 37.92, "start": 32.72, "text": " don't know what a transformer is, watch the video on attention is all you need. I" }, { "end": 43.96, "start": 37.92, "text": " have a video on that. And I would also suggest you watch the video on BERT," }, { "end": 50.120000000000005, "start": 43.96, "text": " because a lot of the architecture and training here is based on the BERT" }, { "end": 56.599999999999994, "start": 50.12, "text": " or variants of BERT. So I'll basically explain what makes the longformer" }, { "end": 61.12, "start": 56.599999999999994, "text": " different such that it gets long documents, right, so she can be applied" }, { "end": 66.2, "start": 61.12, "text": " to long documents. So what is the problem with the original transformer?" }, { "end": 72.44, "start": 66.2, "text": " If you have a transformer model and let's say you're doing an NLP task, which" }, { "end": 78.68, "start": 72.44, "text": " usually is where transformers are used, and you want to have a paragraph like" }, { "end": 83.76, "start": 78.68, "text": " this one right here, the abstract of the paper, and maybe you want to predict" }, { "end": 89.12, "start": 83.76, "text": " whether the paper gets accepted at a conference or not. Now the classic" }, { "end": 96.84, "start": 89.12, "text": " transformers, they have a limit, a very harsh limit, on the amount of tokens that" }, { "end": 100.80000000000001, "start": 96.84, "text": " they can look at at the same time. So what you would do in a classic" }, { "end": 107.32000000000001, "start": 100.80000000000001, "text": " transformer is you couldn't process this entire thing, let's say, you would divide" }, { "end": 112.24, "start": 107.32, "text": " it in chunks, you'd say, okay, here's my first chunk from here to here, my second" }, { "end": 118, "start": 112.24, "text": " chunk from here to here, and then here to here, and so on. So you go through the" }, { "end": 123.24, "start": 118, "text": " documents, split it up in chunks, process each of the chunks individually, and then" }, { "end": 128.92, "start": 123.24, "text": " maybe aggregate the predictions. But of course the drawback is that the model" }, { "end": 135.4, "start": 128.92, "text": " cannot make specific connections between, let's say, some word here, like operation," }, { "end": 140.92000000000002, "start": 135.4, "text": " and somewhere down here, like language. It cannot connect the two on a neural" }, { "end": 145.20000000000002, "start": 140.92000000000002, "text": " level, at least not in the classic transformer architectures. 
Now there are" }, { "end": 152.52, "start": 145.20000000000002, "text": " ways to try to alleviate this, but classically, if you split up your" }, { "end": 157.72, "start": 152.52, "text": " documents into individual samples, they become independent, you cannot do" }, { "end": 162.6, "start": 157.72, "text": " attention, this attention mechanism cannot operate over across the boundaries" }, { "end": 169.96, "start": 162.6, "text": " of these chunks. So the long former, the goal is to actually just be able" }, { "end": 177.51999999999998, "start": 169.96, "text": " to put this entire document here into the model at the same time. So let's look" }, { "end": 182.64, "start": 177.51999999999998, "text": " a bit closer into this. In a classic transformer model, what you'll have is" }, { "end": 189.2, "start": 182.64, "text": " you'll have layers of what is called attention mechanism. I'm gonna draw six" }, { "end": 195.44, "start": 189.2, "text": " units here, and the units are actually the input sequence. So in a transformer," }, { "end": 200.64, "start": 195.44, "text": " other than like a classic neural network, you don't actually have numbers of units" }, { "end": 208.72, "start": 200.64, "text": " in the layers, but you can input as many as long sequences as you" }, { "end": 216.2, "start": 208.72, "text": " want, until your memory limit is reached basically. So these units, they expose" }, { "end": 221.67999999999998, "start": 216.2, "text": " something called keys on the lower layer, and these are vectors that" }, { "end": 228.56, "start": 221.67999999999998, "text": " point somewhere, and the upper layer will produce what are called queries. And" }, { "end": 234.23999999999998, "start": 228.56, "text": " again, I invite you to look at the attention is all you need video if you" }, { "end": 238.88, "start": 234.23999999999998, "text": " want more explanation. And basically the keys and queries, they decide where" }, { "end": 245.44, "start": 238.88, "text": " information gets routed to. So the routing of information is what makes" }, { "end": 250.92, "start": 245.44, "text": " the transformer the transformer. So for example, this here is probably going to" }, { "end": 255.68, "start": 250.92, "text": " be routed to this here. So the information is routed like this, and then" }, { "end": 258.88, "start": 255.68, "text": " this here is going to be routed like this. You see the routing is according to" }, { "end": 267.15999999999997, "start": 258.88, "text": " the dot product of the keys and queries. So in essence, if you have" }, { "end": 274.64, "start": 267.15999999999997, "text": " an input sequence tokens, and you usually transform in a transformer, you transform" }, { "end": 281.71999999999997, "start": 274.64, "text": " the things into same length sequences. That has to do a lot also with how you" }, { "end": 286.96, "start": 281.71999999999997, "text": " want to pre-train things and so on. So we're not really going to change that" }, { "end": 294.08, "start": 286.96, "text": " part. If you have n input sequence and n tokens on the next layer, and everything" }, { "end": 298.4, "start": 294.08, "text": " can attend to everything, so all the inner products are computed, right?" }, { "end": 302.96, "start": 298.4, "text": " Everything is connected to everything. 
That means that you're going to end up" }, { "end": 308.23999999999995, "start": 302.96, "text": " with an O of n squared memory requirement, because you have n squared" }, { "end": 316.84, "start": 308.23999999999995, "text": " connections. The way to alleviate this is much much like you would alleviate this" }, { "end": 321.76, "start": 316.84, "text": " in a classic neural network. So in a classic neural network, imagine you have" }, { "end": 327, "start": 321.76, "text": " this MLP, a multi-layer perceptron, or what usually known as a fully connected" }, { "end": 331.88, "start": 327, "text": " layer, right? So here I have the same thing, but it's not a transformer. It's a" }, { "end": 338.12, "start": 331.88, "text": " classic neural network, fully connected. So I have D units right here, and D units" }, { "end": 342.88, "start": 338.12, "text": " in this first hidden layer. And I'll have a weight matrix in here, right? And the" }, { "end": 346.92, "start": 342.88, "text": " weight matrix means everything is connected to everything, right? Everything" }, { "end": 354.28, "start": 346.92, "text": " connects to everything else. Again, my memory requirement here is D squared. Now" }, { "end": 358.6, "start": 354.28, "text": " how do we deal with this in a classic neural network? We go to what is called a" }, { "end": 363.52000000000004, "start": 358.6, "text": " convolutional neural network. At least that's one of the methods. So let's" }, { "end": 369.88, "start": 363.52000000000004, "text": " make this again, but let's now make this a convolutional neural network. What we'll" }, { "end": 375.52000000000004, "start": 369.88, "text": " have is we'll have a convolutional kernel. In this case, it's just of length 3, right?" }, { "end": 382.8, "start": 375.52000000000004, "text": " So we just have 3 units here, and they will do the same fully connected pattern," }, { "end": 389.92, "start": 382.8, "text": " but only over these 3 units right here. And then we slide the kernel over, right?" }, { "end": 395.32, "start": 389.92, "text": " Now it's in this position. It's still the same 3 units, but now these 3 things" }, { "end": 401.08000000000004, "start": 395.32, "text": " are connected to these 3 things that they're now over, right? And so you keep" }, { "end": 408.48, "start": 401.08000000000004, "text": " sliding this over across the lower layer until you're finally at the end here. And" }, { "end": 413.76, "start": 408.48, "text": " now you've reduced the memory consumption from D squared to just D" }, { "end": 420.84000000000003, "start": 413.76, "text": " times, and if this is usually the kernel size, it's called K, to D times K. And K" }, { "end": 428.28000000000003, "start": 420.84000000000003, "text": " you can keep pretty much constant, so that's O of D, right? The same goes for" }, { "end": 433.44, "start": 428.28000000000003, "text": " the long former. So in the long former, the idea is that you have a so-called" }, { "end": 438.48, "start": 433.44, "text": " sliding window attention. It's exactly the same as it is in the convolution," }, { "end": 444, "start": 438.48, "text": " except that you don't have these hidden units here, but these are actually" }, { "end": 449.28, "start": 444, "text": " parts of the input sequence, and instead of the weight matrix here, you have the" }, { "end": 454.76, "start": 449.28, "text": " attention mechanism over the keys, queries, and values. But the idea is" }, { "end": 460.24, "start": 454.76, "text": " similar. 
So you can basically say this is a sort of a convolution, and we've" }, { "end": 465.64, "start": 460.24, "text": " already had this in the video about axial attention a bit. Now of course this" }, { "end": 474.48, "start": 465.64, "text": " is your trade-off memory for performance, because before, right, before, I'm gonna" }, { "end": 481.52, "start": 474.48, "text": " draw, let's draw it on top of this fully connected layer, before all the units" }, { "end": 487.44, "start": 481.52, "text": " could attend to all the units, right? And now the unit can only attend to its" }, { "end": 493.92, "start": 487.44, "text": " immediate neighborhood, right? This green unit here can only attend to itself in" }, { "end": 500.32, "start": 493.92, "text": " the lower layer and its immediate neighbors if the kernel size is 3. But" }, { "end": 505.24, "start": 500.32, "text": " consider what happens in the next layer. So in the next layer I have, for example," }, { "end": 511.96, "start": 505.24, "text": " this unit right here. This is same unit, right, on the next layer. It can attend to" }, { "end": 518.8, "start": 511.96, "text": " these two and itself in the lower layer, but these two themselves can attend to" }, { "end": 524.92, "start": 518.8, "text": " all of these, right, so that the one on the right can attend to one more. So in" }, { "end": 532.8, "start": 524.92, "text": " the first layer this particular unit had information from these three units, but" }, { "end": 537.56, "start": 532.8, "text": " in the second layer the same unit has now information across these five, right," }, { "end": 544.16, "start": 537.56, "text": " and this is kind of this cone of attention. It gets bigger and bigger as" }, { "end": 549.1999999999999, "start": 544.16, "text": " you go through the layers. So you lose the information to incorporate wide" }, { "end": 554.4399999999999, "start": 549.1999999999999, "text": " ranges of information in a single layer, but you regain it through depth, right?" }, { "end": 561, "start": 554.4399999999999, "text": " The deeper you go the more a single unit gets information, right, this unit gets" }, { "end": 566.0799999999999, "start": 561, "text": " information from this unit over here through the layers, through the layers." }, { "end": 571.88, "start": 566.08, "text": " It can't watch the unit right here in this layer. That's not possible, but it" }, { "end": 575.72, "start": 571.88, "text": " gets the information through the layers. Of course there's still a trade-off, like" }, { "end": 579.84, "start": 575.72, "text": " a fully connected layer could just do this in one step and then in the next" }, { "end": 584.88, "start": 579.84, "text": " layer it could do it again, right, it can do much more complex computation. But if" }, { "end": 588.72, "start": 584.88, "text": " you believe that the most important information is actually in the" }, { "end": 593.76, "start": 588.72, "text": " neighborhoods of the individual tokens, which is conceivable in something like a" }, { "end": 599.8, "start": 593.76, "text": " convolutional neural network, you know that, you know, in an image usually you" }, { "end": 606.48, "start": 599.8, "text": " have localized information, right, if there's a cat here then the nose and the" }, { "end": 610.48, "start": 606.48, "text": " eyes of the cat are pretty close together. So in order to recognize it's a" }, { "end": 617, "start": 610.48, "text": " cat you mostly want local information, more and more local information. 
So in" }, { "end": 621.68, "start": 617, "text": " an image that makes sense and in a text it also makes sense to a degree in that" }, { "end": 627.8399999999999, "start": 621.68, "text": " usually words close together in a sentence, they are important for" }, { "end": 634.12, "start": 627.8399999999999, "text": " each other, right, but the power of the transformer was initially that it could" }, { "end": 641.28, "start": 634.12, "text": " attend to everything in a sentence, right. So for example if you have again the" }, { "end": 645.0799999999999, "start": 641.28, "text": " paragraph here, the power of the transformer, at least that was said, is" }, { "end": 653, "start": 645.08, "text": " the fact that this piece of text here could make a connection to this" }, { "end": 657.36, "start": 653, "text": " piece of text here and therefore the understanding of the entire paragraph" }, { "end": 662.8000000000001, "start": 657.36, "text": " could be reliant on this connection being made, which a local model can't do." }, { "end": 668, "start": 662.8000000000001, "text": " But if you go through depth that you might be able to recover that. So the" }, { "end": 673.32, "start": 668, "text": " longformer is basically what the convolutional neural network does for" }, { "end": 682.24, "start": 673.32, "text": " MLPs, it does it for transformers, right. So instead of n by n giving you n squared" }, { "end": 689.5200000000001, "start": 682.24, "text": " now you go into this way where you have, so if you do the same for the" }, { "end": 697.1600000000001, "start": 689.5200000000001, "text": " transformer you go to o n times, let's call it w, and w being your window size" }, { "end": 706.16, "start": 697.16, "text": " in this case. They have an illustration of this right here. So in a original" }, { "end": 712.04, "start": 706.16, "text": " transformer this is an attention matrix. So here you have your n units in a" }, { "end": 718.4, "start": 712.04, "text": " sequence and drawn in is which unit can attend to which other unit in a given" }, { "end": 725.8399999999999, "start": 718.4, "text": " layer. So you'll see this particular unit i here can attend of course to itself," }, { "end": 732.5600000000001, "start": 725.84, "text": " right, can attend to unit i. But it can also attend to this unit or to this unit" }, { "end": 739.88, "start": 732.5600000000001, "text": " or to this unit to any unit, right. And that's what gives you this n squared" }, { "end": 745.88, "start": 739.88, "text": " attention because any unit can attend to any unit. Now in this sliding window" }, { "end": 751, "start": 745.88, "text": " attention pattern, and this is one of the core components of the longformer, you" }, { "end": 761.64, "start": 751, "text": " see that the i-th unit here right here can attend to itself, right, but also to" }, { "end": 774.28, "start": 761.64, "text": " this and to this, this, but no more. It can only attend to the i-th unit or to i" }, { "end": 784.64, "start": 774.28, "text": " minus w to i plus w, right. And this here is a window of size w. This is this" }, { "end": 791.4, "start": 784.64, "text": " sliding window. So a given unit can only attend to itself or its neighbors in one" }, { "end": 796.28, "start": 791.4, "text": " layer, right. And this is exactly what a convolution is. Like if you see if you" }, { "end": 804.36, "start": 796.28, "text": " see this pattern, this is a this is a convolutional pattern. 
Now the second" }, { "end": 810.28, "start": 804.36, "text": " core component is they expand on this idea in that they make they create these" }, { "end": 816.0799999999999, "start": 810.28, "text": " dilated sliding windows. Now you see you already know what a sliding window is." }, { "end": 822.0799999999999, "start": 816.0799999999999, "text": " Now they're saying well if you if you have this sliding window it might take" }, { "end": 828.2800000000001, "start": 822.08, "text": " quite a number of layers in order to you know get your attention of the entire" }, { "end": 834.2800000000001, "start": 828.2800000000001, "text": " sequence incorporated. We saw before it took like three layers to get halfway" }, { "end": 841.84, "start": 834.2800000000001, "text": " through this sequence of what was it like six tokens and it took us like" }, { "end": 849.6400000000001, "start": 841.84, "text": " three layers and with so basically if you go if you go one layer up right one" }, { "end": 857.3199999999999, "start": 849.64, "text": " layer up you gain one more context window in each direction, right. So it's" }, { "end": 862.3199999999999, "start": 857.3199999999999, "text": " not you'd have to go very very deep in order to incorporate the information" }, { "end": 870.64, "start": 862.3199999999999, "text": " from these very long sequences and the sliding the dilated sliding window helps" }, { "end": 881.28, "start": 870.64, "text": " this where they say well technically now any any any sequence here so again if we" }, { "end": 889.56, "start": 881.28, "text": " have this sequence and this is the next layer actually let's just draw so this" }, { "end": 896.12, "start": 889.56, "text": " unit right here it will be able to attend this and this but not this and" }, { "end": 900.92, "start": 896.12, "text": " not this but it will also be able to attend this and this but not this and" }, { "end": 905.88, "start": 900.92, "text": " not this sorry not this so it'll skip one so right these these attention" }, { "end": 912.62, "start": 905.88, "text": " patterns they will always kind of skip skip one and the idea is that now you" }, { "end": 918.28, "start": 912.62, "text": " have a vastly greater window of attention right your your window size is" }, { "end": 923.4, "start": 918.28, "text": " now way bigger that means you can incorporate information way faster" }, { "end": 929.4, "start": 923.4, "text": " across the layers like global information but of course now they're" }, { "end": 934, "start": 929.4, "text": " kind of arguing against each other in when they do this sliding window they" }, { "end": 941.24, "start": 934, "text": " say well we pose that mostly local information is important for NLP right" }, { "end": 945.84, "start": 941.24, "text": " the words right around the word are important and now if they say this here" }, { "end": 950.56, "start": 945.84, "text": " they basically say oh well it's not so important that we miss this word right" }, { "end": 957.04, "start": 950.56, "text": " here which is right next to the word that they are attending from which is" }, { "end": 961.16, "start": 957.04, "text": " counter counter to what they just said that probably the most important" }, { "end": 968, "start": 961.16, "text": " information is around the word they they do get around this by saying well if we" }, { "end": 973, "start": 968, "text": " have different layers in a transformer and in the lower layers will use this" }, { "end": 979.76, "start": 973, "text": " sliding window fully local and 
in the higher layers will use this dilated" }, { "end": 986.4, "start": 979.76, "text": " window and therefore in the in the lower layers we postulate that local" }, { "end": 991.88, "start": 986.4, "text": " information is actually what's needed to understand local features and then in" }, { "end": 997.92, "start": 991.88, "text": " the higher layers we want more global information because it will incorporate" }, { "end": 1003.88, "start": 997.92, "text": " features from the from the local informations of the lower layers all" }, { "end": 1008.92, "start": 1003.88, "text": " right I can I can get the argumentation but I feel that's just something they've" }, { "end": 1017.0799999999999, "start": 1008.92, "text": " thrown in there to make it work better after they tried it out and the the last" }, { "end": 1023.24, "start": 1017.0799999999999, "text": " idea here in the long former is what they call global attention and these" }, { "end": 1029, "start": 1023.24, "text": " global attention is sparse what it means is that there are some special units" }, { "end": 1037.1599999999999, "start": 1029, "text": " here so in this this this and this unit and these special units as you can see" }, { "end": 1041.92, "start": 1037.16, "text": " from the attention pattern these are these can actually attend to everything" }, { "end": 1047, "start": 1041.92, "text": " so this unit can attend for example to this one or to this one or to anything" }, { "end": 1052.3200000000002, "start": 1047, "text": " these can attend to anything and any unit can attend to those right any unit" }, { "end": 1059.3200000000002, "start": 1052.3200000000002, "text": " can attend to the the first unit right here right so these are your special" }, { "end": 1066.68, "start": 1059.3200000000002, "text": " tokens your special units and they have global attention and the reason for" }, { "end": 1072.88, "start": 1066.68, "text": " this particularly is that sometimes this is needed and this is an engineering" }, { "end": 1077.48, "start": 1072.88, "text": " choice right the example I can give is let's say you have a question answering" }, { "end": 1081.3600000000001, "start": 1077.48, "text": " task in a question answering task what you usually have is a question and a" }, { "end": 1087.5600000000002, "start": 1081.3600000000001, "text": " paragraph and let's say the task here is to answer yes or no is the is the" }, { "end": 1092.68, "start": 1087.5600000000002, "text": " question so the question might be a statement right I don't know King James" }, { "end": 1099.44, "start": 1092.68, "text": " was King of England from 1120 to 1140 and then the paragraph will be the" }, { "end": 1107.1200000000001, "start": 1099.44, "text": " Wikipedia entry for King James and the the question is yes or no is the is the" }, { "end": 1111.72, "start": 1107.1200000000001, "text": " question true or not is the statement made true or not how you would feed this" }, { "end": 1118.8400000000001, "start": 1111.72, "text": " to a birdmold to a transformer is you concatenate these two things quest" }, { "end": 1124.9199999999998, "start": 1118.84, "text": " statement in paragraph right these are the tokens right here and then you would" }, { "end": 1129.6399999999999, "start": 1124.9199999999998, "text": " separate them using a special token called the separator token this is just" }, { "end": 1133.72, "start": 1129.6399999999999, "text": " to inform the model that here is where the first thing stops and the next" }, { "end": 1139.24, "start": 1133.72, 
"text": " thing starts and then at the beginning you would put a special token called the" }, { "end": 1145.9599999999998, "start": 1139.24, "text": " CLS token now usually what you do is you send these things through your" }, { "end": 1152.68, "start": 1145.96, "text": " transformer and now in the last layer right you end up as we've seen before" }, { "end": 1156.24, "start": 1152.68, "text": " because you always transform a sequence into a sequence you end up with a" }, { "end": 1161.24, "start": 1156.24, "text": " sequence again but you just want a single thing you just want yes or no so" }, { "end": 1168.8400000000001, "start": 1161.24, "text": " you designate you say this particular unit here that corresponds to the CLS" }, { "end": 1174.48, "start": 1168.8400000000001, "text": " token that's what I'm going to throw into a logistic regression and that's" }, { "end": 1178.96, "start": 1174.48, "text": " what will give me my yes or no answer and that's how you train it right so you" }, { "end": 1185.68, "start": 1178.96, "text": " you don't want to single out any of these any of these as like special so" }, { "end": 1191.08, "start": 1185.68, "text": " you simply already include a special token at the beginning that then you" }, { "end": 1197.56, "start": 1191.08, "text": " take the classification from right it's pretty smart but also you say ah this is" }, { "end": 1204.24, "start": 1197.56, "text": " such a special token I want that to be able to attend to anything right even" }, { "end": 1209.4, "start": 1204.24, "text": " though for example this unit right here it can only attend to its neighbors" }, { "end": 1214.06, "start": 1209.4, "text": " right it has this cone thing and this unit right here has this cone thing this" }, { "end": 1220.4, "start": 1214.06, "text": " unit right here can always attend to anything at each of the layers right it" }, { "end": 1225.72, "start": 1220.4, "text": " can attend to anything and anything can attend to it so it can get information" }, { "end": 1231.78, "start": 1225.72, "text": " from anywhere routed to it in each of the layers and it can send information" }, { "end": 1233.78, "start": 1231.78, "text": " to any of the other units." }, { "end": 1236.66, "start": 1234.58, "text": " This is an engineering choice." }, { "end": 1243.18, "start": 1236.74, "text": " So at the beginning, you as an engineer have to say which one of these tokens are special tokens." }, { "end": 1247.62, "start": 1243.18, "text": " For these tokens, you'll actually then do full attention." }, { "end": 1250.42, "start": 1247.62, "text": " It can attend to and from anything." }, { "end": 1253.7, "start": 1251.18, "text": " What are our new memory requirements?" }, { "end": 1255.98, "start": 1253.98, "text": " What this will give us is," }, { "end": 1259.46, "start": 1256.1399999999999, "text": " first of all, we have N tokens." }, { "end": 1263.14, "start": 1259.46, "text": " And here W is our window size." }, { "end": 1266.94, "start": 1263.14, "text": " So we have N times W memory." }, { "end": 1269.98, "start": 1266.94, "text": " But then we also add the global attention." }, { "end": 1275.74, "start": 1269.98, "text": " So plus the number of special tokens times," }, { "end": 1278.46, "start": 1275.74, "text": " if there's a special token," }, { "end": 1284.66, "start": 1278.46, "text": " it will have N times 2 memory requirement," }, { "end": 1288.06, "start": 1284.66, "text": " because it can attend from and to in each layer." 
}, { "end": 1293.1399999999999, "start": 1288.06, "text": " And this entire thing, sorry, with the plus," }, { "end": 1296.82, "start": 1293.1399999999999, "text": " this entire thing times the number of layers." }, { "end": 1300.86, "start": 1296.82, "text": " So this is your new attention memory requirements." }, { "end": 1303.98, "start": 1300.86, "text": " And as you can see here, N plus N." }, { "end": 1306.98, "start": 1303.98, "text": " So this is going to be order of N," }, { "end": 1311.1, "start": 1306.98, "text": " instead of much smaller than order of N squared," }, { "end": 1313.8999999999999, "start": 1311.1, "text": " as we had for the original transformer." }, { "end": 1315.9, "start": 1313.9, "text": " Right." }, { "end": 1320.5, "start": 1315.9, "text": " So this is what the longformer basically does." }, { "end": 1323.5, "start": 1320.5, "text": " Now they have written custom CUDA kernels" }, { "end": 1329.5, "start": 1323.5, "text": " for doing this dilated attention and so on," }, { "end": 1330.5, "start": 1329.5, "text": " which is pretty cool." }, { "end": 1333.5, "start": 1330.5, "text": " And they have code available for the model." }, { "end": 1338.5, "start": 1333.5, "text": " They test this on a number of language tasks." }, { "end": 1342.5, "start": 1338.5, "text": " And what I find interesting is," }, { "end": 1346.5, "start": 1342.5, "text": " actually, they start from the Roberta checkpoint," }, { "end": 1350.5, "start": 1346.5, "text": " which Roberta, where is it said?" }, { "end": 1354.3, "start": 1350.5, "text": " Somewhere, oh yeah, this Roberta model right here" }, { "end": 1356.1, "start": 1354.3, "text": " is a variant of BERT." }, { "end": 1358.5, "start": 1356.1, "text": " Right, you can see the name in here." }, { "end": 1360.1, "start": 1358.5, "text": " It's a variant of BERT." }, { "end": 1361.5, "start": 1360.1, "text": " And that's their baseline." }, { "end": 1363.3, "start": 1361.5, "text": " And they start from these checkpoints," }, { "end": 1364.7, "start": 1363.3, "text": " as far as I understand," }, { "end": 1368.1, "start": 1364.7, "text": " and they kind of copy over the position embeddings and so on." }, { "end": 1372.8999999999999, "start": 1368.1, "text": " And therefore, they only need to train not very much" }, { "end": 1374.3, "start": 1372.8999999999999, "text": " past the Roberta." }, { "end": 1377.6999999999998, "start": 1374.3, "text": " Now the reason why they can copy it over actually is," }, { "end": 1380.1, "start": 1377.6999999999998, "text": " and this I find very interesting," }, { "end": 1383.8999999999999, "start": 1380.1, "text": " is they use a window size of 512." }, { "end": 1385.6999999999998, "start": 1383.8999999999999, "text": " So until I read this," }, { "end": 1389.6999999999998, "start": 1385.6999999999998, "text": " I got away from reading the paper" }, { "end": 1394.8999999999999, "start": 1389.6999999999998, "text": " thinking that this window size here might be fairly small." }, { "end": 1395.5, "start": 1394.8999999999999, "text": " Right." }, { "end": 1398.9, "start": 1395.5, "text": " So this window size, it might be, you know," }, { "end": 1402.9, "start": 1398.9, "text": " maybe 10, 20, 30 tokens or something, right?" 
}, { "end": 1411.5, "start": 1402.9, "text": " But actually, this window size is 512 in their formulation," }, { "end": 1416.1, "start": 1411.5, "text": " which basically means that this is as much" }, { "end": 1419.9, "start": 1416.1, "text": " as one of the classic models could take as a document." }, { "end": 1420.5, "start": 1419.9, "text": " Right." }, { "end": 1423.1, "start": 1420.5, "text": " So, sorry, let's go over." }, { "end": 1426.5, "start": 1423.1, "text": " So this here is 512." }, { "end": 1436.6999999999998, "start": 1426.5, "text": " So this is what a classic model could take as an entire document." }, { "end": 1437.6999999999998, "start": 1436.6999999999998, "text": " And in the classic model," }, { "end": 1441.1, "start": 1437.6999999999998, "text": " you simply split up the document, feed chunks, right?" }, { "end": 1442.6999999999998, "start": 1441.1, "text": " And then aggregate over them." }, { "end": 1446.6999999999998, "start": 1442.6999999999998, "text": " Now the longformer basically has this." }, { "end": 1451.3, "start": 1446.6999999999998, "text": " So right now, for now, I said it has less memory requirements." }, { "end": 1455.5, "start": 1451.3, "text": " Actually, it has the same memory requirements as a classic model," }, { "end": 1456.8999999999999, "start": 1455.5, "text": " but it is also able," }, { "end": 1458.5, "start": 1456.8999999999999, "text": " because of these global attention," }, { "end": 1462.8999999999999, "start": 1458.5, "text": " to kind of incorporate information from the surrounding things." }, { "end": 1466.1, "start": 1462.8999999999999, "text": " So that's the new part." }, { "end": 1468.3, "start": 1466.1, "text": " Because if you think about it," }, { "end": 1474.8999999999999, "start": 1468.3, "text": " if this W here is 512, 512 was the original N." }, { "end": 1480.5, "start": 1474.8999999999999, "text": " So 512 was the N0." }, { "end": 1483.7, "start": 1480.5, "text": " Whatever the old models had as an N." }, { "end": 1489.7, "start": 1483.7, "text": " So right now, if I replace this," }, { "end": 1492.5, "start": 1489.7, "text": " and let's not take care of this." }, { "end": 1496.1, "start": 1492.5, "text": " If I replace this, it's actually N times N0." }, { "end": 1501.7, "start": 1496.1, "text": " And that regresses to the classic model if you plug in N0 here, right?" }, { "end": 1508.3, "start": 1501.7, "text": " So the new part really is the fact that you have this sliding window," }, { "end": 1515.3, "start": 1508.3, "text": " and the global attention is able to incorporate information from these special tokens as well." }, { "end": 1520.7, "start": 1515.3, "text": " Because sliding window, that was done before." }, { "end": 1524.7, "start": 1520.7, "text": " So I just don't want to get to you the wrong impression" }, { "end": 1528.1, "start": 1524.7, "text": " that now we can run transformers on like very small memory machines." }, { "end": 1529.7, "start": 1528.1, "text": " We can't." }, { "end": 1533.5, "start": 1529.7, "text": " But we can run them on the same memory machines," }, { "end": 1535.8999999999999, "start": 1533.5, "text": " because this is the same length, right?" }, { "end": 1539.1000000000001, "start": 1535.9, "text": " But also feed in longer documents," }, { "end": 1547.3000000000002, "start": 1539.1000000000001, "text": " and have some information of the entire document be propagated to these blocks," }, { "end": 1548.7, "start": 1547.3000000000002, "text": " which before we couldn't." 
}, { "end": 1554.3000000000002, "start": 1548.7, "text": " Before we could just simply feed these blocks as one and not have global information." }, { "end": 1555.9, "start": 1554.3000000000002, "text": " So that's the new thing." }, { "end": 1559.3000000000002, "start": 1555.9, "text": " At least they haven't tested it on the smaller things," }, { "end": 1561.3000000000002, "start": 1559.3000000000002, "text": " which is cool from an engineering point, right?" }, { "end": 1562.9, "start": 1561.3000000000002, "text": " You would want to," }, { "end": 1564.7, "start": 1562.9, "text": " because if you want to show that you're better," }, { "end": 1571.1000000000001, "start": 1564.7, "text": " you would want to basically be able to be as powerful as the old model," }, { "end": 1573.5, "start": 1571.1000000000001, "text": " but then be more powerful." }, { "end": 1575.5, "start": 1573.5, "text": " And that's what they do." }, { "end": 1575.9, "start": 1575.5, "text": " All right." }, { "end": 1579.1000000000001, "start": 1575.9, "text": " So if you want to check out the experiments and the ablations," }, { "end": 1583.9, "start": 1579.1000000000001, "text": " it's very interesting because they turn on and off a lot of things in their model," }, { "end": 1586.1000000000001, "start": 1583.9, "text": " and kind of check out where things come from," }, { "end": 1587.9, "start": 1586.1000000000001, "text": " what helps, what doesn't." }, { "end": 1590.5, "start": 1587.9, "text": " And I'll leave this to you, and I'll link it." }, { "end": 1595.1, "start": 1590.5, "text": " And with that, thanks for listening, watching, and bye-bye." } ]
a0f07M2uj_A
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Backpropagation and the brain
[ "Science & Technology" ]
[ "deep learning", "machine learning", "biologically plausible", "neural networks", "spiking", "neurons", "neuroscience", "hinton", "google", "deepmind", "brain", "cells", "soma", "axon", "interneurons", "action potential", "backprop" ]
Geoffrey Hinton and his co-authors describe a biologically plausible variant of backpropagation and report evidence that such an algorithm might be responsible for learning in the brain. https://www.nature.com/articles/s41583-020-0277-3 Abstract: During learning, the brain modifies synapses to improve behaviour. In the cortex, synapses are embedded within multilayered networks, making it difficult to determine the effect of an individual synaptic modification on the behaviour of the system. The backpropagation algorithm solves this problem in deep artificial neural networks, but historically it has been viewed as biologically problematic. Nonetheless, recent developments in neuroscience and the successes of artificial neural networks have reinvigorated interest in whether backpropagation offers insights for understanding learning in the cortex. The backpropagation algorithm learns quickly by computing synaptic updates using feedback connections to deliver error signals. Although feedback connections are ubiquitous in the cortex, it is difficult to see how they could deliver the error signals required by strict formulations of backpropagation. Here we build on past and recent developments to argue that feedback connections may instead induce neural activities whose differences can be used to locally approximate these signals and hence drive effective learning in deep networks in the brain. Authors: Timothy P. Lillicrap, Adam Santoro, Luke Marris, Colin J. Akerman & Geoffrey Hinton Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Hi there! Today we're looking at Backpropagation and the Brain by Timothy Lillicrap, Adam Santoro, Luke Marris, Colin Akerman and Geoffrey Hinton. So this is a bit of an unusual paper for the machine learning community, but nevertheless it's interesting, and let's be honest, at least half of our interest comes from the fact that Geoffrey Hinton is one of the authors of this paper. So this is a paper that basically proposes a hypothesis on how the algorithm of backpropagation works in the brain, because previously there has been a lot of evidence against there being something like backpropagation in the brain. So the question is: how do neural networks in the brain learn? And they say there can be many different ways that neural networks learn, and they list them in this kind of diagram, where you have a network that maps from input to output by having these weighted connections between neurons. So the input is two-dimensional, and then it maps, using these weights, to a three-dimensional hidden layer. Usually there is a nonlinear function somewhere at the output here of these, so they do a weighted sum of the inputs, then they apply a nonlinear function, and then they propagate that signal to the next layer, and finally to the output. Alright, so how do these networks learn? One way of learning is called Hebbian learning. The interesting thing here is that it requires no feedback from the outside world. Basically, what you want to do in Hebbian learning is update the connections such that they kind of match their own previous outputs, or even increase their own previous outputs. So you propagate a signal, and then maybe this neuron spikes really hard and this neuron spikes really low; then, if you propagate the signal again, you want to match those activations, or if you propagate similar signals. No feedback required, so basically it's a self-amplifying or self-dampening process.
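A minimal sketch of such a Hebbian update (my own toy formulation; the paper discusses the idea only conceptually):

import numpy as np

def hebbian_update(W, pre, post, eta=0.01):
    # "fire together, wire together": each connection W[i, j] is
    # strengthened in proportion to the product of the activity of its
    # presynaptic unit pre[j] and its postsynaptic unit post[i]
    return W + eta * np.outer(post, pre)

# usage: pre is the input activation, post = W @ pre; note that no error
# or reward signal from the outside world enters the rule anywhere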
Ultimately, though, you want to learn something about the world, and that means you have to have some feedback from outside. So with feedback, what we mean is usually that the output here, let's put this away, the output here goes into the world. Let's say this is a motor neuron, right: you do something with your arm, like you hammer on a nail, and then you either hit the nail or you don't. Let's say you don't hit the nail, so it looks all crooked. There you have feedback, right. So feedback usually comes in the form of some sort of error signal. It can be like: this was good, or this was bad; or it can be: this was a bit too much to the left, and so on. The important part is that you get kind of one number of feedback on how bad you were, and now your goal is to adjust all of the individual neurons, or the weights between neurons, such that the error will be lower. So in Hebbian learning there is no feedback; it's just simply a self-reinforcing pattern activation machine. In these first instances of perturbation learning, what you'll have is one single feedback, and you can see this as a diffuse cloud here. What you're basically saying is that every single neuron is kind of punished. Let's say the feedback here was negative one; that means every single neuron is punished for that. So you can imagine it like this: you have your input X and you map it through your function F, and the function F has a weight W1 and so on. So you map X through it, right, and then you get a feedback of negative one. Then you map X again, with a little bit of noise added to the weights, and you get a feedback of negative two, right. That means that the direction of this noise was probably a bad direction, so ultimately you want to update the weights in the direction of the negative of that noise, modulated, of course, by some factor that tells you how much worse it got, something like the difference between the two feedback values. So basically, with a scalar feedback you simply tell each neuron whether the entire network did right or wrong. The entire network led to this feedback; you don't have accountability of the individual neurons. All you can say is that whatever I'm doing here is wrong, and whatever I'm doing here is right, so I'm going to do more of the right things.
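As a sketch of that scalar-feedback scheme, here is a toy weight-perturbation update (my own illustrative code; the paper gives no such implementation):

import numpy as np

def perturbation_update(W, x, reward_fn, sigma=0.01, eta=0.1):
    # perturb all weights with noise, rerun, and compare the two scalar
    # feedbacks; every weight shares this single number, so there is no
    # per-neuron accountability
    noise = sigma * np.random.randn(*W.shape)
    before = reward_fn(W, x)         # e.g. the feedback of -1 above
    after = reward_fn(W + noise, x)  # e.g. -2: the noise made things worse
    return W + eta * (after - before) / sigma**2 * noise

If the perturbed feedback is worse, the update moves against the noise; if better, along it. Estimating the gradient from one number like this is why the method is so slow compared to backprop.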
In backpropagation it is very different. You have your feedback, say negative one, and then you do a reverse computation. The forward computation was this weighted sum in each layer; now you do a layer-wise reverse computation, which works because you know how each output came to be from its inputs, so you can propagate the error signal backwards — that's of course the gradient: you derive the error with respect to the inputs of the layer. The backpropagation algorithm can tell each node exactly how it has to adjust its input weights in order to make the error go down. And because you always propagate the error this way, what you'll have in each layer is a vector of targets, not just one number. Each layer is told: these are the outputs that would be beneficial, please change your outputs in the direction of negative two, negative three, plus four — the negative two for this unit, the negative three for this unit, the plus four for this unit. Each unit is instructed individually which direction to change in to make the error lower. You can see how this is much more information than in perturbation learning, where all the units simply know "before was bad and now is better, so let's change a bit"; here you have detailed instructions for each unit, thanks to the backpropagation algorithm. Ultimately, people have thought that, since backpropagation didn't seem possible with biological neurons, the brain might be doing something like perturbation learning. But this paper argues that something like backpropagation is not only possible but likely in the brain, and it proposes this backprop-like learning with a feedback network. So they differentiate hard between two regimes: on one hand scalar feedback, where the entire network gets one number and each neuron just gets that same number, and on the other hand vector feedback, where each neuron gets an individual instruction on how to update. They achieve the latter not by backpropagation — the original formulation of backprop as we use it in neural networks is not biologically plausible — but with this backprop-like learning with a feedback network, and we'll see how that does. In essence, the feedback network is constructed such that it can give each neuron in the forward pass detailed instructions on how to update itself. They also have a little diagram of this: on an error landscape, Hebbian learning doesn't care about the error at all, you're just reinforcing yourself; perturbation learning is very slow, because you have no detailed signal and rely on this one number — it's as if you updated every single neuron of your network with reinforcement learning, treating the error as the reward, without using backprop; with backprop you get a much smoother, much faster optimization trajectory. So they look at this and come to some conclusions. First of all, here's backprop: in the forward pass, as we said, you compute these weighted sums and usually pass them through some sort of nonlinear activation. The cool thing about artificial neural networks is that once the error comes in, you can exactly reverse that: you can do a backward pass in which the errors propagate through, because the computation is, in a sense, reversible — the function itself doesn't have to be invertible, but the gradients will flow backwards as long as you know how the forward pass was computed.
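As a hedged illustration of that vector feedback (again my own minimal sketch, not the paper's code), here is one backprop step for a tiny two-layer network, where the backward pass reuses the transposed forward weights to hand every hidden unit its own signed instruction:

import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(3, 2))       # forward weights, layer 1
W2 = rng.normal(size=(1, 3))       # forward weights, layer 2
x, y = rng.normal(size=2), np.array([1.0])

# forward pass
z1 = W1 @ x
h1 = np.tanh(z1)
y_hat = W2 @ h1

# backward pass: every layer gets a *vector* of per-unit error directions
e = y_hat - y                      # output error
d_h1 = W2.T @ e                    # individual instruction for each hidden unit
d_z1 = d_h1 * (1 - h1 ** 2)        # pushed back through the tanh nonlinearity

# gradient-descent weight updates
lr = 0.1
W2 -= lr * np.outer(e, h1)
W1 -= lr * np.outer(d_z1, x)

Note the W2.T in the backward pass: the error travels back over exactly the same weights the signal traveled forward over, which is precisely the synaptic symmetry requirement discussed below.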
First, they go into a discussion of backprop in the brain: how can we even expect that? One cool piece of evidence, I find, is that they cite several examples where artificial neural networks were used to learn the same tasks as humans or animal brains — I have no clue how exactly they measure any of this — and then the hidden representations of the living neural networks and of the artificial neural networks are compared. It turns out that the networks trained with backprop can explain much more of the variance of these hidden activations than networks that were not trained with backprop. Basically, that means that if you train a network with backprop, it matches the biological networks much more closely in how they form their hidden representations, and they cite a number of experiments that show this. So this gives you good evidence that the hidden representations look as if they had been computed by backprop, and not by any of these scalar updating algorithms — it is conceivable that we'd find backprop in the brain. That's why they go on, next, to the problems with backprop: basically, why have we so far believed that backprop isn't happening in the brain? I want to highlight two factors here that I find sufficient; they list more. First of all, backprop demands synaptic symmetry in the forward and backward paths. If a neuron has an output to another neuron, you need to be able to pass information back along that same connection, so it would have to be a symmetric connection between the forward and the backward paths, and the two would need to be exact. That's just not how neurons are structured: they have input dendrites, and then the action potential travels along the axon, and a backward-traveling signal along the axon is, I think, very, very slow, if it's possible at all — the connection is generally not invertible, not capable of computing its own inverse. So this is one reason why backprop seems unlikely. The second reason is that error signals are signed and potentially extreme-valued — and, I want to add, they also mention somewhere that error signals are of a different type. First, "signed": we need to be able to adjust neurons in specific directions. Looking again at what we drew before: the first unit must decrease by two, this one must decrease by three, and this one must increase by four. In backprop we need this. But even if we assume there is some kind of reverse computation or signaling happening, we still have the problem that these output signals usually come in the form of spiking rates. A neuron with zero activation just produces no signal; a neuron with high activation spikes a lot; one with low activation spikes only sometimes. What it cannot do is spike negatively — zero is as low as it goes. So it's hard to see how signed information would travel in the backward pass, even if, because of the symmetry problem, you imagine some second neural network going in the backward direction instead of a direct backward connection: there too you could only have a positive signal or zero. The error signals might also be extreme-valued, which can't really be encoded with spiking either, since spiking rates are limited in the range they can assume. But the errors are also of a different type.
What I mean by that is: if you think of this as a programming problem, the forward passes carry activations, while the backward passes carry deltas. In the backward pass you propagate deltas, or directions: the activations are sort of impulses, whereas the backward signals say "this is how you need to change" — they are gradients, ultimately. So it's fundamentally a different type of data that would be propagated along these backward directions, and that makes it very unlikely, because we are not aware, as this paper says, that neurons can switch the data type they're transmitting. Alright, so then the paper goes into their NGRAD hypothesis. The hypothesis basically states that the brain could implement something like neural network learning by using an approximate backprop-like algorithm based on autoencoders, and I want to jump straight into the algorithm — no, actually, first they talk about autoencoders, which I find very interesting. What is an autoencoder? An autoencoder is a network that starts out with an input layer, then has a bunch of hidden layers, and at the end it tries to reconstruct its own input: you feed data in here, you get data out here, and your error signal is the difference to your original input. Usually, when we train autoencoders in deep learning, we also train them by backprop — you feed in this error and it goes back. But just think of a single-layer autoencoder with, say, the same number of units in the hidden layer: this is the input, this is the output, and this is the hidden layer. You have a weight matrix here, probably some sort of nonlinear function, and then another weight matrix here; they call them W and B. Another way to draw this is: a weight matrix W going up, a nonlinear function transforming the signal, and then B going back down — so I'm drawing the same thing in two different ways. With the second way you can see that it is kind of a forward-backward algorithm, where the error is the difference between input and reconstruction at each unit, and you can train the autoencoder simply by telling W to map the input closer to the desired output and telling B the same thing. This will become clear in a second, but basically the idea is that you can train an autoencoder using only local update rules — you don't have to do backprop — and that's what this algorithm builds on. Namely, think of a stack of autoencoders, each transforming one hidden representation into the next; that's the feedforward function f. First, assume that for each of these functions you have a perfect inverse: you can perfectly compute the inverse function, this g here. Of course it doesn't exist, but assume you have it. On the top layer you know: I got this from my forward pass, but I would like to have had this — that's the desired output, so in the output layer you can compute an error, which is what you'd do anyway. Now, in backprop we would backpropagate this error along the layers, but here we don't. Instead, we use the g function to invert the f function, and by that we ask: what hidden representation should layer 2 have had in order for us to obtain the desired output? That's what g gives us. The claim is: had layer 2 contained that inverted representation and had we applied f to it, we would have landed exactly where we want; instead we had h2, applied f, and landed where we don't want. So in layer 2 we again have a "where we would want to be" and a "where we were", and again we can compute an error — and instead of backpropagating that error, we once more use the inverse of the forward function to propagate our desired hidden representation further down. There is of course a relationship to true backprop here, but the important distinction is that we are not backpropagating the error signal; we are inverting the desired hidden states of the network, and then in each layer we can compute the difference between the forward pass and the desired hidden state, and thereby obtain an error signal. And now we have achieved what we wanted: an algorithm that doesn't do backprop and only uses local information — information within the same layer — to compute the error signal it needs. Also, the data type propagated by f is activations, hidden representations, and what g propagates is also activations, hidden representations; both are always positive, can be encoded by spiking neurons, and so on. So this algorithm achieves what we want. They go into a bit of detail about how the actual error update within a layer can be achieved — apparently neurons within the same layer can adjust themselves toward a given desired activation.
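Here is a tiny sketch of my own of that idealized scheme — target propagation with a perfect inverse — using linear invertible layers purely so that an exact g = f⁻¹ actually exists (an assumption for illustration, not part of the paper):

import numpy as np

rng = np.random.default_rng(0)
# two invertible linear layers, so a perfect inverse of f2 exists
W1, W2 = rng.normal(size=(3, 3)), rng.normal(size=(3, 3))
f1 = lambda h: W1 @ h
f2 = lambda h: W2 @ h
g2 = lambda h: np.linalg.solve(W2, h)   # exact inverse of f2

x = rng.normal(size=3)
h1 = f1(x)                 # forward pass
h2 = f2(h1)

h2_target = h2 + np.array([0.1, -0.2, 0.3])   # desired output from the environment

# instead of backpropagating an error, invert the desired state:
# which activity *should* layer 1 have produced so that f2 lands on target?
h1_target = g2(h2_target)

# now each layer has a purely local error between "where we were" and "where we want to be"
e2 = h2_target - h2
e1 = h1_target - h1

Both h1_target and h2_target are ordinary activity vectors, not gradients — which is the point: nothing here requires a neuron to transmit a different data type on the way down.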
Of course, we don't have this perfect g, and therefore things need to get a bit more complicated. What they introduce is the following algorithm: the goals are the same, but now we assume we do not have a perfect inverse, only something that is a bit like an inverse — an approximate inverse. So g is now an approximate inverse of f, and we do the following. Here is our input signal; we use f to map it forward, and so on, all the way up until we get our true error at the top — the error from the environment, the nail being crooked. Then, for each application of f, we do two applications of g. First, we apply g to what we got in the forward pass, and this gives us a measure of how bad our inverse is: we had h2 in the forward pass, went forward and then back through our inverse, and didn't land quite exactly where we started. That difference is basically the gap between our forward-then-inverted h and our true h. Second, we also back-project the desired outcome using g — we invert the desired outcome. Before, with the perfect inverse, we would have adjusted directly on the difference between what we got and what we want; now we account for the fact that g isn't a perfect inverse, and the assumption is that g probably makes about the same mistakes on the desired outcome as it does on the actual one. So we take that reconstruction-mismatch vector and apply it to the back-projected desired outcome, and this gives us the corrected desired hidden representation — corrected for the fact that we don't have a perfect inverse. Now, again, we have an error in each layer that we can locally adjust to; all the signals propagated up and down are just neural activations, and all the information required to update a layer of neurons is contained within that layer — and this goes back through the whole network. That is how they achieve it. The paper gives a close-up look with the computations. For the forward updates, you adjust W in the direction of h minus h-tilde, where h-tilde is the hidden representation you would like to have: you update your forward weights so that your forward hidden representation moves closer to your backward, target hidden representation. For the backward updates — W here are the weights of f, and B are the weights of g — the goal is to make g a better inverse. So in the backward updates you use a different error, the error of g: the mismatch between a hidden representation and its forward-then-inverted reconstruction, and when you update g you pull those two closer together so that g becomes a better inverse. Because you're dealing with an approximate inverse, you still need to learn that approximate inverse, and this is how you learn it.
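Putting those pieces together, here is my own hedged one-layer sketch of that difference-target-propagation step — the corrected target plus the two local, delta-rule-style weight updates; the tanh layers and the learning rate are illustrative choices, not the paper's:

import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 3))        # weights of the forward function f
B = rng.normal(size=(3, 3))        # weights of the learned approximate inverse g
f = lambda h: np.tanh(W @ h)
g = lambda h: np.tanh(B @ h)

h1 = rng.normal(size=3)
h2 = f(h1)                                     # forward pass
h2_target = h2 + np.array([0.1, -0.2, 0.3])    # desired activity handed down from above

# difference target propagation: invert the target, then correct for g's imperfection
recon = g(h2)                                  # forward-then-inverted reconstruction of h1
h1_target = g(h2_target) + (h1 - recon)        # corrected desired hidden representation

lr = 0.01
# forward update: nudge W so that f(h1) moves toward its target (local delta rule)
e_fwd = h2_target - h2
W += lr * np.outer(e_fwd * (1 - h2**2), h1)
# inverse update: nudge B so that g becomes a better inverse of f
e_inv = h1 - recon
B += lr * np.outer(e_inv * (1 - recon**2), h2)

Every quantity on the right-hand side of each update lives in the layer being updated, which is exactly the locality property the text is after.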
This algorithm now achieves what we wanted: local updates, check; data types, check; signed, check; and so on. I hope this was clear enough — in essence it's pretty simple, but it's pretty cool how they work around the constraints. They call this difference target propagation, and I don't think the authors of this paper invented it — maybe they did, maybe they didn't, and the paper just frames it within their hypothesis; it is unclear to me, and I'm not familiar enough with this line of papers, so sorry if I misattribute something here. Alright, then they go into how these things could be implemented biologically, and they present some evidence. They also note that we used to look at neurons in a very simplistic way, with just an input and a feedback, whereas nowadays even the computational community models neurons in a more differentiated way, where you have, for example, different regions on the soma that can be separated from each other, interneuron interference, and so on. I'm not qualified to comment too much on this part, but I invite you to read it for yourself if you want. Alright, so this was my take on this paper. I find the algorithm they propose pretty cool. I hope you liked it — check it out. Bye bye.
D-eg7k8YSfs
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Shortcut Learning in Deep Neural Networks
[ "Science & Technology" ]
[ "deep learning", "machine learning", "adversarial examples", "iid", "ood", "distribution", "bias", "discrimination", "neural networks", "bugs", "distortions", "data pipeline", "causality", "intention", "grounding" ]
This paper establishes a framework for looking at out-of-distribution generalization failures of modern deep learning as the models learning false shortcuts that are present in the training data. The paper characterizes why and when shortcut learning can happen and gives recommendations for how to counter its effect. https://arxiv.org/abs/2004.07780 Abstract: Deep learning has triggered the current rise of artificial intelligence and is the workhorse of today's machine intelligence. Numerous success stories have rapidly spread all over science, industry and society, but its limitations have only recently come into focus. In this perspective we seek to distil how many of deep learning's problems can be seen as different symptoms of the same underlying problem: shortcut learning. Shortcuts are decision rules that perform well on standard benchmarks but fail to transfer to more challenging testing conditions, such as real-world scenarios. Related issues are known in Comparative Psychology, Education and Linguistics, suggesting that shortcut learning may be a common characteristic of learning systems, biological and artificial alike. Based on these observations, we develop a set of recommendations for model interpretation and benchmarking, highlighting recent advances in machine learning to improve robustness and transferability from the lab to real-world applications. Authors: Robert Geirhos, Jörn-Henrik Jacobsen, Claudio Michaelis, Richard Zemel, Wieland Brendel, Matthias Bethge, Felix A. Wichmann Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Hi, today we're looking at Shortcut Learning in Deep Neural Networks by a number of authors from the University of Tübingen, the Max Planck Research Center and the University of Toronto. I'm not going to read all of them, but all of them are either joint first authors or joint senior authors. What is this? It's just a team of people who did this work together. This whole "I have a star, I don't have a star, I have a cross" business... okay, sorry, bit of a rant. All right. So this paper discusses what they call shortcut learning, and they don't actually propose something new here. They discuss this phenomenon, try to link several things together under the name of shortcut learning, which they claim is a problem in current deep learning, and discuss why it happens and what can be done about it. I just want to jump into this example real quick. In this case you have a training set of images: these four images here along with these labels, and also these four images along with these labels. So you can train a machine learning model, let's say a bunch of models, and then you're going to test them on the IID test set, this test set here. What you'll find is that if you let a human do this task, the human would give this an A, this an A, this a B and this a B, which is probably what a human would do: these are the stars and these are the moons, and the human sorts them by shape. If you do this with the neural network, you'd also get the labels A, A, B and B. Now you go to this out-of-distribution test set, and we'll go over why that is out of distribution in a second. Again, the human will classify these as A's, because they have the stars, and these as B's, but the neural network will classify these as B's and these as A's. I'm not saying this is what's going to happen every time, but imagine it happens; it is a conceivable situation. And you can think about what happened here: in the training set, all of the stars were either in the bottom left or in the top right of the image, whereas the moons were either in the bottom right or the top left, right? You see that. So the neural network might have learned: this position is moon, this position is moon, this position is star, and this position is star. And if it applies that rule to the new test set, you can see it'll classify these as moons and these as stars, which is incorrect. This might happen, for example, if the person who wrote the generator for the dataset for some reason only produced data with this property: bottom left or top right means star, otherwise moon. So what generally happens when we do machine learning is we collect a big dataset, but we collect it in a single pass. This is our dataset, and what we do then is split it into a fairly large train set and a somewhat smaller test set. It's important that we first collect the data and then, second, randomly split it.
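To make the star/moon failure concrete, here is a minimal, hypothetical sketch (my own toy code, nothing from the paper): the shape pattern and the quadrant are both perfectly predictive during training, and the position cue breaks once a "second generator" stops correlating position with class. All names and numbers here are made up for illustration.

```python
# Toy positional-shortcut demo (hypothetical code, not from the paper).
# In training, shape AND quadrant both predict the label; in the OOD set
# a "second generator" places shapes in random quadrants.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
STAR = np.array([[0, 1, 0], [1, 1, 1], [0, 1, 0]])   # class 0, plus-shaped
MOON = np.array([[1, 0, 1], [0, 1, 0], [1, 0, 1]])   # class 1, X-shaped

def make_image(patch, quadrant, size=16):
    # quadrant: 0=top-left, 1=top-right, 2=bottom-left, 3=bottom-right
    img = np.zeros((size, size))
    half = size // 2
    oy = (half if quadrant >= 2 else 0) + rng.integers(0, half - 3)
    ox = (half if quadrant % 2 == 1 else 0) + rng.integers(0, half - 3)
    img[oy:oy + 3, ox:ox + 3] = patch
    return img.ravel()

def make_set(n, biased):
    X, y = [], []
    for _ in range(n):
        label = int(rng.integers(0, 2))
        if biased:  # stars bottom-left/top-right, moons top-left/bottom-right
            quad = rng.choice([1, 2]) if label == 0 else rng.choice([0, 3])
        else:       # second generator: position carries no information
            quad = int(rng.integers(0, 4))
        X.append(make_image(STAR if label == 0 else MOON, quad))
        y.append(label)
    return np.array(X), np.array(y)

X_tr, y_tr = make_set(2000, biased=True)
X_iid, y_iid = make_set(500, biased=True)    # same pipeline, same bias
X_ood, y_ood = make_set(500, biased=False)   # bias gone

clf = LogisticRegression(max_iter=5000).fit(X_tr, y_tr)
print("IID test accuracy:", clf.score(X_iid, y_iid))  # essentially perfect
print("OOD test accuracy:", clf.score(X_ood, y_ood))  # typically much worse
```

Because position is the easier linear cue here, the model leans on it and the IID number stays flattering while the OOD number drops, which is exactly the phenomenon being discussed.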
Now, this out-of-distribution test set: what might that be? That might be a second person. The first dataset was collected in a first step, but then later a second person collects another bunch of data; this is dataset two, and they think it should be the same as the first data. Then you take the classifier that you trained and tested on the first dataset and apply it here. What is different in this case is the data collection process that happens beforehand. So somewhere here is the real world; I'm going to draw the real world, it's a globe. You draw datasets from the real world: you draw this dataset first and split it into train and test, and you draw this second dataset afterwards. The second dataset is fundamentally a different sample of data, whereas train and test, you can think of them as being closer together than these two datasets are. I think that's all fairly intuitive. So what we usually do is train here and then evaluate on the test set, but train and test are just randomly split versions of the same dataset, and that means that if there is some kind of bias in the training set, it's probably also in the test set, like we saw with the moons. This training set is this, this test set is this, and both have this moon/moon, star/star position property, because that pattern was introduced, by accident in this case, when the data was produced. Whereas the OOD test set is maybe a second person writing their own generator that doesn't have this property, which leads to this data. And of course, since we train on this training set and then evaluate on IID data — this is the IID assumption — we're going to do fairly well with our crooked decision rule, because the test data has the same bias. But once we evaluate on the out-of-distribution data, we will fail, because that data doesn't have this bias in it. So shortcut learning refers to the phenomenon that there might be features in the training set that the model learns, such that it learns something other than what we want it to learn. We want it to learn the shape here, but it learns something else: the position. And usually these things will not be noticed by generalizing to the IID test set, because the test set, being an IID split of the same data, has the same biases; they only become apparent once we do an out-of-distribution evaluation. So this is shortcut learning, and this paper goes into its origins and descriptions. While I think this is a good approach and a good paper, and it says many correct things, I think the framing is a bit off at times, and we'll go through it. First of all, they give some examples from biological neural networks. They have this one example with a rat: the rat learned to navigate a complex maze based on color differences of the walls, and this was surprising because it was known that rats don't have very good color vision. Then they discovered that the rats actually did not use the visual system at all; they simply discriminated the colors by the odor of the paint. If you painted a wall red or blue, the walls smelled differently.
The rats could smell it, and once the experimenters controlled for the smell, the remarkable color discrimination ability disappeared. The second example they give: Alice loves history. Alice had spent weeks immersing herself in the world of Hannibal and his exploits in the Roman Empire, and now the exam questions are just things like "how many elephants did Hannibal employ in his army?" The exam questions are multiple choice and do not focus on understanding, and Bob, who has just learned the facts by heart, now does much better than Alice, who has actually understood the topic. So they give these as examples of shortcut learning, where the model learns something that we don't intend it to do. And I think this is the crucial point: the model learns something that we don't intend it to do. This might seem pretty clear when you observe it, but look at the phrase: what do we want? We want shape, and the model learns something else. The crucial parts here, and I think this paper doesn't put as much emphasis on them as they deserve, are the two words "we" and "want". So my answer to this is, first of all, the word "want": we want shape, and my comment is — you can't formulate that. This is very crucial, and I think the paper almost ignores this point: you cannot formulate what it means to classify things by shape. This seems so obvious to us because we're so used to it as humans: just use the shape, right? This is the shape. But you cannot program a computer to do this; that's why we use deep learning in the first place, because we have no idea how to program an algorithm that extracts the shape of something. It might be possible for a star and a moon, but not for a cat or a car or anything like that. So you cannot formulate your objective; that's the problem. And it's easy to then say, oh, the model doesn't do what we wanted it to do, when you can't even formulate what you wanted in a precise way. Basically, what you're saying is: I'll train a shape classifier. Once you've gone through this process of training and evaluating, you say, ah, now I have a shape classifier. Say you hadn't done the OOD evaluation: you've gone through the process, and now you proclaim, "I have trained a shape classifier." No. You have trained something that, given the entire process of how you create your data, can classify these two kinds of images. Here is your generator, the little program you wrote to produce these images; your generator assigns at random either the star or the moon, then creates the image from it, and that gives you your dataset. What you have trained is not a shape classifier; it is a classifier that can distinguish data coming from this data generation process. The entire notion of calling it a shape classifier exists because you, as a human, thought of shape when you programmed the generator, when you collected the dataset. But that isn't what the model is; you can't call it a shape classifier just because that was your intent. You have a classifier that classifies images from this particular data generation process, and you can't actually formulate a shape classifier.
Okay. The second word is "we": we humans want a shape classifier. Now, I've said this before, and this refers back to, for example, the paper about contrast sets in NLP: humans have grounded knowledge. Humans have grounding, and this is very important here. Grounding means that humans live in a world of physics and culture, of biology and the need for food. Humans live in this world, and it generated our brains. What that means is that humans live in a world of objects, of people, of eating and being eaten. You grew up and live in this world; your brain was literally structured according to these things, and thus you understand everything with an eye to this grounded knowledge of reality, where there is such a thing as an object. Now take ImageNet, where you train a classifier for objects — this is what I find so crazy. You collect this dataset, and there's a car, and you say "that's a car." You know this is a car because there is an object, a car. But the neural network does not have a bias for objects; the neural network simply sees pixels. Same here: what you will do immediately is recognize the object of the star. You will transform this into a 3D scene, a 3D space in which you are standing and in which there is a star object somewhere, and you understand that the star could move around and still be the same star. But that's because you have the inherent bias of there being objects, and the word "shape" is nothing more than a property of an object. Neural networks simply do not have an inherent bias for objects, or people, or intent, or what it means to eat. This becomes super obvious if you ever try to solve, for example, a jigsaw puzzle upside down — I'm terrible at this. Say the puzzle has a face on it, and you try to solve it on its head. Try it. It's the same task: you simply need to match the border shapes and make sure the lines of the picture are continuous. It becomes so much harder, just because you have this brain. So that is my entire criticism, and it will pull through this whole paper; we'll go through the rest relatively quickly because we've already touched on it. Keep in mind this is my commentary; it's not superior knowledge or anything, it's just me. All right. So what they do is set up a taxonomy of decision rules. They say: there's a set of all possible decision rules, the outer set here, all possible decision rules one could think of to discriminate data — let's talk about images here. Most of them will just be crap, using what they call uninformative features. But then there are some decision rules that perform well on training data; that's this big circle here, and they call these the overfitting features, meaning the features that perform well only on the training set.
But to me it's a bit unclear: I think they only call this outer band the overfitting features, while calling the entire circle all possible training solutions. In any case, there are decision rules that perform well on the training set, but some of them are overfitting — you know that problem. The next circle inside that contains all decision rules that perform well on the training set and the IID test set, and our location classifier from before would fall into this category. These are still a much larger set than the innermost set: the intended solution, which performs well on the training set, the IID test set, and all relevant out-of-distribution test sets. They draw this as nested sets: the solutions that work on the OOD test sets are subsets of the solutions that work well on the IID test sets. I don't have a problem with this characterization of decision rules; these are, specifically, decision rules. What I have a problem with is the fact that you cannot specify what the intended solution is. You cannot, and therefore I think this diagram is misleading: ultimately you have no idea where this dot is. You can't specify it beforehand; you can't even specify the rules for how to get there. All you can do is give better data, and they kind of advocate for this with these OOD test sets. But when they say "all relevant out-of-distribution test sets," I'm a bit wary, because they suggest measuring performance on out-of-distribution test sets as one of the ways to assess whether a model has learned shortcut rules. This is very much like the contrast sets in NLP, and I actually think it's a pretty bad solution in most cases. Let me explain why. If we go back to here: what we saw is that the discrepancy comes about because we produce the data from the real world in one very specific form, and the out-of-distribution test set is produced in a slightly different form. Now think about the cost function you train. You usually write: my cost function is some loss over my data points and labels. What's often left out — I mean, you write it in your introductory classes — is that this is an expected loss that you're minimizing over a data distribution, over (x, y) sampled from a particular distribution D. When you talk about these out-of-distribution classifiers, you have a slightly different data distribution, D'. Now, if you have a single out-of-distribution set — think of the contrast sets; if you haven't seen the video about contrast sets, a contrast set is basically a handcrafted out-of-distribution test set — my problem with this is that it's just one. It's a single one, and I think even if you tried ten of those sets, you wouldn't get close to a true measure. The nice thing about an IID test set is that it is precisely the same distribution, so it gives you an unbiased number for this particular data generation pipeline. If you evaluate on an out-of-distribution test set, you conflate two effects: the generalization effect, and the effect of the data having been produced in a different fashion — and you only get one sample of that second effect. What you would actually like to assess is your loss on (x, y) in expectation, with the data distribution itself drawn from all possible data distributions of the real world. If you only have a single contrast set, it is akin to — think about it — how good a machine learning engineer would I be if my test set had only one sample? If I make a Kaggle challenge and say your performance will be evaluated on this one-sample test set? That's essentially what you're doing with a single OOD test set: I'm going to give you one out-of-distribution dataset that I have biased in one particular way, and we'll measure how well you capture our intent, our shape-classifier intent, using this single set. Approximating that outer expectation by a sum from i equals one to one just pumps the variance beyond any reasonable meaning the resulting number could carry. What you'd actually have to do is sample entire train and test sets, or at least test sets, according to this underlying distribution over data distributions — which you have no clue about, because if you could specify it directly, you would get the solution for free. If you could specify the underlying mechanism, you would already know the solution and wouldn't need machine learning.
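Roughly, in symbols — this is my notation, not the paper's — the argument looks like this:

```latex
% Empirical risk estimates the expected risk under ONE distribution D:
R_D(f) = \mathbb{E}_{(x,y)\sim D}\big[\ell(f(x), y)\big]
% An IID test set gives an unbiased estimate of R_D(f).
% A single handcrafted OOD set estimates R_{D'}(f) for one arbitrary D'.
% What we would like to control is the risk averaged over the unknown
% family \mathcal{D} of plausible data-generating processes:
\mathbb{E}_{D'\sim\mathcal{D}}\big[ R_{D'}(f) \big]
% One (or a handful of) hand-built D' is a one-sample Monte Carlo
% estimate of this outer expectation, so the variance swamps the signal.
```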
So I think the paper puts a bit too little emphasis on this. Suffice to say, with their taxonomy they can state: if you use only the overfitting features, you will do well on the training set but not on the IID and OOD test sets. If you use the intended features — again, no one knows what that is, no one can specify it — you'll do well on all the OOD test sets. If you use the shortcut features, you will do well on the training and IID test sets, but not on the OOD tests. This is valid; I'm not discrediting this paper, and they do allude to a lot of the things I'm saying, just not all of them, and I don't think they frame it correctly. Then they ask: shortcuts — where do they come from? And they say a lot of the things I've been saying here. For example, they ask "what makes a cow a cow?" and give this example where familiar background can be as important for recognition to deep neural networks: the network misclassifies this picture because it is used to seeing a cow in grass. Now consider this in our framework. Say this is an ImageNet classifier. ImageNet is not an object classifier — "object classifier" is what we say, that's our intent, but it's not what the thing is. Go through the pipeline: how do you generate the data? ImageNet is a classifier of naturally taken images, shot with certain cameras, center-cropped to a particular object, labeled by human raters, filtered in some capacity, collected from Flickr. For that particular dataset we train a classifier. It is not an object classifier; it is a classifier for that pipeline, and it has no clue about objects.
In fact, what you also have to see is that the output isn't "shape" even when the label reads shape; it is actually a probability of shape, a probability of an object, a probability of something. And it is completely conceivable that if it's not grass in the background, it's probably less likely to be a cow. Now, I see the problem here: this is clearly a cow, and it is actually a conceivable natural image. But imagine a picture of a cow — oops, what happened — on the moon. This is the moon, and here's the cow. Moo. Terrible drawing; horns, tail: a cow on the moon. Who can fault the neural network? And I would say that's not a cow either, in terms of the data generation process. If you ask me, "please classify this as a natural image that has been taken with a camera, et cetera," I'm going to say there's no way there's a cow on the moon. I don't know what this is, but it is very improbable that it's a cow, because in the training data, every cow I've seen stood on grass.
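You can make the same point with a back-of-the-envelope Bayes argument — again my notation, a loose sketch, not anything from the paper:

```latex
% A classifier trained on the pipeline distribution D at best estimates
p_D(\text{cow} \mid x) \;\propto\; p_D(x \mid \text{cow})\, p_D(\text{cow})
% If every training cow co-occurs with grass, then for a moon background b
p_D(b = \text{moon} \mid \text{cow}) \approx 0
% so a low posterior for "cow on the moon" is the correct answer under D,
% even though it looks like a mistake relative to our grounded intent.
```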
So, yeah, they do actually allude to this — they call it dataset biases and so on — but I'm pretty sure the interpretation is just a bit off, where they frame the point as "we want an object classifier." The second one I find even stranger is what they call "shortcuts from discriminative learning," where they allude to this picture and ask "what makes a cat a cat?" Their argument is basically that neural networks don't understand things, they just discriminate: there are a thousand classes at the output layer, so the network only needs to learn what differs from one class to the next, and it will often rely on features such as texture. So here it classifies this as an elephant, and they write: "what makes a cat a cat? To standard DNNs, the example image on the left clearly shows an elephant, not a cat." And again, I agree with the network. If you tell me this is data from naturally taken images with standard cameras, I have two possibilities. Is this a cat? There is no way that if you take a picture anywhere in the universe with a phone camera of a cat, it will look like this. Just not possible. However, is it possible that there is an elephant — big ears, a trunk — whose skin-fold pattern by random chance traces the shape of a cat? Yes, that's possible. So if you ask me, according to the data generation process this is far more likely to be an elephant than a cat. The paper makes it seem so obvious that this is a cat — but what do these stupid standard DNNs think? An elephant! The DNN is "just looking at texture and other local structures and not the shape," which is what we wanted. Please: stop calling things object classifiers if they're not object classifiers. They classify between images of a data generation process. If you want them to be object classifiers, make a dataset that actually has different objects — but you can't specify that. Then they go into adversarial examples, which I also find to maybe not belong here: they say, oh look, the DNN predicts "guitar" with high certainty on this pattern. Again, it's just a discriminator: if it has to pick one of the thousand classes, why couldn't this most plausibly be a guitar? But I have a further problem with this. Going with their taxonomy, there is IID data from the same generation process and there is OOD data, and I think they lump together a number of distinct effects by saying: whenever my model doesn't work on OOD data, it has learned a shortcut. That's very weird. First, I would divide OOD data into what I'd call unnatural OOD data — say our task is to build an "object detector," whatever that means, for natural images — and in that unnatural part you'll find things like adversarial examples. Adversarial examples, at least if you follow the interpretation of the Madry lab's "adversarial examples are features, not bugs," are constructed by combining features that don't naturally go together: you take the low-frequency features of a cat and add the high-frequency features of a dog, with a weighting factor lambda so high that to a DNN it looks like a dog, because it carries many dog features, while a human, who largely ignores the high-frequency content, still sees a cat. These are unnatural because in actual nature, in the real world, the features never occur in this combination.
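Here is a hypothetical sketch of that low-/high-frequency recombination. This is illustrative only: real adversarial examples are found by gradient-based optimization against a specific model, not by this crude spectral swap, and the images here are random stand-ins.

```python
# Sketch: combine low-frequency content of one image with high-frequency
# content of another via a hard FFT mask (illustration, not an attack).
import numpy as np

def split_frequencies(img, cutoff=8):
    """Return (low-pass, high-pass) components of a 2D image."""
    f = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    yy, xx = np.ogrid[:h, :w]
    mask = (yy - h // 2) ** 2 + (xx - w // 2) ** 2 <= cutoff ** 2
    low = np.fft.ifft2(np.fft.ifftshift(f * mask)).real
    high = np.fft.ifft2(np.fft.ifftshift(f * (~mask))).real
    return low, high

rng = np.random.default_rng(0)
cat = rng.random((64, 64))   # stand-ins; load real grayscale images in practice
dog = rng.random((64, 64))

cat_low, _ = split_frequencies(cat)
_, dog_high = split_frequencies(dog)
lam = 0.8                          # weighting factor from the discussion above
hybrid = cat_low + lam * dog_high  # cat-like to a human (low frequencies),
                                   # potentially dog-like to a texture-biased model
```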
So this seems like a very different phenomenon from what I would call natural OOD data, where the features you're seeing simply never occurred in the training dataset, but where there is some way of constructing a dataset from the real world in which the data does occur like this. Natural OOD data is what most of the examples so far were about, like the cow on the beach: you've never seen it only because your data generation always yields cow plus grass. I think these are very different things. And the last thing they lump in here is fairness — the fairness and bias literature, where for example you have a résumé classifier that ends up being biased by gender or something like that. I kind of struggle with this, although they do say that not all fairness problems come from here. I would also stress that some fairness problems do belong exactly here: they occur because your data generation process is different from what you want. If you build this hiring classifier, you have to understand what it is: what you're training is a system that tells you how the humans who created the dataset would have decided on this particular application. Of course there is the problem of bias amplification and so on, but it is not an infallible system; it simply tells you how the humans would have predicted, and if you collect the dataset in a biased way, the machine will inherit that. On the other hand, here is why I don't think fairness really belongs in this framework: in fairness, you actually have a kind of alternate world — let me draw this in green, world prime. In the OOD and IID setting you always assume that the world is the world, and you want to learn a system that understands that world. In fairness, this alternate world is your "super world": for the fairness literature it doesn't really matter whether, in the true world, two groups of people are equal in some respect or not. What they care about is that the groups are treated equally by the system. So they impose some restriction, some condition, on their model — and this sounds bad, but the mathematical formulation is such that you start from the prior stipulation that two things must be equal; that is how you imagine your world, and then you learn the model such that this holds. Whereas over here, you're doing something different. Some of it falls in the same category, as I said, but it is a different take and a different literature, so I would focus on this part here and not so much on the adversarial examples or the fairness literature. All right. So you can see: no wonder this and this screw up an ImageNet classifier. And even this one — how do we know it's natural? It looks pretty natural, but it's probably constructed so specifically that the probability of someone taking this picture with a camera in the real world is zero. Cool. So they give some examples of where shortcut learning shows up. In computer vision: adversarial examples, and shifting the image by a few pixels — though you have to add, shifting it so precisely that the probability of this occurring in the data generation pipeline is zero. Then domain transfer, which I think is a good example. In natural language processing: BERT has been found to rely on superficial cue words; for instance, in a dataset of natural-language arguments, it learned that detecting the presence of "not" was sufficient to perform above chance at finding the correct line of argumentation. Again: all we can do is construct datasets. If we could tell the model what to look at, we would just program the solution. So there's only one solution: build better datasets. Okay, the second solution is better inductive biases, but if we knew the correct inductive biases, we wouldn't have the problem. In NLP this is even more prevalent than in vision, this business of spurious correlations: the models usually just learn correlations between some words and don't learn to understand the sentences at all. That's because in NLP we have even more trouble constructing datasets that force the model to understand the text — and again, I could not tell you what "understanding the text" even means.
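A tiny, made-up illustration of that "not" shortcut follows. The sentences and labels are invented, and this is a bag-of-words model rather than BERT; the point is only that a single spurious token can separate a biased dataset perfectly.

```python
# Toy version of the "detecting 'not' is enough" shortcut (invented data).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

texts = [
    "this argument does not hold",       # 0 = invalid reasoning
    "the premise does not support it",   # 0
    "that is not a valid conclusion",    # 0
    "the reasoning here is sound",       # 1 = valid reasoning
    "the conclusion clearly follows",    # 1
    "a valid line of argumentation",     # 1
]
labels = [0, 0, 0, 1, 1, 1]

vec = CountVectorizer().fit(texts)
clf = LogisticRegression().fit(vec.transform(texts), labels)

# The single token "not" separates this training set perfectly, so the
# model scores 100% here with zero notion of argument structure -- and a
# double negation that flips the meaning still triggers the shortcut:
print(clf.predict(vec.transform(["this is not wrong at all"])))  # likely [0]
```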
By the way, humans do that too — most of NLP, as performed by humans, involves shortcuts in many, many forms, simply because the cost function is not aligned with what you would actually want. A specific example: news stories nowadays. What do people expect news to do — what is the intent of news? To inform you, maybe. But the cost function is clicks. So what do you do? You write the news story and right at the top, in the title, you put "orange man bad," and you highlight it, and people click on it much more. A story about, I don't know, Brad Pitt having a new baby — you just append "but orange man bad" and the clicks go up; the cost function goes up. This happens everywhere. You can't even get humans to avoid it, so how do you expect neural networks to? In agent-based reinforcement learning there's a pretty funny one: instead of learning how to play Tetris, an algorithm simply learned to pause the game. Come on — that is objectively genius. And then, of course, fairness and algorithmic decision-making. On understanding these shortcuts, they touch on a lot of the things that I've touched on, including one I find apt: this "Morgan's Canon for machine learning," the principle that a machine learning system will probably learn the easiest features it can, which is oftentimes not what you want. They also touch on anthropomorphism, where you view everything through a human lens, and that is not correct: these neural networks are not humans, and we should never attribute humanness to their solutions — "never attribute to high-level abilities that which can be adequately explained by shortcut learning." Yes, I agree with this; I agree with this paper on pretty much everything it says, except for "detecting shortcuts: making OOD generalization tests a standard practice." For the reasons I specified before, I think that is counterproductive — and I've already said enough about it. Same for "designing good OOD tests": you can only design good OOD tests if you know the real underlying data distribution, which you don't. Then the principle of least effort: why are shortcuts learned? Because it's just easier. It's easier to write a news story using only the words you know people will click on — like "top ten things, number seven will surprise you" — because you don't actually have to come up with ten relevant things; the title alone is enough to get the clicks. The least-effort solution to the cost function may not align with what you want. And finally the inductive biases, as I said: we humans have certain inductive biases, the neural networks don't have them, and we need to take this into account — but the way to take it into account is to build training datasets that do so. They close with "beyond shortcut learning" as a kind of outlook, and then a conclusion — but we're already at some forty-five minutes of video, and if you're still here, respect; or maybe you just had this running in the background for some company. I will finish by saying thank you for watching, and since this is mostly opinion, I would be interested in hearing your comments on it. With that, I say bye bye.
[ { "end": 6.32, "start": 0, "text": " Hi, today we're looking at shortcut learning in deep neural networks by a" }, { "end": 11.44, "start": 6.32, "text": " number of authors from the University of Tubingen, the Max Planck Research Center" }, { "end": 18.04, "start": 11.44, "text": " and the University of Toronto. So I'm not gonna read all of them but all of" }, { "end": 27.52, "start": 18.04, "text": " them are either joint first authors or joint senior authors. What is" }, { "end": 33.4, "start": 27.52, "text": " this? It's just a team of people who did this work together. This whole" }, { "end": 40.24, "start": 33.4, "text": " I have a star, I don't have a star, I have a cross, whatever. Okay, sorry, bit of a rant." }, { "end": 46.72, "start": 40.24, "text": " Alright, so this paper discusses what they call shortcut learning and" }, { "end": 52.2, "start": 46.72, "text": " they actually don't propose something new here. They discuss this" }, { "end": 57.92, "start": 52.2, "text": " phenomenon and they try to link several things together under the name of" }, { "end": 63.24, "start": 57.92, "text": " shortcut learning which they claim is a problem in current deep learning" }, { "end": 69.72, "start": 63.24, "text": " and they discuss why it happens and what can be done about it. I just" }, { "end": 74.80000000000001, "start": 69.72, "text": " want to jump into this example real quick. So in this case you can see" }, { "end": 81.48, "start": 74.80000000000001, "text": " you have a training set of images and the training set is these four images" }, { "end": 87.2, "start": 81.48, "text": " here along with these labels and also these four images along with these" }, { "end": 93.52000000000001, "start": 87.2, "text": " labels. So you can think you can train a machine learning model, let's say you" }, { "end": 99.84, "start": 93.52000000000001, "text": " have a bunch of those and then you're gonna test them on the IID test set" }, { "end": 107.72, "start": 99.84, "text": " on this test set and what you'll find is that if you let a human do this" }, { "end": 113.36, "start": 107.72, "text": " task, the human would give this an A, this an A, this a B and this a B which" }, { "end": 117.96, "start": 113.36, "text": " is what you can think of is probably what a human would do is like these are" }, { "end": 123.28, "start": 117.96, "text": " the stars and these are the moons and the human would see the stars and the" }, { "end": 127.92, "start": 123.28, "text": " humans would see the moons and if you do this by the neural network also you'd" }, { "end": 135.64, "start": 127.92, "text": " get the labels AA, B and B and now you go about this out of distribution test set" }, { "end": 140.67999999999998, "start": 135.64, "text": " and we'll go over it why that is out of distribution in a second. 
Again you'll" }, { "end": 146.6, "start": 140.67999999999998, "text": " see that the human will classify this as the A's because it has the stars and" }, { "end": 154.67999999999998, "start": 146.6, "text": " these as B's but the neural network will classify these as B's and these as A's" }, { "end": 159.95999999999998, "start": 154.67999999999998, "text": " so I'm not saying this is what's gonna happen every time but imagine that" }, { "end": 166.16, "start": 159.96, "text": " happens and this is a conceivable situation and you can think of" }, { "end": 171.84, "start": 166.16, "text": " what happens here so you see in the training set all of the stars were" }, { "end": 180.68, "start": 171.84, "text": " either on the bottom left right or in the top right of the image where if I" }, { "end": 187, "start": 180.68, "text": " you know whereas the moons were either in the bottom right or the top left" }, { "end": 194.04, "start": 187, "text": " right you see that so the neural network might have learned that this is moon and" }, { "end": 207.92000000000002, "start": 194.04, "text": " this is moon and this is star and this is star. And then if it applies that" }, { "end": 216.76, "start": 207.92000000000002, "text": " rule to this new test set right then you can see that it'll classify these as" }, { "end": 222.72, "start": 216.76, "text": " moons and these as stars which is incorrect. So this might happen for" }, { "end": 229.23999999999998, "start": 222.72, "text": " example if the person that wrote the generator for the data set for" }, { "end": 239.64, "start": 229.23999999999998, "text": " some reason it produced only data here that had this property of the bottom" }, { "end": 244.35999999999999, "start": 239.64, "text": " left top right being a star and otherwise being a moon. So what generally" }, { "end": 250, "start": 244.36, "text": " happens if we do machine learning test set is we collect a data set a big data" }, { "end": 256.28000000000003, "start": 250, "text": " set but we collect it in a single pass right so this is our data set and what" }, { "end": 264.72, "start": 256.28000000000003, "text": " we'll do then is we'll split it right into a fairly large train and maybe a" }, { "end": 271.16, "start": 264.72, "text": " bit of a smaller test set right but this it's important that we first collect the" }, { "end": 281.28000000000003, "start": 271.16, "text": " data and then second we randomly split it. 
Now this out of distribution test set" }, { "end": 286.84000000000003, "start": 281.28000000000003, "text": " what that might be is that might be a second person right so this was done in" }, { "end": 293.96000000000004, "start": 286.84000000000003, "text": " the first step but then later right a person collects another bunch of data so" }, { "end": 300.08000000000004, "start": 293.96000000000004, "text": " this is data too and and they think it's it should be the same as this data but" }, { "end": 306.8, "start": 300.08, "text": " then you apply the the the classifier that you learned in train and test you" }, { "end": 312.32, "start": 306.8, "text": " apply that here right so what is different in this case is the data" }, { "end": 317.68, "start": 312.32, "text": " collection process that happens beforehand right so somewhere here is" }, { "end": 323.12, "start": 317.68, "text": " the real world I'm gonna draw the real world this is it's a globe this is the" }, { "end": 328.79999999999995, "start": 323.12, "text": " real world and you draw data sets from the real world and you draw this data" }, { "end": 333.56, "start": 328.8, "text": " set first and then you split it in train and test and then you draw this data set" }, { "end": 340.08, "start": 333.56, "text": " second so the second data set has a fundamentally is a different sample of" }, { "end": 344.92, "start": 340.08, "text": " data and this data whereas these train and test you can think of them they are" }, { "end": 352.72, "start": 344.92, "text": " closer together than these two data sets are here I think that's that's all" }, { "end": 361.28000000000003, "start": 352.72, "text": " that's kind of intuitive so what we usually do is we train here and then we" }, { "end": 367.56, "start": 361.28000000000003, "text": " evaluate on the test set right but the train and test that they've they've just" }, { "end": 371.44000000000005, "start": 367.56, "text": " they're just like randomly split versions of the same data set and that" }, { "end": 378.16, "start": 371.44000000000005, "text": " means that if there is some kind of bias in the training set it's probably also" }, { "end": 383.6, "start": 378.16, "text": " in the test set like we saw here with the moons right so this training set is" }, { "end": 391.08000000000004, "start": 383.6, "text": " this this test set is this and both have this this moon moon star star property" }, { "end": 397.16, "start": 391.08000000000004, "text": " because that was introduced so this pattern here by accident in this case" }, { "end": 404.16, "start": 397.16, "text": " was introduced when the data was produced right whereas the OOD test set" }, { "end": 408.56, "start": 404.16, "text": " now is maybe a second person writing their own generator that doesn't have" }, { "end": 415, "start": 408.56, "text": " this property and then that will lead to this data and of course since we train" }, { "end": 420.84000000000003, "start": 415, "text": " on the this training then evaluate on IID data this is now the IID assumption" }, { "end": 427.76000000000005, "start": 420.84000000000003, "text": " and evaluate on the IID test data we're gonna do fairly well with our you know" }, { "end": 432, "start": 427.76000000000005, "text": " crooked decision rule because it has the same bias but then if we once we" }, { "end": 442.84, "start": 432, "text": " evaluate on the on the out of distribution data then we we will fail" }, { "end": 448.86, "start": 442.84, "text": " right because now this doesn't have this in 
this bias in it right this this is" }, { "end": 457.68, "start": 448.86, "text": " not here so shortcut learning refers to the phenomenon that there might be" }, { "end": 467.04, "start": 457.68, "text": " features in the training set that the model starts to learn such such that it" }, { "end": 472.2, "start": 467.04, "text": " learns something else that we want it to learn right we want it to learn the" }, { "end": 478.68, "start": 472.2, "text": " shape here but it learns something else it learns the position and usually these" }, { "end": 485.88, "start": 478.68, "text": " things will not be recognized by generalizing to this test set right" }, { "end": 491.96, "start": 485.88, "text": " because the test set being an IID split from the same training set will have" }, { "end": 496.84, "start": 491.96, "text": " these same biases and therefore they will only become apparent once we do out" }, { "end": 502.88, "start": 496.84, "text": " of distribution generalization evaluation so this is shortcut learning" }, { "end": 510.76, "start": 502.88, "text": " and this paper goes into the origins and kind of descriptions of this and while I" }, { "end": 517.04, "start": 510.76, "text": " think this is a good approach and paper and it says many correct things I think" }, { "end": 524.4399999999999, "start": 517.04, "text": " the framing is a bit off at times and we'll go through it so first of all they" }, { "end": 530.92, "start": 524.4399999999999, "text": " say they give some examples in biological neural networks so they have" }, { "end": 537.6, "start": 530.92, "text": " this one example where they have a rat and the rat learned to navigate a complex" }, { "end": 543.84, "start": 537.6, "text": " maze right based on color differences of the walls and this was surprising" }, { "end": 550.32, "start": 543.84, "text": " because rats don't really have color vision or it was kind of known that rats" }, { "end": 555.2, "start": 550.32, "text": " don't have super good color vision so it was very surprising and then they" }, { "end": 560.6800000000001, "start": 555.2, "text": " discovered that the rats did actually not use the visual system at all they" }, { "end": 566.9200000000001, "start": 560.6800000000001, "text": " simply discriminated the colors by the odor of the color paint right so if you" }, { "end": 571.12, "start": 566.92, "text": " painted the wall red or blue that smelled differently and the rats could" }, { "end": 576.3199999999999, "start": 571.12, "text": " smell it once they controlled for the smell the remarkable color" }, { "end": 584.16, "start": 576.3199999999999, "text": " discrimination ability disappeared right so the second example they gave here is" }, { "end": 592.4399999999999, "start": 584.16, "text": " so Alice loves history and Alice had spent weeks immersing herself in the" }, { "end": 597.6400000000001, "start": 592.44, "text": " world of Hannibal and his exploits in the Roman Empire and now the exam" }, { "end": 601.44, "start": 597.6400000000001, "text": " questions are just like how many elephants did Hannibal employ in his" }, { "end": 606.8000000000001, "start": 601.44, "text": " army so the exam question are multiple choice not focus on understanding and" }, { "end": 613.5600000000001, "start": 606.8000000000001, "text": " Bob had just learned it by heart and is now doing much better than Alice who has" }, { "end": 620.0400000000001, "start": 613.5600000000001, "text": " actually understood the topic right so they give this as examples of shortcut" }, { 
"end": 625.16, "start": 620.04, "text": " learning where the model learns something that we don't intend it to do" }, { "end": 633.28, "start": 625.16, "text": " right all right and I I think this is the crucial point right the model learns" }, { "end": 641.56, "start": 633.28, "text": " something that we don't intend it to do and so here and this seems this this" }, { "end": 647.88, "start": 641.56, "text": " this might be pretty clear to when you observe it but what do we want we want" }, { "end": 662.56, "start": 647.88, "text": " we want shape and the model learns something else right just something else" }, { "end": 670.6, "start": 662.56, "text": " and the crucial part here and I think this paper isn't putting that much as" }, { "end": 681.08, "start": 670.6, "text": " much emphasis as it deserves is the two words we and want so my my basically my" }, { "end": 691.48, "start": 681.08, "text": " answer to this is first of all the word want we want shape and my answer my" }, { "end": 703.44, "start": 691.48, "text": " comment to this is you can't you can't formulate that you can't formulate it" }, { "end": 714.12, "start": 704.28, "text": " this this is very crucial and I think the the paper it almost ignores this" }, { "end": 720.6, "start": 714.12, "text": " point you cannot formulate what it means to classify things by shape and that" }, { "end": 726.5600000000001, "start": 720.6, "text": " this seems so oblivious to us because we're so used to it as humans right we" }, { "end": 730.96, "start": 726.5600000000001, "text": " were just like oh it's just use the shape right this is the shape right but" }, { "end": 736.96, "start": 730.96, "text": " you cannot you cannot program a computer to do this that's why we use deep" }, { "end": 742.32, "start": 736.96, "text": " learning in the first place right because we have no freaking idea how to" }, { "end": 747.88, "start": 742.32, "text": " program an algorithm that extracts the shape of something it might be possible" }, { "end": 755.2, "start": 747.88, "text": " for like a star and the moon but not for a cat or a car or anything right so you" }, { "end": 761.12, "start": 755.2, "text": " cannot formulate your objective that's the problem right and and it's easy to" }, { "end": 766.92, "start": 761.12, "text": " then say oh the model doesn't do what we wanted to do it's like you can't even" }, { "end": 774.56, "start": 766.92, "text": " formulate what you wanted to do in a precise way so basically all all you're" }, { "end": 779.5999999999999, "start": 774.56, "text": " you're saying here what you're saying here is I'll train a shape classifier" }, { "end": 784.3599999999999, "start": 779.5999999999999, "text": " right once you've gone through this process of training and evaluating you" }, { "end": 791.56, "start": 784.3599999999999, "text": " you say ah now I have a now I have a shape classifier right so say you" }, { "end": 797.16, "start": 791.56, "text": " haven't you hadn't done this OOD evaluation you've gone through this and" }, { "end": 804.1199999999999, "start": 797.16, "text": " you you know you now claim you proclaim I have trained a shape classifier no no" }, { "end": 812.84, "start": 804.12, "text": " you have trained something that given the entire process of how you create" }, { "end": 821, "start": 812.84, "text": " your data can classify these two images right so at here is your generator this" }, { "end": 825.72, "start": 821, "text": " is your little program that you wrote to produce these images and your generator" 
}, { "end": 834.08, "start": 825.72, "text": " assigns either the star right at random it produces these things the star or the" }, { "end": 841.12, "start": 834.08, "text": " moon it does these two things and then it creates the image from it and that" }, { "end": 845.96, "start": 841.12, "text": " will give you your data set right what you have trained is not a shape" }, { "end": 851.2, "start": 845.96, "text": " classifier what you have trained is a classifier that can distinguish data" }, { "end": 858.88, "start": 851.2, "text": " that comes from this data generation process right the entire notion of" }, { "end": 867.4, "start": 858.88, "text": " calling it a shape classifier is because you as a human have thought of shape" }, { "end": 874.84, "start": 867.4, "text": " when you programmed this generator right when you collected the data set that's" }, { "end": 878.6, "start": 874.84, "text": " what you thought of but this isn't the case you can't call it a shape" }, { "end": 883.36, "start": 878.6, "text": " classifier just because you like this is what your intent was you have a" }, { "end": 888.76, "start": 883.36, "text": " classifier that classifies images from this particular data generation" }, { "end": 897.28, "start": 888.76, "text": " process and you can't actually formulate a shape classifier right okay the second" }, { "end": 914.8, "start": 897.28, "text": " word is we we sorry we humans right we want a shape classifier now I've I've" }, { "end": 919.24, "start": 914.8, "text": " said this before and this this very much refers back to the two for example the" }, { "end": 928.3199999999999, "start": 919.24, "text": " paper about the contrast sets in NLP and so on humans have grounded knowledge" }, { "end": 940.0799999999999, "start": 928.3199999999999, "text": " right humans sorry humans have grounding this is very important here grounding" }, { "end": 950.96, "start": 940.08, "text": " means that the humans live in a world of physics and culture sorry physics and" }, { "end": 965.12, "start": 950.96, "text": " culture and the need for food biology humans live in this world and this this" }, { "end": 972.4, "start": 965.12, "text": " generated our brain right so what that means is that humans live in a world of" }, { "end": 988.08, "start": 972.4, "text": " objects and of of people sorry of people and of being eaten right being eaten or" }, { "end": 995.2, "start": 988.08, "text": " needing to eat food right humans lit grew up and live in this world your" }, { "end": 1003.0400000000001, "start": 995.2, "text": " brain was literally structured according to these things and thus we understand" }, { "end": 1007.9200000000001, "start": 1003.0400000000001, "text": " everything with an eye to this grounded knowledge of reality where there is" }, { "end": 1015.24, "start": 1007.9200000000001, "text": " such a thing as objects now if you have image net and you train a classifier for" }, { "end": 1020.24, "start": 1015.24, "text": " objects right this is what I find so crazy right and in the you know you" }, { "end": 1027, "start": 1020.24, "text": " collect this thing and there's a car and you say that's a car right you know this" }, { "end": 1033, "start": 1027, "text": " is a car because there is an object of a car and and and but the the neural" }, { "end": 1037.84, "start": 1033, "text": " network is not does not have a bias for object in neural network simply sees" }, { "end": 1045.08, "start": 1037.84, "text": " these pixels same here what you will do immediately here is 
you'll recognize the" }, { "end": 1051.84, "start": 1045.08, "text": " object of the star right you will transform this into a 3d scene right" }, { "end": 1060.84, "start": 1051.84, "text": " into a 3d cube where you are here watching in 3d space and there is this" }, { "end": 1067.72, "start": 1060.84, "text": " star object somewhere here right and then you understand that the star could" }, { "end": 1071.48, "start": 1067.72, "text": " move around then it would still be the same star but that's because you have" }, { "end": 1076.88, "start": 1071.48, "text": " the inherent bias of there being objects and shape for example the word shape is" }, { "end": 1083.56, "start": 1076.88, "text": " nothing more than a property of an object and the neural network simply do" }, { "end": 1093.48, "start": 1083.56, "text": " not have a inherent bias for objects right or people or intent or what it" }, { "end": 1099.48, "start": 1093.48, "text": " means to eat right this this becomes super super obvious if you ever try to" }, { "end": 1106.56, "start": 1099.48, "text": " solve for example a jigsaw puzzle you know like these these things here I'm" }, { "end": 1115.08, "start": 1106.56, "text": " terrible at this if you solve this on its head right say this has like a face" }, { "end": 1120.92, "start": 1115.08, "text": " on it and you try to solve it like this and you try to solve it on its head like" }, { "end": 1126.88, "start": 1120.92, "text": " try it you'll do it's the same task you simply need to match the the border" }, { "end": 1131.92, "start": 1126.88, "text": " shapes right and you need to make sure that the lines are continuous of the" }, { "end": 1141.16, "start": 1131.92, "text": " picture it becomes so much harder just because you have this brain and so that" }, { "end": 1147.24, "start": 1141.16, "text": " is my my entire criticism and it will it will pull through this entire paper and" }, { "end": 1153.0400000000002, "start": 1147.24, "text": " we'll go through it for now relatively quickly because we've already like" }, { "end": 1159.52, "start": 1153.04, "text": " touched touched on it keep in mind this is my commentary on it this is not" }, { "end": 1167.84, "start": 1161, "text": " superior superior knowledge or something that's just me all right so what they do" }, { "end": 1173.2, "start": 1167.84, "text": " is they have this taxonomy of decision rules that they they point out what" }, { "end": 1179.34, "start": 1173.2, "text": " they're saying is okay you're you these there's a set of all possible decision" }, { "end": 1184.6399999999999, "start": 1179.34, "text": " rules right and this is the this is the outer set here all possible decision" }, { "end": 1189.24, "start": 1184.6399999999999, "text": " rules that one could think of to discriminate data and let's say we'll" }, { "end": 1194.08, "start": 1189.24, "text": " talk about images here right to to discriminate images most of them will" }, { "end": 1198.48, "start": 1194.08, "text": " just be crap and these will be using these uninformative features what they" }, { "end": 1203.3999999999999, "start": 1198.48, "text": " say but then there are some decision rules that perform well on training data" }, { "end": 1209.8000000000002, "start": 1203.4, "text": " right this is this big circle here right so that there are some decision rules" }, { "end": 1216.44, "start": 1209.8000000000002, "text": " that perform well on the training set and they call these overfitting features" }, { "end": 1222.8400000000001, "start": 1216.44, 
"text": " so these are all the features that perform well on the training set only on" }, { "end": 1227.6000000000001, "start": 1222.8400000000001, "text": " the training set sorry the overfitting features but to me it's it's a bit" }, { "end": 1232.2800000000002, "start": 1227.6000000000001, "text": " unclear I think they only call this band the overfitting features but they call" }, { "end": 1237.56, "start": 1232.28, "text": " the entire circle the all possible training solutions any case so there are" }, { "end": 1241.16, "start": 1237.56, "text": " decision rules that perform well on the training set but some of them are" }, { "end": 1247.62, "start": 1241.16, "text": " overfitting as you know that problem right then the next circle inside of" }, { "end": 1253.96, "start": 1247.62, "text": " that are all decision rules that perform well on the training set and the IID" }, { "end": 1260.08, "start": 1253.96, "text": " test set and this would be our our location classifier from before would" }, { "end": 1268.6399999999999, "start": 1260.08, "text": " fall into this category right and now there these are still a much larger set" }, { "end": 1275.28, "start": 1268.6399999999999, "text": " as you see here as this inside set of the intended solution performs well on" }, { "end": 1280.76, "start": 1275.28, "text": " training set IID and all relevant out-of-distribution test sets and they" }, { "end": 1284.9199999999998, "start": 1280.76, "text": " draw this in here the out-of-distribution test sets are subsets of" }, { "end": 1293, "start": 1284.92, "text": " the IID test set or the sorry the solutions that work on the OOD test sets" }, { "end": 1298.88, "start": 1293, "text": " are subsets of the solutions that work well on the IID test sets I don't have a" }, { "end": 1305.0800000000002, "start": 1298.88, "text": " problem with this characterization of decision rules right here is" }, { "end": 1310, "start": 1305.0800000000002, "text": " specifically these are decision rules what I have a problem of characterization" }, { "end": 1318.96, "start": 1310, "text": " with is the fact that you cannot specify what the intended solution is you cannot" }, { "end": 1325.48, "start": 1318.96, "text": " and therefore this diagram I think is misleading because you have ultimately" }, { "end": 1331.12, "start": 1325.48, "text": " you have no idea where this dot is right you you can't you can't specify it" }, { "end": 1336.08, "start": 1331.12, "text": " beforehand you can't even specify the rules how you get there all you can do" }, { "end": 1340.6, "start": 1336.08, "text": " is give better data and they kind of advocate for this here with these OOD" }, { "end": 1345.6, "start": 1340.6, "text": " test sets but again I think when they say all relevant out-of-distribution test" }, { "end": 1353.32, "start": 1345.6, "text": " sets I'm a bit I'm a bit wary because they they suggest this as one of the" }, { "end": 1357.52, "start": 1353.32, "text": " measures to assess whether or not a model has learned these shortcut rules" }, { "end": 1363.54, "start": 1357.52, "text": " is to measure its performance on out-of-distribution test sets and this is" }, { "end": 1372.96, "start": 1363.54, "text": " very much like the contrast sets in in in the NLP but I I think actually this" }, { "end": 1379.28, "start": 1372.96, "text": " is a pretty pretty pretty pretty bad solution in most cases and let me" }, { "end": 1387.36, "start": 1379.28, "text": " explain why so if we go back to here right what we saw is that these 
these" }, { "end": 1395.76, "start": 1387.36, "text": " discrepancy it comes about because here from the real world we produce the data" }, { "end": 1400.9199999999998, "start": 1395.76, "text": " in a very specific form right and then this other out-of-distribution test set" }, { "end": 1408.52, "start": 1400.9199999999998, "text": " is produced in a slightly different form right now what you can think of this is" }, { "end": 1414.32, "start": 1408.52, "text": " if you look at your cost function that you train right what usually says my" }, { "end": 1425.8, "start": 1414.32, "text": " cost function is some sort of a loss for my data points and my labels right but" }, { "end": 1430.6799999999998, "start": 1425.8, "text": " this is often left out I mean you write it in your introductory classes what is" }, { "end": 1436.52, "start": 1430.6799999999998, "text": " important is that this is an expected loss that you're minimizing here over a" }, { "end": 1444.72, "start": 1436.52, "text": " data distribution right over X and Y sampled from a particular data" }, { "end": 1451.92, "start": 1444.72, "text": " distribution now when you talk about this this out-of-distribution classifiers" }, { "end": 1457.76, "start": 1451.92, "text": " what you'll have is you'll have a slightly different data distribution D" }, { "end": 1467.4, "start": 1457.76, "text": " prime right so but if you simply have one out of distribution thing think of" }, { "end": 1474, "start": 1467.4, "text": " this as the contrast set right if you haven't seen the video about contrast" }, { "end": 1478.32, "start": 1474, "text": " sets it's it's basically an out handcrafted out of distribution test" }, { "end": 1488.24, "start": 1478.32, "text": " set right my problem with this it's just one it's one it's a single one and I I" }, { "end": 1493.8799999999999, "start": 1488.24, "text": " think even if you try ten of them ten of those sets you won't even get close to a" }, { "end": 1502, "start": 1493.8799999999999, "text": " true measure because so the cool thing about an IID test set is at least it is" }, { "end": 1507.04, "start": 1502, "text": " precisely the same distribution right so it kind of gives you an unbiased number" }, { "end": 1512.8, "start": 1507.04, "text": " that for this particular data generation pipeline you get this number if you" }, { "end": 1519.04, "start": 1512.8, "text": " evaluate on an out-of-distribution test set you now have two effects you first" }, { "end": 1528.8, "start": 1519.04, "text": " have this generalization effect and you have the effect of having produced this" }, { "end": 1534.04, "start": 1528.8, "text": " in a different fashion here but you only have one of them what you would like to" }, { "end": 1543.96, "start": 1534.04, "text": " do is you would what you would like to assess is your loss of X and Y in" }, { "end": 1553.84, "start": 1543.96, "text": " expectation with X and Y coming from data in expectation let me a different" }, { "end": 1560.6399999999999, "start": 1553.84, "text": " color in expectation with your data distribution coming from all possible" }, { "end": 1565.8000000000002, "start": 1560.64, "text": " data distributions in the real world right that's what you would like to to" }, { "end": 1573.0400000000002, "start": 1565.8000000000002, "text": " say to do now if you only have a single contrast set it is a kin you can think" }, { "end": 1579.8000000000002, "start": 1573.0400000000002, "text": " of what if like how how well how well of a machine learning engineer 
would I be" }, { "end": 1589.2, "start": 1579.8000000000002, "text": " if my test set here only had one sample right so so I give you a train and the" }, { "end": 1595.04, "start": 1589.2, "text": " test set and I'm saying your performance will be if I make a Kaggle challenge and" }, { "end": 1601.88, "start": 1595.04, "text": " I say your performance will be evaluated on this one single sample test set right" }, { "end": 1607.32, "start": 1601.88, "text": " that's basically what you're doing if you have a single OOD test set is you're" }, { "end": 1613.8, "start": 1607.32, "text": " saying I'm going to give you one out-of-distribution data set that I have" }, { "end": 1620.9199999999998, "start": 1613.8, "text": " biased in one particular way right and will measure how well you capture our" }, { "end": 1628.56, "start": 1620.9199999999998, "text": " intent right our shape classifier intent will measure how well you capture that" }, { "end": 1634.96, "start": 1628.56, "text": " using this one single out-of-distribution thing I think what that" }, { "end": 1642.52, "start": 1634.96, "text": " will do is right you say I want to approximate this by a sum of I equals" }, { "end": 1652.52, "start": 1642.52, "text": " one to one that will just pump the variance beyond what beyond any" }, { "end": 1658.68, "start": 1652.52, "text": " reasonable meaning that the upcoming number will will be able to give you" }, { "end": 1665.32, "start": 1658.68, "text": " what you'd have to do is you'd have to have this entire process and sample" }, { "end": 1670.76, "start": 1665.32, "text": " train and test sets according to this day or at least test sets according to" }, { "end": 1676.12, "start": 1670.76, "text": " this data distribution this underlying data distribution which you have no clue" }, { "end": 1682.8799999999999, "start": 1676.12, "text": " what it is because if you could specify this directly you could get the solution" }, { "end": 1688.16, "start": 1682.8799999999999, "text": " for free right if you could specify the underlying mechanism you could you you" }, { "end": 1693.5, "start": 1688.16, "text": " would already know the solution you would need machine learning so I think" }, { "end": 1698.68, "start": 1693.5, "text": " the model puts like way too little emphasis sorry the paper puts a bit too" }, { "end": 1704.8, "start": 1698.68, "text": " little emphasis suffice to say they with their taxonomy they can say if you use" }, { "end": 1710.44, "start": 1704.8, "text": " for example only the overfitting features right then you will do well on" }, { "end": 1716.6000000000001, "start": 1710.44, "text": " the training set but not on the IID and OD test sets if you use the intended" }, { "end": 1721.72, "start": 1716.6000000000001, "text": " features again intended no one knows what that is no one can specify it you'll" }, { "end": 1726.98, "start": 1721.72, "text": " do well on all the OD test sets if you use the shortcut features you will do" }, { "end": 1732.44, "start": 1726.98, "text": " well on the training and ID test set but not on the OD test this is valid right" }, { "end": 1737.64, "start": 1732.44, "text": " I'm not discrediting this paper here and they do allude to a lot of the things" }, { "end": 1744.28, "start": 1737.64, "text": " I'm saying but not not all and I don't think they frame it correctly so they" }, { "end": 1747.76, "start": 1744.28, "text": " ask shortcuts where do they come from and they say a lot of the things that" }, { "end": 1754.8, "start": 1747.76, "text": " 
I've been saying here but for example they ask what makes a cow a cow and they" }, { "end": 1760.08, "start": 1754.8, "text": " give this example here where they say familiar background can be as important" }, { "end": 1764.52, "start": 1760.08, "text": " for recognition to deep neural networks where the deep neural networks will" }, { "end": 1770.52, "start": 1764.52, "text": " misclassify this picture because they used to seeing a cow in grass now" }, { "end": 1775.12, "start": 1770.52, "text": " consider this in our framework right if let's say this is an image net" }, { "end": 1781.96, "start": 1775.12, "text": " classifier image net is not an object classifier it is not right that's what" }, { "end": 1787.44, "start": 1781.96, "text": " we say that's our intent but what it is it is a classifier if you go through the" }, { "end": 1792.4, "start": 1787.44, "text": " pipeline what how do you generate the data image net is a classifier of" }, { "end": 1800.88, "start": 1792.4, "text": " naturally taken images right with a certain camera cropped center cropped to" }, { "end": 1806.2, "start": 1800.88, "text": " a particular object labeled by human radars filtered in some capacity right" }, { "end": 1812.44, "start": 1806.2, "text": " from flickr and for that particular data set we train a classifier it is not an" }, { "end": 1817.2, "start": 1812.44, "text": " object classifier it is a classifier for that and it doesn't has no clue of" }, { "end": 1824.0800000000002, "start": 1817.2, "text": " objects so in fact and also what you have to see is that the output isn't" }, { "end": 1828.32, "start": 1824.0800000000002, "text": " even if the output is shape it isn't shape it is actually probability of" }, { "end": 1837.2, "start": 1828.32, "text": " shape right or probability of object or probability of something right and it is" }, { "end": 1843.76, "start": 1837.2, "text": " completely conceivable right that if it's not grass in the background it's" }, { "end": 1850.08, "start": 1843.76, "text": " probably not as much a cow now I see the problem here this is clearly a cow and" }, { "end": 1857.4399999999998, "start": 1850.08, "text": " this is actually a conceivable natural image but imagine a picture of the cow" }, { "end": 1864.8, "start": 1857.44, "text": " oops what happened on the moon right this is the moon and here's the cow" }, { "end": 1877.44, "start": 1864.8, "text": " cow moo this is terrible horns tell a cow on the moon like who can fault the" }, { "end": 1883.16, "start": 1877.44, "text": " neural network it it it and I would say that's not a cow either because it in" }, { "end": 1889.0400000000002, "start": 1883.16, "text": " terms of the data generation process if you ask me please classify this as a" }, { "end": 1893.5600000000002, "start": 1889.0400000000002, "text": " natural image that has been taken blah blah blah blah blah right I'm gonna say" }, { "end": 1897.4, "start": 1893.5600000000002, "text": " there's no way there's a cow on the moon so I don't know what this is but it is" }, { "end": 1903.5600000000002, "start": 1897.4, "text": " very improbable that this is a cow right because all the training examples I've" }, { "end": 1912.3200000000002, "start": 1903.5600000000002, "text": " seen cow on grass so yeah so I mean they do they do actually allude to this" }, { "end": 1920.2, "start": 1912.32, "text": " right they call this data set biases and so on but I'm pretty sure that yet the" }, { "end": 1925.8, "start": 1920.2, "text": " interpretation is just a bit off 
where they say they the stat the point of this" }, { "end": 1931.72, "start": 1925.8, "text": " is like ah it's it's you know we want an object classifier but this we want the" }, { "end": 1938.4399999999998, "start": 1931.72, "text": " second I find even more kind of strange is they say shortcuts from discriminative" }, { "end": 1944.44, "start": 1938.44, "text": " learning and they allude to this picture here and they ask what makes a cat a cat" }, { "end": 1948.4, "start": 1944.44, "text": " and they basically their argument is that the neural networks they don't" }, { "end": 1952.8400000000001, "start": 1948.4, "text": " understand things they just discriminate right they have these thousand classes" }, { "end": 1957.1200000000001, "start": 1952.8400000000001, "text": " and the output layer this is the neural network and they just need to" }, { "end": 1962.3200000000002, "start": 1957.1200000000001, "text": " discriminate so they just need to learn what is different from one class to the" }, { "end": 1970.76, "start": 1962.32, "text": " other class and they will often rely on features such as texture like here so" }, { "end": 1975.28, "start": 1970.76, "text": " they rely on detection they classify this as an elephant right so they say" }, { "end": 1979.04, "start": 1975.28, "text": " what makes a cat a cat to standard DNNs the example image on the left clearly" }, { "end": 1988.52, "start": 1979.04, "text": " shows an elephant not a cat and again I agree if you tell me this is data from" }, { "end": 1996.48, "start": 1988.52, "text": " naturally to it taken images with standard cameras right then I will I" }, { "end": 2002.4, "start": 1996.48, "text": " will have two possibilities I will say is this a cat there's no way that if you" }, { "end": 2007.8799999999999, "start": 2002.4, "text": " take anywhere in the universe a picture with a like a phone camera of anything" }, { "end": 2014.72, "start": 2007.8799999999999, "text": " of a cat it will look like this no way I don't like this just not possible right" }, { "end": 2025.56, "start": 2014.72, "text": " however is it possible that there is an elephant that as a skin fold pattern by" }, { "end": 2036.56, "start": 2025.56, "text": " random chance elephant big ears raw trunk as a skin fold pattern looks like" }, { "end": 2043.2, "start": 2036.56, "text": " a cat like looks like the shape of a cat yes that's possible so if you ask me" }, { "end": 2050.2, "start": 2043.2, "text": " according to the data generation process this is way more likely to be an elephant" }, { "end": 2056.4, "start": 2050.2, "text": " than a cat right and the paper here makes it seem like it is so" }, { "end": 2061.6, "start": 2056.4, "text": " obvious that this is a cat but what do these standard stupid DNNs think it's an" }, { "end": 2066.76, "start": 2061.6, "text": " elephant not a cat and the DNN oh it's just looking at object text and other" }, { "end": 2072.52, "start": 2066.76, "text": " local structures and not that shape right what we like what we wanted to do" }, { "end": 2078.48, "start": 2072.52, "text": " and this this is I find just stop calling things object classifiers if" }, { "end": 2082.6, "start": 2078.48, "text": " they're not object classifiers that classifier between images of a data" }, { "end": 2089.08, "start": 2082.6, "text": " generation process if you want them to be object classifiers make up a data set" }, { "end": 2096.72, "start": 2089.08, "text": " that actually has different objects but you can't specify that so yeah and 
then" }, { "end": 2102.9199999999996, "start": 2096.72, "text": " they go into some sort of adversarial examples and I find this to be I also" }, { "end": 2108.9199999999996, "start": 2102.9199999999996, "text": " find this to be a bit maybe not belonging here like where they say oh" }, { "end": 2114.8399999999997, "start": 2108.9199999999996, "text": " look here the DNNs predict guitar with high certainty again it's just a" }, { "end": 2121.24, "start": 2114.8399999999997, "text": " discriminator but this this pattern why not a guitar if you had to you know if" }, { "end": 2125.3199999999997, "start": 2121.24, "text": " you had to get one of the thousand classes out why it could this not be" }, { "end": 2131.2400000000002, "start": 2125.32, "text": " most likely a guitar but I have a further problem with this is I see I" }, { "end": 2137.76, "start": 2131.2400000000002, "text": " kind of see this in so what what would you have is I ID data let's go with" }, { "end": 2141.6800000000003, "start": 2137.76, "text": " their taxonomy and say I ID data has from from the same generation process" }, { "end": 2150.04, "start": 2141.6800000000003, "text": " and then there is OOD data now I think there are a number of effects here that" }, { "end": 2155.2000000000003, "start": 2150.04, "text": " they try to lump together with this thing where they just say oh oh D data" }, { "end": 2158.68, "start": 2155.2, "text": " whenever my model doesn't work on OOD data it has learned a shortcut but it's" }, { "end": 2165.52, "start": 2158.68, "text": " very very weird so first of all there I would say the OOD data you can probably" }, { "end": 2172.6, "start": 2165.52, "text": " divide into what I would call unnatural OOD data let's say our task here is to" }, { "end": 2177.68, "start": 2172.6, "text": " build an object an object detector whatever that means for natural images" }, { "end": 2183.3599999999997, "start": 2177.68, "text": " so then there's unnatural OOD data which which in here you'll find something" }, { "end": 2189.08, "start": 2183.36, "text": " like adversarial examples adversarial examples are constructed at least if you" }, { "end": 2194.3, "start": 2189.08, "text": " go by the interpretation of modri and adversarial examples are features not" }, { "end": 2199.56, "start": 2194.3, "text": " bugs then you'll go into the direction of adversarial examples actually" }, { "end": 2206.4, "start": 2199.56, "text": " constructed by combining features that don't naturally go together so you'll" }, { "end": 2213.4, "start": 2206.4, "text": " you'll get the low frequency features of a cat and add the high frequency for" }, { "end": 2219.6800000000003, "start": 2213.4, "text": " example features of a dog so much with this with this lambda factor here so" }, { "end": 2224.52, "start": 2219.6800000000003, "text": " high that it to a DNN it looks like a dog because it has many of the features" }, { "end": 2228.76, "start": 2224.52, "text": " but to a human that kind of ignores the high frequency features it'll look like" }, { "end": 2235.44, "start": 2228.76, "text": " a cat right but these are unnatural because the features in actual nature in" }, { "end": 2242.32, "start": 2235.44, "text": " the real world they never occur in this combination so it seems like this is a" }, { "end": 2250.96, "start": 2242.32, "text": " very very very different phenomenon from what I would call natural OOD OOD data" }, { "end": 2257.04, "start": 2250.96, "text": " where simply the the features that you're seeing have never 
occurred in the" }, { "end": 2262.48, "start": 2257.04, "text": " training data set but there is there is if you go from the real world and you" }, { "end": 2269.84, "start": 2262.48, "text": " construct in different ways data set there is some data set where where the" }, { "end": 2274.48, "start": 2269.84, "text": " where the data actually occurs in the way that you have here so natural OOD" }, { "end": 2280.64, "start": 2274.48, "text": " data is what most of the examples for now were were about like a cow on the" }, { "end": 2284.56, "start": 2280.64, "text": " beach it's just because you've never you've never seen that because your data" }, { "end": 2290.96, "start": 2284.56, "text": " generation here always get cow plus grass right so I think these are very" }, { "end": 2296.88, "start": 2290.96, "text": " different and then the last thing they also lump in here is fairness like the" }, { "end": 2302.2400000000002, "start": 2296.88, "text": " the fairness and bias literature where for example you have a resume classifier" }, { "end": 2307.56, "start": 2302.2400000000002, "text": " and the resume classifier ends up being biased by gender or something like this" }, { "end": 2316.64, "start": 2307.56, "text": " and again so I I kind of struggle with this although they say not all the" }, { "end": 2320.8, "start": 2316.64, "text": " fairness problems come from here but I would also like to stress that some of" }, { "end": 2328.1200000000003, "start": 2320.8, "text": " the fairness problem goes exactly here it occurs because your data generation" }, { "end": 2334.04, "start": 2328.1200000000003, "text": " process is different from what you want for example if you do this this hiring" }, { "end": 2341.38, "start": 2334.04, "text": " classifier you have to understand what that is what your training is a system" }, { "end": 2346.44, "start": 2341.38, "text": " that will tell you how would my human data set creators have decided on this" }, { "end": 2351.06, "start": 2346.44, "text": " particular application now of course there is this problem of bias amplification" }, { "end": 2354.96, "start": 2351.06, "text": " and so on but it is not it is not an infallible system it simply tells you" }, { "end": 2358.68, "start": 2354.96, "text": " how the humans would have predicted if you collect the data set in a biased way" }, { "end": 2366.44, "start": 2358.68, "text": " of course the the machine will inherit that but on the other hand the fairness" }, { "end": 2373.48, "start": 2366.44, "text": " why I don't think this really belongs in here because in fairness you have a you" }, { "end": 2379.44, "start": 2373.48, "text": " actually have kind of an alternate world draw this in green prime world prime" }, { "end": 2387.2400000000002, "start": 2379.44, "text": " right where in this OOD and IID setting you always assume that the world is is" }, { "end": 2393.96, "start": 2387.2400000000002, "text": " the world and you want to kind of really learn a system that understands the" }, { "end": 2404.68, "start": 2393.96, "text": " world where in fairness you this here this is your super world so actually for" }, { "end": 2409.56, "start": 2404.68, "text": " the fairness literature it doesn't really matter if in the real world to" }, { "end": 2414.32, "start": 2409.56, "text": " let's say two groups of people are equal in some respect or not equal in the true" }, { "end": 2419.44, "start": 2414.32, "text": " world right what they care about is that they are treated equally by the system" }, { "end": 
2427.7200000000003, "start": 2419.44, "text": " right so they will impose they will impose some restrictions or some some" }, { "end": 2433.36, "start": 2427.7200000000003, "text": " some condition on their model and they don't they don't naturally I'm like this" }, { "end": 2438.36, "start": 2433.36, "text": " sounds bad but it is the mathematical formulation is such that you start with" }, { "end": 2445, "start": 2438.36, "text": " the super knowledge of two things must be equal and then you this is how you" }, { "end": 2450.24, "start": 2445, "text": " imagine your world you think I know the world and then I try to learn the model" }, { "end": 2455.92, "start": 2450.24, "text": " such that that happens right whereas over here you do something different now" }, { "end": 2461.88, "start": 2455.92, "text": " some of it is in as I said is in the same category but it is I think a" }, { "end": 2470.76, "start": 2461.88, "text": " different different take and a different literature so I would I would focus on" }, { "end": 2479, "start": 2470.76, "text": " let's say this part right here sorry on this part and not on the adversarial" }, { "end": 2487.88, "start": 2479, "text": " examples and also not in the fairness literature too much alright so yeah you" }, { "end": 2494.1600000000003, "start": 2487.88, "text": " can see here like like no wonder this this and this screws up and this screws" }, { "end": 2502.7999999999997, "start": 2494.16, "text": " up an image net classifier yeah and even even this like how do we know that that" }, { "end": 2507.2, "start": 2502.7999999999997, "text": " is naturally natural though I can see that that is that looks pretty natural" }, { "end": 2512.48, "start": 2507.2, "text": " but still it's probably like really specifically constructed such that the" }, { "end": 2517.68, "start": 2512.48, "text": " probability that someone would take this picture with a camera in the real world" }, { "end": 2529.7999999999997, "start": 2517.68, "text": " is zero cool so they give some examples where they say okay shortcut learning" }, { "end": 2534.8799999999997, "start": 2529.7999999999997, "text": " exists in computer vision for example adversarial examples you see shifting" }, { "end": 2538.3599999999997, "start": 2534.8799999999997, "text": " the image by a few pixels though you have to say shifting the image very" }, { "end": 2542.52, "start": 2538.3599999999997, "text": " precisely by a few pixels such that the probability of this occurring the data" }, { "end": 2551.08, "start": 2542.52, "text": " generation pipeline is zero and so on then they call it domain transfer that" }, { "end": 2557.08, "start": 2551.08, "text": " that of course is I think that's the that's a good example they say natural" }, { "end": 2563.64, "start": 2557.08, "text": " language processing where BERT has been found to rely on superficial keywords for" }, { "end": 2568.44, "start": 2563.64, "text": " instance it learned within a data set of natural language arguments detecting the" }, { "end": 2572.96, "start": 2568.44, "text": " presence of not was sufficient to perform above chance in finding the" }, { "end": 2578.52, "start": 2572.96, "text": " correct line of argumentation again this is like all we can do is construct" }, { "end": 2587.32, "start": 2578.52, "text": " datasets we cannot if we could tell the model what to look at we would we would" }, { "end": 2592.92, "start": 2587.32, "text": " we would just program the solution so the solution is there's only one" }, { "end": 
2598.6800000000003, "start": 2592.92, "text": " solution program better datasets get better datasets or I mean okay the" }, { "end": 2603.36, "start": 2598.6800000000003, "text": " second solution is get better inductive biases but if we knew the correct" }, { "end": 2613.4, "start": 2603.36, "text": " inductive biases we wouldn't have the problem yeah I like that there is a in" }, { "end": 2618.44, "start": 2613.4, "text": " NLP this is very this is very very prevalent even more than in vision" }, { "end": 2627.48, "start": 2618.44, "text": " right this this fact of hey these spurious correlations in NLP the models" }, { "end": 2632.52, "start": 2627.48, "text": " usually just learn kind of correlation between some words and then they they" }, { "end": 2637.52, "start": 2632.52, "text": " don't learn to understand the sentences at all right but this this is because in" }, { "end": 2641.96, "start": 2637.52, "text": " NLP we have even more problems with constructing datasets that force the" }, { "end": 2648.16, "start": 2641.96, "text": " model to learn to understand the the text again I could not tell you what" }, { "end": 2654.52, "start": 2648.16, "text": " understanding the text means they go in by the way humans do that too humans" }, { "end": 2660, "start": 2654.52, "text": " most in most of NLP that happens in humans humans do this right in many many" }, { "end": 2664.52, "start": 2660, "text": " many forms this is simply because the cost function is not aligned with what" }, { "end": 2673.3599999999997, "start": 2664.52, "text": " you would want what is the specific oh well a specific example is that news" }, { "end": 2680.32, "start": 2673.36, "text": " stories nowadays right you have a news you say news what do people expect what" }, { "end": 2685.1200000000003, "start": 2680.32, "text": " is the intent of news the intent of news is to inform you maybe but the cost" }, { "end": 2694.04, "start": 2685.1200000000003, "text": " right the cost function is clicks so what do you do you news story and very" }, { "end": 2702.6, "start": 2694.04, "text": " on the top in the title you say orange man bad and then people and you highlight" }, { "end": 2707.7599999999998, "start": 2702.6, "text": " this right so a news story I don't know Brad Pitt had a new baby you just append" }, { "end": 2713.52, "start": 2707.7599999999998, "text": " but orange man bad people click on it much more your cost goes up and sorry" }, { "end": 2719.52, "start": 2713.52, "text": " your clicks go up your cost function goes up and so I think this happens" }, { "end": 2724.96, "start": 2719.52, "text": " everywhere right you can't even do this with humans right how do you expect" }, { "end": 2733.12, "start": 2724.96, "text": " neural networks to to do that all right agent based reinforcement learning I" }, { "end": 2743.56, "start": 2733.12, "text": " think this pretty funny where is it where it learned how to play Tetris yeah" }, { "end": 2747.8, "start": 2743.56, "text": " instead of learning how to play Tetris an algorithm simply learned to pause the" }, { "end": 2760.8, "start": 2747.8, "text": " game to fit come on is genius right like it is objectively genius and then of" }, { "end": 2765, "start": 2760.8, "text": " course fairness and algorithmic decision-making right so they say" }, { "end": 2770.7200000000003, "start": 2765, "text": " understanding these shortcuts and they they touch on a lot of the things that" }, { "end": 2780.2799999999997, "start": 2770.72, "text": " I've touched on including these 
what I find what I find well is for example this" }, { "end": 2783.56, "start": 2780.2799999999997, "text": " Morgan's can for machine learning where you say probably a machine learning" }, { "end": 2788.9199999999996, "start": 2783.56, "text": " system will learn the easiest feature it can and that's oftentimes not what you" }, { "end": 2794.2799999999997, "start": 2788.9199999999996, "text": " want right so this even now amplifies things they also touch on this thing of" }, { "end": 2800.92, "start": 2794.28, "text": " anthropomorphism where you view everything through a human lens and that" }, { "end": 2806.44, "start": 2800.92, "text": " is not correct if you look at these neural networks they're not humans and" }, { "end": 2812.48, "start": 2806.44, "text": " we should never attribute human nests to their solutions never attribute to high" }, { "end": 2815.96, "start": 2812.48, "text": " level abilities that which can be adequately explained by shortcut" }, { "end": 2821, "start": 2815.96, "text": " learning yes I agree with this like I agree with this paper in in all the" }, { "end": 2827.04, "start": 2821, "text": " things it says right except this detecting shortcuts making od" }, { "end": 2832.12, "start": 2827.04, "text": " generalization tests a standard practice for the reasons I specified before I" }, { "end": 2838.52, "start": 2832.12, "text": " think that is counterproductive and yeah I think I've already said enough" }, { "end": 2844.58, "start": 2838.52, "text": " right designing good od tests this you can only design good od tests if you" }, { "end": 2853, "start": 2844.58, "text": " know the real underlying data distribution which you don't and let's" }, { "end": 2856.88, "start": 2853, "text": " go through yeah again the principle of least effort they say why are they" }, { "end": 2865.84, "start": 2856.88, "text": " learned because it's just easier right to it's just easier to write a news story" }, { "end": 2872.3199999999997, "start": 2865.84, "text": " with just the words you know people will click on right like or these top 10" }, { "end": 2877, "start": 2872.32, "text": " things of blah blah blah number seven will surprise you you don't actually" }, { "end": 2883.7200000000003, "start": 2877, "text": " have to come up with 10 relevant things the entire title is enough to get you" }, { "end": 2890.8, "start": 2883.7200000000003, "text": " the clicks so it's the least effort to solve the cost function might not align" }, { "end": 2898.36, "start": 2890.8, "text": " with what you want and also the inductive biases as I said we are humans" }, { "end": 2904, "start": 2898.36, "text": " we have some inductive biases the neural networks don't have them and we need to" }, { "end": 2910.28, "start": 2904, "text": " take this into account but the solution is to make training data sets that take" }, { "end": 2917, "start": 2910.28, "text": " this into account all right they say beyond shortcut learnings is kind of an" }, { "end": 2925.04, "start": 2917, "text": " outlook and then a conclusion where they remind but we're already at some 45" }, { "end": 2930.7599999999998, "start": 2925.04, "text": " minutes of video and if you're still here like respect or maybe you just have" }, { "end": 2935.96, "start": 2930.7599999999998, "text": " this in the background and have some company during this time I will finish" }, { "end": 2942.92, "start": 2935.96, "text": " with saying thank you for watching and leave your comments since this is mostly" }, { "end": 2949.56, "start": 
2942.92, "text": " opinion I would be interested in hearing your comments on this with that I say" }, { "end": 2955.36, "start": 2949.56, "text": " bye bye" } ]
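As a compact restatement of the evaluation argument in the transcript segments above (my notation, not the paper's): the IID test set estimates risk under the single data-generating distribution, whereas what one would like is the risk averaged over all relevant real-world distributions; a single hand-crafted OOD test set replaces that outer average with a one-sample estimate, hence the variance complaint.

```latex
% Risk under the data-generating distribution D (what train/IID-test measure):
R_D(f) = \mathbb{E}_{(x,y)\sim D}\left[\ell(f(x), y)\right]

% What one would like to assess: risk averaged over all relevant
% real-world distributions W:
R(f) = \mathbb{E}_{D'\sim W}\Big[\mathbb{E}_{(x,y)\sim D'}\left[\ell(f(x), y)\right]\Big]

% A single OOD test set approximates the outer expectation with a
% one-term sum (the "sum from i = 1 to 1" in the transcript), so the
% resulting estimate has very high variance:
\hat{R}(f) = \frac{1}{1}\sum_{i=1}^{1} \mathbb{E}_{(x,y)\sim D'_i}\left[\ell(f(x), y)\right]
```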
Ok44otx90D4
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Feature Visualization & The OpenAI microscope
[ "Science & Technology" ]
[ "deep learning", "machine learning", "imagenet", "visualization", "features", "intermediate", "hidden layers", "activations", "patterns", "openai", "google", "interactive", "explanation" ]
A closer look at the OpenAI microscope, a database of visualizations of the inner workings of ImageNet classifiers, along with an explanation of how to obtain these visualizations. https://distill.pub/2017/feature-visualization/ https://microscope.openai.com/models https://github.com/tensorflow/lucid Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Hi there! Today we're going to take a look at the OpenAI microscope and this article on Distill called Feature Visualization. The Feature Visualization article is by Chris Olah, Alexander Mordvintsev and Ludwig Schubert of the Google Brain team, while the OpenAI microscope is by OpenAI, so keep that in mind. These are tools for visualizing what neural networks learn, and specifically we're dealing with image classifiers here, even more specifically with ImageNet classifiers. You know ImageNet: it's this big data set with a thousand classes of images, the images are somewhere around 200 by 200 pixels, and you're supposed to put each one into one of these 1000 classes. Networks have become really good at this kind of thing, so our question is: what do these networks actually learn? There are a number of very cool works that have started to investigate this. It started with work like Deep Dream, or even before that, but this article is a very good summary and overview. In the article they first showcase these patterns here. You can see that in a very low layer of the network (this is layer conv2d0) you have what look like plain pattern detectors. These are the things the network is excited by; we'll get to how you create these images in a second. For now, just know that these are the things that these particular layers in the network are most excited by. So this low layer is super excited by these textures, but as you go higher the network gets excited by other kinds of textures, and then by progressively more complex things. The pattern, and people have been seeing, claiming and measuring this since the days of RBF networks and whatnot, is that the higher you go in these networks, the more complex the features they build. The hypothesis is that the higher layers build very complex features out of the less complex features of the lower layers, all the way down to the very bottom layer, which simply extracts edges and texture patterns like these. In the top layers all of these are hierarchically assembled to give you very intricate features. These usually look pretty funky, which is why I like to investigate them. So this article focuses on how this is done, and the answer is: by optimization. Now what do we mean by that? I actually have the article here somewhat printed out, but the graphics don't really print very well to the notebook, so imagine this. What you want to do is see how much activation there is in the network for a particular input, and you can do this in many different ways. The easiest form, let's actually start over here, is this: you have a neural network, this is the softmax classifier, and these are the classes; let's say this is dog, this is cat, and this last one, let's go with house. You can think: okay, I want to know what the network thinks of cats, what it sees when it sees a cat. So what I would do is take an image that is just random noise, like this one on the left, and start to optimize this image using backpropagation. Now, you usually know backpropagation as the thing that optimizes the weights when we are given an image and a label.
But right now we have to rethink things: what we are given is the label and the weights. We keep those constant, and instead we optimize the input image to maximize this label as much as possible. So we ask of x: please update yourself such that the output is as much "cat" as possible, and we just optimize for that. We hope that this picture turns out to be as much cat as we like. And usually (I don't know exactly which class they show here) you won't get a cat. You won't get a cat here, nor if you optimize the logits instead, which is the same thing just before the softmax. What you will get is some weird, trippy thing. In our classifier you might get something like this: there's a cat here, but there's also one here, because two cats are more "cat" than one cat; there's a giant cat head here, and inside the cat head there is another cat head; there's a cat tail somewhere here, and again a cat eye right here. So you get something super trippy that is as much cat as possible. Alright, this is somewhat interesting, because you can find out what the network thinks is the most cat-like thing there is and the most dog-like thing there is. But you can also see what the intermediate layers of the network get excited by. For example, you can take an individual neuron in one of the layers. This here might be a convolutional layer with its convolutional filters; these are the different channels of the convolutional layer, and this thing here is a single neuron within one channel. You can say: okay, we again input the image x, and we optimize x such that, with the given weights, this particular neuron, let's call it n (maybe this neuron right here), is activated as much as possible. So we no longer optimize for a label but for a particular neuron to be activated as much as possible, and then we can see what kind of image activates this neuron the most (a code sketch of this procedure follows below). You can do the same thing with an entire channel, if you simply ask when the channel as a whole is maximally activated, and something like Deep Dream did the same thing with an entire layer of the neural network: what is this layer activated by? You can imagine there isn't only one image that maximally activates it; depending on where you start, you'll get different results, and we'll go into that later as well. So these are the kinds of things you can do to investigate these neural networks and see what they pay attention to in each of the layers. Let's go on. They say: okay, if we visualize by optimization, you get something like the bottom row here, which looks pretty funky. But what you can also do is visualize by what are called dataset examples. With dataset examples you don't run the optimization procedure; instead you go into your data set and find the images that activate a particular neuron the most, the x_i from your data set. You simply sort all the images in your data set and pick the ten or so that activate that particular neuron the most, and that also gives you an understanding; it's actually a valid approach.
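To make this optimization procedure concrete, here is a minimal sketch in PyTorch. This is my own illustration, not the authors' code: the choice of model, learning rate, step count and target class are assumptions, and it omits the input normalization and most of the careful regularization that the Distill article uses (regularization comes up again further down).

```python
import torch
import torchvision.models as models

# Load a pretrained ImageNet classifier and freeze its weights.
model = models.googlenet(pretrained=True).eval()
for p in model.parameters():
    p.requires_grad_(False)

# Start from random noise and optimize the *input image*, not the weights.
img = torch.randn(1, 3, 224, 224, requires_grad=True)
optimizer = torch.optim.Adam([img], lr=0.05)

target_class = 281  # ImageNet class "tabby cat", an illustrative choice

for step in range(512):
    optimizer.zero_grad()
    # A small random shift each step is a cheap regularizer: without it,
    # plain optimization drifts toward high-frequency, adversarial-looking
    # noise (the regularization issue mentioned later in the video).
    dx, dy = torch.randint(-8, 9, (2,))
    jittered = torch.roll(img, shifts=(int(dx), int(dy)), dims=(2, 3))
    logits = model(jittered)
    # Gradient ascent on the class logit = descent on its negative.
    loss = -logits[0, target_class]
    loss.backward()
    optimizer.step()
```

To visualize a hidden neuron or channel instead of a class, you would register a forward hook on the layer of interest and maximize that captured activation in place of the logit.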
The OpenAI microscope combines both of them: on the bottom you see what the particular neurons are most excited by, and at the top you see the dataset examples they are most excited by. You can see that there's a healthy diversity in the dataset examples, but they also all map onto what the unit is most excited by; it's pretty cool. The last point they make in this article is that of diversity. With dataset examples you naturally get a somewhat diverse set of images. With optimization, though, you can only either fully maximize the activation, or take the negative and ask what the neuron is not excited by at all; you won't get a spectrum of what the neuron, unit or layer is excited by. So what they do is simply add a diversity term; they say that works best. What does that mean? It means that when we optimize x to maximize the activation of n, we don't optimize a single x by itself; we optimize an entire set x_1 to x_B, feeding a whole mini-batch in there, and we maximize n while also maximizing a diversity term, call it D, between x_1 and x_B. So you maximize the activation of the neuron, but at the same time your loss function contains this diversity term which says the images you produce should be far apart from each other, and thus you get diverse samples (see the code sketch after this paragraph for one way to write such a term). The printing again doesn't work here, but you can see that if you just optimize, you get the thing on the left, whereas if you take a batch of images and optimize them to be diverse from each other while also activating the unit, you get a variety of high activations. You can see that this unit responds to some sort of curve: this could be the beak of a bird, but on the right it could also be the snout of a monkey or something. So it's just curvy things that it's activated by. They give another example where you can clearly see there is some sort of eye in the picture, but if you optimize with diversity, some of the results do not have this eye, and in fact some of the dataset examples don't have it either. So it can be quite revealing to optimize with diversity, and they say that in higher layers the results get even more diverse, as with this ball detector. They also research interactions between neurons, where they can interpolate between them. You can select two different units: say the top left one is the curvature detector we've just seen, and on the right we select this unit that appears to be activated by bird-like things. If you optimize an image that activates both of them, you get the thing on the bottom left, and this is very good for understanding how neural networks work, because that is exactly what a neural network does: it takes the things on the top left and top right from lower layers and combines them into features of higher layers. So while the top right thing looks like generic birds, the bottom left looks much more like birds with long, curved necks, more stork-ish birds, because we've added in the curvature part.
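A hedged sketch of what such a diversity term can look like in code; the cosine-similarity penalty below is one common choice and my own guess at the details, while the Distill article discusses several variants:

```python
import torch
import torch.nn.functional as F

def diversity_penalty(acts: torch.Tensor) -> torch.Tensor:
    """Mean pairwise cosine similarity of a batch of hidden activations.

    acts: activations of the optimized images at some layer, shape (B, ...).
    Minimizing this pushes the B images to activate the unit in *different*
    ways, which is what yields the diverse samples.
    """
    flat = F.normalize(acts.flatten(1), dim=1)    # (B, D), unit norm
    sim = flat @ flat.t()                         # (B, B) cosine similarities
    off_diag = sim - torch.eye(len(flat))         # zero out self-similarity
    return off_diag.sum() / (len(flat) * (len(flat) - 1))

# Inside the optimization loop, for a mini-batch of optimized images
# (assuming hidden_acts has shape (B, C) for the layer of interest):
#   neuron_act = hidden_acts[:, unit_index].mean()   # term to maximize
#   loss = -neuron_act + lam * diversity_penalty(hidden_acts)
# where `lam` (an assumed hyperparameter) trades off activation strength
# against diversity.
```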
So this is very cool to play around with; you can also interpolate between two neurons, like you would interpolate in a GAN. They do make a point about regularization. I don't want to go into that in particular, but you have to be careful: you can't just apply the optimization procedure as I described it. You actually have to do some regularization to get rid of what are essentially adversarial examples arising in this process, because if you just straight-up optimize, you will get pretty high-frequency, crappy results. I now want to jump over to the OpenAI microscope. This is a tool that lets you explore these visualizations. At the beginning you can pick one of these models, and I'll pick Inception v1, just because for some of the other ones not all the visualizations are done quite yet. On the right here you can actually see the architecture of the network; if you know what an Inception network is, this is what it looks like, and you would be able to select one of its units straight away from here, but we're going to go to the left instead. You have Deep Dream activated, which means the entire layer is optimized for, so per layer you get an image on the right side. You can already see that if we go up from the bottom, as we saw before, we get patterns that become more and more complex as you go up the layers, until finally what the network appears to be most activated by is mostly dogs, which makes sense because ImageNet is dominated by dogs. You can click on any of these layers, like this one, and now you'll be able to inspect the individual nodes in that layer. Before, we had the whole layer; the whole layer was activated by something. But these layers have different channels, and also different neurons within each channel. So you can select neuron activation or channel activation, and these are the images that these channels are most excited by; you get pretty funky patterns. If we select an interesting one, maybe this one right here, you can see the channel optimization on the left and the neuron optimization on the right, and here you get the dataset examples that most activate this particular channel. You can see this is pretty similar to the thing I drew, except instead of a cat it's some sort of fox-dog thing. And you can explore the neural network in this fashion. You can go through the layer here and look at units; this one seems to be a whiskers classifier, and lo and behold, things with whiskers activate the neuron. As you go up the layers, and this is the cool thing, you will see more and more intricate patterns of activation. I could play around with this for a very long time, but I won't waste your time too much. There is a Slack workspace where people discuss interesting patterns (what is this? okay, yes, this is some sort of temple constructor, very cool). For example, they discuss how the car detector that you see right here, one of literally endless units to look at, can clearly be seen to be built from lower-level features.
The wheel detector here is unit 337 in one of the mixed4 layers, and this car hood detector right here is unit 237, also in one of the layer-4 blocks, so these are both from layer 4. And then the car detector itself... I haven't looked this up; is it in layer 4 as well? Let's check it out: this is in layer 4b, this is in layer 4b, and this is in layer 4c. Ah, so you see, this was a risk. The car detector is built from the lower-level features of car hood and car wheel: the car wheel unit detects wheels, the car hood unit detects hoods, and then the car detector detects cars. I really invite you to go look at it, check out what people find, and explore these models. All of this is based on the Lucid library, which I also invite you to check out; you can perform such optimizations yourself with it (a minimal usage sketch follows right below). I'll link to that, and with that: bye bye.
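For reference, a minimal usage sketch of the Lucid library mentioned here, reproduced from memory of its tutorial; the module paths, the TensorFlow-1.x requirement, and the exact layer/channel name are best-effort recollections and may differ between versions:

```python
# pip install lucid   (a TensorFlow 1.x era library)
import lucid.modelzoo.vision_models as models
from lucid.optvis import render

# The same InceptionV1 (GoogLeNet) model the OpenAI microscope shows.
model = models.InceptionV1()
model.load_graphdef()

# Run the feature-visualization optimization for one channel;
# "layer_name:channel_index" selects what to maximize.
_ = render.render_vis(model, "mixed4a_pre_relu:476")
```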
[ { "end": 6.8, "start": 0, "text": " Hi there! Today we're going to take a look at the OpenAI microscope and this" }, { "end": 11.46, "start": 6.8, "text": " article on Distill called Feature Visualization. So the Feature" }, { "end": 17.72, "start": 11.46, "text": " Visualization article is by Chris Ola, Alexander Mortwintsev and Ludwig" }, { "end": 25.32, "start": 17.72, "text": " Schubert of the Google Brain team, while the OpenAI microscope is by OpenAI." }, { "end": 31.64, "start": 25.32, "text": " So keep that in mind. These tools are tools for visualizing what neural" }, { "end": 37.56, "start": 31.64, "text": " networks learn and specifically here we're dealing with image classifiers and" }, { "end": 41.879999999999995, "start": 37.56, "text": " even more specifically ImageNet classifiers. So you know ImageNet is this big" }, { "end": 47.2, "start": 41.879999999999995, "text": " data set with a thousand classes of images and the images are somewhat like" }, { "end": 51.84, "start": 47.2, "text": " 200 by 200 pixels and you're just supposed to put them into one of these" }, { "end": 59.120000000000005, "start": 51.84, "text": " 1000 classes and networks have become really good at these kinds of things." }, { "end": 64.64, "start": 59.120000000000005, "text": " So our question is what do these networks learn? And there are a number of very cool" }, { "end": 71.12, "start": 64.64, "text": " works that have started to investigate what these networks learn. So this" }, { "end": 77.52000000000001, "start": 71.12, "text": " started with work like Deep Dream or even before that but this is a very good" }, { "end": 83.52, "start": 77.52, "text": " summary and also an overview of this. So in this article they first" }, { "end": 89.75999999999999, "start": 83.52, "text": " showcase these patterns here where you can see that as you go through the" }, { "end": 96.64, "start": 89.75999999999999, "text": " network in the layer, this is layer conf2d0, which is a very low layer in" }, { "end": 102.75999999999999, "start": 96.64, "text": " the network, you have things what looks like pattern, just pattern detectors. So" }, { "end": 106.56, "start": 102.75999999999999, "text": " these are the things that the network is excited by and we'll just get, we'll get" }, { "end": 113, "start": 106.56, "text": " in a second at how you create these things. Just be sure these are the things" }, { "end": 120.36, "start": 113, "text": " that these particular layers in the networks are most excited by. So this" }, { "end": 126.96000000000001, "start": 120.36, "text": " layer is super excited by these textures. But as you go higher the" }, { "end": 132.88, "start": 126.96000000000001, "text": " network gets excited by these kind of textures, then it gets excited by more" }, { "end": 138.96, "start": 132.88, "text": " complex things. So the pattern is the higher you go in the layers and" }, { "end": 143.72, "start": 138.96, "text": " these people have been seeing and claiming this and measuring this since" }, { "end": 152.24, "start": 143.72, "text": " RBF networks and whatnot, that the higher you go in these networks the" }, { "end": 159.07999999999998, "start": 152.24, "text": " more complex features that they build. 
So they always build hypothesis is they" }, { "end": 164.20000000000002, "start": 159.08, "text": " build very complex features from the lower layers and the lower layers they" }, { "end": 168.96, "start": 164.20000000000002, "text": " have less less complex features and until you're the very bottom layer they" }, { "end": 174.64000000000001, "start": 168.96, "text": " simply extract edges and patterns of texture like this. Whereas in the top" }, { "end": 180.60000000000002, "start": 174.64000000000001, "text": " layers all of these are hierarchically assembled to give you very very intricate" }, { "end": 188.08, "start": 180.60000000000002, "text": " features. So these usually look pretty funky that's why I like to investigate." }, { "end": 194.84, "start": 188.08, "text": " So this article focuses on how is this done and the answer is by optimization." }, { "end": 199.4, "start": 194.84, "text": " Now what do we mean? I actually have the article here somewhat printed out but" }, { "end": 206.84, "start": 199.4, "text": " the graphics they don't really print very well to the notebook here. So" }, { "end": 217.52, "start": 206.84, "text": " imagine this here. So what you want to do is you want to see how much" }, { "end": 222.32000000000002, "start": 217.52, "text": " activation is in the network for a particular input and you can do this in" }, { "end": 229.8, "start": 222.32000000000002, "text": " many many different ways. The easiest form, let's actually start over here, if" }, { "end": 235.76000000000002, "start": 229.8, "text": " you have a neural network here and this is the softmax classifier and these are" }, { "end": 242.96, "start": 235.76000000000002, "text": " the classes in this case let's say this is dog, this is cat and this is car, house." }, { "end": 253.64000000000001, "start": 242.96, "text": " Let's go with house. You can think of okay I want to know when" }, { "end": 259, "start": 253.64000000000001, "text": " the network sees a cat. What does the network think of cats? So what I would do" }, { "end": 267.2, "start": 259, "text": " is I would take an image that is just noise, just random noise like this" }, { "end": 274.28, "start": 267.2, "text": " one on the left here and I would start to optimize using backpropagation. I would" }, { "end": 279.12, "start": 274.28, "text": " start to optimize this image. Now you usually know backpropagation as the" }, { "end": 284.4, "start": 279.12, "text": " thing that will optimize these weights if we have given an image and a label." }, { "end": 291.2, "start": 284.4, "text": " But right now you have to, so and we optimize the weights here, sorry I" }, { "end": 297.92, "start": 291.2, "text": " should say that, but right now you have to rethink what we have given is the" }, { "end": 303.03999999999996, "start": 297.92, "text": " label and the weights. We keep them constant but we optimize the input image" }, { "end": 309.84, "start": 303.03999999999996, "text": " to maximize this label as much as possible. So we ask of xx please please" }, { "end": 316.71999999999997, "start": 309.84, "text": " update yourself such that the output is as much cat as possible. And then" }, { "end": 325.72, "start": 316.72, "text": " we just optimize for that. So we hope that this picture would turn out to" }, { "end": 333.32000000000005, "start": 325.72, "text": " be as much cat as we like. And usually here, I don't know exactly which class" }, { "end": 340, "start": 333.32000000000005, "text": " they have here, but you won't get a cat. 
Sorry over here. You won't get a cat or" }, { "end": 343.76000000000005, "start": 340, "text": " here if you optimize the logits it's the same, just you do the same thing before" }, { "end": 350.4, "start": 343.76, "text": " the softmax. What you will get is some weird trippy thing. So in our classifier" }, { "end": 358.03999999999996, "start": 350.4, "text": " you might get something like there's a cat here but there's also one here right" }, { "end": 362.64, "start": 358.03999999999996, "text": " because two cats are more cat than one cat and there's like a giant cat head" }, { "end": 368.52, "start": 362.64, "text": " here and inside the cat head there is another cat head and there's like a cat" }, { "end": 373.96, "start": 368.52, "text": " tail somewhere here and again there's a cat eye right here. So you get like" }, { "end": 378.71999999999997, "start": 373.96, "text": " something super trippy that is as much cat as possible." }, { "end": 385.32, "start": 378.71999999999997, "text": " Alright now so this is somewhat interesting because you can find" }, { "end": 389.2, "start": 385.32, "text": " out what does the network think is the most cat-like thing there is and" }, { "end": 394.59999999999997, "start": 389.2, "text": " the most dog-like thing there is. But what you can also do is you can see what" }, { "end": 400.8, "start": 394.6, "text": " do the intermediate layers of the networks, what do they get excited by. So" }, { "end": 405.12, "start": 400.8, "text": " what you can do is for example you can take an individual neuron in one of the" }, { "end": 410.88, "start": 405.12, "text": " layers right so this here might be a convolutional layer with its" }, { "end": 414.96000000000004, "start": 410.88, "text": " convolutional filters right and these are the different channels of the" }, { "end": 421.12, "start": 414.96000000000004, "text": " convolutional layer and then this thing here is just a single neuron there and" }, { "end": 433.72, "start": 421.12, "text": " what you can do is you can say okay X we have now again we input the" }, { "end": 441.64, "start": 433.72, "text": " image X and we optimize the image X such that with a given weight this" }, { "end": 447.92, "start": 441.64, "text": " particular neuron let's call it N and N is maybe this neuron right" }, { "end": 454.68, "start": 447.92, "text": " here right this particular neuron is as much activated as possible right so we" }, { "end": 459.08000000000004, "start": 454.68, "text": " no longer optimize for a label but we optimize for a particular neuron to be" }, { "end": 466.16, "start": 459.08000000000004, "text": " activated as much as possible and then we can sort of see what is an image that" }, { "end": 473.24, "start": 466.16, "text": " activates this neuron as much as possible in this case that thing you can" }, { "end": 476.88, "start": 473.24, "text": " actually do the same thing with an entire channel if you simply say when is" }, { "end": 482.08, "start": 476.88, "text": " the channel as a whole activated as much as possible and something like deep" }, { "end": 486.24, "start": 482.08, "text": " dream did the same thing but with an entire layer in the neural network like" }, { "end": 492.84, "start": 486.24, "text": " what is this layer activated by they can imagine there's not only one image that" }, { "end": 499.28, "start": 492.84, "text": " activates it the most but probably depending on your start here it's it's" }, { "end": 505.96, "start": 499.28, "text": " it's a you'll get different different 
results for depending on where you start" }, { "end": 511.79999999999995, "start": 505.96, "text": " we'll go into that later as well so these are the kinds of things you can do" }, { "end": 517, "start": 511.79999999999995, "text": " to investigate these neural networks and see what they what they pay attention to" }, { "end": 528.64, "start": 517, "text": " in each of the layers right so let's go on here and so they say they say okay if" }, { "end": 535.16, "start": 528.64, "text": " we optimize by optimization you get something like this here on the bottom" }, { "end": 542.52, "start": 535.16, "text": " row looks pretty funky right but what you could also do is you can visualize by" }, { "end": 547.52, "start": 542.52, "text": " what there's called data set examples and date with data set examples you" }, { "end": 552.64, "start": 547.52, "text": " don't do this thing you don't do the optimization procedure but you go into" }, { "end": 560.6, "start": 552.64, "text": " your into your database into your data set and you find the images that" }, { "end": 566.2, "start": 560.6, "text": " activate so the XI from your data set that activate a particular neuron the" }, { "end": 570, "start": 566.2, "text": " most so you simply sort all the images in your database and you just pick the" }, { "end": 574.9200000000001, "start": 570, "text": " ten or so that activate that particular neuron the most and that would also" }, { "end": 580.72, "start": 574.9200000000001, "text": " give you an understanding it's actually a valid thing and the AI microscope" }, { "end": 585.4, "start": 580.72, "text": " combines both of them so on the bottom you see what particular neurons are most" }, { "end": 592.72, "start": 585.4, "text": " excited by and at the top you see that that the data set examples they're most" }, { "end": 597.56, "start": 592.72, "text": " excited by so you can see that there is a there's a healthy diversity in the" }, { "end": 602.4, "start": 597.56, "text": " data set examples but they also all kind of map to the to the what it's most" }, { "end": 609.92, "start": 602.4, "text": " excited by it's pretty cool the last point they make in this article is that" }, { "end": 618.04, "start": 609.92, "text": " of diversity with the data set examples you do get naturally sort of a diverse" }, { "end": 622.36, "start": 618.04, "text": " set of images of course where you can guess you can also say okay whether the" }, { "end": 626.56, "start": 622.36, "text": " maximum activations and only slightly positive you can even give negative" }, { "end": 633.68, "start": 626.56, "text": " examples but with the positive you can only either maximize fully or you can" }, { "end": 638, "start": 633.68, "text": " you know take the negative and you can say okay what is this neuron not excited" }, { "end": 645.12, "start": 638, "text": " by at all but you you won't get kind of a spectrum of what the neuron is" }, { "end": 650.8, "start": 645.12, "text": " excited by or the unit or the layer and what they're doing is simply they add a" }, { "end": 657.52, "start": 650.8, "text": " diversity term they say that works best so what does that mean it means that" }, { "end": 666.32, "start": 657.52, "text": " here if we optimize X and Y right let's go up here if we optimize X in that" }, { "end": 675, "start": 666.32, "text": " right and try to maximize the activation of n we don't want to do X by itself but" }, { "end": 683.12, "start": 675, "text": " we want to do is we want to do an entire set of X I right so we feed 
an entire" }, { "end": 692.12, "start": 683.12, "text": " mini batch in there and we want to maximize n but also maximize a diversity" }, { "end": 703.64, "start": 692.12, "text": " term let's call that D between X 1 to X be right so you want to maximize the" }, { "end": 707.32, "start": 703.64, "text": " activation of the neuron but at the same time in your loss function that you" }, { "end": 713.12, "start": 707.32, "text": " optimize you also have this diversity term where you say the images that you" }, { "end": 717.32, "start": 713.12, "text": " produce should be far apart from each other or kind of apart from each other" }, { "end": 725.5600000000001, "start": 717.32, "text": " and thus you do get diverse samples okay the printing again doesn't work so here" }, { "end": 730.96, "start": 725.5600000000001, "text": " you see that if you just simply optimize you get the thing on the left but if you" }, { "end": 736.08, "start": 730.96, "text": " have a batch of things and you optimize them to be diverse from each other but" }, { "end": 742.7600000000001, "start": 736.08, "text": " also activate the layer or the unit you get a variety of high activations and" }, { "end": 747.76, "start": 742.76, "text": " you can see here that this is some sort of curve this could be a beak of a bird" }, { "end": 752.76, "start": 747.76, "text": " but on the right here this could also be kind of a snout of a monkey or something" }, { "end": 760.24, "start": 752.76, "text": " okay so it's just curvy curvy things that is activated by here they give" }, { "end": 764.8, "start": 760.24, "text": " another example you can clearly see that there is like some sort of eye in the" }, { "end": 769.4, "start": 764.8, "text": " picture but then if you optimize with diversity you can see that some of them" }, { "end": 776.88, "start": 769.4, "text": " do not have this eye thing and in fact also some of the data set examples do" }, { "end": 782.64, "start": 776.88, "text": " not have this eye thing so it might be interesting to to to optimize here with" }, { "end": 790.88, "start": 782.64, "text": " diversity and even in they say in higher layers it gets even more more diverse" }, { "end": 800.12, "start": 790.88, "text": " with what you achieve with this ball detector they also say they research" }, { "end": 805.28, "start": 800.12, "text": " interactions between neurons where they can interpolate between them right so" }, { "end": 811.76, "start": 805.28, "text": " you can here have two different new units let's say this top left one here" }, { "end": 816.44, "start": 811.76, "text": " is the thing that we've just seen with the curvature activator and then on the" }, { "end": 821.6, "start": 816.44, "text": " right we can select this thing here that is appears to be activated by these bird" }, { "end": 826.0400000000001, "start": 821.6, "text": " like bird like things and if you optimize an image that activates both of" }, { "end": 830.4000000000001, "start": 826.0400000000001, "text": " them you get the thing on the bottom left and this is very good for" }, { "end": 834.4000000000001, "start": 830.4000000000001, "text": " understanding how neural networks work because what a neural network will do is" }, { "end": 839.5600000000001, "start": 834.4000000000001, "text": " exactly it will take the thing on the top left and the top right from lower" }, { "end": 844.48, "start": 839.5600000000001, "text": " layers and it will combine them to form features of higher layers so while the" }, { "end": 850.52, "start": 844.48, 
"text": " top right thing looks like generic birds the bottom left thing looks much more" }, { "end": 854.6, "start": 850.52, "text": " like birds with let's say long necks and then kind of curved necks so more" }, { "end": 861.9200000000001, "start": 854.6, "text": " stork ish birds right because we've added in this curvature thing so this is" }, { "end": 867.02, "start": 861.9200000000001, "text": " very very cool to play around you can also here interpolate between into" }, { "end": 875.92, "start": 867.02, "text": " neurons like you would interpolate in a in a GAN right and yeah so they do make" }, { "end": 880.16, "start": 875.92, "text": " a point of regularization I don't want to go into that particularly but you" }, { "end": 885, "start": 880.16, "text": " have to be careful you can't just apply the optimization procedure as I said" }, { "end": 891.52, "start": 885, "text": " right now you have to actually have to do some regularization and to get to" }, { "end": 895.36, "start": 891.52, "text": " get rid of what are essentially adversarial examples in this process" }, { "end": 901.48, "start": 895.36, "text": " because if you just straight-up optimize you will get pretty high frequency" }, { "end": 909.84, "start": 901.48, "text": " crappy results I actually want to jump over now to this OpenAI microscope so" }, { "end": 914.6800000000001, "start": 909.84, "text": " this is a tool that lets you explore these visualizations so at the beginning" }, { "end": 919.4, "start": 914.6800000000001, "text": " you can pick one of these models and I'll pick inception v1 just because some" }, { "end": 925.8, "start": 919.4, "text": " of the other ones they don't have everything done quite yet the all the" }, { "end": 931.84, "start": 925.8, "text": " all the all the visualization so on the right here you can actually see the" }, { "end": 935.88, "start": 931.84, "text": " architecture of the network if you know what an inception network is this what" }, { "end": 941.28, "start": 935.88, "text": " this looks like and you would be able from here to select one of these units" }, { "end": 948.48, "start": 941.28, "text": " straight away but I'm gonna sorry we're gonna go to the left here so you have" }, { "end": 958.24, "start": 948.48, "text": " deep dream activated which means the entire layer is optimized for so per" }, { "end": 964.44, "start": 958.24, "text": " layer on the right side you have an image here and you can already see that" }, { "end": 970.08, "start": 964.44, "text": " if we go from the bottom what we saw before we get patterns that become more" }, { "end": 976.5600000000001, "start": 970.08, "text": " and more complex as you go up the layers and then more and more until you finally" }, { "end": 982.4, "start": 976.56, "text": " have what the network appears to be most activated by is mostly dogs which is" }, { "end": 989.5999999999999, "start": 982.4, "text": " okay because image net is dominated by dogs so you can click on any of these" }, { "end": 998.9599999999999, "start": 989.5999999999999, "text": " right here like this one and now you'll be able to inspect the individual nodes" }, { "end": 1002.4, "start": 998.9599999999999, "text": " in this layer so before we had the whole layer right the whole layer was" }, { "end": 1007.9599999999999, "start": 1002.4, "text": " activated by something but these layers they have different channels and also" }, { "end": 1013.12, "start": 1007.9599999999999, "text": " different neurons within the channel so you can select this here you 
can go" }, { "end": 1021.52, "start": 1013.12, "text": " neuron activation or channel activation and these are the images that these" }, { "end": 1027.08, "start": 1021.52, "text": " channels are excited by the most you see you get pretty funky pattern if we" }, { "end": 1034.6, "start": 1027.08, "text": " select one interesting one maybe this this one right here you can see on the" }, { "end": 1038.32, "start": 1034.6, "text": " left this is the channel optimizing optimization on the right this is the" }, { "end": 1044.48, "start": 1038.32, "text": " neuron optimization and here you get the data set examples that are most" }, { "end": 1053.32, "start": 1044.48, "text": " activated that mostly activate this particular channel or neuron so sorry" }, { "end": 1060.24, "start": 1053.32, "text": " this particular yeah channel so you can see this is pretty similar to the thing" }, { "end": 1068, "start": 1060.24, "text": " I drew where except for it being a cat it's some sort of a fox dog thing right" }, { "end": 1074.8799999999999, "start": 1068, "text": " and and you can explore the neural network in this fashion so you can go" }, { "end": 1085.2800000000002, "start": 1074.88, "text": " through the layer here and look at that good units this seems to be whiskers" }, { "end": 1094.1200000000001, "start": 1085.2800000000002, "text": " classifier and lo and behold things with whiskers will activate the the neuron" }, { "end": 1098.24, "start": 1094.1200000000001, "text": " and as you go up the layer and this is the the cool thing right so we're right" }, { "end": 1103.2800000000002, "start": 1098.24, "text": " now we're here in this layer for as you go up you will see more and more" }, { "end": 1112.52, "start": 1103.28, "text": " intricate patterns of activations I I could play around this for very very" }, { "end": 1119.76, "start": 1112.52, "text": " long time but I won't I won't waste your time too much they there is a dist sorry" }, { "end": 1128.16, "start": 1119.76, "text": " there is a slack workspace where people discuss interesting patterns what is" }, { "end": 1141.52, "start": 1128.16, "text": " this okay yes this is a some sort of temple temple constructor very cool" }, { "end": 1146.92, "start": 1141.52, "text": " there is a slack workspace where people discuss interesting things for example" }, { "end": 1153.6000000000001, "start": 1146.92, "text": " they discuss how the car detector that you see right here is one of the units" }, { "end": 1159.6, "start": 1153.6, "text": " there is literally endless units to look at that detects cars can be clearly seen" }, { "end": 1165.12, "start": 1159.6, "text": " to be built from lower level features such as this wheel detector you see this" }, { "end": 1175.24, "start": 1165.12, "text": " wheel detector here is the unit three three seven in the mixed four layer and" }, { "end": 1183.52, "start": 1175.24, "text": " this car hood detector right here is unit two three seven also in the in one" }, { "end": 1188.32, "start": 1183.52, "text": " of the layer fours right so these are both from layer four and then the car" }, { "end": 1194.88, "start": 1188.32, "text": " detector I haven't looked this up ah isn't layer four as well but let's check" }, { "end": 1203.08, "start": 1194.88, "text": " it out this isn't layer 4b this isn't layer 4b and this isn't layer 4c ah so" }, { "end": 1211.8, "start": 1203.08, "text": " you see I this was a risk the car detector is built from lower level" }, { "end": 1218.36, "start": 1211.8, "text": " features of 
car hood and car wheel right the car wheel right here detects wheels" }, { "end": 1226.4399999999998, "start": 1218.36, "text": " and the car hood detector detects hoods and then the car detector detects cars so" }, { "end": 1232.24, "start": 1226.4399999999998, "text": " there are very like I really invite you to go look at it check out what people" }, { "end": 1239.36, "start": 1232.24, "text": " find and explore these models all of this is based on this lucid library" }, { "end": 1244.48, "start": 1239.36, "text": " right here also invite you to check that out where you can perform such" }, { "end": 1262.68, "start": 1244.48, "text": " optimizations yourself I'll link to that and with that bye bye" } ]
-h1KB8ps11A
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Datasets for Data-Driven Reinforcement Learning
[ "Science & Technology" ]
[ "deep learning", "machine learning", "reinforcement learning", "deep rl", "off-policy", "on-policy", "replay buffer", "dataset", "benchmark", "berkeley", "rail", "offline", "online" ]
Offline Reinforcement Learning has come more and more into focus recently in domains where classic on-policy RL algorithms are infeasible to train, such as safety-critical tasks or learning from expert demonstrations. This paper presents an extensive benchmark for evaluating offline RL algorithms in a variety of settings. Paper: https://arxiv.org/abs/2004.07219 Code: https://github.com/rail-berkeley/offline_rl Abstract: The offline reinforcement learning (RL) problem, also referred to as batch RL, refers to the setting where a policy must be learned from a dataset of previously collected data, without additional online data collection. In supervised learning, large datasets and complex deep neural networks have fueled impressive progress, but in contrast, conventional RL algorithms must collect large amounts of on-policy data and have had little success leveraging previously collected datasets. As a result, existing RL benchmarks are not well-suited for the offline setting, making progress in this area difficult to measure. To design a benchmark tailored to offline RL, we start by outlining key properties of datasets relevant to applications of offline RL. Based on these properties, we design a set of benchmark tasks and datasets that evaluate offline RL algorithms under these conditions. Examples of such properties include: datasets generated via hand-designed controllers and human demonstrators, multi-objective datasets, where an agent can perform different tasks in the same environment, and datasets consisting of a heterogeneous mix of high-quality and low-quality trajectories. By designing the benchmark tasks and datasets to reflect properties of real-world offline RL problems, our benchmark will focus research effort on methods that drive substantial improvements not just on simulated benchmarks, but ultimately on the kinds of real-world problems where offline RL will have the largest impact. Authors: Justin Fu, Aviral Kumar, Ofir Nachum, George Tucker, Sergey Levine Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Hi there, today we're looking at Datasets for Data-Driven Reinforcement Learning by Justin Fu, Aviral Kumar, Ofir Nachum, George Tucker and Sergey Levine. So this is what you would call a dataset paper or a benchmark paper, and the main area of the paper is what's called offline reinforcement learning. So, offline reinforcement learning: usually in reinforcement learning you have this task, right? You have the agent and you have the environment. The agent gets some sort of observation and has to come up with an action in response to that observation. Then it gets back a reward and another observation, and again it has to come up with an action. The goal is to maximize the rewards over time that the agent gets while interacting with the environment. Usually this is organized in what are called episodes, which basically means: if you have some sort of environment, and here is the agent and here is the goal, the goal is an inverted triangle, and there are a bunch of walls right here, so it looks kind of like a maze that the agent has to navigate, then one episode could be the agent moving around until it either finds the target, or hits a wall, or just kind of goes around and around, and at some point you say, all right, that's enough, game over. Usually in reinforcement learning you perform many of these episodes and then you learn from them. So you perform episodes, and each episode usually goes into some sort of replay buffer, let's call this the replay buffer. You do this many times, and at the same time that you're doing this, you're using the things that you stored there in order to learn. So the agent learns from these things: it acts with the environment in this loop, and once it has done an episode, it puts it into the replay buffer and then it learns from the actions it has performed. This is what is usually called online reinforcement learning. The loop is online because the agent learns from its own actions. Now in contrast to this, there is offline reinforcement learning. In offline reinforcement learning, the agent has to learn from someone else's actions, so this connection here is severed. Instead you have other agents, let's call these agent one, agent two, agent three, multiple agents. They each have their own interaction with the environment, they perform these episodes, and they feed their experience into the replay buffer. And then our agent just has to learn from that. So whatever happened there happened previously, and now the agent has to learn how to maximize its reward just from the experience that is in the replay buffer from these other agents. This is what's called offline reinforcement learning: the agent learns from someone else's actions. Basically, the power of reinforcement learning of course comes from the fact that you learn from your own actions. It means that, for example, if you already have some successful trajectories, right, you found the target, you can try to replicate that because you know which actions you performed, and if you don't change anything, you're probably going to find the target again just by randomness, because you've done it already once and so on. So you kind of know all the intrinsics of your own algorithm that led you to reach the target.
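Roughly, that online loop looks like this in code. This is a minimal sketch assuming the classic gym API: the environment ID is an arbitrary example, and the random action stands in for a real learning agent's policy.

import random
from collections import deque

import gym

env = gym.make("CartPole-v1")          # any environment; the ID is just an example
replay_buffer = deque(maxlen=100_000)  # stores transitions

for episode in range(100):
    obs = env.reset()                  # classic gym API (gymnasium's differs slightly)
    done = False
    while not done:
        action = env.action_space.sample()       # placeholder for the agent's policy
        next_obs, reward, done, info = env.step(action)
        # Online RL: the agent's OWN transitions fill the buffer...
        replay_buffer.append((obs, action, reward, next_obs, done))
        obs = next_obs
    # ...and it learns from them while it keeps acting, e.g. something like:
    # batch = random.sample(replay_buffer, 64); agent.update(batch)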
Now this is an entirely different case with all of these other agents. You have no clue how they were acting or why they were acting. You just know, okay, they did a series of actions and that gave them some kind of reward, and you have no idea what their reasoning was or anything. All you really can learn from is their sequence of actions. Now why is that problematic? Say all of the agents act, for example, on an actual platform that is really steep here, all around here are really steep cliffs, and you can actually fall off. But the agents are humans, so they don't want to fall off. So what they're going to do is just take steps that are maybe like this or maybe like this, but they're humans, they're smart, they're never going to fall off here. Why is this a problem? If you're now trying to learn a policy from this experience, you do know what happens if you make a move like this, and you also know what happens if you make a move like this, because two humans have already done these moves. But what happens if you make a move like this, off the edge? You just don't know. In classic reinforcement learning, you would get a negative reward and you could learn from that to not do this action anymore. But in this case, you simply don't have any data to tell you what happens when you go off there. So you see that there's a problem if you are not able to learn from your own experience, but have to learn from someone else's experience. The distribution of experience that you have available to you might not be fully representative of the environment, it might be very different from what you would do, and it might not be very conducive to what you want to do with it. So the task of offline reinforcement learning is harder than online reinforcement learning. But it also has many, many applications. Sometimes it's just not possible to do online reinforcement learning. Think, for example, of the medical field, where you want a robot to perform a surgery. You can't just do reinforcement learning with our online techniques, because they're just going to try a bunch of things and see what works. Maybe you want that; I don't want that. So necessarily, you're going to be left with: let's have this robot learn from human experts. That's a task for offline reinforcement learning. There are many more tasks. For example, if you think of a search engine, you will have many, many logs from humans searching for things, and you simply store them, you simply have them in a buffer. Now you want to maybe train a reinforcement learning agent that serves the best possible ads or something like this, and you want to do this in a way that you can use all of that data, even though that data wasn't collected by that particular agent. The crucial difference to supervised learning, again, is that you have this interactive structure, this multi-step interactive structure. In supervised learning, you also have this buffer here: you simply have your labeled data set. But the difference is that in supervised learning you always know what the right action currently is, because you have the labels. In offline reinforcement learning, you don't know. You might be here, and there are three actions available.
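In code, the offline setting then looks schematically like this. This is a minimal sketch where the agent interface and the dataset format are hypothetical placeholders, not the paper's actual API.

import random

def train_offline(agent, dataset, steps=100_000, batch_size=256):
    # dataset: a FIXED list of (obs, action, reward, next_obs, done) tuples,
    # collected beforehand by other agents, humans, or planners.
    for _ in range(steps):
        batch = random.sample(dataset, batch_size)
        # Crucially, there is no env.step() anywhere: the agent can never ask
        # "what happens if I step off the cliff?" -- it only ever sees what
        # the demonstrators happened to do.
        agent.update(batch)
    return agent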
All you know is that the demonstrators, these actors here, one of them has done this, and then this, and then this, and then got a reward of two. You have no clue what happens if you do this, and then this, and then this. All you know is that this action here might eventually lead to a two. You also can't try it out, because you can't try out this other path, because you don't get a reward for it. You have to find, and this is the task here, some other example, or stitch experience together. They make a good example of this here. So this paper basically proposes a benchmark for offline RL algorithms. What they do is they have a bunch of data sets: they have a bunch of these replay buffers around for different tasks, a collection that they filled with various techniques. There are human demonstrations, there are other agents, and so on. You're supposed to take one of these data sets, learn an agent, and then evaluate it on an environment. They propose which ones are suitable for this, they give you the data, and they give you the environment to evaluate on. In the end, you'll get a score, and you can compare your offline RL algorithm with others. They also provide some benchmark implementations of algorithms that already do this, and they show that these don't really work that well. One of the tasks is this maze here. In this maze, the task is: you are somewhere, let's say here, and you need to go somewhere else, let's say here, and you need to find your way. The demonstrations you have, the data in your replay buffer, are for the same kind of task, but never with the same start and end points as you are tasked with. You might have one trajectory in your replay buffer, one episode, that went like this, from one to two, and you'll be able to see the reward of that. And you might have one trajectory that went from two to three, like this. Both of these things actually give you a really high reward. If you were an agent that had to learn, and the task is now: please go from one to three, what you could do is simply say: I know the green trajectory gave a pretty high reward, and the yellow trajectory gave a pretty high reward. I know the green one started at one, I know the yellow one ended at three, and I know they both pass through this common location. So what I might just do is go to that common location and then go on along the other path. So you have to somehow stitch together experience from other agents in order to make your task work. This is a very explicit example; of course, what we want to do is do this in a more implicit deep learning way, ideally, and not manually stitch together other trajectories. Though I'm pretty sure that would not be so dumb, right? I'm pretty sure there's a lot of data augmentation you could do during training simply by stitching together other trajectories. So from this trajectory, not only could you make other goal-conditioned variants, for example from here to here, or from here to here, you could go from here to anywhere where you have shared points: you could train a policy that goes there and then goes further, or something like this. I'm pretty sure there's already an algorithm that does things like this, but I'm just thinking aloud here. Alright, so this is one of the tasks, and you see that you will have to learn a policy to go as fast as possible from any point to any other point.
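The stitching idea can be made concrete even without any learning. Here is a toy sketch; the state names and trajectories are invented purely for illustration.

# Toy illustration of trajectory stitching: each trajectory is a list of states.
traj_green = ["p1", "a", "b", "c", "p2"]   # a demonstrated path from p1 to p2
traj_yellow = ["p2", "c", "d", "p3"]       # a demonstrated path from p2 to p3

def stitch(t1, t2):
    # Find the first state t1 shares with t2 and splice the two paths there.
    shared = next((s for s in t1 if s in t2), None)
    if shared is None:
        return None
    return t1[:t1.index(shared)] + t2[t2.index(shared):]

print(stitch(traj_green, traj_yellow))
# ['p1', 'a', 'b', 'c', 'd', 'p3'] -- a p1-to-p3 route no demonstrator ever took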
And all you're given is a database of experience that already exists from some other agent, but probably never the exact route that you need right now. Alright, so the question is: how fast or how efficiently can you do this? This is one task in this benchmark. The next task is very similar: it's this grid world here, where there is this red triangle, that's your agent, and then there is the green square, that's your goal, or vice versa. You're basically tasked to not hit the walls and to go about your way finding the target. There are more elaborate things, like this MuJoCo environment here, or the ant maze, where you have this little ant with, you know, the spider legs. So it's no longer the case that you can just move in any direction; you have to actually control the legs. And there's also this robotic arm. So you see, there is a wide diversity of tasks, and there is also a wide diversity in how the replay buffer was constructed. In some cases, the replay buffer is actually constructed by a human performing in this environment. So in this hand manipulation task, you'll have demonstrations from humans. You see it's not particularly many samples: it's 5000 samples, which I guess is a chopped-up version of the recordings; I'm not really sure how the human demonstrations were constructed. But you can clearly guess that the number of degrees of freedom that you have in a robotic hand is much, much higher than what you could learn just from these 5000 samples. An online algorithm that just does random exploration would need much more than these 5000 samples, and the 5000 samples won't be i.i.d. distributed over all the degrees of freedom; it will just be: here's what a human does. So you can think of algorithms like inverse reinforcement learning or something like this, but in inverse reinforcement learning you usually assume that the expert is trying to achieve the same reward as you, and that's not necessarily the case here. You have a given reward structure, but you are tasked to simply learn from these demonstrations. You can see it's also possible that a buffer is constructed by a policy. That usually means it's constructed by, let's say, a reinforcement learning algorithm that was trained in an online fashion, but maybe not all the way. But I think they also have behavior cloning policies that they got from human demonstrations, so there are many ways. Sometimes you also have a planner, which is, can you imagine, an algorithm that wasn't machine learned. I know, almost unthinkable, but in these kinds of mazes you can actually run planning algorithms. I know this is crazy talk, a niche topic, but there exist things like A* search, where you can construct the shortest path through these mazes and things like this. So yeah, I know that that is very niche, but you can construct policies like this and then use them to fill your replay buffer. And you can already see that this will also be a massively different distribution of data than you would get with an online RL algorithm. So, in conclusion, they do test other algorithms on this, and they say that most current offline RL algorithms don't work well on these data sets.
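For a feel of how such a benchmark is consumed, this is roughly what loading one of these data sets looks like with the released Python package. This is a sketch from memory of the repo's README, so the environment ID and dictionary keys are assumptions that may differ between versions.

import gym
import d4rl  # importing this registers the offline environments with gym

env = gym.make("maze2d-umaze-v1")  # one of the maze tasks; the ID is illustrative
dataset = env.get_dataset()        # assumed API: returns a dict of numpy arrays
print(dataset["observations"].shape,
      dataset["actions"].shape,
      dataset["rewards"].shape)
# Train your offline algorithm on these arrays alone, then roll the learned
# policy out in `env` to get the benchmark score.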
The only data sets where they do work well are those where the replay buffer was generated by some sort of reinforcement learning policy. So what they would do is train an online policy, and the experience generated by that online policy while it learns makes up the replay buffer. If you use that replay buffer for offline learning, then they say it tends to work okay. But if you have other methods of collecting the data that are very different from a reinforcement learning collection approach, then it tends not to work as well. Alright, so if you are interested in offline RL, please check out this paper; all their code is available right here. Note that the link in the paper doesn't seem to work; the true link is here, and I'll also put it in the description. And with that, I wish you a good day. Bye!
[ { "end": 5.26, "start": 0, "text": " Hi there, today we're looking at datasets for data-driven reinforcement learning by" }, { "end": 12.76, "start": 5.26, "text": " Justin Fu, Aviral Kumar, Ofer Natschum, George Tucker and Sergei Levine." }, { "end": 18.34, "start": 12.76, "text": " So this is what you would call a dataset paper or a benchmark paper." }, { "end": 26.16, "start": 18.34, "text": " And the main point or the main area of the paper was called offline reinforcement learning." }, { "end": 31.92, "start": 26.16, "text": " So offline reinforcement learning, usually in reinforcement learning you have this task," }, { "end": 32.92, "start": 31.92, "text": " right?" }, { "end": 36.32, "start": 32.92, "text": " You have the agent and you have the environment." }, { "end": 43.22, "start": 36.32, "text": " And the agent gets some sort of observation and has to come up with an action in response" }, { "end": 44.879999999999995, "start": 43.22, "text": " to that observation." }, { "end": 50.019999999999996, "start": 44.879999999999995, "text": " And then it gets back a reward and another observation." }, { "end": 53.28, "start": 50.019999999999996, "text": " And again, it has to come up with an action." }, { "end": 59.92, "start": 53.28, "text": " And the goal is to maximize the rewards over time that the agent gets while interacting" }, { "end": 62.38, "start": 59.92, "text": " with the environment." }, { "end": 68.16, "start": 62.38, "text": " So usually this is organized in what are called episodes, which basically means if you have" }, { "end": 75.44, "start": 68.16, "text": " some sort of environment, right, and here is the agent and here is the goal, right," }, { "end": 79.86, "start": 75.44, "text": " the goal is a inverted triangle." }, { "end": 83.8, "start": 79.86, "text": " And there are a bunch of walls right here, right?" }, { "end": 87.68, "start": 83.8, "text": " So it looks kind of a maze that the agent has to navigate." }, { "end": 98.62, "start": 87.68, "text": " Then one episode could be the agent moving around until it either finds the target or" }, { "end": 102.4, "start": 98.62, "text": " hits a wall or just kind of goes around and around." }, { "end": 107.44, "start": 102.4, "text": " And then at some point you say, all right, that's enough, game over." }, { "end": 113.39999999999999, "start": 107.44, "text": " And usually in reinforcement learning, you perform many of these episodes and then you" }, { "end": 115.08, "start": 113.39999999999999, "text": " learn from them." }, { "end": 124.64, "start": 115.08, "text": " So you perform episodes and each episode gets into usually some sort of replay buffer, right?" }, { "end": 127.32, "start": 124.64, "text": " Let's call this replay buffer." }, { "end": 134.72, "start": 127.32, "text": " And you do this many times and at the same time that you're doing this, you're using" }, { "end": 138.52, "start": 134.72, "text": " the things that you stored here in order to learn, right?" }, { "end": 142.62, "start": 138.52, "text": " So the agent learns from these things, right?" }, { "end": 148.42, "start": 142.62, "text": " So it acts with the environment in this loop, in this fashion." }, { "end": 154.2, "start": 148.42, "text": " Then once it has done an episode, it puts it into the replay buffer and then it learns" }, { "end": 156.86, "start": 154.2, "text": " from the actions it has performed." }, { "end": 161.24, "start": 156.86, "text": " This is what is usually called online reinforcement learning, right?" 
}, { "end": 164.56, "start": 161.24, "text": " So this loop is online." }, { "end": 172.04, "start": 164.56, "text": " Online means because the agent learns from its own actions, right?" }, { "end": 176.28, "start": 172.04, "text": " Now in contrast to this, there is offline reinforcement learning." }, { "end": 187.04, "start": 176.28, "text": " So in offline reinforcement learning, the agent has to learn from someone else's actions," }, { "end": 188.04, "start": 187.04, "text": " right?" }, { "end": 194.42000000000002, "start": 188.04, "text": " So this connection here is severed." }, { "end": 198.11999999999998, "start": 194.42, "text": " Instead you have other agents." }, { "end": 202.64, "start": 198.11999999999998, "text": " Let's call these agent one, agent two, multiple agents, agent three." }, { "end": 208.11999999999998, "start": 202.64, "text": " They all have their own interaction with the environment, right?" }, { "end": 218.48, "start": 208.11999999999998, "text": " Environment environment interactions and they feed their experience into they perform these" }, { "end": 219.61999999999998, "start": 218.48, "text": " episodes." }, { "end": 222.56, "start": 219.61999999999998, "text": " They feed their experience into the replay buffer." }, { "end": 225.42000000000002, "start": 222.56, "text": " And then the agent just has to learn from that." }, { "end": 230.8, "start": 225.42000000000002, "text": " So whatever happened here, this was previous, right?" }, { "end": 237.56, "start": 230.8, "text": " And now the agent has to learn how to maximize its reward just from the experience that is" }, { "end": 241.08, "start": 237.56, "text": " in the replay buffer from these other agents." }, { "end": 245.62, "start": 241.08, "text": " This is what's called offline reinforcement learning means the agent learns from someone" }, { "end": 248.88, "start": 245.62, "text": " else's actions." }, { "end": 253.6, "start": 248.88, "text": " Basically the power of reinforcement learning of course comes from the fact that you learn" }, { "end": 255.85999999999999, "start": 253.6, "text": " from your own actions." }, { "end": 263.52, "start": 255.85999999999999, "text": " It means that for example, if you already have some successful trajectories here, right?" }, { "end": 265.14, "start": 263.52, "text": " You found the target." }, { "end": 270.84, "start": 265.14, "text": " You can try to replicate that because you know which actions you performed." }, { "end": 275.4, "start": 270.84, "text": " And if you don't, you know, change anything, you're probably going to find the target again" }, { "end": 276.96, "start": 275.4, "text": " just by randomness." }, { "end": 280.84, "start": 276.96, "text": " All right, because you've done it already once and so on." }, { "end": 286, "start": 280.84, "text": " So you kind of know all the intrinsics of your own algorithm that led you to reach the" }, { "end": 287.64, "start": 286, "text": " target." }, { "end": 291.52, "start": 287.64, "text": " Now this is an entirely different case with all of these other agents." }, { "end": 296.29999999999995, "start": 291.52, "text": " You have no clue how they were acting, why they were acting, right?" }, { "end": 301.91999999999996, "start": 296.29999999999995, "text": " You just know, okay, they did a series of actions and that gave them some kind of reward." }, { "end": 307, "start": 301.92, "text": " And you have no idea what their reasoning was or anything." 
}, { "end": 310.88, "start": 307, "text": " All you really can learn from is their sequence of actions." }, { "end": 313.14000000000004, "start": 310.88, "text": " Now why is that problematic, right?" }, { "end": 322.92, "start": 313.14000000000004, "text": " So if all of the agents, for example, if this is an actual platform and this is really steep" }, { "end": 328.86, "start": 322.92, "text": " here, this is all of here is really steep cliffs, right?" }, { "end": 331.02000000000004, "start": 328.86, "text": " And you can actually fall off." }, { "end": 333.88, "start": 331.02, "text": " But the agents, they're humans, right?" }, { "end": 335.24, "start": 333.88, "text": " So they don't want to fall off." }, { "end": 341.03999999999996, "start": 335.24, "text": " So what they're going to do is they're just going to take steps that are maybe like this" }, { "end": 345.2, "start": 341.03999999999996, "text": " or maybe like this, but they're humans, they're smart." }, { "end": 350.08, "start": 345.2, "text": " They're never going to fall off here, right?" }, { "end": 351.08, "start": 350.08, "text": " Why is this a problem?" }, { "end": 359.47999999999996, "start": 351.08, "text": " If you're not trying to learn from this experience and your policy by some chance, because you" }, { "end": 365.76, "start": 359.48, "text": " might have some entropy in there or something, you do know what happens if you make a move" }, { "end": 366.76, "start": 365.76, "text": " like this." }, { "end": 369.54, "start": 366.76, "text": " And you also know what happens if you make a move like this, right?" }, { "end": 372.12, "start": 369.54, "text": " Already two humans have done these moves." }, { "end": 374.8, "start": 372.12, "text": " But what happens if you make a move like this?" }, { "end": 376.70000000000005, "start": 374.8, "text": " You just don't know, right?" }, { "end": 380.12, "start": 376.70000000000005, "text": " In classic reinforcement learning, you would get a negative reward and you could learn" }, { "end": 383.54, "start": 380.12, "text": " from that to not do this action anymore." }, { "end": 391.52000000000004, "start": 383.54, "text": " But in this case, you simply don't have any data to tell you what happens when you go" }, { "end": 392.52000000000004, "start": 391.52000000000004, "text": " off there." }, { "end": 398.32000000000005, "start": 392.52000000000004, "text": " So you see that there's a problem if you are not able to learn from your own experience," }, { "end": 402.84000000000003, "start": 398.32000000000005, "text": " but you have to learn from something or someone else's experience." }, { "end": 414.2, "start": 402.84, "text": " The distribution of experience that you have available to you might be not fully specific" }, { "end": 415.94, "start": 414.2, "text": " of the environment." }, { "end": 418.84, "start": 415.94, "text": " It might be very different from what you would do." }, { "end": 423.76, "start": 418.84, "text": " And it might be very not conducive to what you want to do with it." }, { "end": 429.35999999999996, "start": 423.76, "text": " So the task of offline reinforcement learning is harder than online reinforcement learning." }, { "end": 434.6, "start": 429.36, "text": " But it also has many, many applications." }, { "end": 439.92, "start": 434.6, "text": " Sometimes it's just not possible to do online reinforcement learning." }, { "end": 444.24, "start": 439.92, "text": " When for example, in medical field, right?" 
}, { "end": 450.64, "start": 444.24, "text": " Think of the medical field where you want a robot to perform a surgery." }, { "end": 456.44, "start": 450.64, "text": " You can't just do reinforcement learning with our online techniques because they're just" }, { "end": 461.28, "start": 456.44, "text": " going to try a bunch of things and see what works." }, { "end": 463.6, "start": 461.28, "text": " Maybe you want that, I don't want that." }, { "end": 472.48, "start": 463.6, "text": " So necessarily, you're going to be left with, let's have this robot learn from human experts." }, { "end": 475.78, "start": 472.48, "text": " So that's a task for offline reinforcement learning." }, { "end": 476.92, "start": 475.78, "text": " There are many more tasks." }, { "end": 483.84, "start": 476.92, "text": " For example, if you think of search engine, you will have many, many, many logs from human" }, { "end": 489.03999999999996, "start": 483.84, "text": " searching things, and you simply store them, you simply have them in a buffer." }, { "end": 495.44, "start": 489.03999999999996, "text": " Now you want to maybe train a reinforcement learning agent that serves the best possible" }, { "end": 498.03999999999996, "start": 495.44, "text": " ads or something like this." }, { "end": 503.88, "start": 498.03999999999996, "text": " You want to do this in a way that you can use all of that data, even though that data" }, { "end": 508.64, "start": 503.88, "text": " wasn't collected by that particular agent." }, { "end": 515.12, "start": 508.64, "text": " The crucial difference to supervised learning again, is that you have this interactive structure," }, { "end": 518.3199999999999, "start": 515.12, "text": " this multi-step interactive structure." }, { "end": 523, "start": 518.3199999999999, "text": " Because in a supervised learning, you also have this buffer here." }, { "end": 526.48, "start": 523, "text": " In supervised learning, you simply have your labeled data set." }, { "end": 533.48, "start": 526.48, "text": " But the difference is in supervised learning, you always know what the right action is currently," }, { "end": 535.22, "start": 533.48, "text": " because you have the labels." }, { "end": 538.3199999999999, "start": 535.22, "text": " In offline reinforcement learning, you don't know." }, { "end": 547.12, "start": 538.32, "text": " You might be here, and there are three actions available." }, { "end": 555.08, "start": 547.12, "text": " All you know is that the demonstrator, these actors here, one of them has done this, and" }, { "end": 558.36, "start": 555.08, "text": " then this, and then this, and then got a two." }, { "end": 567.32, "start": 558.36, "text": " You have no clue what happens if you do this, and then this, and then this." }, { "end": 573.44, "start": 567.32, "text": " All you know is that this action here might eventually lead to a two." }, { "end": 578.86, "start": 573.44, "text": " You also can't try it out, because you can't try out this path, because you don't get a" }, { "end": 579.98, "start": 578.86, "text": " reward here." }, { "end": 586.46, "start": 579.98, "text": " You have to find, and this is the task here, you'll have to find some other example or" }, { "end": 588.1, "start": 586.46, "text": " stitch together." }, { "end": 589.5600000000001, "start": 588.1, "text": " They make a good example here." }, { "end": 596.86, "start": 589.5600000000001, "text": " This paper basically proposes a benchmark for offline RL algorithms." 
}, { "end": 600.28, "start": 596.86, "text": " What they do is they have a bunch of data sets." }, { "end": 606.16, "start": 600.28, "text": " They have a bunch of these replay buffers around for different tasks, a collection of" }, { "end": 609.72, "start": 606.16, "text": " this, that they collected with various techniques." }, { "end": 614.6, "start": 609.72, "text": " There is human demonstration, there is other agents, and so on." }, { "end": 621.88, "start": 614.6, "text": " They have that, and you're supposed to take one of them, learn something, learn an agent," }, { "end": 627.2, "start": 621.88, "text": " and then evaluate it on an environment." }, { "end": 632.08, "start": 627.2, "text": " They propose which ones are suitable for this." }, { "end": 637.9, "start": 632.08, "text": " They give you the data, and they give you the environment to evaluate it on." }, { "end": 643.2, "start": 637.9, "text": " In the end, you'll get a score, and you can compare your offline RL algorithm with others." }, { "end": 649.72, "start": 643.2, "text": " They also provide some benchmark implementations for algorithms that already do this." }, { "end": 656.96, "start": 649.72, "text": " They show that they don't really work well." }, { "end": 661.44, "start": 656.96, "text": " One of the tasks is this maze here." }, { "end": 668.5600000000001, "start": 661.44, "text": " In this maze, the task is you are somewhere, let's say here, and you need to go somewhere," }, { "end": 673.08, "start": 668.5600000000001, "text": " let's say here, and you need to find your way." }, { "end": 680.1600000000001, "start": 673.08, "text": " The demonstrations you have, the data in your replay buffer, is such that this is the same" }, { "end": 685.24, "start": 680.1600000000001, "text": " task, but never the same start and end points like you are tasked to." }, { "end": 691.76, "start": 685.24, "text": " You might have one in your replay buffer, you might have one trajectory, one episode" }, { "end": 695.8000000000001, "start": 691.76, "text": " that went like this from one to two." }, { "end": 699.84, "start": 695.8000000000001, "text": " And you'll be able to see the reward of that." }, { "end": 707.12, "start": 699.84, "text": " And you might have one trajectory that was from two to three, like this." }, { "end": 711.9, "start": 707.12, "text": " Both of these things actually give you really high reward." }, { "end": 718.4, "start": 711.9, "text": " If you were an agent, and you had to learn, and now the task is please go from one to" }, { "end": 725.52, "start": 718.4, "text": " three, what you could do is you could simply say, I know the green thing gave a pretty" }, { "end": 729.12, "start": 725.52, "text": " high reward, and the yellow thing gave a pretty high reward." }, { "end": 734.08, "start": 729.12, "text": " I know the green thing started at one, and I know the yellow thing ended at three, and" }, { "end": 738.64, "start": 734.08, "text": " I know they both have this common location." }, { "end": 746.92, "start": 738.64, "text": " So what I might do just is I might go to that common location, and then go on on the different" }, { "end": 747.92, "start": 746.92, "text": " path, right?" }, { "end": 755, "start": 747.92, "text": " So you have to somehow stitch together experience from other agents in order to make your task" }, { "end": 756, "start": 755, "text": " work." 
}, { "end": 760.36, "start": 756, "text": " This is a very explicit example, of course, what we want to do is we want to do this in" }, { "end": 767.96, "start": 760.36, "text": " a more implicit deep learning way, ideally, and not manually stitch together other trajectories." }, { "end": 776.24, "start": 767.96, "text": " Though I'm pretty sure that would not be so dumb, right?" }, { "end": 781.28, "start": 776.24, "text": " I'm pretty sure there's a lot of data augmentation you could do during training simply by stitching" }, { "end": 786.04, "start": 781.28, "text": " together other trajectories, right?" }, { "end": 791.28, "start": 786.04, "text": " So from this trajectory, you could actually, not only could you make other gold conditioned" }, { "end": 796.88, "start": 791.28, "text": " ways, for example, from here to here, or from here to here, you could make from here to" }, { "end": 805.12, "start": 796.88, "text": " here anywhere where you have shared points, you could train a policy that goes there and" }, { "end": 807.36, "start": 805.12, "text": " then goes further or something like this." }, { "end": 812.24, "start": 807.36, "text": " I'm pretty sure there's already an algorithm that does things like this, but I'm just thinking" }, { "end": 813.44, "start": 812.24, "text": " aloud here." }, { "end": 821.48, "start": 813.44, "text": " Alright, so this is one of the tasks and you see that the that that you will have to learn" }, { "end": 827.04, "start": 821.48, "text": " a policy to go as fast as possible from any point to any other point." }, { "end": 832.28, "start": 827.04, "text": " And you're all you're given is a database of experience that already exists from some" }, { "end": 840.04, "start": 832.28, "text": " other agent, but never will probably never the exact route that you need to learn right" }, { "end": 841.56, "start": 840.04, "text": " now." }, { "end": 847.02, "start": 841.56, "text": " Alright, so the goal is how fast or how efficiently can you do this?" }, { "end": 849.8399999999999, "start": 847.02, "text": " This is one task in this data set." }, { "end": 857.0799999999999, "start": 849.8399999999999, "text": " The next task is very similar is this grid world here where there is this red square," }, { "end": 859.02, "start": 857.0799999999999, "text": " red triangle, that's your agent." }, { "end": 863.96, "start": 859.02, "text": " And then there is the green square, that's your goal or vice versa." }, { "end": 873.16, "start": 863.96, "text": " And so you're basically tasked to not hit the walls here and to go about your way finding" }, { "end": 874.9, "start": 873.16, "text": " the target." }, { "end": 882.4, "start": 874.9, "text": " There are more elaborate things like this mojo co environment here, or the ant maze" }, { "end": 886.6, "start": 882.4, "text": " where you have this little ant with you know, the spider legs." }, { "end": 890.08, "start": 886.6, "text": " So this is no longer you can just move in either direction, you have to actually control" }, { "end": 891.52, "start": 890.08, "text": " the legs." }, { "end": 899.14, "start": 891.52, "text": " And there's also this arm, this robotic arm." }, { "end": 903.9200000000001, "start": 899.14, "text": " So you see there is a wide diversity of tasks." }, { "end": 911.48, "start": 903.9200000000001, "text": " And also, there is a wide diversity of how the replay buffer was constructed." 
}, { "end": 918.72, "start": 911.48, "text": " So in some cases, the replay buffer is actually constructed by a human performing in this" }, { "end": 919.72, "start": 918.72, "text": " environment." }, { "end": 926, "start": 919.72, "text": " So in this hand manipulation task, you'll have demonstrations from humans." }, { "end": 929, "start": 926, "text": " You see it's not particularly many samples here." }, { "end": 939.5600000000001, "start": 929, "text": " It's 5000 samples, which I guess are is a chopped up version of I'm not really sure" }, { "end": 941.64, "start": 939.56, "text": " how the human things were constructed." }, { "end": 947.92, "start": 941.64, "text": " But you can clearly guess that the degrees of freedom that you have in a robotic hand" }, { "end": 954.0799999999999, "start": 947.92, "text": " is much, much higher than you could learn just from these 5000 samples if you were to," }, { "end": 958.76, "start": 954.0799999999999, "text": " you know, an online or algorithm that just does random exploration will need much more" }, { "end": 961.1999999999999, "start": 958.76, "text": " than these 5000 samples." }, { "end": 966.76, "start": 961.1999999999999, "text": " And the 5000 samples won't be I ID distributed with all the degrees of freedom, it will just" }, { "end": 969.64, "start": 966.76, "text": " be here's what a human does, right." }, { "end": 977.42, "start": 969.64, "text": " And so you can think of algorithms like inverse reinforcement learning or something like this." }, { "end": 986.76, "start": 977.42, "text": " But here in inverse reinforcement learning, usually you assume that the expert the expert" }, { "end": 991.3, "start": 986.76, "text": " is kind of trying to achieve the same reward as you do." }, { "end": 994.36, "start": 991.3, "text": " But this is not necessarily the case here." }, { "end": 1006.28, "start": 994.36, "text": " You have a given reward structure, but you are tasked to simply learn from these demonstrations." }, { "end": 1011.16, "start": 1006.28, "text": " You can see it's also possible that there is this is constructed by a policy." }, { "end": 1020.7, "start": 1011.16, "text": " And that usually means that they so either it's it's constructed by let's say a reinforcement" }, { "end": 1026.2, "start": 1020.7, "text": " learning algorithm that was trained in an online fashion, but maybe not as well." }, { "end": 1032, "start": 1026.2, "text": " But also I think they have behavior cloning policy that they got from human demonstration," }, { "end": 1034.56, "start": 1032, "text": " I think so that there are many ways." }, { "end": 1041.3600000000001, "start": 1034.56, "text": " Also sometimes you have a planner which is, can you imagine it's it's a it's an algorithm" }, { "end": 1043.88, "start": 1041.3600000000001, "text": " that wasn't machine learned." }, { "end": 1052.8000000000002, "start": 1043.88, "text": " So I know almost unthinkable, but in these in these kind of mazes, you can actually do" }, { "end": 1061.2800000000002, "start": 1052.8000000000002, "text": " planning algorithms that can can sort of so I know this is crazy and crazy talk, the niche" }, { "end": 1068.16, "start": 1061.2800000000002, "text": " topic but there exists things like a star search where where where you can construct" }, { "end": 1074.1200000000001, "start": 1068.16, "text": " the kind of shortest path through these mazes and things like this." 
}, { "end": 1081.0400000000002, "start": 1074.1200000000001, "text": " So yeah, that's I know, I know that that is that is very niche." }, { "end": 1085.8400000000001, "start": 1081.0400000000002, "text": " But you can construct policies like this." }, { "end": 1090.18, "start": 1085.8400000000001, "text": " And then you can use those as your replay buffer filling." }, { "end": 1094.92, "start": 1090.18, "text": " And you can already see that this also will be a massively different distribution of data" }, { "end": 1100.72, "start": 1094.92, "text": " than you would get with an online RL algorithm, right." }, { "end": 1106.96, "start": 1100.72, "text": " So in conclusion, they do test other they do test other algorithms on this." }, { "end": 1115.0800000000002, "start": 1106.96, "text": " In conclusion, they say that most offline RL algorithms nowadays, they don't work well" }, { "end": 1118.76, "start": 1115.0800000000002, "text": " on these on these data sets." }, { "end": 1128.36, "start": 1118.76, "text": " The only data sets where they do work well is where the replay buffer was generated by" }, { "end": 1135.18, "start": 1128.36, "text": " some sort of like here, by some sort of policy by some sort of reinforcement learning policy." }, { "end": 1141.44, "start": 1135.18, "text": " So what they would do is they would train an online policy and the experience generated" }, { "end": 1147.26, "start": 1141.44, "text": " by that online policy while it learns will make up the replay buffer." }, { "end": 1155.04, "start": 1147.26, "text": " And if you use that replay buffer for offline learning, then they say it tends to work okay." }, { "end": 1163.2, "start": 1155.04, "text": " But if you have other methods of collecting the data that are very different from this" }, { "end": 1169.92, "start": 1163.2, "text": " offline, sorry, from an from a reinforcement learning collection approach, then it tends" }, { "end": 1172.32, "start": 1169.92, "text": " not to work as well." }, { "end": 1178.24, "start": 1172.32, "text": " Alright, so if you are interested in offline RL, please check out this paper, all their" }, { "end": 1181.08, "start": 1178.24, "text": " code is available right here." }, { "end": 1184.08, "start": 1181.08, "text": " Note that the link in the paper doesn't seem to work." }, { "end": 1187.9199999999998, "start": 1184.08, "text": " The true link is here." }, { "end": 1190.6, "start": 1187.9199999999998, "text": " I'll also put it in the description." }, { "end": 1193.52, "start": 1190.6, "text": " And with that, I wish you a good day." }, { "end": 1210.32, "start": 1193.52, "text": " Bye!" } ]